Bidirectional Association between Asthma and Irritable Bowel Syndrome: Two Population-Based Retrospective Cohort Studies
Background There is a demonstrated association between asthma and irritable bowel syndrome (IBS). In this study, we examined the bidirectional association between asthma and IBS using a nationwide database. Methods We conducted two retrospective cohort studies using data obtained from the National Health Insurance of Taiwan. Study 1 included 29,648 asthma patients newly diagnosed between 2000 and 2010. Study 2 included 29,875 IBS patients newly diagnosed between 2000 and 2010. For each case in each study, four subjects without asthma or IBS, respectively, were selected, frequency-matched by sex, age, and diagnosis date. All four cohorts were followed up until the end of 2011 to estimate incident IBS for Study 1 and incident asthma for Study 2. Adjusted hazard ratios (aHRs) were estimated using the Cox proportional hazards model after controlling for sex, age, and comorbidities. Results The incidence of IBS was 1.89 times higher in the asthma cohort than in the comparison cohort (8.26 vs. 4.36 per 1,000 person-years), with an aHR of 1.57 [95% confidence interval (CI) = 1.47–1.68]. The aHRs remained significant in all subgroups of sex, age, and presence of comorbidities. Conversely, the incidence of asthma was 1.76 times higher in the IBS cohort than in the comparison cohort (7.09 vs. 4.03 per 1,000 person-years), with an aHR of 1.54 (95% CI = 1.44−1.64). Similarly, the aHRs remained significant in all subgroups of sex, age, and presence of comorbidities. Conclusion The present study suggests a bidirectional association between asthma and IBS. Atopy could be a shared pathophysiology underlying this association, deserving further investigation.
Introduction
Asthma is a serious health problem affecting an estimated 300 million people of all age groups worldwide. Asthma is defined based on characteristic symptoms and variation in expiratory airflow [1]. Patients with asthma suffer from respiratory symptoms and limitation of daily activities, and an acute exacerbation of asthma may require urgent health care. Certain comorbidities commonly present in patients with asthma, such as gastroesophageal reflux disease (GERD), rhinitis, sinusitis, anxiety, and depression [2][3][4][5]. In addition, studies have demonstrated that asthma is associated with functional gastrointestinal disorders (FGIDs) due to the activation of the immune system [6,7].
Irritable bowel syndrome (IBS) is a chronic FGID, which affects 10-15% of the general population, with a higher prevalence in women than in men [8]. The Rome III system is the most widely used set of criteria for the diagnosis of FGIDs, including IBS. Based on the Rome III system, IBS is diagnosed in patients fulfilling the criteria for the last 3 months, with symptom onset at least 6 months prior to diagnosis. Patients suffer from recurrent abdominal pain or discomfort for at least 3 days per month in the past 3 months, associated with two or more of the following: 1. improvement with defecation; 2. onset associated with a change in frequency of stool; 3. onset associated with a change in form (appearance) of stool [9]. The pathophysiology of IBS is complex, involving digestive organ dysmotility, bacterial flora alteration, visceral hypersensitivity, mucosal immune dysregulation, and dysregulation between the central nervous system and the enteric nervous system [10].
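The Rome III definition above is rule-based, so its logic can be expressed as a short boolean check. The sketch below is a hypothetical illustration of that logic only, not a clinical tool; the function and parameter names are invented.

```python
def meets_rome_iii_ibs(pain_days_per_month: int,
                       months_criteria_fulfilled: int,
                       months_since_onset: int,
                       improves_with_defecation: bool,
                       change_in_stool_frequency: bool,
                       change_in_stool_form: bool) -> bool:
    """Checks the Rome III IBS criteria as summarized in the text:
    recurrent abdominal pain/discomfort >= 3 days/month for the last
    3 months, onset >= 6 months before diagnosis, and >= 2 of the
    3 supporting features."""
    supporting = sum([improves_with_defecation,
                      change_in_stool_frequency,
                      change_in_stool_form])
    return (pain_days_per_month >= 3
            and months_criteria_fulfilled >= 3
            and months_since_onset >= 6
            and supporting >= 2)

# Example: pain 5 days/month for 4 months, onset 12 months ago,
# improves with defecation and stool frequency changed -> True
print(meets_rome_iii_ibs(5, 4, 12, True, True, False))
```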
Immune activation has been associated with both asthma and IBS. The T-helper 2 (TH2)-type immune response is well known to be predominant in patients with asthma [11]. Disordered TH2 immune responses may also occur in patients with atopy-related gastrointestinal disorders, including IBS [12]. Studies have found that disordered cellular immunity could involve increased intestinal mast cell infiltration in patients with IBS [13,14]. Pearson et al. recently reported that a patient with severe asthma and IBS treated with an anti-immunoglobulin E monoclonal antibody showed improvement of both asthma and IBS symptoms [15]. Therefore, atopy may play an important role in a shared pathophysiology of asthma and IBS.
Studies have suggested that asthma and allergic disorders are associated with IBS [16][17][18][19][20][21][22][23][24][25][26][27]. However, most of these studies were small, questionnaire-based, cross-sectional, or case-control studies, and no bidirectional, large-scale, population-based cohort study has been performed. The present study aimed to use Taiwan's National Health Insurance (NHI) database to determine whether there is a bidirectional association between asthma and IBS. This nationwide cohort dataset has been used for various studies on asthma or IBS [28][29][30][31].
Data source
The Bureau of National Health Insurance (BNHI) of Taiwan established the single-payer universal insurance system in 1995. The insurance system covers over 99.5% of the 23.74 million citizens of Taiwan (http://www.nhi.gov.tw/english/index.aspx). We used the claims data of the Longitudinal Health Insurance Database (LHID), established by the National Health Research Institutes (NHRI) of Taiwan, to conduct the present study; the LHID includes one million insured people randomly selected from all beneficiaries (n = 23.72 million) in the year 2000 registry. The LHID consists of medical information for reimbursement from 1996 to 2011. All diseases were coded based on the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM). This study was approved by the Research Ethics Committee of China Medical University Hospital in Taiwan (CMUH-104-REC2-115). Patient records and information in the database were anonymized and de-identified prior to analysis.
Study participants
Fig 1 shows the process of identifying study subjects for the two retrospective cohort studies. For Study 1, we identified patients aged 20 years and older with an asthma diagnosis (ICD-9-CM code 493) between 2000 and 2010 for the asthma cohort. Those with an asthma diagnosis before 2000 were excluded. To ensure the accuracy of asthma diagnoses, we selected into the asthma cohort only subjects who had received medications for asthma, including inhaled/systemic bronchodilators or inhaled/systemic corticosteroids. We excluded subjects with a diagnosis of IBS (ICD-9-CM code 564.1) before 2000 and those with incomplete medical information. For Study 2, patients aged 20 years and older with an IBS diagnosis between 2000 and 2010 were identified from the same claims data. Those with an IBS diagnosis before 2000 were excluded. Patients who had been diagnosed with asthma before 2000 and those with missing medical information were also excluded.
We defined the first diagnosis date as the index date for each patient. For each asthma case and each IBS case identified, four controls were selected separately as comparison cohorts for the asthma cohort and for the IBS cohort, frequency-matched by age (in 5 year spans), sex, and index year, under the same exclusion criteria.
Statistical analysis
For Study 1, the distributions of categorical demographic characteristics and comorbidities were compared between the asthma cohort and the comparison cohort, and the differences were examined using the Chi-square test. Student's t-test was used to test the difference in mean age between the two cohorts. We calculated follow-up person-years to assess the incidence density rates of IBS (per 1,000 person-years) for each cohort. Univariate and multivariate Cox proportional hazards regression models were used to examine the relationship between asthma and the development of IBS. Hazard ratios (HRs) and 95% confidence intervals (CIs) were calculated, and variables significant at baseline were included in the multivariate models. The proportional hazards assumption was examined using the test of scaled Schoenfeld residuals. Results of the test revealed a significant relationship between the Schoenfeld residuals for asthma and follow-up time (p < 0.01); in the subsequent analyses, we therefore stratified the follow-up duration to deal with this violation of the assumption. The cumulative incidence of IBS was computed using the Kaplan-Meier method, and the differences between the cohorts were examined using the log-rank test. We used Cox proportional hazards regression analysis to measure the hazard ratio of IBS by treatment [inhaled corticosteroid (ICS) vs. non-ICS]. We further used the number of emergency room (ER) visits for asthma to analyze the IBS risk associated with asthma control.
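A minimal sketch of the core quantities described above, written in Python for illustration (the authors used SAS 9.3; the toy data, column names, and the use of the lifelines package are assumptions of this sketch, not the authors' code):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy follow-up data (invented): one row per subject, with follow-up
# time in years, an incident-IBS indicator, and covariates.
df = pd.DataFrame({
    "years":  [6.5, 2.1, 7.0, 4.3, 6.8, 1.2, 5.5, 3.9, 7.0, 2.7],
    "ibs":    [0,   1,   0,   1,   0,   1,   0,   1,   0,   0],
    "asthma": [0,   1,   0,   1,   1,   0,   0,   1,   1,   0],
    "age":    [34,  27,  51,  22,  45,  30,  60,  41,  38,  29],
})

# Incidence density: IBS events per 1,000 person-years, per cohort.
for grp, sub in df.groupby("asthma"):
    rate = 1000 * sub["ibs"].sum() / sub["years"].sum()
    print(f"asthma={grp}: {rate:.2f} IBS cases per 1,000 person-years")

# Cox proportional hazards model; remaining columns act as covariates.
cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="ibs")
print(cph.hazard_ratios_)   # exp(coef): adjusted hazard ratios
cph.check_assumptions(df)   # scaled Schoenfeld residual test
```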
Similar data analysis procedures were performed for Study 2, and the proportional hazards assumption was also examined; results showed no significant relationship between the Schoenfeld residuals for IBS and follow-up time (p = 0.96). All statistical analyses were performed using SAS 9.3 software (SAS Institute, Cary, NC, USA) for Windows. The significance level was set at p < 0.05, and all tests were 2-tailed.
Study 1
We identified 29,648 patients in the asthma cohort and 118,591 subjects without asthma (Table 1). There were more women in both cohorts. The asthma and non-asthma cohorts were similar in age distribution; however, the asthma cohort was slightly older based on the mean age (p < 0.001). The patients in the asthma cohort had a higher prevalence of comorbidities than those in the non-asthma cohort (all p < 0.001).
The mean follow-up time was 6.83 (SD = 3.38) years in the asthma cohort and 6.96 (SD = 3.31) years in the non-asthma cohort (data not shown). Fig 2 shows that the cumulative incidence of IBS was 3.93% higher in the asthma cohort than in the non-asthma cohort (p < 0.001) by the end of follow-up. Overall, the IBS incidence was 1.9-fold higher in the asthma cohort than in the non-asthma cohort (8.26 vs. 4.36 per 1,000 person-years), with a crude HR of 1.89 (95% CI = 1.74−2.01) and an adjusted HR of 1.57 (95% CI = 1.47−1.68) (Table 2). The age-specific asthma to non-asthma adjusted hazard ratio (aHR) was greatest for the youngest group, at 2.04 (95% CI = 1.64−2.53), and decreased to 1.32 (95% CI = 1.19-1.47) for the oldest group. The incidence of IBS was higher in subjects with comorbidity than in non-comorbid subjects. The IBS incidence declined during the follow-up period in both cohorts but remained consistently greater in the asthma cohort than in the comparisons. Table 3 shows the effect of treatment: the IBS incidence was lower in patients with ICS treatment than in those without, but the difference was not significant (aHR: 0.93, 95% CI = 0.84-1.03). Table 4 shows that the hazard of IBS increased with the frequency of ER visits, reaching an aHR of 20.7 (95% CI = 15.6-27.4) for those with more than two ER visits per year (p for trend < 0.0001), compared with the comparison cohort.

Study 2

Table 5 shows that both the IBS and non-IBS cohorts were dominated by women (52.8%), and 31% of the subjects were aged 35-49 years. The mean age of the IBS cohort was slightly but significantly higher than that of the non-IBS cohort. Comorbidities were also more prevalent in the IBS cohort (all p < 0.001). After 12 years of follow-up, the cumulative incidence of asthma was 2.83% higher in the IBS cohort than in the non-IBS cohort (p < 0.001, Fig 3). The overall incidence of asthma was 1.8-fold higher in the IBS cohort than in the non-IBS cohort (7.09 vs. 4.03 per 1,000 person-years), with an aHR of 1.54 (95% CI = 1.44−1.64) (Table 6). The sex-specific and age-specific IBS to non-IBS aHRs were significant for both women and men and for all age groups. Comorbidities increased the incidence of asthma in both cohorts, with the aHR (IBS cohort to non-IBS cohort) stronger for those without comorbidity. The asthma incidence declined over time in both cohorts, but the change in the aHR (IBS cohort to non-IBS cohort) over time was limited.
Discussion
This population-based cohort study demonstrated a bidirectional association between asthma and IBS. We found a significantly higher risk of IBS in patients with asthma than in the general population, and a significantly increased risk of asthma in patients with IBS than in the general population.
In recent decades, several studies have investigated the relationship between asthma and IBS. Kennedy et al. reported an independent association between IBS and bronchial hyper-responsiveness [16]. Subsequently, several small-scale case-control studies reported similar associations, and a large-scale study found an increased risk of IBS among asthma patients [22]. They also found that the use of oral steroids in asthma patients could reduce the risk of IBS. Another large-scale study in the US by Cole et al. reported a 20% increase in the incidence of IBS among asthma patients, but failed to find an effect of oral steroids among these patients [23]. Our study also failed to show a significant effect of ICS treatment in reducing the IBS risk for asthma patients. The inconsistent findings regarding ICS medications indicate the need for additional investigation.
On the other hand, Yazar et al. found in a case-control study that the prevalence of asthma was much greater in IBS cases than in healthy controls (15.8% vs. 1.45%) based on medical history, clinical features, and the results of pulmonary function tests [24]. In another case-control analysis using medical records of 30,000 patients in primary care settings, Jones et al. found that patients with IBS had a higher prevalence of asthma history than non-IBS subjects (15.0% vs. 11.0%) [6]. In a large community survey, Amra et al. also found a nearly 3-fold higher prevalence of asthma in IBS patients than in non-IBS subjects (9.5% vs. 3.3%) [25]. These findings are consistent with our cohort study finding that IBS patients are at an elevated risk of developing asthma. The mechanisms behind the bidirectional association between asthma and IBS, or the concomitant factors shared by these two diseases, are largely unknown. Atopy may play an important role in the association. A questionnaire study found that patients with atopic manifestations, such as allergic rhinitis, allergic eczema, and asthma, are nearly 3 times more likely to have IBS [7]. Hypersensitivity to food and pollen may be associated with the manifestation of IBS [32,33]. The underlying causes of inflammatory conditions can also produce respiratory and gastrointestinal symptoms, as well as smooth muscle hyperactivity [7,23]. Other shared risks and comorbid conditions, such as smoking, GERD, mood disorders, and obesity, may also play a role. In addition, socioeconomic level, education, occupation, residence area, and nutrition status may potentially confound both diseases and could not be fully adjusted for in this study.
It is important to note that the IBS diagnosis is criteria based, most often using the Rome III criteria, which can be challenging because of overlap with other organic conditions [34][35][36]. The potential conditions include celiac disease, chronic small intestinal bacterial overgrowth, bile acid diarrhea, malabsorption due to exocrine pancreatic insufficiency, and inflammatory bowel disease, among others. There is considerable heterogeneity in both sensitivity and specificity among studies. The sensitivity and specificity of the IBS diagnosis can be improved by verification with laboratory test data, especially the results of screening tests for inflammation and blood in stools [36]. However, because laboratory test information was not available, we could not perform this validation in the present study. Among these conditions, celiac disease and inflammatory bowel disease have been found to be associated with asthma [37][38][39]. Therefore, any misclassification may have influenced our results as well.
In Study 1, our findings are compatible with the well-known concept that the prevalence of comorbidities such as COPD, GERD, allergic rhinitis, chronic sinusitis, atopic dermatitis, anxiety, depression, and obesity is significantly higher in patients with asthma than in controls. Asthma patients with comorbidities had a higher incidence of IBS than those without comorbidities and than non-asthma subjects with comorbidities. This may be partly explained by the fact that patients with asthma and comorbid conditions may require multiple medical visits and are at greater risk of receiving an additional diagnosis. In addition, our study revealed that the IBS risk increased proportionately with the number of annual ER visits for asthma. Therefore, a higher incident IBS rate may be partly associated with Berkson's bias [40,41]. Similarly, in Study 2, the prevalence rates of comorbidities, including COPD, GERD, allergic rhinitis, chronic sinusitis, atopic dermatitis, anxiety, depression, and obesity, were also significantly higher in patients with IBS than in controls. IBS patients with any of these comorbidities had a higher incidence of asthma than those without comorbidities and than non-IBS subjects with comorbidities. Thus, a higher incident asthma rate may also be partly associated with Berkson's bias.
The strength of this study is its longitudinal, population-based evaluation of the bidirectional relationship between asthma and IBS. It is generally costly to conduct a population-based prospective cohort study, in which loss to follow-up becomes problematic after years of follow-up; using insurance claims data to conduct a retrospective cohort study is therefore a timely and economical alternative. However, several limitations should be considered when interpreting the study results. First, this study used ICD-9-CM codes to define diseases based on physicians' clinical diagnoses. However, the insurance authority has established an ad hoc committee to monitor the accuracy of claims data and prevent violations, and we selected only subjects with repeated coding to increase the validity and accuracy of the diagnoses. Second, the NHIRD does not provide detailed information on occupation, smoking habits, body mass index, diet preference, environmental exposure, or family history, although these are potential confounding factors. Our analysis used the comorbidity variables of COPD and obesity as controlling variables to substitute for smoking and sociodemographic status. In addition, relevant clinical variables, such as pulmonary function tests, serum laboratory data, or imaging results, were unavailable for diagnosis validation. Nevertheless, a significant bidirectional relationship between asthma and IBS was demonstrated in our data, and the dose-response association further shows that the relationship is likely real.
Conclusion
This study suggests a bidirectional association between asthma and IBS. The risk of incident IBS for asthma patients is slightly greater than the risk of incident asthma for IBS patients. The association could be of clinical and pathophysiological importance. Asthma and IBS may share a similar underlying pathophysiology rather than having a causal relationship between the two disorders. Our data suggest that there is a need to monitor asthma patients for the potential development of IBS, and vice versa.
"year": 2016,
"sha1": "e4f8c0a71caff4f3545e1f6b676d61b1708437a3",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0153911&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e4f8c0a71caff4f3545e1f6b676d61b1708437a3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Critical Review on Expansive Soils Including the Influence of Hydrocarbon Pollution and the Use of Electrical Resistivity to Evaluate their Properties
This paper reviews studies on expansive soil with a main focus on failure mechanism, financial losses, mineralogy, determination of swelling parameters, and related topics. The effect of hydrocarbon pollution on the geotechnical properties of expansive soil is presented, and the paper discusses the assessment of the electrical response of contaminated swelling soils. The wide extent of expansive ground around the world and its serious impact on infrastructure require identifying its influential aspects and the appropriate treatments. It was also found that petroleum products significantly affect the basic properties of swelling soils, such as gradation, consistency, compaction, and swelling, and that electrical resistivity can be employed to reveal the electrical characteristics of polluted expansive soil.
Introduction
Expansive soil is a soil that exhibits significant swell and shrinkage potential in relation to changes in moisture content. Once the soil hydrates, it becomes sticky and heavy, and its volume increases considerably as a result of absorbing large quantities of water. When it dehydrates, however, it shrinks and becomes very hard, causing noticeable cracks near the ground surface whose maximum width may reach 20 mm or more 1-3. In practice, expansive soils can be observed in moist conditions, where a high-plasticity clay can show problematic behaviour, or in arid/semi-arid environments, where even soils with a moderate expansion index can lead to pronounced damage 3. Influential aspects of such soil that need to be identified include the failure mechanism, origin and distribution around the world, structural damage, and identification of mineral types. However, the behaviour of expansive soil upon oil pollution is still not understood due to the lack of related studies.

At oil stations, soil pollution may occur due to the outflow of oil from broken pipelines or underground fuel or gas storage tanks. In addition, oil leakage onto the ground may occur accidentally during transportation or drilling processes 4. Contamination with oil is a serious problem in geo-environmental engineering due to its effects on the atmosphere, groundwater, and soil. Once the oil penetrates the ground surface, part of it is trapped in the unsaturated zone, while the remainder reaches the water table, causing water pollution. The trapped amount evaporates to the atmosphere and pollutes the air and vegetation, consequently affecting human health 5. Many investigators have studied the effect of oil product pollution on geotechnical properties such as Atterberg limits, maximum dry density, hydraulic conductivity, and shear strength for different types of soils. However, investigations of expansive soil have not been considered previously and need more attention.

In recent years, electrical resistivity has been considered a cost-effective, non-destructive, and quick method for investigating soil properties. Many researchers have reported a resistive response to soil petroleum pollution 6, but the relation between the degree of pollution and the swelling index has not been interpreted. The associated studies of polluted expansive soil are very limited and have not received sufficient attention in the literature. To sum up, the main target of this research is to highlight the lack of knowledge about the behaviour of expansive soil upon hydrocarbon pollution and the assessment of the properties of contaminated soil by electrical resistivity.
Expansive soil
Expansive soil can cause structural failure due to its seasonal volume variation, and precautions should be taken before and after construction of foundations and pavements 3,9. The alternating hardening and softening behaviour of the shrink-swell cycle can cause structures founded on such soil to fail and can damage civil infrastructure such as transportation, water supply, and sewage collection systems, as well as domestic, commercial, and industrial facilities. Damage to such facilities is clearly demonstrated by differential heave in roads and footpaths, inclined cracks in basement slabs and masonry walls, and breakage and fatigue in underground storage tanks and buried pipelines 10. It has been reported that failure occurs when volume alterations are irregularly distributed under the foundation: variation in the water content of the soil around the edges of a building can cause swelling pressure to develop beneath the outer border of the building while the water content beneath the centre remains constant, producing an edge lift failure. Conversely, a centre lift failure results if swelling is localized beneath the centre of the building or when shrinkage occurs under the edges.
The American Society of Civil Engineers (ASCE) has estimated that expansive soils damage 25% of all structures in the United States, and the associated financial loss can be greater than that of floods, earthquakes, and tornadoes 11. The annual cost resulting from hazardous foundations of civil engineering structures resting on expansive soils is estimated at $1000 million in the United States, £150 million in the UK, and many billions of pounds worldwide 12,3. Insurance companies in the USA spend millions of dollars yearly to repair homes affected by swelling clay derived from residual soils, which can apply uplift pressures of approximately 5,500 psf and in turn cause significant damage to lightly-loaded wood-frame structures 1. An example of damage under such circumstances is the residential zone of Akashat Mine near Al-Rutbah city in western Iraq, where tens of houses have suffered severe damage due to soil swelling 13.

The parent materials related to expansive soils can be classified into two main groups. The first group includes the igneous rocks, in which the feldspar and pyroxene minerals decompose to form other secondary minerals and montmorillonite. The second group, called the smectite group, consists of the sedimentary rocks that contain montmorillonite, which in turn breaks down to form expansive soils 14. It has been reported that expansive soils in Iraq belong to the second group 15. Generally, the minerals in expansive clay have very weak Van der Waals forces, a very high cation exchange capacity (80-150 meq/100 g) with an extraordinary negative surface charge owing to isomorphous substitution, and a large specific surface area ranging between 400 m2/g and 900 m2/g 16.

Expansive soils are found throughout many regions of the world, particularly in arid and semi-arid regions, as well as where wet conditions occur after prolonged periods of drought. Their distribution depends on geology (parent material), climate, hydrology, geomorphology, and vegetation. Expansive soils occur and incur major construction costs around the world, with notable examples found in the USA, Australia, India, and South Africa 3. In addition, many examples are found in the literature from the Arab Gulf area, such as Al-Rawas in Oman and Al-Mamodi in Saudi Arabia. Al-Obaidy et al., 2016 17 referred to the presence of such soil in Iraq through a review of the soil conditions in the country. The authors pointed out that although much Iraqi published work confirms the existence of expansive clay in the middle and north of the country, with some in the south, the area most affected by swelling clay is concentrated in the west.
Identification of mineral types for expansive soil
Several methods have been employed to explore the clay mineral type in a natural expansive soil. Some of these methods are summarized as follows:
X-Ray diffraction
This is an analytical technique in which X-rays are directed at a slowly rotating soil sample and the intensity of the diffracted rays is recorded at different angles, chiefly providing information on the soil sample. The technique depends on the spacing in the Z direction of the sample, which is influenced by the exchange of central cations and the existence and size of the corresponding balancing cation 18.
Thermal inspection
This method is also called differential thermal analysis, in which the sample is heated at a constant rate up to approximately 1000 °C. The endothermic and exothermic effects occurring in the material are recorded by a suitable device 19.
Chemical inspection
Despite the development of advanced techniques to study clay, chemical analysis is still necessary for the identification of clay minerals 20.
Dye adsorption
In this method, the soil sample is treated with acid; the colours developed by the adsorbed dye are then influenced by the characteristic base-exchange capacities of the different clay mineral groups present 21.
Scanning electron microscope
This is a powerful technique for studying the formation, texture, and fabric of the tested clay sample. The high-energy electron beam generates a variety of signals at the surface of the clay sample that reveal the morphology, chemical composition, and crystalline structure of the sample 22.
Identification of the degree of soil expansivity
The degree of soil expansivity is commonly assessed using a coefficient called the swell potential, which is identified in the previous literature according to the liquid limit, plasticity index, or shrinkage limit. Reference 16 presented tables for each individual characteristic based on the associated references, and Table 1 represents the degree of expansivity based on the above parameters. The free swell index (FSI) is computed as:

FSI (%) = (Vd − Vk) / Vk × 100 (1)

where Vd = reading of the graduated cylinder containing distilled water and Vk = reading of the graduated cylinder containing kerosene. According to IS: 1948-1970, the FSI can be considered low if it is less than 50%, medium if it is between 50% and 100%, high if it ranges between 100% and 200%, and very high if it is more than 200%.
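A small sketch implementing Eq. (1) and the IS classification bands quoted above; the function names and the example volumes are invented for illustration.

```python
def free_swell_index(vd_ml: float, vk_ml: float) -> float:
    """FSI (%) = (Vd - Vk) / Vk * 100, where Vd is the soil volume read
    in distilled water and Vk the volume read in kerosene (Eq. 1)."""
    return (vd_ml - vk_ml) / vk_ml * 100.0

def classify_fsi(fsi: float) -> str:
    """Expansivity class per IS: 1948-1970 as quoted in the text."""
    if fsi < 50:
        return "low"
    if fsi <= 100:
        return "medium"
    if fsi <= 200:
        return "high"
    return "very high"

fsi = free_swell_index(vd_ml=18.0, vk_ml=10.0)
print(fsi, classify_fsi(fsi))  # 80.0 medium
```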
Oedometer test
The oedometer test is a beneficial and common test in geotechnical engineering. It is used to determine swelling parameters of a soil sample such as the swell pressure and the swelling index. The swell pressure (Ps) can be defined as the pressure required to consolidate the swelling soil back to its original volume before introduction of water 24. The parameter Ps can be determined from the void ratio (e) versus log effective stress (log σ′) curve of the soil sample corresponding to the loading stage. The swelling index Cs, which represents the slope of the unloading portion of the e-log σ′ curve, can be calculated as follows:

Cs = Δe / Δ(log σ′) (2)

where Δe = change in void ratio and Δ(log σ′) = change in the logarithm of effective stress.
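As a worked illustration of Eq. (2), Cs can be read off two points on the unloading branch of the e-log σ′ curve; the point values below are invented for illustration.

```python
import math

def swelling_index(e1: float, sigma1: float, e2: float, sigma2: float) -> float:
    """Cs = delta_e / delta(log10 sigma') between two points
    (sigma1, e1) and (sigma2, e2) on the unloading curve, per Eq. (2)."""
    return (e2 - e1) / (math.log10(sigma1) - math.log10(sigma2))

# Unloading from 400 kPa (e = 0.62) down to 100 kPa (e = 0.68):
print(swelling_index(e1=0.62, sigma1=400.0, e2=0.68, sigma2=100.0))  # ~0.10
```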
Oil contamination effect on expansive soil properties
Expansive soils are considered problematic soils that pose geotechnical and engineering challenges all over the world 26. The behaviour of expansive soil under oil product pollution is still obscure and unclear. Generally, clayey soils behave in a complicated manner in the presence of organic liquids 27,37. When oil spills, it moves downward under gravity, spreads horizontally by migration, and finds its way into the soil system, changing its properties 28. It is necessary for the geotechnical engineer to investigate the engineering behaviour of clayey and expansive soils in order to analyse the suitability of such polluted zones for future civil engineering construction 25. There is a lack of studies and knowledge about the effect of oil contamination on the geotechnical properties of expansive soil. Daka, 2015 24 attempted to fill the void formed by the deficiency of data on the effect of oil on soils that contain montmorillonite. This research highlights this gap of knowledge.
Results of Previous Experimental Works on Contaminated Expansive Soil
Based on other researchers' work, the following results have been extracted from their investigations. Harsh et al., 2016 25 studied the impact of oil pollution on the properties of kaolinite clay and expansive (black cotton) soil. Outcomes of the expansive soil tests revealed that specific gravity decreased as oil content increased, because the density of hydrocarbons is much lower than that of water. The liquid limit decreased as the percentage of oil pollution increased, in contrast to Daka, 2015 24, whilst the plastic limit and shrinkage limit increased with oil addition. The study also examined the variation of swelling potential due to oil contamination: the free swell index, an important parameter for evaluating swelling potential, increased with oil content. It was inferred that the properties of expansive soil are susceptible to deterioration on exposure to oil and are less reliable for any engineering project in the vicinity of a contaminated zone.

Another study, on the effect of petrol and diesel contamination on black cotton soil, which covers about 70% of central India, was conducted by Pusadkar and Bharambe, 2014 29. It was observed that the Atterberg limits of the polluted soil increased, while the maximum dry density, specific gravity, CBR, and swelling pressure were reduced. The research reported that the effect of contamination is quite similar to that of water, increasing inter-particle slippage. A few attempts have reported that adding oil products can be used as a technique to stabilize swelling ground.

In Iraq, the hazard of soil contamination by petroleum products increases with the development of exploration, production, and transportation 36. Majeed, 2017 28 studied the effect of various petroleum products on expansive soil properties. The soil was obtained from Karkuk city. Petroleum products such as kerosene, gasoil, and cut-back asphalt (MC-30) were added in different percentages (2, 4, 6, 8 and 10%) in order to improve the properties of the expansive soil. It was concluded that for all petroleum products, the addition of any percentage reduces the values of geotechnical properties such as Atterberg limits, maximum dry unit weight, free swell index, and swell pressure, which in turn reduces the volumetric changes. Kerosene was the best petroleum product for treating expansive soil. Recently, Zeini and Al-Abdaly, 2020 29 showed that fuel oil can be used as an improvement agent for expansive soil properties. The study was conducted on swelling soil samples collected from a region in the west desert of Iraq. Outcomes revealed that 8% by weight is the optimum percentage to reduce the swelling potential.
Use of electrical resistivity to assess expansive soil properties
Electrical resistivity (ER) can be defined as the ability of a material to oppose the flow of electricity. It is an attractive tool for monitoring the subsurface and identifying changes in soil properties, and it depends essentially on the degree of saturation, porosity, clay content, and temperature. The ER property has recently been widely employed for assessing geotechnical parameters of soil, including the swelling index. However, under circumstances of hydrocarbon products leaking into expansive ground, the use of this technique for identifying the degree of contamination is still obscure and needs more investigation. Moreover, the effect of oil contamination on the swelling properties of an expansive soil characterized by high volume change upon drying or wetting has not been considered. Although a few recent attempts reported adding oil products as an improvement technique to stabilize expansive ground [28], they did not interpret the relation between the degree of pollution and the swelling index for different expansive soils. Thus, the behaviour of such soil upon contamination with hydrocarbon products is still obscure and needs enhanced understanding.

On the other hand, conventional laboratory experiments remain, in most circumstances, costly and require much effort and time to perform. That is why some researchers have employed the ER method in their studies; related to this topic, see Liu et al. [30]. The ER method is economical, quick, non-destructive, and applicable to different soil types; its measurements can deliver beneficial information about moisture, densification, and salinity of the subsoil [31][32][33]. Moreover, the relationship between the electrical data and oil content can be quickly established, since the ER of petroleum substances in the soil differs in value from that of water [6]. The ER method is also very useful in evaluating the degree of contamination, as the electrical response is altered by oil type and content [30]. For non-polluted expansive soil, the ER method has been identified in previous literature as highly sensitive to mineralogical properties, including montmorillonite and clay content [34]. Moreover, the ER response can be correlated with swell parameters such as the swelling potential of expansive soils [35]. However, the electrical response of contaminated expansive soil, and the extension of the correlation between ER and swelling potential to include the percentage of pollution with hydrocarbon substances, was not considered in previous literature.
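For reference, the definition of resistivity in the first sentence corresponds, for a uniform laboratory sample between two plate electrodes, to ρ = R·A/L. The soil-box idealization and the numbers below are assumptions for illustration, not measurements from the reviewed studies.

```python
def resistivity_ohm_m(resistance_ohm: float, area_m2: float,
                      length_m: float) -> float:
    """rho = R * A / L for a uniform prismatic soil sample of
    cross-sectional area A and length L between plate electrodes."""
    return resistance_ohm * area_m2 / length_m

# 0.05 m x 0.05 m cross-section, 0.10 m long, measured R = 1200 ohm:
print(resistivity_ohm_m(1200.0, 0.05 * 0.05, 0.10))  # 30.0 ohm-m
```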
Statistical Results and Main Findings
According to the collected data and the associated sources of this study, it can be noticed that the majority of the research has focused on the performance of various types of soil under oil product contamination. However, few studies (about 10%) have addressed the influence of oil contamination on the behaviour of expansive soil (see Figure 1a). Also, although many previous geophysical studies employed the electrical resistivity technique to assess the electrical response of different soil types, only 15% of them were dedicated to the electrical response of uncontaminated expansive soil and only about 5% to contaminated expansive soil (see Figure 1b). Thus, the electrical behaviour of expansive soil upon the addition of oil products is still obscure and needs more investigation. The lack of related data highlights the gap of knowledge within this topic.
Figure 1. Percentages of related studies: (a) geotechnical studies upon contamination; (b) geophysical studies using the ER method.
Conclusion
Expansive soil exhibits swelling and shrinkage potential due to variation in moisture content, resulting in structural failures in civil infrastructure and great annual financial losses, and it extends over many regions worldwide. Several methods have been employed to identify the mineral types and the degree of expansivity of expansive soil; montmorillonite is the main mineral component of such soils. However, the effect of hydrocarbon pollution on the geotechnical properties of swelling soils has not received sufficient attention in previous studies and still needs clarification. A few studies have revealed that oil products can be used as an improvement agent for expansive soil properties. Moreover, the positive relationship between electrical resistivity and oil content shows that this technique can be used as a quick, economical tool to investigate the properties of contaminated expansive soil, although research in this field remains very limited. Finally, the results of previous studies could be used as a guide for soil remediation of polluted ground and contribute to producing a reliable alternative to costly, time- and effort-consuming conventional tests.
"year": 2021,
"sha1": "d3459f13bd6433ccabb79f5374c5b54b74a0f00e",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/1076/1/012097",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "2c5e23d205ad980886ef66a1e81d2b39adaab635",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Geology"
],
"extfieldsofstudy": [
"Physics",
"Environmental Science"
]
} |
Medically unexplained illness and the diagnosis of hysterical conversion reaction (HCR) in women’s medicine wards of Bangladeshi hospitals: a record review and qualitative study
Background Frequent reporting of cases of hysterical conversion reaction (HCR) among hospitalized female medical patients in Bangladesh’s public hospital system led us to explore the prevalence of “HCR” diagnoses within hospitals and the manner in which physicians identify, manage, and perceive patients whom they diagnose with HCR. Methods We reviewed admission records from women’s general medicine wards in two public hospitals to determine how often and at what point during hospitalization patients received diagnoses of HCR. We also interviewed 13 physicians about their practices and perceptions related to HCR. Results Of 2520 women admitted to the selected wards in 2008, 6% received diagnoses of HCR. HCR patients had wide-ranging symptoms including respiratory distress, headaches, chest pain, convulsions, and abdominal complaints. Most doctors diagnosed HCR in patients who had any medically-unexplained physical symptom. According to physician reports, women admitted to medical wards for HCR received brief diagnostic evaluations and initial treatment with short-acting tranquilizers or placebo agents. Some were referred to outpatient psychiatric treatment. Physicians reported that repeated admissions for HCR were common. Physicians noted various social factors associated with HCR, and they described failures of the current system to meet psychosocial needs of HCR patients. Conclusions In these hospital settings, physicians assign HCR diagnoses frequently and based on vague criteria. We recommend providing education to increase general physicians’ awareness, skill, and comfort level when encountering somatization and other common psychiatric issues. Given limited diagnostic capacity for all patients, we raise concern that when HCR is used as a "wastebasket" diagnosis for unexplained symptoms, patients with treatable medical conditions may go unrecognized. We also advocate introducing non-physician hospital personnel to address psychosocial needs of HCR patients, assist with triage in a system where both medical inpatient beds and psychiatric services are scarce commodities, and help ensure appropriate follow up.
Background
Unexplained somatic symptoms such as pain, fatigue, and dizziness are common in primary care and general medicine settings worldwide [1,2]. The frequently-changing terminology for such ailments includes "functional", "psychogenic", "non-organic", "somatoform", "idiopathic", and "medically-unexplained", along with the largely-historical terms "hysteria", "hysterical neurosis", and "hysterical conversion," and the now exclusively neurological "conversion disorder". Bodily symptoms without identifiable underlying pathology reveal connections between mental and physical health and are often associated with underlying mood or other psychiatric disorders [3,4] which tend to go under-diagnosed [5].
Symptoms of somatization are more commonly reported by women than men, [6] but they are more strongly associated with emotional distress than with gender [7][8][9]. Somatization may increase in settings where physical symptoms are more accepted than emotional or psychological symptoms or where treatment for physical illness is more readily available [10]. Indeed, in southern India, higher concern about and sensitivity to stigma has been shown to correlate with patients reporting more somatization symptoms and fewer depressive symptoms [11]. Medically-unexplained physical symptoms pose unique challenges for health care providers. Patients' symptoms are difficult for clinicians to understand; [12] a suspicion that patients are feigning often leads to patient-provider conflict; [12,13] associated health care utilization and costs are high; [14] and treatment, although often possible, requires time and close patient-provider cooperation [13,15].
A 1975 study from Dhaka, Bangladesh, describes frequent psychogenic symptoms among patients, particularly young females, seeking outpatient medical care [16]. In neighboring India, somatic complaints such as body aches, gynecological symptoms, or weakness and tiredness are the principal descriptors used by women with depression, [17] and incidence rates of "hysteria" or "dissociative conversion" from 0.2% to 3.2% were reported in population field surveys in West Bengal villages [18]. Outbreaks of mass sociogenic illness in Bangladesh, including a widely investigated outbreak among adolescent schoolgirls in 2007 characterized by headaches, weakness, and sensory disturbances, [19] further suggest extensive somatization.
In the course of surveillance for hospital-acquired infections in Bangladeshi teaching hospitals, we noted frequent diagnoses of hysterical conversion reaction (HCR) in the admission logs of women's general medicine wards, prompting us to further investigate HCR within the local medical culture. This article reports the prevalence of HCR diagnoses (as made by the treating clinicians) among adult female patients in two hospitals and describes how physicians identify, manage, and perceive patients whom they diagnose with HCR.
Methods
We conducted our study at two Bangladeshi government-funded medical college hospitals in distinct geographic regions where investigators had ongoing surveillance. Both have emergency departments, men's, women's, and pediatric general medicine wards, and specialty consultation services including neurology and psychiatry, although no psychiatric ward.
For one women's medicine ward at each study hospital, we reviewed all admission logbooks from the 2008 calendar year to identify patients whose admitting, interim, or discharge diagnoses included HCR. For these case patients, we abstracted age, admission and discharge dates, and initial, interim, and final diagnoses. We performed statistical analyses using Intercooled Stata v9.1.
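The logbook review described here reduces to simple aggregation. Below is a minimal, hypothetical pandas sketch of the kind of summary computed (the authors used Intercooled Stata v9.1; the column names and values are invented):

```python
import pandas as pd

# Hypothetical logbook extract: one row per admission.
log = pd.DataFrame({
    "age": [25, 19, 35, 42, 23],
    "admit": pd.to_datetime(["2008-01-03", "2008-02-11", "2008-03-02",
                             "2008-05-19", "2008-07-30"]),
    "discharge": pd.to_datetime(["2008-01-05", "2008-02-12", "2008-03-06",
                                 "2008-05-20", "2008-08-01"]),
    "hcr": [True, True, False, False, True],
})

prevalence = 100 * log["hcr"].mean()                  # % of admissions with HCR
los_days = (log["discharge"] - log["admit"]).dt.days  # length of stay
print(f"HCR diagnoses: {prevalence:.0f}% of admissions")
print("median age, HCR patients:", log.loc[log["hcr"], "age"].median())
print("median LOS, HCR patients (days):", los_days[log["hcr"]].median())
```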
To better understand how physicians conceptualized HCR and how they diagnosed and treated HCR patients, we conducted key informant interviews at the study hospitals in April-May 2009 with a convenience sample of 13 physicians who regularly cared for HCR patients. We aimed to obtain a sample of physicians of both sexes with diverse seniority and specialization. Semistructured interviews, conducted in English or through a Bengali-English interpreter, used open-ended questions focused on physicians' experiences diagnosing and treating HCR patients, and included questions about the criteria they used to make HCR diagnoses, the treatments their patients typically receive, their perceptions about the causes of HCR, and their views of how HCR impacts the hospital system.
Interviews were audio-recorded, transcribed verbatim, and coded manually using codes derived from the original research objectives and from additional emergent themes, ensuring mutually-exclusive and exhaustive coding categories. We analyzed data using content analysis to understand recurrent themes across interviews [20]. Multiple coauthors reviewed the sorted primary data on key themes and reached consensus regarding their valid interpretation. Hospital logbooks provided a source of triangulation regarding the reasons for hospitalization and hospital course of HCR patients. We report illustrative quotations verbatim and otherwise summarize findings.
Institutional review boards at icddr,b and Vanderbilt University approved the study. The Government of Bangladesh and authorities at individual hospitals approved the use of log books. All participants gave written informed consent for participation, and we conducted interviews in private settings. We refer to hospitals as "A" and "B" in the interest of participant confidentiality.
Prevalence of HCR diagnosis
During 2008, Hospital A admitted 2520 patients and Hospital B admitted 5652 patients to the selected women's medical wards. Among these, 171 (7%) of the 2520 Hospital A patients and 277 (5%) of the 5652 Hospital B patients received HCR diagnoses. Because we were specifically interested in the use of the term "HCR", these tallies exclude diagnoses of "conversion disorder" (8 occurrences), "conversion" (3 occurrences), "functional disease," and "a case of a psychogenic problem" which also appeared in logbooks, unless the patient also had HCR listed as a diagnosis. The median age of adult female HCR patients was 25 years (interquartile range (IQR) 19-35 years) (Figure 1). The median length of stay for HCR patients in Hospital A (where ~80% of discharge dates were documented) was 2 days (IQR 1-3 days); Hospital B recorded discharge dates too infrequently for analysis.
Sixty percent (267) of the 448 HCR patients retained their HCR diagnoses from admission to discharge (Table 1), and all but 16 of these received the HCR diagnosis exclusively (as opposed to being diagnosed with "HCR and X"). The other 40% of HCR patients had their diagnoses revised to or from a wide range of medical, psychiatric, and syndromic diagnoses ( Table 2). A pattern of revision from HCR to some other medical or psychiatric diagnosis or syndrome predominated in Hospital B (where 78 of 100 diagnostic revisions were from HCR to non-HCR), whereas revisions from a non-HCR diagnosis (typically from a physical symptom or syndromic diagnosis) to HCR predominated in Hospital A (where 20 (26%) of 76 diagnostic revisions were from HCR to non-HCR) ( Table 1). We also noted twelve cases in which an initial HCR diagnosis was retained while a medical diagnosis such as urinary tract infection, pelvic inflammatory disease, peptic ulcer disease, or hypertension was added, and four cases in which HCR was removed in a revision, leaving only an accompanying medical diagnosis (for example, "HCR with PUD [peptic ulcer disease]" became simply "PUD").
Physician informants
Interviews were conducted with 13 physicians: 5 in Hospital A and 8 in Hospital B. These physicians comprised a range of specialties and seniority levels ( Table 3), but few female physicians could be recruited from the predominantly-male hospital staff.
Physicians' definitions of HCR
Physicians characterized HCR as illness with organic-seeming symptoms but without discernible organic etiology. Some physicians considered HCR as a possible diagnosis only when patients presented with neurological syndromes, but most, including the neurologist and psychiatrists, considered HCR when patients presented with any medically-unexplained physical complaint. Convulsions and respiratory distress topped most physicians' lists of symptoms seen in HCR, and physicians named women in their 20s as the group most likely to have HCR. A few sought a history of personal conflict as a prerequisite to making an HCR diagnosis.
Physicians had learned about HCR by observing more-senior physicians rather than through formal medical school lessons. Some noted that "HCR" was considered outdated terminology outside of Bangladesh, and most of these physicians equated HCR with conversion disorder, dissociative disorder, or hysterical personality disorder as termed elsewhere. The remaining physicians, however, felt that HCR was a problem either quantitatively or qualitatively unique to Bangladesh, and several expressed dismay at the insufficient published literature or academic interest regarding HCR.
Views about psychological basis of HCR
Physicians considered HCR to occur in reaction to stress and psychological conflict, often as a recurrent and maladaptive response. They named a variety of stressors they felt contributed to HCR. Family and marital disharmony, including upcoming arranged marriages, conflicts with in-laws, and neglect by husbands, were the most commonly cited triggers of HCR. Several senior physicians also specifically cited domestic violence or sexual abuse as potential triggers. Poverty, malnourishment, and financial worries were other commonly-cited stressors. In younger patients, scholastic exam season was observed to trigger an increased frequency of HCR cases, and many physicians also connected HCR with "love affairs," or dating relationships that ran counter to cultural norms or family marriage arrangements. "A young girl," one physician described, "about 18 years old, a college student, was in love with a classmate, but her father wouldn't consent [to the relationship]. He confined her to the house, made sarcastic comments. . . Gradually the patient developed fits and pseudoseizures. Doctors in the medicine department diagnosed the case as HCR. After two or three weeks [of outpatient psychiatric treatment], she was okay" (5).
Physicians perceived HCR symptoms as a nonverbal declaration of problems by young or uneducated individuals who lacked the communication skills, insight, or support network to express their distress directly. HCR illness could be a way of escaping from unpleasant circumstances or of seeking attention, sympathy, or assistance from husbands or other family members.
Physicians disagreed about whether HCR was distinct from malingering. Many felt that HCR patients had no conscious control over their symptoms. Other physicians, however, suspected HCR patients of deliberately seeking attention or respite; in support of this opinion, they observed that HCR symptoms sometimes appeared and disappeared suddenly depending on who was present, and that HCR patients sometimes convulsed or fell without hurting themselves, suggesting willful attention seeking.
Evaluation of suspected HCR patients
Young, healthy women with normal vital signs brought to the emergency department after acute onset of severe symptoms frequently received provisional diagnoses of HCR. In some cases, emergency physicians assigned these diagnoses within seconds of the patients' arrival. Some of these HCR patients, especially those with hyperventilation, recovered rapidly and returned home after receiving interventions intended to calm them, such as supplemental face mask oxygen. Typically, however, emergency physicians reported that suspected HCR patients were admitted to the medical ward. Medical reasons for admission included severe symptoms, comorbidities, and diagnostic uncertainty. In addition, demands by patients' families for further treatment of patients' complaints, as well as pressure to vacate emergency room beds for new patients, resulted in additional admissions.
On the medical wards, interns evaluated patients and assigned diagnoses upon admission. Interns estimated that senior physicians later amended their HCR diagnoses about 20% of the time. Senior physicians described how their own history-taking delved into the circumstances surrounding the onset of HCR symptoms, and they repeatedly asked patients or their accompanying relatives or friends about any arguments, disappointments, or stressful situations that could have triggered conversion. A few physicians mentioned that they were vigilant for hints of physical or sexual abuse when talking with HCR patients, although none indicated that they routinely asked explicit questions about abuse.
Physical examinations of suspected HCR patients included basic examinations of the heart, lungs, and abdomen, along with symptom-specific assessments such as palpating rib cartilage for tenderness in patients with chest pain and looking for bite marks on the tongues of patients with convulsions. Additional tactics for exploring whether a symptom was psychogenic included observing whether a limp, supine patient whose hand was released over her allowed it to land on her face, watching for corresponding movement of a "paralyzed" leg when the contralateral leg was moved, and monitoring repetitive convulsive movements for inconsistency. Certain HCR management strategies blurred the line between diagnosis and treatment. Physicians described the threat or actual use of nasogastric tubes, urinary catheters, wide-bore needles, or electric shocks - "punishments" (6), in the words of one physician - as a routine method of trying to provoke unresponsive patients to respond.
Physicians reported that they used laboratory or imaging studies in only a minority of HCR cases. In particularly-unclear instances, they might order an electrocardiogram, blood cell count, or chest x-ray in a suspected HCR patient with severe or persistent chest pain, or they might refer a patient with severe headaches, vomiting, visual complaints, or convulsions outside the hospital for computerized tomography. Often, however, they did not expect to obtain diagnostic information even from the few investigations they did perform; although four of the physicians mentioned doing blood tests such as complete blood counts and blood glucose levels for certain symptoms in suspected HCR patients, two of the four clarified that these tests were "only to show the patient that we are trying to find out the cause" (8) or so that "the patient thinks that she is under treatment" (11).
When considering HCR, physicians' differential diagnoses could include asthma, congenital heart disease, depression or anxiety disorders, encephalitis, or stroke. Physicians consistently acknowledged that some patients diagnosed with HCR had unrecognized medical problems, although senior physicians believed that medical diagnoses only rarely escaped their detection. During interviews, physicians cited examples such as cardiac arrhythmias (8). Also, one professor who suggested an association of HCR with dyspareunia and pelvic infections asserted that many doctors evaluate gynecologic symptoms incompletely and assign HCR labels hastily to patients who might have gynecologic disease.
Management of HCR
The majority of physicians reported that within the hospital, they or their medical colleagues often treat HCR patients with "sedatives" or "anxiolytics" such as short-acting benzodiazepines and the combination antipsychotic and tricyclic antidepressant flupentixol/melitracen. For agitated HCR patients, they also might use chlorpromazine or haloperidol. When they discharged patients, some internists provided prescriptions, explaining that "we know that most patients ultimately will not go to the psychiatrist, so we try to replace the role of the psychiatrist" (8). If prescribed, typical regimens included a few months' course of clonazepam together with a selective serotonin reuptake inhibitor (SSRI) or tricyclic antidepressant and sometimes an antipsychotic such as flupentixol or olanzapine. Besides these psychoactive medications, physicians named a variety of treatments that they gave to HCR patients as placebos. They described measures such as intravenous saline, supplemental oxygen, vitamins, and painkillers as being medically unnecessary for HCR but given because patients and their families requested and expected them. "This way the patient thinks that she is under treatment. . . Then, in a very short period, she becomes okay" (11). They reported that even sham treatments such as disconnected oxygen masks or intravenous distilled water "readily cured" (7) some HCR patients. One doctor, pointing to intravenous fluid bags hanging throughout his ward, explained, "intravenous support is psychological support" (6). After HCR patients received medical attention, their symptoms tended to resolve rapidly, and they usually returned home within a day or two.
Many physicians felt that talking with patients was more important than medications for treating HCR and preventing its recurrence. Internists and emergency physicians described how they reassured patients while still acknowledging their suffering. They might tell patients, "Definitely you have pain. The good thing is that you don't have heart disease" (12), or, "Your disease has been diagnosed, you are getting the proper medications, and you will be fine" (7). Sometimes, medical doctors also mediated patients' interpersonal conflicts themselves, talking with the patients' family members to sort out disagreements.
Physicians found it difficult to counsel HCR patients, however. First, busy doctors had little time to talk extensively with hospitalized patients. Second, patients and their families believed that dramatic physical symptoms implied life-threatening medical illnesses, and they distrusted doctors who pronounced them physically well. Third, social stigmatization of psychiatric illness in Bangladesh hampered mental health treatment; physicians shied away from assigning or explaining psychiatric diagnoses, and patients who did receive such diagnoses were reluctant to accept or seek treatment for them. Finally, scarcity of mental health professionals made appropriate treatment difficult to find.
Follow-up after discharge
Physicians believed that the care psychiatrists could offer - including education about psychogenic symptoms, assistance resolving interpersonal conflicts, and treatment of comorbidities such as anxiety or depression - would in theory help HCR patients, and if any follow-up care was offered to HCR patients, it was an appointment with a psychiatrist. Psychiatry referrals were inconsistent, however; a house officer in Hospital B estimated that one third of HCR patients in his hospital received them. Some physicians refused to refer either because psychiatric diagnoses would upset patients or because the physicians doubted the quality of locally-available psychiatric care. Other physicians provided psychiatry referrals but without explaining to the patients what the "HCR" on their discharge certificates meant. These physicians suspected that few of the patients with referrals attended any psychiatry appointments.
Physicians' frustrations regarding HCR
Although multiple physicians cited empathy as a critical component of effective interactions with HCR patients, they reported that such patients often provoked negative reactions from doctors. HCR's prevalence, together with how rarely physicians could identify serious medical illness among HCR patients, led physicians to downplay HCR: "It's a usual problem, why take it seriously, why do a test, it's okay," one quoted his colleagues as saying (6). Interns sometimes called HCR patients "disturbing and annoying" and mocked their complaints "like 'pain . . . from head to toe!'" (1). Annoyance at HCR patients was compounded by many doctors' suspicions that HCR patients manufactured their symptoms. "Sometimes because we are overburdened, we think, 'Oh, she is malingering; give her a diazepam injection and forget [about her]'" (8).
Physicians also begrudged the time and space that HCR patients consumed. Many expressed frustration at the way HCR patients' often-melodramatic personalities, together with their initially acute-seeming symptoms, distracted medical staff's attention from other ill patients. Physicians also felt that HCR patients consumed precious hospital space and physical resources. "We are already loaded with patients. . . Beds are occupied and there is no space even on the floor. So they are hampering our [ability to take] care of other patients" (9).
For medical doctors with little training or experience in psychiatric care, the need to step outside of familiar biomedical frameworks in order to effectively manage HCR was an additional source of unease. A perceived lack of concern about HCR in the broader medical community further confused and disheartened physicians.
Physicians' suggested solutions
Physician informants identified both hospital-and societal-level changes that they believed would help reduce the prevalence of HCR within medical wards. First, physicians consistently wanted to increase capacity to treat patients with psychiatric problems. Some wished for expanded psychiatric specialty services, including more training of psychiatrists or creation of inpatient psychiatric facilities. Others favored increased psychiatric training and support, through continuing education and consult-liaison services, for primary care physicians and internists.
Another desired intervention, which would help potential HCR patients before they reached the hospital, was a network of community-or school-based counselors or social workers to assist with domestic conflict resolution or speak with troubled students. Many physicians also suggested that media campaigns to educate the public about the existence of psychogenic illness would be helpful. Finally, some physicians, considering HCR "a social disorder" (13), felt that elevating the status of women in Bangladeshi society was the most critical prevention measure.
Meaning and frequency of HCR diagnosis
Six percent of women hospitalized during 2008 in the medical wards we studied received diagnoses of HCR. Therefore, physicians in each study ward made an HCR diagnosis nearly every day that they admitted patients.
Despite the commonness of the HCR diagnosis, we encountered ambiguity among physicians regarding the definition, etiology, and management of HCR. Bangladeshi physicians applied the HCR diagnosis to a wide range of acute neurological and non-neurological complaints for which they saw no underlying medical cause, or in some instances to any acute complaint in a young woman with a "hysterical" personality. They also reported difficulty distinguishing involuntary psychogenic symptoms from malingering, and in some cases from organic symptoms. This caused HCR to function as a wastebasket term for medically-unexplained patient presentations.
Bangladeshi physicians were correct to note that the term "HCR", which is outmoded and imprecise, appears rarely in contemporary medical literature. These physicians' intended meaning, however, places the HCR of Bangladesh within a spectrum of somatoform, non-organic, functional, or medically-unexplained illness that is a worldwide problem. They explained HCR using a psychoanalytic conceptual framework similar to the conversion disorder of psychiatry's current diagnostic manual, [21] although they did not limit HCR's application to syndromes involving neurological deficits.
We doubt that the 6% of patients receiving HCR diagnoses accurately reflects the prevalence of somatoform illness within our study hospitals. Physicians may have over-diagnosed non-organic illness because they lacked resources to fully medically evaluate patients and because they applied vague HCR terminology overly broadly. Conversely, HCR diagnoses could underrepresent somatoform illness, not only because alternate terminology (e.g., conversion disorder) was used on occasion, but also because of patient and provider discomfort with psychogenic diagnoses. The several final diagnoses of "costochondritis" that replaced "HCR", for example, suggest that physicians were giving medical labels to symptoms such as chest pain which they suspected were functional and for which they had never found a clear organic cause.
Few data on rates of psychogenic illness among medical inpatients are available for comparison. Most studies either are limited to neurological presentations (with prevalences of conversion disorder reported at much less than 1% among US and UK general hospital and emergency ward patients), [22] or they consider somatization in outpatient settings (with prevalences of somatization disorder of approximately 20% in primary care practices at all wealth and development levels in a 1990s WHO survey) [1]. One Danish survey found a 20% prevalence of DSM-IV somatoform disorders among medical inpatients, but many of the disorders they included, such as hypochondriasis and chronic pain, were not the reasons for admission [2].
Psychiatric support
Although medical doctors anecdotally find counseling effective as treatment for HCR, and although HCR patients may often have other psychiatric comorbidities, doctors lack the time, space, and training to consistently perform counseling themselves. No psychiatrists, social workers, or other trained support staff are available in the inpatient setting to assist them. After patients leave the hospital, psychiatric follow up is limited by availability, geographic access, and stigma. Recent prevalence estimates for mental health disorders in Bangladesh range from 16% [23] to 28%, [24] and 5-6% of women consider suicide each month, [25] but mental health services are scarce. Bangladesh has 7 psychiatrists per 10 million people (mostly concentrated near Dhaka), [26] compared with 20 per 10 million in neighboring India [27] and 1200 per 10 million in the United States [28]. Clinical psychologists are even rarer, and psychiatric training for generalists and internists is also minimal [26]. Furthermore, less than 0.5% of total government healthcare spending goes toward mental health - a low figure even compared with other low-income countries, which spend an average of 2% of their healthcare budgets on mental health [29].
The physicians we spoke with observed stresses in HCR patients' lives ranging from financial or scholastic pressure to domestic conflicts and abuse. The standard medical treatments for HCR, such as sedatives and various minimal-impact interventions chosen to make patients feel like they are being treated, do not address social or psychological factors behind patients' symptoms. The frequent recurrence of HCR that physicians described highlights the inadequacy of such a system. Psychiatric interventions for somaticizing patients do not have to be resource-intensive, however. An American study of patients with medically-unexplained symptoms showed that single counseling sessions - specifically, intensive short-term dynamic psychotherapy sessions in the emergency department - could reduce repeat visits for similar symptoms [30].
Medical illness and HCR
Admission logs and physicians' remarks suggest that undiagnosed medical problems also underlie some HCR cases. Most HCR diagnoses were made at the time of admission before full diagnostic evaluations were performed. Although we cannot independently validate the diagnoses made, admission logs show that physicians revised some HCR diagnoses as various medical illnesses were identified. Physicians described additional cases in which explanatory medical diagnoses were made long after discharge.
By assigning HCR diagnoses based on initial impressions - especially in the context of these hospitals' brief inpatient observation periods, limited availability of diagnostic testing, and overburdened staff - doctors make it easier for themselves to potentially dismiss important medical problems. First, underlying organic causes of patients' presenting symptoms may go unrecognized. Among patients diagnosed with conversion disorder, the fraction with unrecognized underlying neuropathology is estimated at around 4%, [31] and even these imperfect levels of diagnostic accuracy are attained via thorough specialist evaluations [32] or modern diagnostic tools [33] that were not routinely available to most hospitals in low-income countries such as our study hospitals. Comparable rates of missed medical causes for a broader class of acute somatoform diagnoses, as in our study patients, are unknown. Second, even if a patient does have somatoform illness, this diagnosis may prevent other appropriate medical care. Clinicians' recognition of conversion disorder has been noted to impede the diagnosis of unrelated but coexisting medical conditions [33]. Circumspection is warranted when evaluating a wide array of reported symptoms in patients who seem to fit the hysteria profile.
Labeling patients as "hysterical" seemingly absolves the medical team from further medical workup and patient care. It may also propagate gender disparities in health care when physicians (the vast majority of whom were male in our study hospitals) apply the label mostly to women. Using discord in patients' personal lives as confirmation of HCR diagnoses is similarly fraught with potential to disguise real medical problems. The presence of emotional stress prior to onset of illness has been shown to be a poor differentiator between conversion and organic illness [34]. Not only are interpersonal conflict and "family problems" universal, but domestic violence is widespread in Bangladesh, affecting 60% of women at some time in their lives [35]. Coincidence of physical symptoms with arguments at home is likely to frequently occur by chance.
Limitations
Our qualitative, hospital-based study is unable to fully characterize the burden, causes, or outcomes of HCR in Bangladesh's medical system. This study is not intended to determine prevalence of somatoform illness on either a hospital or a population level, and it does not independently verify or refute the diagnoses made by treating physicians. Our qualitative data is limited because of a small number of respondents and only a single round of exploratory interviews, and despite efforts to minimize bias in data collection and interpretation, this study's conclusions reflect the opinions and experience of a few individuals. Because we did not directly evaluate patients, our understanding of the typical features of HCR cases is based on physician recall, which may be biased toward the most memorable cases, and on the minority of HCR cases who had alternate diagnoses listed in logbooks, who may not be representative. Also, the admission logs did not allow us to determine outcomes of case-patients diagnosed with HCR, and apart from reports that many HCR patients later return with the same complaints, the long-term morbidity associated with HCR remains unclear.
Conclusions
Our study suggests that physicians commonly diagnosed young women presenting with shortness of breath, convulsions, pain, and other somatic complaints with HCR for lack of a more definitive diagnosis. Many likely have somatoform disorders, while others may have other medical and psychiatric disease that remains undiagnosed.
Interventions at the levels of medical education and hospital staffing could alleviate harms of the current system to physicians, HCR patients, and other patients in the ward. To help physicians identify and care for somaticizing patients more effectively and to decrease physicians' sense of frustration, we advocate regular psychiatric training for internists and general practitioners. In addition to teaching physicians to recognize and treat common problems such as depression, anxiety, and somatization in primary care and hospital settings, training sessions should encourage them to remove hysteria terminology from their arsenal of diagnoses and to defer somatoform disorder diagnoses until after some initial medical workup has excluded reasonable organic etiologies of illness. Although expanded psychiatric specialty services would also be desirable for Bangladesh, basic guidance for non-psychiatrists is a first step that could have broader impact. Close, longitudinal relationships with a physician are associated with reduced somatization, [36] and promotion of such doctor-patient relationships should be an aim when educating general practitioners about common mental health problems.
Second, both HCR patients and other patients in hospitals with a large burden of HCR diagnoses could benefit from the introduction of a single, trained, nonphysician counselor or social worker into the department of medicine. This individual could speak with patients believed to have somatoform illness, either in the emergency department (as in Abbass et al. above) or during their stay in the medical ward, providing patients with counseling and ensuring appropriate post-discharge follow-up while freeing doctors and nurses to tend to other duties. The existence of such a program could help to legitimize mental health concerns within the hospital environment, promoting more open discussion and better recognition and management of the medical, psychiatric, and social needs of HCR patients.
"year": 2012,
"sha1": "2c4ea45d846caf077a047869a6141f59feeb2574",
"oa_license": "CCBY",
"oa_url": "https://bmcwomenshealth.biomedcentral.com/track/pdf/10.1186/1472-6874-12-38",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c75994c21067afbc2e73a3d16efbf0590d1edd02",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Epidemiology of distal radius fracture: a regional population-based study in Japan
Background Distal radius fracture (DRF) is very common worldwide. In particular, aging countries have numerous patients with DRF, resulting in an urgent need for active preventive measures. As few epidemiological studies have investigated DRF in Japan, we aimed to identify the epidemiological characteristics of patients of all ages with DRF in Japan. Methods This descriptive epidemiologic study analyzed data obtained from clinical information of patients diagnosed with DRF from January 1, 2011, to December 31, 2020, at a prefectural hospital in Hokkaido, Japan. We calculated the crude and age-adjusted annual incidences of DRF and described the age-specific incidence, injury characteristics (injury location and cause, seasonal differences, and fracture classification), and 1- and 5-year mortality rates. Results A total of 258 patients with DRF were identified, of whom 190 (73.6%) were female and the mean age (standard deviation) was 67.0 (21.5) years. The crude annual incidence of DRF ranged from 158.0 to 272.6 per 100,000 population/year, and the age-adjusted incidence among female patients demonstrated a significant decreasing trend during 2011–2020 (Poisson regression analysis; p = 0.043). The age-specific incidence differed by sex, with peaks at 10–14 years for males and 75–79 years for females. The most common cause of injury was a simple fall in patients > 15 years of age and sports injuries in patients ≤ 15 years of age. DRFs were most frequently sustained outdoors and were more common in the winter season. In patients > 15 years of age, the proportions of AO/OTA fracture types A, B, and C were 78.7% (184/234), 1.7% (4/234), and 19.6% (46/234), respectively, and 29.1% (68/234) of patients received surgical treatment for DRF. The 1- and 5-year mortality rates were 2.8% and 11.9%, respectively. Conclusions Our findings were mostly consistent with previous global studies. Although the crude annual incidence of DRF was relatively high because of recent population aging, the age-adjusted annual incidence among female patients showed a significant decreasing trend during this decade.
Design, setting, and participants
This descriptive epidemiologic study analyzed data obtained from clinical information of patients diagnosed with DRF from January 1, 2011, to December 31, 2020, at a prefectural hospital in Hokkaido, Japan. Patients diagnosed with DRF were identified by searching for the International Classification of Diseases, 10th revision (ICD-10) codes S52.5 and S52.6 in the medical records database; each identified patient was carefully reviewed and their data were retrieved from the medical records. We excluded patients with ipsilateral re-fracture. This study was approved by the Jichi Medical University Clinical Research Ethics Committee (approval ID: 21-116). The Ethics Committee waived the need for informed consent because of the retrospective observational nature of the study.
Hokkaido is located in the north of Japan (Fig. 1A), and Tomamae district is a northern region in Hokkaido (44°N) (Fig. 1B) with a cold climate and snowfall from January to March and November to December. According to the government statistics of Japan [18], Tomamae district has an aging and declining population; from 2011 to 2021, the population decreased from 13,155 to 10,772, but the proportion of the population over 65 years old increased from 35.7% to 42.2%. We used registry data obtained from Hokkaido Prefecture Haboro Hospital, which is located in Tomamae district.
(Fig. 1 Location of Tomamae district, Hokkaido, Japan. A: The shaded area shows Hokkaido. B: Tomamae district is located in northern Hokkaido. Both maps have north at the top.)
Hokkaido Prefecture Haboro Hospital is the only hospital in the region that performs orthopedic surgeries. Therefore, almost all patients with fractures within the region are diagnosed and treated by orthopedic surgeons at Hokkaido Prefecture Haboro Hospital, and so the number of fractures diagnosed at this hospital approximates the total number of fractures incurred in the regional population. Patients were eligible for study inclusion if they were diagnosed with DRF at Hokkaido Prefecture Haboro Hospital during 2011-2020.
Measurements
Demographic information included patient age and sex. Clinical information included injury data (location, cause, and date), fracture data (side, fracture type, use of computed tomography [CT] for diagnosis, and complication of ulnar fracture), treatments, and mortality data. Injury locations were categorized as indoor (e.g., patient's residence, facility) or outdoor (e.g., public space, street). Injury causes were categorized as simple fall, fall from height, traffic accident, crush injury, sports injury, and others. Fracture types were classified according to the Arbeitsgemeinschaft für Osteosynthesefragen Foundation/Orthopaedic Trauma Association (AO/OTA) classification [19], which is the standard classification in orthopedic and injury medicine. Treatment modalities were classified into nonsurgical and surgical treatments. Information on survival and death at 1 year and 5 years after injury was obtained from the medical records.
Statistical analysis
First, we calculated the annual incidence of DRF from 2011 to 2020, using population data obtained from the government statistics in Japan [18]. The incidence rates were calculated by dividing the annual number of DRF cases by the population in the corresponding year and multiplying by 100,000 (i.e., per 100,000 population/ year). Furthermore, to evaluate the incidence of DRF adjusted for the impact of population aging, we calculated age-adjusted annual incidence rates with direct adjustment methods, using the standard population in the year 2000 as the reference. Poisson regression analysis was performed to assess the statistical significance of the annual trends in the age-adjusted incidences of DRF in males and females from 2011 through 2020. Second, the age-specific DRF incidence rates were determined. In this analysis, incidence rates were separately calculated by dividing the 5-year-age-specific number of DRF cases that occurred during 2011-2020 by the corresponding 5-year-age-specific population averaged by summing the populations from 2011 to 2020. Third, we described the distributions of injury location and specific injury causes in accordance with the 5-year age groups. For specific injury causes, we compared the data of patients with DRF aged ≤ 15 years with those aged > 15 years, based on the hypothesis that the injury causes differ between younger and older patients. Fourth, we determined the age and sex distributions by fracture type in patients aged ≤ 15 years compared with patients aged > 15 years. CT use, complications, and treatments were also compared between these groups. Finally, we calculated 1-and 5-year mortality rates using Kaplan-Meier survival curves. Data are presented as the mean and standard deviation (SD) or percentages of each group of patients. The 2.5 and 97.5 percentiles were used to express 95% confidence intervals (CIs). All statistical analyses were performed using EZR software [20].
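For readers who wish to reproduce these calculations, the direct standardization and Poisson trend test described above can be sketched with standard statistical libraries. The following Python sketch is illustrative only: the age bands, case counts, populations, and standard-population weights are hypothetical placeholders rather than study data, and the analyses in this paper were performed with EZR.

```python
# A minimal sketch of direct age standardization and the Poisson trend test
# described above. All counts, populations, and weights are hypothetical
# placeholders, not study data.
import numpy as np
import statsmodels.api as sm

# Hypothetical 5-year age bands for one calendar year
cases = np.array([3, 1, 2, 5, 8])                       # DRF cases per band
population = np.array([400, 500, 600, 700, 300])        # person-years per band
std_weights = np.array([0.25, 0.25, 0.20, 0.20, 0.10])  # year-2000 standard proportions

# Direct adjustment: weighted sum of age-specific rates
age_rates = cases / population
adjusted_incidence = (age_rates * std_weights).sum() * 100_000  # per 100,000/year

# Poisson regression for the annual trend, with log(population) as offset
years = np.arange(2011, 2021)
annual_cases = np.array([30, 28, 33, 27, 25, 24, 26, 22, 21, 20])  # hypothetical
annual_pop = np.linspace(13_155, 10_772, 10)                       # hypothetical

X = sm.add_constant(years - years.min())
fit = sm.GLM(annual_cases, X, family=sm.families.Poisson(),
             offset=np.log(annual_pop)).fit()
# A negative slope with p < 0.05 indicates a significant declining trend
print(adjusted_incidence, fit.params[1], fit.pvalues[1])
```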
Results
A total of 280 patients with DRF were identified. After excluding 20 patients who resided outside Tomamae district and two patients with ipsilateral re-fracture, 258 patients were included in the analysis. There were no patients with simultaneous bilateral fractures, and three patients sustained contralateral DRFs at different times during the study period. There were 190 (73.6%) female patients, giving a male-to-female ratio of 1:2.8. The mean (SD) age of the total cohort was 67.0 (21.5) years, ranging from 2 to 99 years; the mean (SD) age was 73.0 (12.9) years for females, and 49.9 (30.4) years for males. Of the 258 patients with DRF, 24 (9.3%) and 175 (67.8%) were ≤ 15 years and ≥ 65 years of age, respectively.
Incidence of DRF
The annual incidence of DRF ranged from 158.0 to 272.6 per 100,000 population/year during 2011-2020, as shown in Table 1. The annual incidence was consistently higher in females than males in all examined years (range, 222.0-429.2 per 100,000 population/year in females; range, 74.0-184.6 per 100,000 population/year in males). The age-adjusted incidence of DRF in females demonstrated a significant decreasing trend from 2011 through 2020 (Poisson regression analysis; p = 0.043), indicating that the incidence of DRF reduced even after adjustment for the impact of population aging. In contrast, the age-adjusted incidence of DRF in males showed no significant trend (p = 0.90) (Fig. 2). Figure 3 shows the distribution of the number of DRFs and age-specific incidence in 5-year age groups. There were sex differences in the distributions of both the number of DRFs and the incidence of DRF. The number of DRFs was bimodal, with peaks at 10-14 years of age for males and 75-79 years for females (Fig. 3A). The age-specific incidence in males was higher in younger patients (with the highest rate in teenagers), while the age-specific incidence in females was higher in older adults (with a marked increase in the incidence of DRF in women older than 50 years) (Fig. 3B).
Injury characteristics
DRF was incurred outdoors in 173 of 258 patients (67.1%) (Fig. 4). Among the 85-89 year and older age groups, DRF tended to occur indoors rather than outdoors.
For the > 15 years group (Table 2), most DRFs were caused by a simple fall (85.3%), followed by a fall from height (6.9%). However, for the ≤ 15 years group, the most common cause of DRF was sports injuries (50.0%), followed by traffic accidents (33.3%), showing that the injury cause greatly differed between younger and older age groups.
There were seasonal differences in the occurrence of DRF (Fig. 5). The number of patients with DRF was highest in the winter months, with 14% (36/258) of cases occurring in December, followed by 13.2% (34/258) in January, and 10.9% (28/258) in February (Fig. 5A). DRF more commonly occurred outdoors than indoors throughout the year, but the proportion of outdoor injuries was particularly high in the winter season (Fig. 5B).
Fracture classification and characteristics
The DRF was on the left side in 53% (137/258) of patients and the right in 47% (121/258). Table 3 shows the AO/OTA fracture classifications.
Discussion
Using regional population-based data, we identified the following epidemiological characteristics of patients with DRF in Japan. First, the incidence of DRF in our study was similar to that reported in other countries [7,9,11]. Second, the age-adjusted annual incidence of DRF among female patients showed a significant decreasing trend from 2011 to 2020. Third, the incidence of DRF was bimodal with a sex difference; among male patients the incidence was higher in younger patients (with a peak in teenagers), while among female patients the incidence was higher in older adults (and markedly increased after 50 years of age). These findings were also consistent with those reported in other countries [1,3,9,17]. Fourth, DRF was more likely to occur during the snowfall season, which was consistent with the results of studies conducted in Northern Europe and Korea [4,5,8]. Fifth, nearly all patients with DRF were diagnosed with AO/OTA fracture types A and C, and 71% of those were treated conservatively, which may be affected by the large proportion of older patients in our study. Sixth, the 1- and 5-year mortality rates among patients with DRF were similar to those reported in other countries [4,21,22], and were lower than the death rates among patients with other common osteoporotic fractures such as hip and spine fractures [23].

In our study, the age-adjusted incidence rates of DRF showed a significant decreasing trend over a 10-year period among female patients. The findings of previous studies suggest that this decrease may be due to the establishment of standard osteoporosis treatment for older women, which reduces the incidence of osteoporotic fractures [3,9,24]. Another potential reason for the decrease is the effective implementation of public health measures for fall prevention among older people. For example, previous studies have reported that fall prevention measures in the winter season with snow and ice conditions prevent the occurrence of DRF and other osteoporotic fractures among older adults [9, 24-27]. In Hokkaido, public health measures have been implemented to prevent falls in snow and ice conditions in the winter season, which may have contributed to the declining incidence of DRF in our study [28].

Our results indicated that the age-specific incidence of DRF was bimodal, with peaks in 10-14-year-old males and 75-79-year-old females. These bimodal age patterns in DRF incidence are consistent with the findings of studies conducted in other countries and with a study conducted in Sado City in Japan [1,3,9,17]. Our results showed that the incidence of DRF increased with age in women older than 50 years, but decreased after 80 years of age. This may be because patients older than 80 years may find it more difficult to protect themselves with their hands in the event of a fall, resulting in hip fracture or proximal humerus fracture rather than DRF [17,29]. In contrast, the DRF incidence tended to peak in males aged 10-14 years. This is consistent with previous findings and suggests that physical activity such as sports may be associated with DRF in younger patients [3,30].
Previous studies have reported that the incidences of AO/OTA fracture types A, B, and C range from 54% to 67%, 9% to 14%, and 23% to 32%, respectively [4,11,31]. In our study, the proportions of AO/OTA fracture types A, B, and C were 78.7%, 1.7%, and 19.6%, respectively, indicating that a larger proportion of patients had type A fractures while a smaller proportion of patients had fracture types B and C compared with the proportions reported in other studies [4,11,31]. Previous studies have demonstrated that type B and C fractures are more common in younger patients than type A fractures [4,11,31]. Thus, our results were reasonable because our cohort included a small proportion of younger patients. The higher proportion of type A fractures may be affected by the large proportion of older adults who developed DRF by a simple fall.
The present study has some limitations. First, our study had a small sample size compared with previous studies conducted in Japan [12, 15-17]. However, our findings were largely consistent with those of previous studies conducted within and outside of Japan [1, 4, 5, 7-9, 11, 17, 21, 22]. Furthermore, the 10-year study period in our study was longer than the study period of previous studies conducted in Japan [12, 15-17]. Second, some patients with DRF might have been treated conservatively by non-orthopedic surgeons and judo therapists, or might have been diagnosed and treated in medical facilities located outside of our study setting. Third, our study was a single-center study, and the diagnosis and treatment might differ from those at other medical facilities. Fourth, this study was conducted in a cold and snowy region of Japan, which may have a higher incidence of DRF in winter than other warmer regions. Furthermore, population aging may also affect the incidence of DRF and the death rates. For these reasons, our results might not represent the general population of Japan. Fifth, we did not have access to information on osteoporosis treatments. Sixth, 5% and 14% of patients were not followed up for 1 and 5 years, respectively (missing cases), which may have led to under- or overestimation of the mortality rates. Finally, the coronavirus disease 2019 pandemic affected the behavior of people in our study setting by reducing the frequency of outings during 2020, which may have affected our results.
In conclusion, using regional population-based data, we identified the epidemiological characteristics of patients with DRF. The annual incidence of DRF ranged from 158.0 to 272.6 per 100,000 population/year during 2011-2020. The age-adjusted annual incidence of DRF among female patients showed a significant decreasing trend from 2011 through 2020. Osteoporotic management might have contributed to the declining incidence of DRFs in females. Further research is warranted to investigate the effect of osteoporotic treatment on the incidence of DRFs. The incidence of DRF was bimodal and was highest in teenagers in the male population and in older adults in the female population. This suggests the need for public health measures to prevent sports injuries in young males and to prevent falls in older females.
Abbreviations
DRF: Distal radius fracture
CT: Computed tomography
AO/OTA: Arbeitsgemeinschaft für Osteosynthesefragen Foundation/Orthopaedic Trauma Association
SD: Standard deviation
CI: Confidence interval
"year": 2023,
"sha1": "3bcd552578a67b08b793ab9a6a3b1eee5c34e329",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "3bcd552578a67b08b793ab9a6a3b1eee5c34e329",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Reliability and validity of the Chinese version of the LMC Skills, Confidence & Preparedness Index (SCPI) in patients with type 2 diabetes
A variety of diabetes self-management instruments have been developed, but few of them assess preparedness for diabetes self-management behavior. The novel psychometric evaluation tool "the LMC Skills, Confidence & Preparedness Index (SCPI)" measures three key aspects of a patient's diabetes self-management: knowledge of the skill, confidence in being able to perform the skill and preparedness to implement the skill. The objective of this study was to translate, adapt and validate the SCPI for use in Chinese adult patients with type 2 diabetes. This study followed the guideline recommended by the American Academy of Orthopaedic Surgeons Evidence Based Medicine Committee (AAOS) to indigenize the scale. Forward and back translation, and cross-cultural language debugging were completed according to the recommended steps. A convenience sample of Chinese patients with type 2 diabetes (n = 375) was recruited from a university-affiliated hospital in Shanghai. The validity (criterion, discriminant validity, and construct validity), reliability (internal consistency and test–retest reliability) and the interpretability of the instrument were examined. The content validity was calculated by experts' evaluation. The Chinese version of SCPI (C-SCPI) has good internal consistency with a Cronbach's alpha of 0.92. The ceiling effect of the preparedness subscale is 21%. The criterion validity of the three dimensions of C-SCPI was established, with significant moderate correlations with the DKT, DES-SF and SDSCA (p < 0.05). The S-CVI of the whole scale was 0.83. Except for item 21, the I-CVI values of all items were greater than 0.78. The C-SCPI has also shown good discriminative validity with statistically significant differences between the patients with good and poor glycemic control. Confirmatory factor analysis indicated that the modified model fit the data well: χ2/df = 2.775, RMSEA = 0.069, CFI = 0.903, GFI = 0.873, TLI = 0.889, IFI = 0.904. The test–retest reliability coefficient was 0.61 (p < 0.01). We established a Chinese version of SCPI through translation and cross-cultural adaptation. The C-SCPI is reliable and valid for assessment of the level of self-management in Chinese patients with type 2 diabetes.
Background
Diabetes mellitus is a non-communicable disease and is becoming epidemic worldwide. The latest global diabetes atlas (9th Edition) released by the International Diabetes Federation (IDF) showed that the prevalence of diabetes is increasing rapidly, with an average global growth rate of 51%, and the number of diabetics in China ranks first in the world, with a total of about 116.4 million in 2019 [1], which forms a heavy burden for families, society and the whole country in China [2]. Many experts unanimously recommend improving the self-management level of diabetic patients as the main way to prevent and treat diabetes [3-5]. In 1996, IDF proposed that self-management for diabetes should include diet, exercise, medication, diabetes health education and self-glycemic monitoring [6]. The current status of self-management of diabetes is not satisfactory in China [7-9].
Accurate assessment of the patient's current self-management level is an indispensable part of health education. The Chinese consensus on self-administered prescriptions for type 2 diabetes also describes the importance of a comprehensive, systematic assessment of patients before developing a personalized self-management program [10]. Scientific, standardized assessment tools are key to assessing patients' self-management levels. Nowadays, diabetes self-management education consists of three parts: the conveying of knowledge, the establishment of health beliefs, and the guidance of behavior change. Particularly, behavior change is considered to be a sign of success in measuring the impact of diabetes education programs [11]. Therefore, diabetes-related self-management assessment tools generally focus on knowledge, psychology and behavioral changes.
The current assessment tools have some limitations. Firstly, many tools are unidimensional. Some focus on knowledge [12-14], some on attitude and belief [15-17] and some on practice [18-20]. The Diabetes Care Profile (DCP) [21], a comprehensive assessment tool, has a solid theoretical basis and rich measurement dimensions, but its large number of items impedes practical application. Secondly, the reliability and validity of some scales were only verified at the time of scale development, limiting their applications in different contexts. Some scales showed deficiencies in reliability and validity [19]. Thirdly, the scales about practice focus mostly on pre-existing behaviors [18,20,22], while less attention is paid to the level of preparation for further behavioral changes. For example, in the Diabetes Self-management Knowledge, Attitude, and Behavior Assessment Scale (DSKAB) [22], which was developed by Chinese scholars, patients are asked to recall their behaviors within half a year, a long time span which may result in a memory bias for patients. Researchers [23] have pointed out that accurately assessing patients' ability and readiness before starting self-management behavior is a prerequisite for developing and implementing any patient-centered approach to self-management.
We chose the LMC Skills, Confidence & Preparedness Index (SCPI) [24,25] for translation, adaptation and validation in a Chinese population, because it is the first "all in one" scale to evaluate three key aspects of diabetes self-management simultaneously. Knowledge of the skills is based on the content of the seven self-care activities of the AADE and the core content of diabetes self-management and Canadian Clinical Practice Guidelines (CPG) [26], including healthy diets, medications, activities, blood glucose monitoring, problem solving, risk reduction and healthy coping. Confidence in being able to perform the skills is based on self-efficacy theory (SET) [27]. Self-efficacy is the subjective self-confidence that an individual believes he or she can perform certain behaviors and achieve the desired results. Preparedness to implement the skills is based on the preparation phase originally derived from the Transtheoretical Model of Health Behavioral Change, which means individuals will take action to change behavior within one month [28]. It measures the motivation for behavioral change and the degree to which patients will make changes in the next month. This dimension involves diet, exercise, stress relief, prevention of hypoglycemia, and insulin use when necessary. This scale was developed in LMC diabetes and endocrine clinics in Ontario, Canada and validated in 2 more independent cohorts. Its clinical responsiveness to a diabetes education program intervention was investigated in 51 patients. They were assessed with the SCPI before the implementation of the health education program, so that educators could quickly identify the existing difficulties of diabetes patients in knowledge, skills, confidence and behavior preparation. Especially in the aspect of behavior, the scale focuses on the motivation for behavior change: by evaluating the "behavior preparation" stage of diabetic patients, we can understand the needs of patients and provide a basis for health educators to take corresponding strategies, so that medical staff know "what to teach first". On the basis of the education program, more personalized education was then provided for patients. After 3 months, patients' glycosylated hemoglobin levels had improved significantly (9.3 ± 1.0% vs 8.2 ± 0.9%, p < 0.001) [25].
SCPI has been well validated in Canada and has an important role in evaluating the self-management status and behavior preparation of diabetic patients; it can also prompt health educators to tailor education for patients. Its application in China is worth further exploration. Thus, the objective of this study was to adapt the LMC Skills, Confidence & Preparedness Index (SCPI) into Chinese and validate its psychometric properties in patients with type 2 diabetes.
Methods
Use of the questionnaire was approved by its developer. This is a two-phase study. In phase one, we performed a trans-language adaptation of SCPI. In phase two, the psychometric properties of the Chinese version of SCPI were validated.
Phase one: trans-language adaption of SCPI
The cross-cultural adaptation process of the scale is a process of examining the equivalence between the indigenized scale and the original scale. In this phase we followed a systematic process from the guideline recommended by the American Academy of Orthopaedic Surgeons Evidence Based Medicine Committee (AAOS) [29], which included forward-translation, synthesis of the translations, back-translation, expert committee and testing of the translated version.
(1) Forward translation: The English version of each of the SCPI questions was translated into Chinese by two independent translators, both native speakers of Chinese, who used plain language to convey the original meaning as faithfully as possible. The Chinese version of the scale was finalized after further revision following the interview process. The general information of the experts and the whole process of trans-language adaptation of SCPI are shown in Additional file 1.
Phase two: assessment of the reliability and validity evaluation of SCPI
In this phase, the psychometric properties of the SCPI were tested. The target population was patients with diabetes. A convenience sampling was performed in a university-affiliated hospital in Shanghai, China between June 2018 and December 2019. Eligible patients were those aged 18 years or older who met the WHO diagnostic criteria for diabetes (1999) [30] and had had type 2 diabetes for more than 6 months. Those with serious complications such as impaired cardiac function (NYHA class 3 or above), impaired renal function (CKD stage 4 or above), cardiovascular and cerebrovascular diseases or organ function damage were excluded. Per rule of thumb, it is highly recommended to use at least 10 subjects per item of the instrument for general psychometric approaches. If there is a plan to use confirmatory factor analysis to test the factor structure of the instrument, the recommendation per rule of thumb is approximately 300-500 subjects in total [31]. SCPI has 23 items in total, so this study needed at least 230 subjects. In addition, this study planned confirmatory factor analysis, which required at least 300 subjects. Considering an invalid response rate of 20%, the sample size was finally determined to be at least 360.
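The sample-size arithmetic described above can be restated explicitly; the short sketch below simply reproduces the cited rules of thumb (10 subjects per item, at least 300 for confirmatory factor analysis, and a 20% allowance for invalid responses) and is not part of the original protocol.

```python
# A sketch of the sample-size arithmetic described in the text.
import math

n_items = 23
n_psychometric = 10 * n_items   # >= 230 for general psychometric testing
n_cfa = 300                     # lower bound of the 300-500 CFA rule of thumb
invalid_rate = 0.20

# Inflate the CFA minimum to allow for ~20% invalid responses
n_required = math.ceil(max(n_psychometric, n_cfa) * (1 + invalid_rate))
print(n_required)  # 360, the study's planned minimum
```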
The LMC Skills, Confidence & Preparedness Index (SCPI)
LMC is a multidisciplinary, regional community-based network of sites providing comprehensive specialist-level care for patients with diabetes in Canada. SCPI [25] includes 23 items in total, which measure three key aspects of a patient's diabetes self-management: knowledge of the skill (9 items), confidence in being able to perform the skill (7 items), and preparedness to implement the skill (7 items). The responses to each item were recorded on a seven-point Likert scale (range: 1 = strongly disagree, 7 = strongly agree).
Diabetes Knowledge Test (DKT)
DKT [12] was developed by members of the Michigan Diabetes Research and Training Center in the United States. DKT has 23 items, including knowledge on diet, blood glucose monitoring, exercise, prevention and treatment of complications, insulin use, and other topics. The test is divided into two parts. The first part includes 14 items, which are applicable to adult patients with type 1 and type 2 diabetes. The other nine items constitute the insulin use subscale. The higher the score, the better the patient's disease knowledge. Chen [32] translated and used it among Chinese people. DKT is used to validate the criterion validity of the knowledge and skills dimension of SCPI [24].
Diabetes Empowerment Scale (DES-SF)
DES [17] was developed by Anderson in 1991 to measure the social and psychological self-efficacy of diabetic patients. In order to make the evaluation more convenient, Anderson reduced the scale to a simplified 8-item version named the DES-SF. Hu [33] introduced and translated the DES-SF into Chinese. A higher score indicates better self-efficacy. The DES-SF is used to validate the criterion validity of the confidence dimension of SCPI [24].
The Summary of Diabetes Self-Care Activities Measure (SDSCA)
SDSCA [18] is widely used worldwide and has shown good reliability and validity in China. In Hua's study, Cronbach's α of the Chinese version of SDSCA was 0.918 [34]. The scale consists of 11 items, including general and special activities in diet, exercise, blood glucose monitoring, foot care, and medication. A higher score indicates better self-management behavior. SDSCA is used to validate the criterion validity of the preparedness dimension of SCPI. The preparedness part of SCPI reflects the degree of behavioral preparation of patients in the next month, and the SDSCA reflects the level of diabetic self-management through the frequency of patients' self-care activities in the 7 days before reporting. Therefore, one month after completion of the SCPI, some patients were surveyed again with the SDSCA to verify whether patients' behavioral preparation was related to their subsequent self-management activities.
Sociodemographic data such as gender, age, education level, monthly income and whether the patient had received health education for diabetes were self-reported by the participants. Clinical data of the participants such as HbA1c were collected from the hospital's electronic medical records. Before the investigation, we clearly explained to the patients the purpose of the study and what their participation would involve, and promised to protect their privacy. Patients completed the paper questionnaire themselves, selecting the options corresponding to their daily self-management of diabetes. We guided patients in filling in the questionnaire if they were unable to read.
Statistical analysis
The general characteristics of the subjects were presented using mean and standard deviation for continuous variables, and frequency and percentage for category variables. The Kolmogorov-Smirnov test was used to examine the normality of data distribution. To assess the model's goodness of fit, confirmatory factor analysis (CFA) was performed with the following indices: goodness-of-fit index (GFI), comparative fit index (CFI), incremental fit index (IFI), Tucker-Lewis index (TLI) and root mean square error of approximation (RMSEA). An acceptable model should have a χ2/df < 3, RMSEA < 0.08, and GFI, CFI, IFI and TLI > 0.9 [35]. The validity tests also include content validity, criterion validity, and discriminative validity. The content validity of the Chinese version of the SCPI was evaluated using the content validity index (CVI), which includes I-CVI (content validity of individual items, i.e. the proportion of experts giving a rating of either 3 or 4) and S-CVI (content validity of the overall scale, i.e. the proportion of items in a scale that achieve a relevance rating of 3 or 4 by all the experts) [36]. Correlational analysis of the Chinese version of SCPI and the DKT, DES-SF and SDSCA was applied to examine the criterion validity of the SCPI. We used Pearson's correlation analysis for normally distributed data, and Spearman's non-parametric correlation for data not normally distributed. Discriminative validity of the SCPI was tested using a nonparametric test (Mann-Whitney U test) to compare the SCPI scores between patients with satisfactory blood glucose control (HbA1c ≤ 7%) and patients with poor blood glucose control (HbA1c > 7%). Cronbach's α was used to measure the internal consistency reliability. The SCPI was repeated after 2 weeks in 23 participants to evaluate its test-retest reliability. Distributional methods look at the statistical distribution of the instrument's values. The standard error of measurement (SEM) and one-half of the standard deviation (SD) of the measure of interest are most widely accepted to represent minimal clinically important difference (MCID) values [37]. SEM was calculated as the baseline SD of the SCPI score × √(1 − reliability of the validated Chinese version of SCPI). Statistical tests were performed using SPSS 24.0 and Amos 24.0 for Windows (IBM). A two-sided p value of < 0.05 was considered statistically significant.
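As a concrete illustration of the reliability and interpretability statistics defined above, the following Python sketch computes Cronbach's α, the SEM, and the 0.5-SD MCID from a simulated response matrix. The data are hypothetical, and the study's actual analyses were performed in SPSS and Amos.

```python
# A minimal sketch of Cronbach's alpha, SEM = SD * sqrt(1 - reliability),
# and the 0.5-SD MCID. The response matrix is simulated, not study data.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(0)
scores = rng.integers(1, 8, size=(375, 23)).astype(float)  # 7-point responses

alpha = cronbach_alpha(scores)
baseline_sd = scores.mean(axis=1).std(ddof=1)  # SD of the per-person scale score

sem = baseline_sd * np.sqrt(max(0.0, 1 - alpha))  # standard error of measurement
mcid_half_sd = 0.5 * baseline_sd                  # one-half SD criterion
print(alpha, sem, mcid_half_sd)
```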
Phase one: trans-language validation of SCPI
The translation process led to a Chinese version of the SCPI that was linguistically validated and conceptually equivalent to the original version. In the process of synthesis, one of the two translations was selected for 8 (34.8%) items (3 from the first and 5 from the second translator, respectively), and a combination of translations from both translators was used for 15 (65.2%) items. The result of back translation was similar to the original English version.
Eight changes were made after the expert committee review. In the expert ratings, the average score of all items in the translation process was above 3 points, and 74% of the items scored above 4 points, indicating close agreement with the original text. In the back-translation section, all items scored above 3 points, and 61% above 4 points. In addition to language modification, the change of "sickness" to "physical discomfort" in item 6 could make patients pay more attention to the subtle changes in their bodies. In item 21, two experts pointed out that patients should not be encouraged to adjust their insulin doses by themselves, because insulin dosage adjustment should be considered according to the patient's condition and the type of insulin used. In light of the clinical situation that patients in China should adjust the dosage of insulin under the guidance of doctors, the panel decided to change "I will start adjusting my insulin doses on my own" to "I will start adjusting my insulin doses on my own as recommended by my doctor." In the test of the final translated version, after 15 patients with type 2 diabetes completed the questionnaire, we conducted interviews with them. Nine males and six females, aged from 28 to 70 years, had had diabetes for 10.37 ± 8.57 years. Two individuals had a primary school education, nine had high school education, two had completed a college diploma, and two had a bachelor's degree.
Six additional modifications (see Additional file 1) were made after interviews with the patients. Three patients did not understand the meaning of carbohydrates. Following the principle of experimental equivalence [29], we explained the meaning of carbohydrates in detail and interpreted carbohydrates as foods containing starch/sugar. Some were not sure about the scope of self-management of diabetes, so we elaborated on the scope of self-management of diabetes, such as diet, medication, exercise, and blood sugar monitoring. Seven patients were unsure about some expressions such as "blood sugar pattern", "keep my diabetes on track", and "stress management". We changed "blood sugar pattern" to "change of blood sugar" and explained it (such as the cause of hyperglycemia or hypoglycemia). Patients also expressed that "keep my diabetes on track" was not in line with Chinese language habits, so we changed it to "control blood sugar within the target range". We interpreted "stress management" as "a way to relieve stress".
Phase two: assessment of the internal consistency reliability and construct validity of the SCPI
Characteristics of the convenience sample at baseline are reported in Table 1. A total of 375 participants completed the SCPI. All patients were type 2 diabetes patients with a mean HbA1c of 8.5 ± 1.9% (excluding 75 without HbA1c measurement), and 218 of them were insulin users. The mean age is 57.2 ± 12.7 years, and mean duration of diabetes is 11.5 ± 8.0 years. Most patients (65.6%) have received health education on diabetes.
Factor structure of the Chinese version of SCPI
No items were deleted in the process of trans-language validation. Based on the conceptual framework developed by the author of the original scale, CFA was performed to identify the underlying factor structure of the Chinese version of the SCPI. All factor loadings in a three-factor model of the 23 items were higher than the common 0.4 standard. Initially, the model's goodness of fit was unacceptable: χ²/df = 4.050, RMSEA = 0.090, CFI = 0.829, GFI = 0.820, TLI = 0.809, IFI = 0.830 (Fig. 1). The modification indices indicated that further improvement was possible by including more covariance parameters. Guided by the original model and item content, paths between residuals were added wherever modification indices exceeded 4, in order to reduce the chi-square value. After six such covariance correlations were added to the pre-set model, the fit indices met or approached their acceptable thresholds: χ²/df = 2.775, RMSEA = 0.069, CFI = 0.903, GFI = 0.873, TLI = 0.889, IFI = 0.904 (Fig. 2).
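For readers wishing to reproduce such fit indices from reported chi-square statistics, a minimal sketch follows. The formulas are the standard ones; GFI is omitted because it requires the fitted covariance matrices, and the inputs (model df and the independence-model chi-square) are hypothetical values chosen only to be consistent with the reported χ²/df of 2.775 at n = 375.

```python
import math

def fit_indices(chi2, df, n, chi2_null, df_null):
    """chi2/df, RMSEA, CFI and TLI from chi-square statistics.

    chi2, df           : fitted model chi-square and degrees of freedom
    chi2_null, df_null : independence (null) model chi-square and df
    n                  : sample size
    """
    rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
    cfi = 1.0 - max(chi2 - df, 0.0) / max(chi2_null - df_null, chi2 - df, 1e-12)
    tli = ((chi2_null / df_null) - (chi2 / df)) / ((chi2_null / df_null) - 1.0)
    return {"chi2/df": chi2 / df, "RMSEA": rmsea, "CFI": cfi, "TLI": tli}

# Hypothetical inputs: df = 227 is a plausible value for a 23-item,
# three-factor model, and chi2 = 630 reproduces chi2/df of about 2.775.
print(fit_indices(chi2=630.0, df=227, n=375, chi2_null=4500.0, df_null=253))
```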
Content validity
In this study, six experts were invited to evaluate the relevance of items to their dimensions and to self-management. The S-CVI value was 0.83. Except for item 21, the I-CVI values of all items were greater than 0.78, and 19 items had an I-CVI of 1. These results indicate that the Chinese version of the SCPI has good content validity and reflects the self-management of diabetic patients well. Item 21 was further modified according to the experts' opinions.
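A brief sketch of the CVI arithmetic defined in the methods, using hypothetical expert ratings:

```python
def content_validity(ratings, threshold=3):
    """I-CVI per item and S-CVI/UA from expert relevance ratings (1-4 scale).

    ratings : list of per-item lists, one rating per expert
    """
    i_cvi = [sum(r >= threshold for r in item) / len(item) for item in ratings]
    s_cvi_ua = sum(v == 1.0 for v in i_cvi) / len(i_cvi)
    return i_cvi, s_cvi_ua

# With six experts, an item rated relevant (3 or 4) by five of them has
# I-CVI = 5/6 = 0.83; 19 of 23 items at I-CVI = 1 gives S-CVI/UA = 19/23 = 0.83.
item_ratings = [[4, 4, 3, 4, 4, 4]] * 19 + [[4, 4, 3, 4, 4, 2]] * 4
i_cvi, s_cvi = content_validity(item_ratings)
print(round(s_cvi, 2))  # 0.83
```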
Criterion validity and discriminative validity
The knowledge and skill part of the SCPI correlated positively with the total DKT score both in diabetic patients not using insulin (r = 0.284, p < 0.001) and in those using insulin (r = 0.351, p < 0.001). The confidence part of the SCPI correlated well with the DES-SF (r = 0.376, p < 0.001). A positive correlation between the preparedness part of the SCPI and the total SDSCA score (r = 0.465, p = 0.025) was also observed (Table 2).
Patients were divided into a better-control group (HbA1c ≤ 7%) and a poor-control group (HbA1c > 7%). Compared with patients with poor blood glucose control, patients with better control had higher overall self-management and self-confidence scores (p < 0.05). No statistically significant difference was found between the two groups in the knowledge and skills or behavior-preparedness scores (Table 3).
Internal consistency, test-retest reliability and interpretability of the Chinese version of SCPI
The internal consistency of the Chinese version of the SCPI is good, with a Cronbach's α of 0.92 for the total scale and 0.81-0.88 for each subscale. Test-retest reliability, assessed in 23 participants after 2 weeks, was acceptable (r = 0.61; p = 0.002). No floor effects (> 15% of patients with a score of 1) or ceiling effects (> 15% of patients with a score of 7) were observed for the total score or for the knowledge and confidence subscale scores, whereas the preparedness subscale showed a ceiling effect of 21%. The MCID values for the SCPI were 0.37 (0.5 SD) and 0.21 (SEM, using the Cronbach's α value as the reliability estimate).
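The two reported MCID values are mutually consistent, as this short check (using only the figures reported above) illustrates:

```python
import math

alpha = 0.92            # Cronbach's alpha of the total C-SCPI scale
mcid_half_sd = 0.37     # reported 0.5 * SD value
sd_baseline = 2 * mcid_half_sd          # implies a baseline SD of about 0.74
sem = sd_baseline * math.sqrt(1 - alpha)
print(round(sem, 2))    # 0.21, matching the reported SEM-based MCID
```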
Discussion
Self-management is crucial for glycemic control in diabetic patients. The SCPI focuses not only on the knowledge and self-confidence needed for self-management of diabetes mellitus, but also on the preparedness of individuals with diabetes to make a behavior change. In clinical practice, the scale quickly reflects the self-management status of patients with diabetes and provides clues for health providers formulating health education programs for people with diabetes.
In phase one, the study produced a Chinese version of the SCPI (C-SCPI), translated and adapted from the original instrument through a systematic and rigorous process following the AAOS guideline [29] for cultural adaptation to Chinese. Of the two translators, one holds a B.A. in nursing, whereas the other completed postgraduate studies in English without a background in clinical care, which may reduce bias from current healthcare teaching and better reflect the language used by the general population. There were some differences between the two translations, which might be due to differences in the translators' interpretations of natural expressions.
In the process of trans-language validation, experts' guidance on the content of the scale is indispensable. They were asked to modify or provide appropriate wording when necessary [38]. Language modifications included changing "how my diabetes medications (pills, injectables and/or insulin) work in my body" to "how diabetes medications (pills, injectables and/or insulin) reduce blood sugar in my body" to make the semantic expression clearer. The experts also gave professional opinions on insulin dosage adjustment in light of clinical practice in China. We consulted the authors of the source scale about the autonomy of Canadian diabetic patients in insulin dose adjustment: Canadian patients usually receive specific education on dose adjustment when initiating insulin treatment and can then adjust their insulin dose on their own. The situation is different in Chinese clinical practice, so we modified the wording of the item as a reminder that patients should adjust insulin under their doctor's guidance.
The purpose of the interview was to use cognitive theory to understand how respondents comprehend and answer the questionnaire items, to find potential problems, and to correct them [39]. The patients provided us with practical insights. Most items of the SCPI were well comprehended, but the interviews also identified several items that were not well understood. Patients could not understand some professional words like "carbohydrates", because they rarely hear them in daily life; we therefore interpreted such terms to make them experientially equivalent. During the interview, some patients thought that diabetes self-management consisted only of "eating less and exercising more", overlooking medication and daily blood glucose monitoring. We therefore adjusted some expressions to reduce patients' doubts and the vagueness of diabetes management, to fit hospital settings in China, and to be easily understood. Feedback from patients is crucial and may lead to linguistic changes that improve the acceptability of the final scales.
The criterion-validity results demonstrated that the self-confidence dimension of the C-SCPI correlated well with the DES-SF, an instrument for empowerment in diabetes used extensively throughout the world. This is similar to the results of Mbuagbaw [24] and indicates that patients with higher levels of empowerment have higher confidence in the self-management of diabetes. The results also showed a positive correlation between the behavioral preparation part of the SCPI and the total SDSCA score (r = 0.465, p = 0.025), indicating good predictive validity of the preparedness part of the SCPI: it can reflect patients' behavioral preparation for the next month, allowing medical staff to provide more targeted health education. Tools for measuring self-care behavior in diabetic patients should be able to distinguish between patients with good and poor blood glucose control, and this study shows that the SCPI can effectively distinguish the self-management behavior of patients with different blood glucose control outcomes. In terms of patients' self-confidence level, this is consistent with previous research [40,41], which indicates that patients with better glycemic control and self-management have higher empowerment and greater self-management confidence in their disease. In addition, 47.7% of the 300 patients with HbA1c records in this study were aged over 60 years, and of the 143 patients over 60 years old, 65% had poor blood glucose control. Another study showed that the level of diabetes knowledge is negatively correlated with age [42]; the elderly should therefore be a focus of health education. There was no statistically significant difference between the two groups in the preparedness part of the SCPI, which may be related to potential future changes in glycated hemoglobin. Moreover, the subjects of this survey were all inpatients, and patients who have received hospitalization treatment and health education may have higher preparedness scores for the month after discharge. In general, the C-SCPI is a reliable tool for evaluating the self-management level of diabetic patients, and the findings also suggest that helping diabetic patients improve their self-management will improve blood glucose control outcomes.
The model could be specified to be even more theoretically consistent by allowing more pathways between the items [43]. The study added six covariance coefficients to the preset model, which may reflect shared latent content: although knowledge and skills, self-confidence and behavior preparedness belong to three different dimensions of this scale, items across the three dimensions may be correlated. For example, Q14 and Q18 both concern patients' daily physical exercise, with Q14 focusing on patients' confidence in exercise and Q18 on patients' preparation for exercise (see Fig. 2). If a patient has the confidence to exercise, he may put exercise into action next month, so a latent correlation between Q14 and Q18 is plausible. Similarly, Q5 and Q12, and Q5 and Q20, concern hypoglycemia prevention; Q11 and Q19 measure the patient's regulation of stress; and Q10 and Q17 focus on the patient's diet. Taking into account the theoretical underpinnings of the SCPI, the statistical significance of all items in the model, and the substantial improvement of the fit indices after modification, the three-factor structure of the C-SCPI is acceptable.

The internal consistency of the C-SCPI is satisfactory, with a Cronbach's α of 0.92 for the total scale [44], which corresponds well with the original English version [45]. Interpretability measures the capacity of a questionnaire's quantitative scores, or changes in scores, to be given a qualitative meaning. The MCID is the minimum change score at or above which a change can be considered (by some definition) important [46,47]; when the change in SCPI score exceeds the MCID, the change in diabetes self-management ability can be considered meaningful. The 21% ceiling effect of the preparedness subscale may be due to patients' heightened awareness of the serious harm of diabetes during hospitalization, indicating that these patients are well prepared for behavior change.
The test-retest reliability coefficient of the scale is only just acceptable, which may be due to the knowledge and skill dimension. After completing the scale the first time, patients may have consulted professionals about unclear knowledge points, developed their own thinking and understanding of diabetes, and mastered relevant diabetes knowledge. The test-retest reliability may therefore be unstable because of the first measurement, and further investigation is needed to verify the test-retest reliability of the SCPI strictly. This also suggests that the SCPI has an educational effect on the self-management of diabetic patients.
There are some limitations to the current assessment that should be acknowledged. The original SCPI was developed and validated in both type 1 and type 2 diabetic patients, whereas only type 2 diabetic patients were investigated here; the application of the scale in type 1 diabetic patients in China needs further study and discussion. Because the sample came mainly from inpatients of a university-affiliated hospital, the applicability to outpatient and community diabetes patients needs to be investigated further. The numbers of patients who completed the test-retest and who completed the SDSCA were quite small, which needs to be verified in future studies. Nevertheless, the SCPI can be further applied in health education projects to test its impact on improving patients' blood glucose levels and self-management behavior.
In the future, the MCID values of each dimension can be further calculated and verified using anchor-based approaches. We also expect the SCPI to be used in a "cloud platform" to improve the self-management and monitoring system of diabetic patients.
Conclusion
Our study followed strict guidelines for cross-cultural adaptation of the scale. After the initial version of the scale was formed, the reliability and validity of the SCPI were verified. The C-SCPI has good internal consistency and satisfactory criterion and discriminative validity. It provides an effective measurement tool and theoretical basis for investigating the self-management level and behavioral preparation of diabetic patients.
"year": 2021,
"sha1": "077aa91b9c99817474c7b57c24b38f1c9fc540b3",
"oa_license": "CCBY",
"oa_url": "https://hqlo.biomedcentral.com/track/pdf/10.1186/s12955-020-01664-x",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "077aa91b9c99817474c7b57c24b38f1c9fc540b3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Decision Support Systems in Oncology
Precision medicine is the future of health care: please watch the animation at https://vimeo.com/241154708. As a technology-intensive and -dependent medical discipline, oncology will be at the vanguard of this impending change. However, to bring about precision medicine, a fundamental conundrum must be solved: human cognitive capacity, typically constrained to five variables for decision making, is a limiting factor to the realization of precision medicine in the context of the increasing number of available biomarkers and therapeutic options. Given this level of complexity and the restriction of human decision making, current methods are untenable. A solution to this challenge is multifactorial decision support systems (DSSs): continuously learning artificial intelligence platforms that integrate all available data (clinical, imaging, biologic, genetic, cost) to produce validated predictive models. DSSs compare the personalized probable outcomes (toxicity, tumor control, quality of life, cost effectiveness) of various care pathway decisions to ensure optimal efficacy and economy. DSSs can be integrated into workflows both strategically (at the multidisciplinary tumor board level to support treatment choice, eg, surgery or radiotherapy) and tactically (at the specialist level to support treatment technique, eg, prostate spacer or not). In some countries, the reimbursement of certain treatments, such as proton therapy, is already conditional on the use of a DSS. DSSs have many stakeholders (clinicians, medical directors, medical insurers, patient advocacy groups) and are a natural consequence of big data in health care. Here, we provide an overview of DSSs, their challenges, opportunities, and capacity to improve clinical decision making, with an emphasis on their utility in oncology.
INTRODUCTION
Decision support systems (DSSs; assistive technology for clinicians, who have limited time and face ever-increasing complexity) are hailed as a possible solution to the onerous cognitive burden currently placed on clinicians. The potential of DSSs, however, depends on rapid-learning health care (RLHC; technology for researchers to collect data across health care networks to facilitate learning and generate knowledge) and artificial intelligence (AI; a computational process to distill actionable insight from data). In simple terms, RLHC can be considered a data mine: an infrastructure from which raw material is obtained for use. AI can be considered a data mill: an apparatus in which raw material is refined for purpose. DSSs are one of the greatest potential benefits of a digital health care ecosystem. Nevertheless, clinically relevant DSSs have been limited in utility and implementation.1 This article describes the challenge, the opportunity, and the capacity of DSSs to advance clinical decision making, with a focus on oncology.
Human Cognitive Capacity and Increasing Complexity
The primary challenge, a consequence of the recent data deluge, is the threat of cognitive overload1: a glut of raw data, rather than refined information, confounds the distillation of knowledge and obfuscates decision making (Fig 1).2 A study investigating the limits of human cognitive capacity probed the conceptual complexity of decision making by asking participants to interpret graphically displayed statistical interactions. In such decisions, all independent variables had to be considered together, so decomposition into smaller subtasks was constrained; thus, the order of the interaction directly determined conceptual complexity. As the order of the interaction increased, the number of variables increased. Results showed a large decline in accuracy and speed of solution from three-way to four-way interactions, and performance on a five-way interaction was at chance level.3 These findings suggest that a decision based on five variables is the limit of human cognitive capacity. However, the human ability to synthesize information through memory recall and experience to inform intuition is nontrivial for machines to replicate or learn through data capture and should not be overlooked. Nevertheless, this limit must be regarded in the context of precision medicine4 (the right treatment, for the right patient, at the right time), a bold new research effort to revolutionize how we improve health and treat disease.5 Precision medicine relies on validated biomarkers6 (a characteristic that is measured as an indicator of normal biologic processes, pathogenic processes, or responses to an exposure or intervention, including therapeutic interventions7) that are integral to the routine management of disease and are used extensively in cancer research and drug development.8 Anticancer agents are increasingly being combined with a biomarker to determine which patients are the most likely to benefit from the therapy.9 This increase in complexity, coupled with the limits of human cognitive capacity, poses a major challenge for the oncology community.
Rapid-Learning Health Care
The threat from the data deluge is simultaneously a huge opportunity, because a data-driven RLHC ecosystem will progressively distill and deliver appropriate knowledge to appropriate users within the workflow process, thereby providing validated DSSs. RLHC is the (re)use of health care data from routine clinical practice and/or clinical trials to support decision making with respect to health care delivery and research.10 Issues in RLHC include data representation, standardized nomenclature, data formats and standards, federated data access, data mining and evidence synthesis approaches, evidence retrieval, reporting, and feedback on use of evidence.11 Solutions to all of these issues exist and have been implemented in many industries (eg, aviation, automotive, financial) to create global networks and introduce the concept of the Internet of things.12 The key to transforming health care is strategic coordination and facilitation of interoperable approaches to fully realize the innate potential of RLHC.13 We must embrace this vision or risk collectively drowning in fragmented data lakes.
The Cycle
RLHC constitutes four consecutive, infinitely repeated steps11 that continuously develop and validate models for DSSs in health care.14 The first step is data, which tackles the mining of data (ie, the extraction, transformation, and loading of data, eg, clinical, imaging, biologic, genetic, costs). Procuring data of adequate quality is the greatest opportunity in RLHC; the health care ecosystem must establish a patient-centric, data-driven, knowledge-sharing philosophy across institutional and national borders to benefit from it. The next step is knowledge, which uses artificial intelligence to distill knowledge from the data (ie, to extract actionable insight). With AI, machine-learning algorithms analyze data and yield knowledge that can support decisions about new unseen data. Algorithms trained, tuned, and tested on retrospective/prospective data can be used to predict the outcomes (eg, survival, quality of life, toxicity) of various treatments on the basis of data from a new unseen patient. The next step is application, which leverages this knowledge to enhance decision making. The data collected are distilled into knowledge and applied in holistic multifactorial DSSs, intended to support clinicians and patients as they decide the most appropriate course of action (DSSs are neither intended nor suited as a replacement for clinicians in the wider health care context). DSSs must be seamlessly integrated into the clinical workflow to improve efficiency, diminish mistakes, and deliver objectives. The last step of the cycle is evaluation, which measures DSS performance (ie, the sensitivity and specificity of prediction for toxicity, tumor control, quality of life, cost effectiveness). The cycle is repeated perpetually. The essence of the RLHC cycle is that the application of knowledge distilled from data provides deep insight and therefore certainty about decision consequences, which suggests that outcomes can be improved both in effectiveness (realization of the desired result) and efficiency (the resources required to realize the result). Continuous evaluation of RLHC is vital, and its importance cannot be overstated. Evaluation should focus on metrics for the question, "Is the outcome of the treatment as predicted, and, if so, how does this compare with consensus evidence-based guideline knowledge?" Evaluation should be conducted with (a meta-analysis of) robust, high-quality data and should be independently interpreted by relevant stakeholders.
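As a minimal illustration of the knowledge and evaluation steps, the sketch below trains and tests a toy outcome-prediction model with scikit-learn. The file name, feature columns, and outcome label are hypothetical placeholders, not part of any cited DSS.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Knowledge step: learn to predict a binary outcome (eg, severe toxicity)
# from routine features extracted by the data step of the RLHC cycle.
df = pd.read_csv("routine_care_records.csv")  # hypothetical RLHC extract
features = ["age", "tumor_volume", "mean_lung_dose", "smoker", "biomarker_x"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["toxicity_grade2plus"], test_size=0.3, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluation step: measure discrimination on unseen patients.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"test AUC = {auc:.2f}")
```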
The Five Vs of Big Data
From a scientific perspective, the four Vs (veracity, velocity, variety, and volume)15 of big data must be optimized to fully realize RLHC. The veracity of data is essential to the level of certainty that can be attributed to the knowledge distilled, whereas the velocity of data determines how rapidly and continuously knowledge distillation occurs. Variety of data (in terms of information, not format, eg, computed tomography/positron emission tomography/magnetic resonance DICOM imaging [a standard to transmit, store, retrieve, print, process, and display medical imaging information]) enables support of decision making (eg, if all patients are treated radically, you cannot know which patients are overtreated). The volume of data is influential in terms of power (ie, the quality of knowledge distilled from investigations is correlated with the number of patients from whom data were obtained), comprehensiveness (ie, a larger data volume permits the use of more variables in the knowledge step), and exhaustiveness (ie, knowledge related to patients with rare diseases intrinsically requires voluminous data).

Fig 2. In a centralized data and learning approach, data from multiple centers are brought together to enable learning, whereas, in a distributed data and learning approach, multiple centers link their systems to enable learning. A key aspect of a distributed approach is its privacy-by-design construction (ie, data remain at the source), whereas a key aspect of the centralized approach is that data can be directly accessed and scrutinized. These are the two competing tradeoffs between the approaches. DSS, decision support system; EMR, electronic medical record; PACS, picture archiving and communication system.
From an economic perspective, the fifth V (value) of big data must also be considered. That is, if you are going to invest in the infrastructure required to collect and interpret data on a system-wide scale, it is important to ensure that the generated insights are based on accurate data and lead to measurable improvements.
The Data Disconnect
For RLHC to succeed, data of suitable quality with respect to the five Vs must be procured. Therefore, a motivation exists toward embracing a data-connected future.16 The remaining barriers, although challenging to overcome, are demonstrably solvable. Two outstanding initiatives toward the goal of RLHC are CancerLinQ (a centralized data approach18) and worldCAT (a distributed data approach19; Fig 2). Common, efficient solutions via innovative information communication technologies, such as the creation of semantically interoperable data,20 which harmonizes local terms to concepts of well-defined ontologies,21 are fundamental to the sustained realization of RLHC. Ontology terms act as a collective reference for all data sources, allow a unified process for knowledge distillation from semantically interoperable data, and encourage standardized data management (eg, disease-specific umbrella protocols).22

AI

AI (the mimicking of human cognition by computers) is a reality in medicine.23 AI is an amalgamation of mathematics, computer science, and engineering that implements novel concepts to resolve complex challenges. Machine learning is a subset of AI and has found numerous applications in health care because of the ever-increasing rise in health care complexity.24 Recently, deep learning25 (in turn a subset of machine learning) has substantially enhanced state-of-the-art speech recognition, language translation, visual object detection, and many other domains, including genomics and drug discovery.26 Deep learning discovers complex relationships in data sets; the back-propagation algorithm guides how a deep neural network (a machine-learning model) should update the internal parameters used to compute the representation in each layer from the representation in the previous layer. There is a growing consensus that AI (machine learning and deep learning) will be involved more and more in clinical decision making. Therefore, broad implementation of AI algorithms in health care could lead to clinically actionable insight and revolutionize how patients are classified, treatments are developed, diseases are studied, and decisions are made. In oncology, five data sources and four outcomes are typically of interest (Fig 3).
To hasten the maturity of AI, clinical and research communities must cultivate an interdisciplinary shared vision of precision medicine. Data must be acquired, curated, standardized, linked, and stored in interoperable and interrogatable databases to realize the extraordinary potential for RLHC that routine standard-of-care data represent.
Strategic and Tactical Implementation
DSSs can be built into the workflow strategically (multidisciplinary tumor board level to support treatment choice, eg, surgery or radiotherapy) and tactically (specialist level to support treatment technique, eg, prostate spacer or not; Fig 4). Some nations already condition reimbursement (eg, proton therapy in the Netherlands) on the use of DSSs.
Stakeholders
The integration of RLHC DSSs into the workflow must be continuously (re-)evaluated by all stakeholders (Fig 5). This evaluation should be performed with (a meta-analysis of) robust data that are independently interpreted by each of the stakeholders and combined into a consensus statement. The guiding light for the stakeholders should be the question, "Is the outcome of treatment as expected, and, if so, how does this relate to consensus and/or evidence-based guideline knowledge?"
Acceptance and Agency
For DSSs to be widely accepted, frameworks must be created that garner trust from stakeholders.27 An important factor in the adoption of technologies is ensuring that stakeholders are empowered (ie, given the agency to inform, adjust, or reject the DSS) and that their concerns are addressed (eg, for clinicians and patient advocacy groups, increased quality of care and decreased medical errors; for medical directors and insurers, reduced costs and facilitated reimbursement).
Perception and Provenance
The perception (understanding and inclination) of DSSs by stakeholders is important. Stakeholders should easily comprehend what a DSS does and how it reaches its output.27 In addition, the origin of information immensely influences perception. Stakeholders must have sufficient transparency.

Table 1 (excerpt). Translational DSS studies:
- Tumor registry data and machine learning produce robust classifiers; the model can readily predict which high-risk patients benefit from adjuvant therapy and yields individualized, clinically relevant estimates of outcomes to assist clinicians in treatment planning (Steele30).
- Evolutionary strategy to develop learning-based decision systems (breast/liver; n = 2,458): an appropriate hierarchy of the component algorithms was established on the basis of a statistically built fitness measure, and a synergetic decision-making process, based on a weighted voting system, involved collaboration between the selected algorithms to make the final decision; tested on five medical data sets against state-of-the-art techniques, the method proved efficient in supporting medical decision making (Gorunescu31).
- Patient-specific prediction of early death and long-term survival after SRS (brain; n = 495): the resulting classification model predicts early death in patients with brain metastases with higher discriminative performance than existing models; the nomograms predicted early death and long-term survival more accurately than commonly used prognostic scores after SRS for a limited number of brain metastases of NSCLC, enable individualized probability assessment, and are easy to use in routine clinical practice.
Shared Decision Making
Health care is shifting toward a more participative, patient-centered approach: an interactive process in which stakeholders collaborate in the selection of health care according to the best available evidence.10 DSSs can help patients and clinicians communicate more effectively by providing information and a platform that encourages substantial interaction. DSSs can help patients recognize and clarify their personal values without promoting one choice over another. This will genuinely deliver personalized and participative therapy that supports both clinicians and patients.
Translational Potential of DSSs
In the past 5 years (as a result of advances in hardware and software), DSS research has advanced dramatically, which has revealed the potential of this approach to substantially improve clinical care. The information presented in Table 1 provides a nonexhaustive overview of the literature.
DISCUSSION
Human intelligence is vastly superior to AI in general terms (contextualization, association, and reasoning). AI has yet to mature, so DSSs foreseeably will be appropriate for specific tasks only. The role of clinicians will adapt (similar to that of pilots) as they ally with DSSs, provide expert knowledge, annotate data, and manage performance/efficacy. The users of DSSs must comprehend the benefits and risks: AI can be powerful (ie, automatic detection, localization, classification, interpretation, recommendation, reporting) but also fallible (ie, supporting improper decisions when presented with data beyond the training/tuning/testing distribution). Consider the following example: a DSS performs flawlessly after deployment; the department later upgrades hardware and software. What safeguards exist to ensure that the AI does not subsequently produce erroneous assistance, and who is responsible for this?36

Another issue is the absence of human intuition about how specific decisions are determined by AI, which leads to unease among many, with some declaring that AI is a black box. (However, tools like TensorBoard for TensorFlow37 exist to provide transparency.) This deficiency of comprehension hinders adoption by various stakeholders concerned with the ethical and responsible clinical utility of DSSs. To mitigate this, clinicians must actively engage with researchers (academic and industrial) to ensure that the solutions developed yield maximum clinical benefit. Residency programs must adopt AI into curricula. Clinicians and researchers must work with policymakers on the complexities of DSSs and the consequences of errors (clinical and legal). From a regulatory perspective, despite the perplexity, approval of DSSs by the US Food and Drug Administration and notified bodies within the European Union is happening, notwithstanding the ambiguous working mechanisms. Precedent and parallels to this approach are found in pharmacology: many safe and effective approved drugs have unknown mechanisms of action.38

The limit of human cognitive capacity constrains the realization of precision medicine. However, the combination of RLHC and AI to produce DSSs represents a profound opportunity to make precision medicine a reality. DSSs will form part of the future infrastructure and workflow of oncology and will compare the personalized probable outcomes (toxicity, tumor control, quality of life, cost effectiveness) of various care pathway decisions to ensure optimal efficacy and economy. DSSs will strategically and tactically aid all stakeholders.

Abbreviations: CNN, convolutional neural network; DCA, decision curve analysis; DSS, decision support system; HPV, human papillomavirus; mpMRI, multiparametric magnetic resonance imaging; NSCLC, non-small-cell lung cancer; PIRADS, Prostate Imaging Reporting and Data System; ROC, receiver operating characteristic; SRS, stereotactic radiosurgery.
Established liked versus disliked brands: Brain activity, implicit associations and explicit responses
Abstract
Consumers' attitudes towards established brands were tested using implicit and explicit measures. In particular, late positive potential (LPP) effects were assessed as an implicit neurophysiological measure of motivational significance, and the Implicit Association Test (IAT) was used as an implicit behavioural measure of valence-related aspects (affective content) of brand attitude. We constructed individualised stimulus lists of liked and disliked brands from participants' subjective pre-assessment. Participants then re-rated these visually presented brands whilst brain potential changes were recorded via electroencephalography (EEG). First, self-report measures during the test confirmed pre-assessed attitudes, underlining consistent explicit rating performance. Second, liked brands elicited significantly more positive-going waveforms (LPPs) than disliked brands over right parietal cortical areas, starting at about 800 ms post stimulus onset (reaching statistical significance at around 1,000 ms) and lasting until the end of the recording epoch (2,000 ms). In accordance with the literature, this finding is interpreted as reflecting positive affect-related motivational aspects of liked brands. Finally, the IAT revealed that both liked and disliked brands are indeed associated with affect-related valence. The increased level of motivation associated with liked brands is interpreted as potentially reflecting increased purchasing intention, but this is of course only speculation at this stage.
ABOUT THE AUTHORS
Shannon S. Bosshard completed this paper as part of his PhD under the supervision of Peter Walla, an expert in neurobiology (with a focus on non-conscious brain processes and human behaviour). Shannon currently studies at the University of Newcastle, Australia. He is interested in consumer behaviour and, more specifically, the role that non-conscious processes play in consumer decisions. Peter is a professor of Psychology at the Webster Vienna Private University and head of its Psychology Department, while also running the CanBeLab (Cognitive and Affective Neuroscience & Behavior). He is a conjoint professor at Newcastle University and a senior research fellow at the University of Vienna. Besides their purely academic efforts, they also offer neuroconsulting services, which are highly appreciated by various industries.
PUBLIC INTEREST STATEMENT
We are often confronted with well-established brands, some liked others disliked, a result of individual attitude. Traditional market research takes explicit responses (conscious and thoughtful) to measure brand attitude, but recent empirical evidence highlights the fact that implicit (rather unconscious) responses often don't match with conscious decisions.
We compare three different kinds of responses to brand name presentations, two unconscious and one conscious. We found that unconscious measures (brain activity and a reaction time-based measure, the Implicit Association Test) match conscious responses. It is concluded that established like and dislike are indeed established on various levels of information processing in the brain. Future studies will test whether attitude changes can vary as a function of processing level, which is of great interest to marketers and advertisers. The brain knows more than it admits to consciousness, and getting access to unconscious knowledge increases our understanding of human behaviour.
Background
Every day we are presented with stimuli that require evaluation. Until recent years, the majority of attitude research was conducted within traditional social psychological studies. However, as competition between businesses grew and product differentiation became a necessity, emphasis was placed on investigating attitudes within consumer contexts. When we make consumer-based decisions, our attitudes towards a brand play a major role in whether we make a purchase or not. As a result, attitudes have recently received a large amount of interest within the field of consumer neuroscience, a field that has progressively integrated novel methods of assessing attitudes in various consumer contexts (Morin, 2011).
Whether a company is trying to introduce a new brand or promote an existing one, it faces the question of how to assess consumers' attitudes, especially as a consequence of utilising marketing strategies to modify attitudes. Current marketing literature refers to brand attachment when attempting to identify consumers' attitudes towards a brand. Brand attachment refers to the strength of the bond between the consumer and the specific brand/product (Park, MacInnis, Priester, Eisingerich, & Iacobucci, 2010). The strength of this bond is said to act as a good indicator of the brand's profitability and the customer's perceived value of the brand (Thomson, MacInnis, & Park, 2005).
Because brands themselves are considered multidimensional concepts (Aaker, 1997), it is crucial to take a multidimensional approach and use as many measures as possible to quantify the various aspects of brand attitude. This approach complements traditional approaches that rely on surveys and other methodologies requiring explicit responses only. The most familiar measures of attitudes are those traditionally used within marketing studies. Generally referred to as traditional, or explicit, measures, these provide insight into explicit attitudes: deliberate and contemplative evaluations formulated through reasoning (Gawronski & Bodenhausen, 2006). The act of reasoning has the potential to result in a form of cognitive pollution, the process whereby an explicit response becomes polluted as a result of conscious evaluation of a stimulus (Walla, Brenner, & Koller, 2011; Walla & Panksepp, 2013). To overcome the effects of cognitive pollution, the use of implicit measures of attitude is suggested, as these instead measure implicit attitudes. In contrast to explicit attitudes, implicit attitudes are associations that are automatically activated in the presence of relevant stimuli without any conscious awareness of evaluation (Cunningham, Raye, & Johnson, 2004).
The lack of acknowledgement of implicit factors has consistently produced discrepant findings (for review, see De Houwer, Thomas, & Baeyens, 2001). Various recent cases demonstrate discrepancies between explicit and implicit measures (Geiser & Walla, 2011; Grahl, Greiner, & Walla, 2012; Walla, Rosser, Scharfenberger, Duregger, & Bosshard, 2013), and as a result there has been a recent turn towards implicit measures of attitudes, which provide insight into non-conscious affective processing whilst also giving researchers and practitioners a more complete picture of brand attitude. For instance, Geiser and Walla (2011) showed that virtually walking through urban environments can result in different effects depending on whether explicit or implicit measures are used. Dunning, Auriemmo, Castille, and Hajcak (2010) found a non-linear relationship between the intensity of angry faces and non-conscious, physiological measures: although participants explicitly stated that images of angry faces were increasingly angry, implicit responses (startle amplitude) were only exhibited when the faces presented were maximally angry. Similarly, Grahl et al. (2012) reported that even specific bottle shapes can elicit a non-conscious affective change whilst explicit ratings remain constant. When implicit and explicit measures match, the complete picture represents strong assurance; when they do not, there is reason to suggest that the discrepancy reflects differences between conscious and non-conscious processing. Such differences could be useful in helping to shape products and/or marketing strategies.
More recent research presented by Calvert and Brammer (2012) suggests that attitudes are in many ways driven by non-conscious processes, so more comprehensive measures are needed. In contrast to explicit attitudes, implicit attitudes are evaluative associations automatically activated in the presence of a relevant stimulus, regardless of conscious intention to evaluate (Cunningham, Espinet, DeYoung, & Zelazo, 2005). This means that both positive and negative evaluations can occur without conscious awareness (Devine, 1989). The automatic nature of implicit evaluations reinforces their conceptualisation as non-conscious processes (Dijksterhuis, 2004). Furthermore, implicit attitudes have been shown to be considerably robust (Petty, Tormala, Briñol, & Jarvis, 2006) and better predictors of spontaneous behaviour (Gawronski & Bodenhausen, 2012). With regard to spontaneous behaviour, Wilson et al. (1993) showed that when choosing one of two posters, participants who were asked to provide reasoning for their decisions not only showed different preferences, but also reported being less satisfied with their selection three weeks after the study. Such findings reiterate the implication of cognitive pollution in consumer decision-making and the importance of including implicit approaches in consumer research.
Implicit measurements
Of the behavioural (non-physiological) implicit measures, the Implicit Association Test (IAT; see Greenwald, McGhee, & Schwartz, 1998) is arguably the most popular and effective response latency-based implicit measure. The IAT has been used primarily as a tool within social psychology to determine implicit attitudes and stereotypes regarding social constructs including race (ecomorphological group) and gender (Banaji & Hardin, 1996; Dovidio, Kawakami, & Gaertner, 2002; Fazio, Jackson, Dunton, & Williams, 1995; Greenwald & Farnham, 2000; Greenwald et al., 1998, 2002). In recent times, however, the use of the IAT has extended into fields including marketing research (Brunel, Tietje, & Greenwald, 2004; Maison, Greenwald, & Bruin, 2001). Nevertheless, the IAT has met with a number of criticisms regarding its legitimacy as a reliable and valid index of implicit attitudes (De Houwer, 2006; De Houwer, Beckers, & Moors, 2007; Fiedler, Messner, & Bluemke, 2006; Hofmann, Gawronski, Gschwendner, Le, & Schmitt, 2005). According to Rothermund and Wentura (2004), rather than measuring implicit associations, the IAT may instead indicate differences in salience between the two groups of target stimuli. Similarly, Mitchell (2004) found that when completing the IAT, participants sort the stimuli into two categories: one that is accepted and another that is rejected. From these findings, it is possible that the IAT does not measure attitudinal aspects of a stimulus, but instead reflects the means by which participants have sorted the stimuli.

Electroencephalography (EEG) has been demonstrated to be a useful physiological technique for obtaining implicit information through a number of approaches; for example, non-conscious verbal memory traces have been shown (e.g. Rugg et al., 1998). Although a limited number of papers have investigated attitudes using EEG, even fewer of these are related to consumer neuroscience (for review, see Wang & Minor, 2008). Of the few papers that investigate attitudes using EEG within consumer contexts, many propose that EEG can differentiate between brand-related stimuli containing either positive or negative valence. Handy, Smilek, Geiger, Liu, and Schooler (2010) found that when participants rated unfamiliar logos as positive, these stimuli elicited more activity than those rated as negative across frontal and parietal regions as late as 600 ms. Further evidence of EEG as a suitable means of determining differences between positive and negative stimuli within marketing contexts was put forth by Vecchiato et al. (2010), who investigated brain activity in relation to TV commercials rather than logos. Their research revealed that TV commercials rated as pleasant resulted in increased levels of activity compared with those rated as unpleasant (Vecchiato et al., 2010); again, frontal and parietal areas were largely involved in the processing of the commercials. Although the literature is scarce, it is clear that EEG reveals some insight into an individual's attitudes and motivation.
Through the analysis of asymmetrical activity across the prefrontal cortex, Davidson, Schwartz, Saron, Bennett, and Coleman (1979) suggested that greater activity across the left frontal hemisphere is associated with positive emotions, whereas greater activity across the right frontal hemisphere is associated with more negative emotions. Since this report, motivational components have also been identified, with relatively increased left and right activity being associated with approach and avoidance systems, respectively (Harmon-Jones, 2004). The asymmetry model has recently proved informative in numerous consumer contexts (e.g. Brown, Randolph, & Burkhalter, 2012; Ohme, Reykowska, Wiener, & Choromanska, 2010; Ohme et al., 2010; Ravaja, Somervuori, & Salminen, 2013; Solnais, Andreu, Sánchez-Fernández, & Andréu-Abela, 2013). For instance, Ravaja et al. revealed that asymmetry over the prefrontal cortex predicts purchase decisions when brand and price are varied, with greater left frontal activation indicating greater intent to engage in a purchase. In addition, Brown et al. found that although participants explicitly stated a preference for one of several beverages, brain activity showed no asymmetry effect across left frontal electrode sites, suggesting the brands were processed as neutral; participants who processed the brands as neutral were more likely to willingly switch from their explicitly stated brand preference when faced with a cheaper alternative.
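To make the asymmetry index concrete, a minimal sketch follows; it computes ln(right alpha power) − ln(left alpha power), a common formulation in this literature. The channel data, sampling rate, and the simple periodogram-based power estimate are hypothetical simplifications; real pipelines would use artifact-cleaned epochs and Welch-type spectral estimation. Because alpha power is inversely related to cortical activity, positive values indicate relatively greater left frontal activity (approach).

```python
import numpy as np

def alpha_asymmetry(signal_left, signal_right, sfreq, band=(8.0, 13.0)):
    """Asymmetry index from two frontal channels (e.g. F3/F4), 1-D arrays."""
    def band_power(x):
        freqs = np.fft.rfftfreq(x.size, d=1.0 / sfreq)
        psd = np.abs(np.fft.rfft(x)) ** 2          # simple periodogram
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return psd[mask].mean()
    return np.log(band_power(signal_right)) - np.log(band_power(signal_left))

rng = np.random.default_rng(1)
f3, f4 = rng.standard_normal(2560), rng.standard_normal(2560)  # 10 s at 256 Hz
print(alpha_asymmetry(f3, f4, sfreq=256))
```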
From these findings, it can be inferred that EEG may allow us to identify a link between brain activity and consumer brand attitude. Of most interest for the present study, the most empirically valid EEG approach as an index of motivation and affect has been a distinct event-related potential (ERP) component, the late positive potential (LPP). It has not only been implemented in an expansive volume of research, but has also recently received psychometric endorsement, demonstrating good to excellent reliability as a measure of emotion/affective processing (see Moran, Jendrusina, & Moser, 2013). According to the literature, stimuli that are emotionally arousing produce an enhanced LPP compared with neutral stimuli (Cacioppo, Crites, Berntson, & Coles, 1993; Cacioppo, Petty, Losch, & Crites, 1994; Cuthbert, Schupp, Bradley, Birbaumer, & Lang, 2000), and those with greater motivational significance produce larger LPPs (Lang, Simons, & Balaban, 1997). An overall greater LPP sensitivity has been found in the right hemisphere during evaluative tasks.
The present study
The rationale for the present study was to use the IAT to test whether explicitly liked brands are indeed associated with positive affect and disliked brands with negative affect. In addition, via EEG recordings, we aimed to test whether liked and disliked brands are further associated with different motivational aspects. The present study also extends the study by Walla et al. (2011) in that it adds further implicit measures (specifically, EEG and the IAT) of brand attitude; that study also investigated brand attitude, but focused on startle reflex modulation, heart rate and skin conductance. No studies addressing the sensitivity of ERPs as a measure of brand attitude were reported in that paper, and to our knowledge none exist in the current literature. Furthermore, in contrast to much of the existing literature, the current study focuses on individuals' perceptions of highly familiar brands. We used an online survey to produce individual lists of liked and disliked brands and then invited eligible participants for brain potential recordings and IAT measures. We first hypothesised that self-reported measures during physiological recording would strongly reflect explicit pre-assessment ratings. Following the existing literature, we expected the LPP component to vary as a function of brand attitude, allowing us to make inferences about affect-based motivational aspects. Finally, we expected IAT data to also support differences between liked and disliked brands and thus to demonstrate the IAT's reliability as a measure of brand attitude.
Participants
Initial recruitment for the study involved 27 participants, 3 of whom were excluded following pre-assessment of brand attitudes. The mean age of the remaining 24 participants (12 females) was 23.58 years (SD = 2.39). All participants were tertiary education students recruited by word of mouth; they volunteered and gave written informed consent. Participants were right-handed, had normal or corrected-to-normal vision, were free of medications affecting the central nervous system and had no history of neuropathology. They were also asked not to drink any alcohol or coffee and not to smoke for at least 24 h before the experiment. Participants were financially reimbursed for their time and travel. The study was approved by the Newcastle University Ethics Committee.
Stimuli
The initial stimulus list for pre-assessment comprised 300 subjectively chosen common brand names familiar to people from Australia (see Appendix A for the list of presented brand names). Using an online survey, participants provided a subjective rating of like or dislike for each brand name on a 21-point Likert scale ranging from −10 (Strong Dislike) to +10 (Strong Like). Upon initiation of the experiment, we created individualised stimulus lists using the subjective ratings obtained from the online survey. Each stimulus list comprised 200 brand names, including the participant's 30 most liked brand names, 30 most disliked brand names, 60 neutral brand names and 80 non-target (filler) brand names. This yielded 120 target brand names across three types: positive, negative and neutral. Brand names were presented in capital white letters, in Tahoma font, on a black background (no logos were presented). Within the frame of this paper, only measures related to liked and disliked brands are analysed further.
Individual pre-assessment of brand attitudes
Participants subjectively rated 300 brand names using an online survey (via www.limesurvey.com) prior to entering the lab. We required participants to read each brand name and indicate their attitude towards it using a mouse/track pad on the provided slider. Participants were explicitly instructed not to adjust the slider if they were unfamiliar with a particular brand; rating a brand as neutral required the participant to manually click "0". This phase of the experiment occurred at a time of the participant's choosing, with the choice of computer also left to their discretion. The survey took on average 15-20 min to complete. Participants who demonstrated adequate familiarity and attitude scope were eligible for the experimental phase of the study. That is, participants who were either unfamiliar with the majority of the brands or did not have a large spread of attitudes (ranging from strongly liked to strongly disliked) were excluded, because no stimulus list with discernible positive and negative target items could be constructed for them. Three participants were unable to participate further due to such inadequate brand pre-assessment.
Lab experiment
Following completion of pre-assessment, we invited eligible participants individually into the lab. Participants were encouraged to attend within three days of having completed the online survey. During their visit, we collected all explicit and implicit measures of attitudes towards brand names: explicit measurement involved subjective self-report, whilst implicit measures were collected using EEG and the IAT. Upon entering the lab, participants were seated comfortably in front of a 32-inch LED television (screen resolution of 1,024 × 768 pixels). We connected participants to a BioSemi ActiveTwo EEG system (BioSemi, Amsterdam, the Netherlands) and measured potential changes using 64 cranial electrodes, as well as 8 external reference electrodes placed lateral-ocularly, supraocularly, infraocularly and on the mastoids.
We used the computer program Presentation (NeuroBehavioral Systems, Albany, United States) to visually present the appropriate instructions and individualised stimulus lists. Stimulus presentation and neurophysiological signal recording were controlled from a separate room. We commenced testing with the participant alone in a dimly lit room to ensure adequate focus on the stimuli. A white fixation cross appeared on a black background for 500 ms, followed by a brand name for 5 s. Whilst the brand was on screen, participants rated it from 1 (Strong Dislike) to 9 (Strong Like) using a standard keyboard. Brain potential changes and self-reports were collected for the 120 target brands. To reduce fatigue effects, participants were provided a break halfway through this stage, which overall took approximately 30 min to complete. Afterwards, participants had the EEG recording cap removed and were asked to complete five rounds of the IAT (see Figure 1 for the modified IAT).
Self-report and Implicit Association Test
For self-report data, mean ratings of liked and disliked brands were compared using paired-sample t-tests at both the pre- and post-assessment phases. As for the IAT, we used a modified version of the original test (Greenwald et al., 1998), which consisted of 5 separate discrimination tasks, each with 30 visual presentations to be classified as either a target or a non-target stimulus. Although the structure and administration of the IAT remained identical to the original, rather than using stimuli from social psychology (e.g. faces of different races; Greenwald et al., 1998), we used brand names. In task 1 (initial target concept), participants discriminated between a non-target brand (previously rated as neutral) and a target brand (an individually rated liked or disliked brand), pressing the "A" key for the target brand and the "L" key for the non-target brand. In task 2 (associated attribute), participants were visually presented with valenced words and asked to press the "A" key for pleasant words (e.g. beautiful, healthy, happy and perfect) and the "L" key for unpleasant words (e.g. frighten, angry, sad and worthless). In task 3 (initial combined task), tasks 1 and 2 were combined: participants pressed the "A" key for the target brand or pleasant words and the "L" key for an unpleasant word or the non-target brand. Task 4 (reversed target concept) was similar to task 1, except that participants pressed the "A" key for the non-target brand and the "L" key for the target brand. Finally, task 5 (reversed combined task) combined tasks 2 and 4: participants pressed the "A" key for the non-target brand or pleasant words and the "L" key for an unpleasant word or the target brand. In accordance with the existing literature (De Houwer et al., 2001), a comparative analysis was made between reaction times during task 3 and task 5 (a minimal analysis sketch follows Figure 1). In each block, stimuli were presented for 300 ms, but participants were given 1,500 ms to respond on each trial. Between each stimulus, a fixation cross was presented for 300 ms, followed by a 700 ms gap before the next stimulus. Participants completed one IAT that included a liked brand as the target brand and a second IAT that incorporated a disliked brand as the target brand. For a pictorial explanation of how the IAT was implemented, see Figure 1.
Figure 1. Modified version of the original IAT.
Notes: Filled black circles on the left of the stimulus indicate left button presses and vice versa. Task 3 = congruent condition; Task 5 = incongruent condition. Source: Adapted from Greenwald et al. (1998).
Event-related potentials
We recorded EEG at a rate of 2,048 samples/s using a 64-channel BioSemi ActiveTwo system and ActiView software (BioSemi, Amsterdam, the Netherlands). Datasets were processed individually using EEG-Display (version 6.3.13; Fulham, Newcastle, Australia). During processing, we reduced the sampling rate to 256 samples/s and applied a band-pass filter of 0.1-30 Hz. Blink artefacts were corrected by referencing to the supraocular external electrode (excluding two sets referenced to Fpz due to unclean external signals). To eliminate noise generated by eye movements, we applied horizontal, vertical and radial eye-movement corrections (see Croft & Barry, 1999). The data were coded by brand type (i.e. liked and disliked). We established epochs from −100 ms prior to stimulus onset (the baseline) to 2,000 ms following stimulus onset. The resultant epochs were baseline-corrected and averaged across single trials for each condition. The individual datasets were then re-referenced to a mastoid electrode. Grand-averaged ERPs were generated to display differences in brain activity and were then analysed in 200 ms blocks (between 200 and 1,800 ms) using t-tests to compare mean activity during these periods (200-400 ms, 400-600 ms, 600-800 ms, etc.).
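As a hedged illustration of this pipeline in an open-source toolchain (the original processing used EEG-Display), the following MNE-Python sketch covers the downsampling, filtering, epoching, baseline correction, averaging, and mastoid re-referencing steps; the file name, trigger codes, and mastoid channel label are assumptions:

```python
# Sketch of the ERP pipeline in MNE-Python; file name, event codes, and the
# mastoid channel label are illustrative assumptions.
import mne

raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)  # BioSemi .bdf
raw.resample(256)                      # 2,048 -> 256 samples/s
raw.filter(l_freq=0.1, h_freq=30.0)    # 0.1-30 Hz band-pass

events = mne.find_events(raw)          # assumes a stim channel with triggers
event_id = {"liked": 1, "disliked": 2}  # hypothetical trigger codes

epochs = mne.Epochs(raw, events, event_id,
                    tmin=-0.1, tmax=2.0,       # -100 to 2,000 ms epochs
                    baseline=(None, 0), preload=True)
epochs.set_eeg_reference(["M1"])       # re-reference to a mastoid electrode

evoked_liked = epochs["liked"].average()
evoked_disliked = epochs["disliked"].average()
```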
Self-report at pre-testing
To analyse the self-report data, the responses towards each participant's most liked and most disliked brands were collated. We then conducted a paired t-test on these two conditions: on average, the mean rating of liked brands (the 30 most liked) was 9.44 (SD = 2.49) and the mean rating of disliked brands (the 30 least liked) was −4.56 (SD = 5.41; see Figure 2). As expected, this effect was highly significant (t = 25.765, df = 118, p < 0.001, two-tailed; d = 3.54). Note that ratings at this phase were made on a scale from −10 (maximum disliked) to +10 (maximum liked).
Self-report during the lab experiment
To assess self-report responses towards liked and disliked brands during the lab experiment, we collated all responses towards participants' most liked and most disliked brands. We then conducted a paired t-test to assess the sensitivity of self-report to pre-assessed explicit brand attitudes. Consistent with predictions, self-report measures also differed significantly by brand type during physiological recording (t = 21.721, df = 118, p < 0.001, two-tailed; d = 3.03). As expected, liked brands (M = 7.39, SD = 0.98) were rated significantly higher than disliked brands (M = 3.39, SD = 2.03; see Figure 3).
Event-related potentials
We produced averaged ERP figures to broadly assess effects of brand type over the entire epoch of interest. Visual inspection of overlaid ERPs revealed the strongest LPP differences between liked and disliked brands at frontal site AF7 and parietal sites P7 and P8 (see Figure 4). We then conducted paired t-tests on all above-mentioned electrode sites to compare brand effects.
Unexpectedly, we saw no significant effect at left frontal electrode site AF7 for the entire duration of the epoch, although a pattern did emerge, with the greatest difference at about 1,400 ms (t = −1.773; df = 23; p = 0.089; two-tailed; d = 0.51). In contrast, at parietal site P8, liked brands evoked more positive activity throughout the majority of the ERP. This effect began at around 1,000 ms (t = −1.578; df = 23; p = 0.019; two-tailed; d = 0.59) and remained until 1,800 ms, reaching greatest significance at around 1,400 ms (t = 3.110; df = 23; p = 0.005; two-tailed; d = 0.66). Analysis of left parietal site P7 revealed no significant brand effect, with the greatest difference at around 1,200 ms (t = −1.421; df = 23; p = 0.169; two-tailed; d = 0.26). Figures 4 and 5 show the dominant LPP effect over the right parietal area in relation to liked brands.
Implicit Association Test
During analysis of the IAT responses, we compiled all participants' responses and computed the mean reaction time for each phase. We then removed all responses that were provided either too quickly or too slowly: all responses falling more than three standard deviations (calculated in milliseconds) from the overall mean reaction time of each phase were removed, as were all incorrect responses. We then analysed the data regarding participants' most liked brands (see Figure 6). We conducted a paired t-test and, consistent with predictions, found a significant difference in reaction time between the congruent condition (M = 607.47 ms, SD = 117.95) and the incongruent condition (M = 677.70 ms, SD = 186.96) (t = −6.457; df = 344; p < 0.001; two-tailed; d = 0.46). We then conducted the same analysis of participants' responses towards disliked brands (see Figure 6) and again, as expected, found a significant difference between the congruent and incongruent conditions.
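A minimal sketch of this trimming-and-comparison procedure, on simulated placeholder trials (note that after per-phase trimming the trial counts differ, so the sketch uses an independent-samples test at the trial level rather than the participant-level paired test reported above):

```python
# Simulated placeholder trials; not the study's data.
import numpy as np
from scipy import stats

def trim(rts, correct):
    """Keep correct trials whose RT lies within 3 SDs of the phase mean."""
    rts, correct = np.asarray(rts, float), np.asarray(correct, bool)
    return rts[correct & (np.abs(rts - rts.mean()) <= 3 * rts.std())]

def cohens_d(x, y):
    """Cohen's d from the pooled standard deviation of two RT samples."""
    pooled = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
    return (y.mean() - x.mean()) / pooled

rng = np.random.default_rng(1)
cong = trim(rng.normal(607, 118, 300), rng.random(300) > 0.05)
incong = trim(rng.normal(678, 187, 300), rng.random(300) > 0.05)

t, p = stats.ttest_ind(cong, incong)  # trial counts differ after trimming
print(f"d = {cohens_d(cong, incong):.2f}, t = {t:.2f}, p = {p:.2g}")
```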
Figure 4. Grand averaged ERPs related to disliked and liked brands.
Note: At P8, liked brands elicited a more positive-going potential compared to disliked brands.
Discussion
The findings of our study are twofold. Firstly, through the observation of the self-report ratings as well as the late onset of the LPP, we provide evidence that brand like and dislike are indeed associated with deep positive and negative affect. Secondly, we demonstrate that liked brands are implicitly associated with increased motivational significance compared to disliked brands. Although purely speculative at this stage, it is reasonable to believe that this reflects increased purchasing intentions related to liked brands.
Self-report and IAT
Congruent with our predictions, self-reported measures during the lab experiment strongly reflected those obtained during pre-assessment, even though the contexts in which the two sets of data were collected varied considerably. This indicates the consistent nature of explicitly rated brand like and dislike within the frame of our study. Prior to entering the lab, participants rated brand names using a 21-point scale under no time constraints, whereas during neurophysiological recording they were only allowed a few seconds to respond using a 9-point scale. Cunningham and Zelazo (2007) state that explicit attitudes are ultimately influenced by two competing motivational drives: to reduce error and to reduce cognitive demand. As individuals take more time to make decisions, their accuracy is said to increase, but so does the cognitive load. In contrast, when under time constraints, participants reduce cognitive load, but the chance of error increases. With regard to the current study, participants took more time to respond in the pre-assessment phase, so their responses were presumably more accurate and required an increased cognitive load. In contrast, during the physiological recording phase, where participants only had a limited time to respond, the cognitive load was lower, but room for error increased. Our results may indicate a trade-off between these two motivations, which may have contributed to the congruent ratings. Such considerations are important when comparing explicit attitudes obtained over different contexts (Stafleu, de Graaf, van Staveren, & de Jong, 1994). Most importantly, however, we could confirm that explicit rating performance revealed the same results when compared across two different measurement times.
In principle, the IAT was developed as a measure of a person's automatic, and thus rather implicit, association between valence-related information and stored mental representations of any content or concept (Greenwald et al., 1998). In our study, the IAT was used to test whether implicit associations exist between positive valence and liked brands and between negative valence and disliked brands. The results strongly support this hypothesis. Given that like and dislike in our study are reflective of brand attitude, the current research provides further support that the IAT is a suitable means of distinguishing between positive and negative attitudes at a rather non-conscious level, consistent with previous research (e.g. Brunel et al., 2004). The results show that reaction time was significantly reduced when participants responded to a liked brand that preceded a pleasant word, and likewise when a disliked brand preceded an unpleasant word (congruent condition). In contrast, reaction time significantly increased when participants responded to liked brands that preceded an unpleasant word, and to disliked brands that preceded a pleasant word (incongruent condition), indicating a lack of association between those two types of information. However, it should be noted that our data do not support (or refute) the assumption that the IAT directly measures implicit attitudes, even though we strongly believe that this is the case.
As previously mentioned, the IAT has been met with criticisms regarding its ability to measure implicit attitudes (see De Houwer, 2006) and, although it may be useful as an implicit measure within consumer research, it should be used cautiously. According to Boysen, Vogel, and Madon (2006) people may be able to influence their responses on the IAT and, as a result, alter the outcome of this supposed automatic, implicit task. Therefore, the authors of the current paper suggest that the IAT be used in conjunction with other implicit measures. Further research is needed to define the value of the IAT.
Event-related potentials
Within social psychological studies, negative and positive stimuli are considered to be more inherently affective (e.g. out-group prejudices) and often reflect evolution-based mechanisms (e.g. detecting threats; Brewer, 1999), both of which are associated with increased motivational levels. In our study, we found evidence that liked brands elicit significantly greater levels of motivation compared to disliked brands, which is notable because brand name attitudes are entirely learned and highly semantic (Stuart, Shimp, & Engle, 2001). This is supported by findings that brand attitudes can be derived and shaped without the individual having any direct experience with the brand (Ahluwalia, Burnkrant, & Unnava, 2000; Sweldens, Van Osselaer, & Janiszewski, 2010). This might be a reason for the discrepancy in level of motivation.
Although the lateralised dominance of an enlarged LPP for liked brands over the right hemisphere contrasts with numerous studies on social attitudes suggesting that the left hemisphere displays a greater LPP for positive attitudes, other research has demonstrated that the right hemisphere is generally more sensitive to LPP effects (Cacioppo, Crites, & Gardner, 1996). There is considerable consensus that this right-hemisphere bias in evaluative processing is modulated by the motivational significance of the stimulus (Cacioppo et al., 1994; Cunningham et al., 2005; Cuthbert et al., 2000; Gable & Harmon-Jones, 2013). This understanding of the LPP is very much in line with our own view, and we interpret our findings to mean that liked brands, although generating greater activity implicitly, may not have been perceived as more affective than disliked brands; instead, liked brands may have been more motivationally arousing. More research into these findings is necessary before clearer conclusions can be drawn.
The considerably late onset of the LPP in our study further supports the suggestion that the processing of brands requires a large amount of cognitive and affective processing. A number of studies have shown significant motivational discrepancies using the LPP as early as 300-400 ms (Olofsson, Nordin, Sequeira, & Polich, 2008; Pastor et al., 2008). The LPP onset of roughly 1,000 ms in our study suggests that considerably more processing occurred before the stimuli were distinguished as either liked or disliked (see Falkenstein, Hohnsbein, & Hoormann, 1994). This late onset could also reflect the use of well-known brands rather than fictitious ones (as seen in Handy et al., 2010).
Finally, it should be mentioned that our data regarding frontal sites, although only a trend and not significant, support the existing literature (Davidson et al., 1979; Harmon-Jones, 2004) showing that liked or positive stimuli evoke greater potentials than disliked or negative stimuli across the left prefrontal cortex. From this finding, we can infer that, like other affective stimuli, brands that are liked or more motivationally arousing produce increased potentials across the left prefrontal cortex more so than disliked or aversive brands, and that this greater level of activity may give an indication of a participant's purchase intention. Although this is only speculation at this stage, it helps form new hypotheses for future studies with a strong applied aspect.
Although the LPP has been explored in consumer contexts, to our knowledge previous studies have used only novel stimuli (Handy et al., 2010). Our study increased external validity by assessing brand attitudes previously formed in everyday life. The pre-assessment phase further increased the utility of this approach by ensuring the strength of subjective participant attitudes. We acknowledge that experimental control is important and more easily obtained using unfamiliar stimuli. However, attitude formation and change does not occur in a vacuum, and translatability of research is of particular importance in consumer neuroscience. We therefore recommend further use of established brand stimuli such as those used in the present study. To further expand on the use of existing brands, we also suggest assessment of stimuli such as familiar brand logos and products. These have been shown to strongly activate neural systems of familiarity in functional magnetic resonance imaging paradigms (Schaefer, Berens, Heinze, & Rotte, 2006; Tusche, Bode, & Haynes, 2010) and may also demonstrate effects unique from brand names. Moreover, we emphasise the requirement of ensuring appropriate procedures during pre-assessment, such as controlling for factors that influence evaluative error and cognitive demand.
The IAT is a cognitive index of implicit attitudes operating at a higher order of processing than ERP measures, to the point of being susceptible to cognitive bias (De Houwer, 2006). Given its popularity for attitude assessment (De Houwer, 2006; Gattol, Sääksjärvi, & Carbon, 2011; Hofmann et al., 2005), it may prove useful to consolidate this traditional response-latency measure with contemporary ERP techniques for a broader scope of attitude measurement.
Conclusions
In the present study, self-report, ERP measures and the IAT were all demonstrated to be sensitive to pre-assessed brand attitudes. The effects observed using ERP specifically affirm higher-order motivational processes as potentially underlying contributors to our explicit results. A larger LPP effect over the right parietal cortex for liked brands indicated greater motivational significance for liked compared to disliked brands. The IAT results suggest that brand attitude is indeed associated with deep affective content. In summary, even though both liked and disliked brands are associated with affective content, liked brands elicited significantly higher levels of motivation, which might be reflective of increased purchasing intentions related to liked brands.
Further research expounding the different mechanisms involved in evaluative processes should likewise prove beneficial for understanding attitudes generally and in applied contexts. Broadly, the implications of our own, and prospective related research may also provide clinical insight into severe consumer behaviours such as gambling and substance abuse and dependence (Foxall, 2008).
In conclusion, the present study demonstrates that as the field of behavioural sciences progresses, there is a dire need for the field of marketing research to keep up, given the constant reports of discrepancies between traditional self-report data and newer, implicit approaches such as those employed here.
"year": 2016,
"sha1": "24fcf65dec7a43a4fef7f90e686eec0d3cf1c3be",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/23311908.2016.1176691",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "d5a39e8c18436d4a06f5df6e2c9f6c5ecf11063b",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Psychology"
]
} |
q-Partitioning Valuations: Exploring the Space Between Subadditive and Fractionally Subadditive Valuations
For a set $M$ of $m$ elements, we define a decreasing chain of classes of normalized monotone-increasing valuation functions from $2^M$ to $\mathbb{R}_{\geq 0}$, parameterized by an integer $q \in [2,m]$. For a given $q$, we refer to the class as $q$-partitioning. A valuation function is subadditive if and only if it is $2$-partitioning, and fractionally subadditive if and only if it is $m$-partitioning. Thus, our chain establishes an interpolation between subadditive and fractionally subadditive valuations. We show that this interpolation is smooth, interpretable, and non-trivial. We interpolate prior results that separate subadditive and fractionally subadditive for all $q \in \{2,\ldots, m\}$. Two highlights are the following: (i) an $\Omega\left(\frac{\log \log q}{\log \log m}\right)$-competitive posted price mechanism for $q$-partitioning valuations. Note that this matches asymptotically the state-of-the-art for both subadditive ($q=2$) [DKL20] and fractionally subadditive ($q=m$) [FGL15]. (ii) Two upper-tail concentration inequalities on $1$-Lipschitz, $q$-partitioning valuations over independent items. One extends the state-of-the-art for $q=m$ to $q<m$, and the other extends the state-of-the-art for $q=2$ to $q>2$. Our concentration inequalities imply several corollaries that interpolate between subadditive and fractionally subadditive, for example: $\mathbb{E}[v(S)]\le (1 + 1/\log q)\text{Median}[v(S)] + O(\log q)$. To prove this, we develop a new isoperimetric inequality using Talagrand's method of control by $q$ points, which may be of independent interest. We also discuss other probabilistic inequalities and game-theoretic applications of $q$-partitioning valuations, and connections to subadditive MPH-$k$ valuations [EFNTW19].
Motivation
Functions of the form f : 2 M −→ R are a fundamental object of study in the fields of Algorithmic Game Theory and Combinatorial Optimization. For example, when M is a set of items in an auction, f (S) could indicate the value that an agent obtains from receiving the bundle S (see more about combinatorial auctions in [Nis+07,Chapter 11]). When M is a set of agents, f (S) could indicate the cost that agents S need to pay in order to purchase a given service together (see more about cost sharing in [Nis+07,Chapter 15]).
As set functions are motivated by real-world processes (auctions, cost sharing, and job scheduling, among others), the mathematical study of such functions usually assumes that they satisfy certain natural properties. Throughout the paper, we assume that all valuations satisfy the following two simple technical properties: they are monotone (f(S) ≤ f(T) whenever S ⊆ T) and normalized (f(∅) = 0). Economic considerations give rise to more complex conditions on set functions. For example, frequently imposed is the condition of diminishing marginal values, also known as submodularity. Another condition motivated by economics is complement-freeness in the values that an agent obtains from bundles of items, also known as subadditivity. Finally, one could be interested in the existence of prices which incentivize cooperation among agents when purchasing a given service; this turns out to be equivalent to the fractionally subadditive property (see Section 3.1). In this paper, we focus our attention on fractionally subadditive and subadditive set functions. Trivially, fractionally subadditive functions form a smaller class strictly contained in the class of subadditive functions. Something stronger turns out to be true: Bhawalkar and Roughgarden show the existence of subadditive functions which are very far from being fractionally subadditive in a precise quantitative sense [BR11]. This difference between fractionally subadditive and subadditive valuations is not purely theoretical and has important implications. For example, in the context of combinatorial auctions, there exists a posted price mechanism that gives a (1/2)-approximation to the optimal welfare when all players have fractionally subadditive valuations [FGL15], but the best known approximation ratio for subadditive valuations is Ω(1/log log m), where m is the number of items [DKL20] (moreover, the [FGL15] framework providing a (1/2)-approximation for XOS provably cannot guarantee more than a Θ(1/log m) fraction for subadditive, and the [DKL20] framework provably cannot guarantee more than a Θ(1/log log m) fraction). Similarly, in the context of concentration inequalities, a fractionally subadditive valuation v has E[v]-subgaussian lower tails (see [Von10, Corollary 3.2]), but such strong dimension-free concentration provably does not hold for subadditive valuations (see [Von10, Section 4]).
What if a set function is "somewhere in between being subadditive and being fractionally subadditive"? On the one hand, as it is not fractionally subadditive, one cannot use the strong guarantees of fractional subadditivity (such as in posted price mechanisms or subgaussian concentration) when analyzing it. On the other hand, as the set function could be significantly more structured than an arbitrary subadditive function, it is perhaps inefficient to simply use the much weaker properties guaranteed by subadditivity (especially, those that provably cannot be improved for all subadditive functions). In this paper, we construct a smooth interpolation between fractional subadditivity and subadditivity. Explicitly, we define a chain of function classes that starts with fractionally subadditive set functions and expands to subadditive set functions. Our goal is to understand how the behaviour of these function classes changes along the chain. We focus on several setups in which subadditive and fractionally subadditive valuations have received significant attention in the literature, and in which strong claims for fractionally subadditive valuations provably don't hold for all subadditive valuations.
Results Part I: Defining q-partitioning valuations
Our chain of classes is parametrized by a positive integer parameter q ranging between q = |M | (which corresponds to the fractionally subadditive case) and q = 2 (which corresponds to the subadditive case).
The number q corresponds to the complexity of fractional covers under which the valuation function is non-diminishing. We call the respective classes q-partitioning and the resulting interpolation the partitioning interpolation. We give a formal definition in Definition 3.0.1. We then establish that the partitioning interpolation satisfies several desirable properties: • Interpretability: In Section 3.1, we present an economic interpretation of q-partitioning via the core of a cost-sharing game à la the Bondareva-Shapley theorem, which characterizes fractionally subadditive valuations [Bon63; Sha67] (see also [Nis+07, Theorem 15.6]). In slightly more detail, say there is a service that can be acquired by a set S of players if they together pay c(S). One can then ask, for any subset T ⊆ [m], whether or not there exist non-negative prices {p_i}_{i∈T} such that: a) Σ_{i∈T} p_i = c(T) (the service is purchased for T) and b) for all S ⊆ T, Σ_{i∈S} p_i ≤ c(S) (no set S ⊆ T wishes to deviate and purchase the service just for themselves). The Bondareva-Shapley theorem, applied to monotone normalized cost functions, states that such prices exist for all T if and only if c(·) is fractionally subadditive.
Consider instead modifying the game so that players are grouped into q fully-cooperative cities (that is, each city always acts as a coherent unit, in the best interest of the entire city). One can then ask, for any subset T ⊆ [m] and any partitioning of T into q cities T_1, ..., T_q, whether there exist non-negative prices {p_i}_{i∈[q]} such that: a) Σ_{i∈[q]} p_i = c(T) (the service is purchased for T) and b) for all S ⊆ [q], Σ_{i∈S} p_i ≤ c(∪_{i∈S} T_i) (no set S of cities wishes to deviate and purchase the service just for themselves). Proposition 3.1.4 establishes that such prices exist for all T and all partitionings of T into at most q cities if and only if c(·) is q-partitioning. • Smoothness of the Interpolation: In Theorem 3.0.5, we show that our chain of classes is smooth in the sense that every q-partitioning valuation is almost (q+1)-partitioning. Formally, Theorem 3.0.5 establishes that the class of q-partitioning valuations is (1−1/q)-close to the class of (q+1)-partitioning valuations. We provide a formal definition of closeness in Definition 3.0.4, but note briefly here that it is the natural extension to q < m of closeness to XOS valuation functions from [BR11]. • Existence of Classes: In Proposition 3.0.3, we show that for each m = |M| and 2 ≤ q < m, there exist q-partitioning valuations over M that are not (q+1)-partitioning. In other words, none of the m − 1 classes "collapses" to a lower level.
Results Part II: Posted price mechanisms and concentration inequalities
Our main results apply the partitioning interpolation to two canonical problems where subadditive and fractionally subadditive valuations are "far apart." Our main results provide analyses that smoothly degrade from fractionally subadditive to subadditive as q decreases; this enables stronger guarantees for wide classes of structured subadditive functions which (provably) cannot be obtained for all subadditive functions.
Posted Price Mechanisms. Posted price mechanisms are a core object of study within Algorithmic Game Theory, including multi-dimensional mechanism design [Cha+10], single-dimensional mechanism design [Yan11; Ala+15], and the price of anarchy [FGL15; Düt+20]. Posted price mechanisms list a price p_i for each item i ∈ [m], then visit the bidders one at a time and offer them to purchase any remaining set S of items at price Σ_{i∈S} p_i (and these items become unavailable for all future bidders). Of course, strategic players will pick the remaining set S that maximizes v_i(S) − Σ_{i∈S} p_i.
Of key importance to several of these agendas is the following basic question: to what extent can posted price mechanisms optimize welfare in Bayesian settings? Specifically, assume that each bidder i's valuation function v_i(·) is drawn independently from a known distribution D_i over valuations in some class V. The optimal expected welfare is simply OPT(D) := E_{v←D}[max over partitions (S_1, ..., S_n) of [m] of Σ_{i∈[n]} v_i(S_i)]. When strategic players participate in a posted-price mechanism with prices p, some other partition of items is selected, guaranteeing some other expected welfare. What is the maximum number α(V) such that for all D = ×_i D_i supported on V, there exists a posted-price mechanism that results in expected welfare at least an α(V)-fraction of the optimal welfare? Besides being the main question of study in works such as [FGL15; Düt+20; DKL20], resolving this question has downstream implications for revenue maximization in multi-dimensional settings due to [CZ17].
For the class of fractionally subadditive valuations, [FGL15] establish a 1/2-approximation, which also implies a 1/log_2(m)-approximation for subadditive valuations. However, their techniques provably cannot yield stronger guarantees for subadditive valuations [BR11; Düt+20]. Recent breakthrough work of [DKL20] designs a new framework for subadditive valuations that yields an Ω(1/log_2 log_2(m))-approximation, but aspects of their framework also provably cannot provide stronger guarantees. In this sense, there is a strong separation between the state-of-the-art guarantees on posted price mechanisms for fractionally subadditive and subadditive valuations (and also, there is a permanent separation between what can be achieved within the aforementioned frameworks).
Main Result I: Our first main result provides an Ω(log log q / log log m)-competitive posted price mechanism when all distributions are supported on q-partitioning valuations. This is stated in Theorem 4.0.1.
Note that this guarantee matches both the constant-factor approximation in the fractionally subadditive case (setting q = m) and the Ω(1/log log m) factor in the subadditive case (setting q = 2), and interpolates between the two approximation factors when q is in between. In particular, note that this matches the state-of-the-art in both extremes, and matches the best guarantees achievable by the [DKL20] approach in both extremes.
Concentration Inequalities.
Consider a function f, and a set S selected by randomly including each item i independently (not necessarily with the same probability). It is often of interest to provide upper tail bounds on the distribution of f(S) compared to E[f(S)]. McDiarmid's inequality is one such example when f is 1-Lipschitz. It is further the case that when f(·) is subadditive or fractionally subadditive, even stronger upper tail bounds are possible [Von10].
For example, if f is both 1-Lipschitz and subadditive, Schechtman's inequality implies that the probability that f(S) exceeds twice its median plus x decays exponentially in x [Sch03]. (Schechtman's inequality is more general than this, but this is one common implication; see Eq. (2) for the general statement.) Importantly, Schechtman's inequality provably cannot "kick in" arbitrarily close to the median [Von10].
Main Result II: Theorem 5.0.2 improves Schechtman's inequality across the partitioning interpolation. In particular, our improvement implies that for all 1-Lipschitz and q-partitioning f, the probability that f([m]) exceeds (1 + 1/log_2(q)) times its median plus x decays exponentially in x. This is stated in Theorem 5.0.2, which makes use of a new isoperimetric inequality that may be of independent interest, stated in Theorem 5.1.1. (This result, and that of [Sch03], applies in a more general setting where there is a collection of independent random variables X_1, ..., X_m that parameterize a function f_X : 2^[m] → R which is subadditive for all X. Like [Von10], we provide proofs in the canonical setting referenced in the text for simplicity of exposition.)
Similarly, if f is both 1-Lipschitz and fractionally subadditive, [Von10] establishes that f is self-bounding, which implies E[f(S)]-subgaussian behaviour. (A random variable X is σ^2-subgaussian if its log-moment generating function ψ_X(λ) := log E[exp(λ(X − E[X]))] exists for all real λ and satisfies ψ_X(λ) ≤ λ^2 σ^2 / 2; it is well known that a σ^2-subgaussian X satisfies P[|X − E[X]| ≥ t] ≤ 2 exp(−t^2/(2σ^2)) for all t ≥ 0.) Our results extend both inequalities across the partitioning interpolation, but neither of the two approaches yields "tight" results at both ends: our extension of [Sch03] gives sharper results for small q, and our extension of [Von10] gives sharper results for larger q. This is to be expected, as the two methods are genuinely distinct.
A related line of work studies the MPH-k hierarchy of [Fei+15], which interpolates between fractionally subadditive and arbitrary monotone valuations. That is, the simplest level of the hierarchy is (fractionally) subadditive valuations, the second level of the hierarchy already contains functions that are not subadditive, and the final level of the hierarchy contains all monotone functions. These works are distinct from ours in that they explore the space between (fractionally) subadditive valuations and arbitrary monotone valuations, whereas our work explores the space between fractionally subadditive and subadditive valuations.
To the best of our knowledge, the only prior work exploring the space between fractionally subadditive and subadditive valuations is [Ezr+19]. Their main results concern the communication complexity of two-player combinatorial auctions for subadditive valuations, but they also provide improved parameterized guarantees for valuations that are subadditive and also MPH-k [Fei+15]. A detailed comparison to our work is therefore merited: • The partitioning interpolation follows from a first-principles definition (Section 3.1). On the other hand, the MPH hierarchy explores the space between fractionally subadditive and arbitrary monotone valuations, and [Ezr+19] restrict attention to the portion of this space that is also subadditive. • Our main results consider posted price mechanisms and concentration inequalities, neither of which are studied in [Ezr+19]. [Ezr+19] study the communication complexity of combinatorial auctions (where the gap between fractionally subadditive and subadditive is only constant), which is not studied in our work. • We show (Proposition 3.2.3) that all q-partitioning valuations are also MPH-⌈m/q⌉. Therefore, we can conclude a (1/2 + 1/log_2(⌈m/q⌉))-approximation algorithm for two-player combinatorial auctions with q-partitioning valuations using [Ezr+19] (this is the only result of their paper concerning functions between fractionally subadditive and subadditive). • We further show that q-partitioning admits a dual definition (Definition 3.2.2, similar to the duality between XOS and fractionally subadditive). A particular feasible dual solution implies a witness that q-partitioning valuations are MPH-⌈m/q⌉. This suggests that our dual definition is perhaps "the right" modification of subadditive MPH-k so that a dual definition exists.
Posted Price Mechanisms. Most relevant to our work is the study of posted price mechanisms in Bayesian settings for welfare, where the state-of-the-art is a (1/2)-approximation for fractionally subadditive valuations [FGL15], and an Ω(1/log_2 log_2(m))-approximation for subadditive valuations [DKL20]. These results further imply approximation guarantees of the same asymptotics for multi-dimensional mechanism design via [CZ17], and it is considered a major open problem whether improved guarantees are possible for subadditive valuations. Our work provides improved guarantees across the partitioning interpolation (of Ω(log_2 log_2(q) / log_2 log_2(m))), which matches the state-of-the-art at both endpoints (and moreover, is provably tight at both endpoints for the approach of [DKL20]).
Concentration Inequalities. Concentration inequalities on functions of independent random variables are a core tool across many branches of Computer Science. For example, they are widely used in Bayesian mechanism design [RW18; CM16; CZ17; Kot+19], learning theory [BH11; FV13], and discrete optimization [FKS21]. Vondrák's note on concentration inequalities of this form gives the state-of-the-art when f is fractionally subadditive and subadditive, and mentions other applications [Von10]. Our results extend the state-of-the-art for both subadditive and fractionally subadditive valuations across the partitioning interpolation. In addition, we provide a new isoperimetric inequality based on Talagrand's method of control by q points.
Summary and Roadmap
Section 2 immediately follows with formal definitions. Section 3 defines the partitioning interpolation and provides several basic properties (including an interpretation via cost-sharing, and a dual formulation). Section 4 overviews our first main result: an Ω(log log q / log log m)-approximate posted-price mechanism for q-partitioning valuations. Section 5 overviews our main results on concentration inequalities. Section 6 concludes.
The appendices contain all omitted proofs, along with some additional facts about the partitioning hierarchy. For example, Appendix G discusses the distance of subadditive functions to q-partitioning functions.
Preliminaries
Throughout the entire paper, we assume that valuations f : 2^M → R_+ are normalized, meaning that f(∅) = 0, and monotone increasing, meaning that f(S) ≤ f(T) whenever S ⊆ T.
Standard Valuation Classes. A valuation function f is XOS if there exists a collection A of non-negative additive functions (functions v with v(S) = Σ_{i∈S} v({i})) such that for all S, f(S) = max_{v∈A} v(S). f is fractionally subadditive if for any S and any fractional cover α(·) (that is, α(T) ≥ 0 for all T and Σ_{T∋j} α(T) ≥ 1 for all j ∈ S), it holds that f(S) ≤ Σ_T α(T) f(T). It is well-known that f is XOS if and only if it is fractionally subadditive, via LP duality [Fei09]. f is subadditive (also called complement-free, CF) if f(S ∪ T) ≤ f(S) + f(T) for all S, T.
A valuation v is PH-k if there exist non-negative weights w(e) on hyperedges e ⊆ M of size at most k such that v(S) = Σ_{e⊆S} w(e), and MPH-k if it is a pointwise maximum over a collection of PH-k valuations [Fei+15]. v is subadditive MPH-k (CFMPH-k) if v is simultaneously subadditive and MPH-k.
Note that PH-1 valuations are exactly the class of additive valuations, so the class of MPH-1 valuations is exactly the class of XOS valuations. Note also that PH-2 valuations need not be subadditive (and therefore, MPH-2 valuations need not be subadditive either). MPH-m contains all monotone valuation functions, and all subadditive functions are MPH-m/2 [Ezr+19]. We establish a connection between q-partitioning valuations and valuations that are MPH-⌈m/q⌉ and subadditive in Proposition 3.2.3.
The Partitioning Interpolation
Here, we present our main definition. We give its more intuitive "primal form" as the main definition, and establish a "dual form" in Section 3.2.
Definition 3.0.1 (q-Partitioning). A valuation v : 2^M → R_+ is q-partitioning if for any S ⊆ M, any partition (S_1, S_2, ..., S_q) of S into q (possibly empty) disjoint parts, and any fractional cover α of [q] (that is, any non-negative α(·) such that Σ_{T∋j} α(T) ≥ 1 for all j ∈ [q]), it holds that v(S) ≤ Σ_{T⊆[q]} α(T) · v(∪_{i∈T} S_i). We refer to the class of q-partitioning valuations over M as Q(q, M). The intuition behind our definition is that q captures the complexity of non-negative fractional covers under which the value of v(·) is non-diminishing. Subadditive valuations are only non-diminishing under very simple covers (covering S ∪ T by S and T), while XOS valuations are non-diminishing under arbitrarily complex fractional covers. The parameter q captures the desired complexity in between. We now establish a few basic properties of q-partitioning valuations.
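Definition 3.0.1 can be checked mechanically on tiny instances. The following hedged Python sketch (exponential time, illustration only) assumes v is supplied as a dictionary from frozensets to values with v[Ø] = 0; it enumerates labelled partitions and solves the covering LP from the definition with scipy:

```python
# Exhaustive check of Definition 3.0.1 on a tiny ground set.
from itertools import combinations, product

from scipy.optimize import linprog

def is_q_partitioning(v, m, q, eps=1e-9):
    covers = [T for r in range(1, q + 1) for T in combinations(range(q), r)]
    for r in range(m + 1):
        for S in combinations(range(m), r):
            # every labelling of S's elements by [q] gives a partition
            for labels in product(range(q), repeat=len(S)):
                parts = [frozenset(x for x, l in zip(S, labels) if l == i)
                         for i in range(q)]
                # objective: sum_T alpha(T) * v(union of the parts in T)
                c = [v[frozenset().union(*(parts[i] for i in T))]
                     for T in covers]
                # coverage: sum over T containing j of alpha(T) >= 1
                A = [[-1.0 if j in T else 0.0 for T in covers]
                     for j in range(q)]
                res = linprog(c, A_ub=A, b_ub=[-1.0] * q, bounds=(0, None))
                if res.fun < v[frozenset(S)] - eps:
                    return False  # some fractional cover beats v(S)
    return True
```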
We begin with the following nearly-trivial observations. First, for any fixed q and m, the class Q(q, [m]) is closed under conic combinations (non-negative linear combinations). This has implications for oblivious rounding of linear relaxations [FFT16]. Furthermore, for any fixed q and m, the class Q(q, [m]) is closed under taking pointwise suprema, which means that one can use the "lower envelope technique" when approximating functions by q-partitioning functions [Fei+15, Section 3.1]. Now, we establish the three promised properties from Section 1. We begin by confirming that the partitioning interpolation indeed interpolates between fractionally subadditive and subadditive valuations.
We provide a complete proof of Proposition 3.0.2 in Appendix A. It is reasonably straightforward to see that Q(m, [m]) is exactly the class of fractionally subadditive valuations and that Q(2, [m]) is exactly the class of subadditive valuations. It is also straightforward to see the inclusions in the chain (any partition with q parts is also a partition with q + 1 parts by adding an empty part). We show that each inclusion is strict via the following proposition, whose complete proof appears in Appendix A.
Next, we quantify how close Q(q, [m]) is to Q(q + 1, [m]) in a precise sense. Note that Definition 3.0.4 applied to q = m is exactly the notion of closeness used in [BR11].
Definition 3.0.4 (γ-Closeness). A class of valuations G is γ-close to Q(q, [m]) if for any g ∈ G, any S ⊆ [m], any partition (S_1, S_2, ..., S_q) of S into q parts, and any fractional cover α of [q], it is the case that γ · g(S) ≤ Σ_{T⊆[q]} α(T) · g(∪_{i∈T} S_i). We will see a further interpretation of Definition 3.0.4 in Proposition 3.1.5. For now, we simply present the following "smoothness" claim.
Theorem 3.0.5. For any 2 ≤ q < m, the class Q(q, [m]) is (1 − 1/q)-close to Q(q + 1, [m]). The proof of Theorem 3.0.5 appears in Appendix B. Finally, we provide our first-principles definition of q-partitioning via a cost-sharing game. This aspect is more involved, so we overview the setup in Section 3.1.
Recap: characterizing XOS via cost-sharing
Consider a set [m] of players who are interested in receiving some service. There is a cost for this service, described by a monotone increasing normalized cost function c : 2^[m] → R, where c(S) is the cost that the players in S need to pay together so that each of them receives the service. A natural question to ask is: when can one allocate the cost of the service among the community such that no subset of players T ⊆ [m] is better off by forming a coalition and receiving the service on their own? Formally, this question asks whether the core of the game is non-empty, where the core is the set of all non-negative cost-allocation vectors p such that Σ_{i∈S} p_i ≤ c(S) for all S ⊆ [m] and Σ_{i∈[m]} p_i = c([m]). We'll refer to the game parameterized by cost function c(·) restricted to players in S as GAME(c, S). This question is answered by the Bondareva-Shapley Theorem (see [Nis+07, Theorem 15.6]). Applied to monotone normalized cost functions c, the theorem states:
Theorem 3.1.1 (Bondareva-Shapley). The core of GAME(c, [m]) is non-empty if and only if c([m]) ≤ Σ_T α(T) · c(T) for every fractional cover α of [m].
An immediate generalization of this theorem, which appears in [Fei09, Section 1.1], is:
Theorem 3.1.2. The core of GAME(c, S) is non-empty for all S ⊆ [m] if and only if c is fractionally subadditive.
An interpretation of the above statement is the following: no matter what subset S ⊆ [m] of players is interested in the service, we can always design a cost-allocation vector (which may depend on S) such that all players in S are better off purchasing the service together rather than deviating and forming coalitions. Since finding cores might be impossible (unless c is fractionally subadditive), the following relaxation of a core appears in the coalitional game theory literature: a non-negative vector p is in the γ-core of the game if and only if it satisfies Σ_{i∈S} p_i ≤ c(S) for all S ⊆ [m] and Σ_{i∈[m]} p_i ≥ γ · c([m]) [Nis+07, Definition 15.7]. Again, one has statements analogous to Theorems 3.1.1 and 3.1.2 using a γ-core. We only state the analogue of Theorem 3.1.2: Theorem 3.1.3. The γ-core of GAME(c, S) is non-empty for all S if and only if c is γ-close to XOS.
q-partitioning via cost-sharing
Consider instead a partition of the players into q (possibly empty) cities S_1, ..., S_q. We think of each city as a fully-cooperative entity that takes a single action. The question of interest is whether citycore(S_1, S_2, ..., S_q) of the game is non-empty, where citycore(S_1, S_2, ..., S_q) is the set of non-negative cost-allocation vectors p = (p_1, p_2, ..., p_q) that satisfy Σ_{i∈T} p_i ≤ c(∪_{i∈T} S_i) for all T ⊆ [q] and Σ_{i∈[q]} p_i = c(∪_{i∈[q]} S_i). Note that a vector in the citycore will incentivize cooperation, as each subset of cities would need to pay at least as much if it chose to form a coalition. We parallel the theorems in the previous section with the following propositions. We'll refer to the above game as GAME(c, S, S_1, ..., S_q) when the normalized monotone cost function is c, the players in S are participating, and they are partitioned into cities S_1, ..., S_q.
Proposition 3.1.4. The citycore of GAME(c, S, S 1 , . . . , S q ) is non-empty for all S, S 1 , . . . , S q if and only if c is q-partitioning.
Again, the interpretation is simple: no matter which people are interested in the service and how they are distributed between cities, we can design a cost-allocation vector such that all cities are better off purchasing the service together rather than forming coalitions. Finally, one can also relax the concept of a citycore to a γ-citycore: the set of non-negative cost-allocation vectors p = (p_1, p_2, ..., p_q) that satisfy Σ_{i∈T} p_i ≤ c(∪_{i∈T} S_i) for all T ⊆ [q] and Σ_{i∈[q]} p_i ≥ γ · c(∪_{i∈[q]} S_i). We can then also conclude: Proposition 3.1.5. The γ-citycore of GAME(c, S, S_1, ..., S_q) is non-empty for all S, S_1, ..., S_q if and only if c is γ-close to q-partitioning.
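By LP duality (anticipating Definition 3.2.2 below), the citycore is non-empty exactly when the maximum total price obtainable under the coalition constraints reaches the grand-coalition cost. A minimal Python sketch, assuming costs are given as a dictionary from frozensets to numbers and q ≥ 2:

```python
# LP sketch: find a citycore vector, or report that the citycore is empty.
from itertools import combinations

from scipy.optimize import linprog

def citycore_vector(c, parts):
    """c: dict frozenset -> cost; parts: list of q disjoint frozensets.
    Returns a price vector in the citycore, or None if it is empty."""
    q = len(parts)
    proper = [T for r in range(1, q) for T in combinations(range(q), r)]
    grand = frozenset().union(*parts)
    A = [[1.0 if i in T else 0.0 for i in range(q)] for T in proper]
    b = [c[frozenset().union(*(parts[i] for i in T))] for T in proper]
    res = linprog([-1.0] * q, A_ub=A, b_ub=b, bounds=(0, None))  # max sum p
    p, total = res.x, res.x.sum()
    if total + 1e-9 < c[grand]:
        return None  # no prices can cover the grand-coalition cost
    # scale down so payments sum exactly to c(grand); feasibility is kept
    return p * (c[grand] / total) if total > 0 else p
```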
The Dual Definition and Relation to MPH Hierarchy
Finally, we provide a dual view of the q-partitioning property (as in XOS vs. fractionally subadditive), and relate q-partitioning to valuations that are MPH-⌈m/q⌉. First, we observe that the q-partitioning property can be reinterpreted as a claim about a linear program, opening the possibility of a dual definition.
Observation 3.2.1. A valuation v is q-partitioning if and only if for any S ⊆ [m] and any partition (S_1, ..., S_q) of S into q disjoint parts, the following linear program has value at least v(S): minimize Σ_{T⊆[q]} α(T) · v(∪_{i∈T} S_i) subject to Σ_{T∋j} α(T) ≥ 1 for all j ∈ [q], and α(T) ≥ 0 for all T ⊆ [q]. (1)
The proof of Observation 3.2.1 is fairly immediate by observing that feasible solutions to the LP are exactly fractional covers, and that the objective function is exactly the bound on v(S) implied by that fractional cover. We now state a "dual" definition of q-partitioning valuations. The equivalence with Definition 3.0.1 is a simple application of linear programming, which we present in Appendix C.
Definition 3.2.2 (Dual Definition). A valuation v is q-partitioning if and only if for any S ⊆ [m] and any partition (S_1, S_2, ..., S_q) of S into q disjoint parts, the following linear program has value at least v(S): maximize Σ_{i∈[q]} p_i subject to Σ_{i∈T} p_i ≤ v(∪_{i∈T} S_i) for all T ⊆ [q], and p_i ≥ 0 for all i ∈ [q]. (Dual Definition)
This dual definition allows us to establish the useful relationship between the partitioning and MPH hierarchies given in Proposition 3.2.3.
Proposition 3.2.3. Every q-partitioning valuation over [m] is MPH-⌈m/q⌉.
Proof. To prove this statement, for each S ⊆ [m], we will create a clause w_S containing hyperedges of size at most ⌈m/q⌉ which takes value v(S) at S and satisfies w_S(T) ≤ v(T) for any T ≠ S. This clearly suffices, as we can take the maximum over the clauses w_S. Take an arbitrary set S and partition it into q subsets S_1, S_2, ..., S_q of almost equal size, such that each subset has at most ⌈m/q⌉ elements. We will construct a clause of the form w_S(T) = Σ_{i∈[q] : S_i⊆T} p_i, where p_i is the weight of the hyperedge S_i for each i. Note that the weights p_1, p_2, ..., p_q must satisfy Σ_{i∈[q]} p_i = v(S) and Σ_{i∈T} p_i ≤ v(∪_{i∈T} S_i) for all T ⊆ [q]. The existence of such weights is guaranteed by Definition 3.2.2, which completes the proof.
One can also exhibit valuations v that are MPH-k and subadditive, yet not q-partitioning: split [m] into q sets of almost equal size S_1, S_2, ..., S_q and consider the fractional cover α over [q] assigning weight 1/(q−1) to all subsets of [q] of size q − 1. For suitable such v, a simple calculation shows that v and α do not satisfy the q-partitioning property.
Main Result I: Posted Price Mechanisms
We consider the setup of [FGL15]. Namely, there are n buyers interested in a set of items [m]. The buyers' valuations come from a product distribution D = D_1 × D_2 × ... × D_n, known to the seller. The optimal expected welfare is then OPT(D) := E_{v←D}[max over partitions (S_1, ..., S_n) of [m] of Σ_{i∈[n]} v_i(S_i)]. The goal of the seller is to fix prices p_1, ..., p_m so that the following procedure guarantees welfare at least c · OPT in expectation: • Let A denote the set of available items. Initially A = [m].
• Visit the buyers one at a time in adversarial order. When visiting buyer i, they purchase a set S_i ∈ argmax_{S⊆A} {v_i(S) − Σ_{j∈S} p_j}, pay Σ_{j∈S_i} p_j, and the set of available items becomes A \ S_i.
Theorem 4.0.1. For every 2 ≤ q ≤ m, there exists an Ω(log log q / log log m)-competitive posted price mechanism when all buyers' valuations are drawn from (possibly different) distributions over q-partitioning valuations.
Note that this result matches asymptotically the best known competitive ratios for XOS valuations (when q = m, a constant-ratio mechanism was proven in [FGL15]) and CF valuations (when q = 2, an Ω(1/log log m)-competitive posted price mechanism was proven in [DKL20]) and interpolates smoothly when q is in between.
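The procedure itself is simple to simulate. The following hedged Python sketch implements the allocation loop by brute force (valuations as dictionaries from frozensets to values with v[Ø] = 0; exponential in m, for illustration only):

```python
# Brute-force simulation of the posted-price procedure.
from itertools import combinations

def run_posted_prices(valuations, prices):
    available = set(range(len(prices)))
    welfare = 0.0
    for v in valuations:  # buyers visited one at a time
        best_u, best_S = 0.0, frozenset()  # the empty set gives utility 0
        for r in range(len(available) + 1):
            for S in map(frozenset, combinations(sorted(available), r)):
                u = v[S] - sum(prices[i] for i in S)
                if u > best_u:
                    best_u, best_S = u, S
        welfare += v[best_S]
        available -= best_S  # purchased items leave the market
    return welfare
```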
Like [DKL20], we first give a proof in the case when each D i is a point-mass, as this captures the key ideas. A complete proof in the general case appears in Appendix E.
Our proof will follow the same framework as [DKL20]. To this end, let p ∈ [0, 1] be a real number, and denote by Δ(p) the set of distributions λ over 2^[m] such that P_{S←λ}[i ∈ S] ≤ p for all i ∈ [m]. The framework of [DKL20] establishes the following:
Lemma 4.0.2. If there exists p ∈ [0, 1] such that every v ∈ G satisfies max_{λ∈Δ(p)} min_{µ∈Δ(p)} E_{S←λ, T←µ}[v(S \ T)] ≥ α · v([m]), then there exists an Ω(α)-competitive posted price mechanism when all players have valuations in G.
[DKL20] then show that when G is the class of all subadditive functions, such a p exists for α = Θ(1/log log m), but no better. In the rest of this section, we will show that when G is the set of q-partitioning valuations, the conditions of Lemma 4.0.2 hold with α = Ω(log log q / log log m). It is clear that (the deterministic case of) Theorem 4.0.1 follows immediately from Lemma 4.0.2 and Proposition 4.0.3. It is worth noting that, while we leverage Lemma 4.0.2 exactly as in [DKL20], the proof of Proposition 4.0.3 for general q is quite novel in comparison to the q = 2 (subadditive) case.
(Unless explicitly indicated, logarithms have base 2 throughout the rest of this section.) For p ∈ [0, 1], define f(p) := max_{λ∈Δ(p)} E_{S←λ}[v(S)], let λ_p be a maximizing distribution in the argmax, and define g(p) := max_{λ∈Δ(p)} min_{µ∈Δ(p)} E_{S←λ, T←µ}[v(S \ T)]. Without loss of generality, assume that q is a perfect power of 2, i.e. q = 2^r (we can always decrease q to a power of 2 without changing the asymptotics of Ω(log log q / log log m)). Now, fix some p ∈ (0, 1/16]. We will show that g(p) ≥ (1/8)(f(p) − f(p^{r/2})). The first step is the obvious bound g(p) ≥ min_{µ∈Δ(p)} E_{S←λ_p, T←µ}[v(S \ T)], which follows from the fact that we can choose λ = λ_p. Now, we want to bound E_{S←λ_p, T←µ}[v(S \ T)] for a fixed distribution µ ∈ Δ(p). Let S be drawn according to λ_p and let T_1, T_2, ..., T_r be r independent sets drawn according to µ; then E_{T←µ}[v(S \ T)] = (1/r) Σ_{i=1}^r E[v(S \ T_i)]. Now, we will use the q-partitioning property. Note that the sets T_1, T_2, ..., T_r define a partitioning of S into 2^r = q subsets: for any v ∈ {0, 1}^r, we can define S_v := S ∩ (∩_{i : v_i = 1} T_i) ∩ (∩_{i : v_i = 0} ([m] \ T_i)). (Partitioning with r sets) According to this partitioning, define A_0, A_1, ..., A_r by A_j := ∪_{v : 1^T v = j} S_v, i.e. A_j consists of the elements of S appearing in exactly j of the sets T_1, ..., T_r. Then, by the q-partitioning property, we know that v(S) ≤ (8/r)(v(S \ T_1) + ... + v(S \ T_r)) + v(∪_{j ≥ 7r/8} A_j). Indeed, that is the case for the following reason.
If v is such that 1^T v < 7r/8, then S_v belongs to at least r/8 of the sets S \ T_1, ..., S \ T_r, so it is "fractionally covered" by the term (8/r)(v(S \ T_1) + v(S \ T_2) + ... + v(S \ T_r)). If, on the other hand, v is such that 1^T v ≥ 7r/8, then it is fractionally covered by the term v(∪_{j ≥ 7r/8} A_j). Now, let A = ∪_{j ≥ 7r/8} A_j. We claim that each element j ∈ [m] belongs to A with probability at most p^{r/2} (over the randomness in drawing S ← λ_p and T_1, ..., T_r ← µ, then defining A as above). To prove this, we will use the classical Chernoff bound (Theorem D.0.1) as follows. Let Y_i be the indicator that j ∈ T_i; since j ∈ A requires j to lie in at least 7r/8 of the sets T_1, ..., T_r, we have P[j ∈ A] ≤ P[Σ_{i=1}^r Y_i ≥ 7r/8]. Set δ = 7/(8p) − 1 and µ = rp, so that (1 + δ)µ = 7r/8. Then, by Theorem D.0.1, P[Σ_{i=1}^r Y_i ≥ 7r/8] ≤ p^{r/2}, where the last inequality follows since p ≤ 1/16.
All of this together shows that min_{µ∈Δ(p)} E_{S←λ_p, T←µ}[v(S \ T)] ≥ (1/8)(E[v(S)] − E[v(A)]) ≥ (1/8)(f(p) − f(p^{r/2})), where the last inequality holds as each element appears in A with probability at most p^{r/2}. Using the same telescoping trick as in [DKL20], we conclude as follows. Let s = ⌈log_{r/2} log_{16}(m^2)⌉, and consider the sequence p_0 = 1/16, p_{i+1} = p_i^{r/2}, so that p_s ≤ 1/m^2. Then f(p_s) ≤ v([m])/m, since any distribution which takes each element with probability at most 1/m^2 yields a non-empty set with probability at most 1/m and, furthermore, v is normalized monotone. On the other hand, f(1/16) ≥ v([m])/16, by the distribution that takes S = [m] with probability 1/16 and S = ∅ otherwise. All together, summing g(p_i) ≥ (1/8)(f(p_i) − f(p_{i+1})) over i < s and telescoping shows that, for all large enough m (say m > 32), there exists some p′ ∈ {p_0, ..., p_{s−1}} with g(p′) = Ω(1/s) · v([m]) = Ω(log log q / log log m) · v([m]), which finishes the proof.
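The Chernoff step above can be sanity-checked numerically; the following short Python snippet (parameter choices arbitrary) verifies that the exact binomial tail is dominated by p^{r/2} for p ≤ 1/16:

```python
# Check that P[Binomial(r, p) >= 7r/8] <= p^(r/2) on sample parameters.
import math

from scipy.stats import binom

for r in (4, 8, 16, 32):
    for p in (1 / 16, 1 / 64):
        tail = binom.sf(math.ceil(7 * r / 8) - 1, r, p)  # P[sum >= 7r/8]
        assert tail <= p ** (r / 2), (r, p, tail)
print("Chernoff step verified on sample parameters.")
```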
Main Result II: Concentration Inequalities
In this section, we present our concentration inequalities for the partitioning interpolation. We begin by overviewing our results and their context in further detail, highlighting some proofs. We provide complete proofs in the subsequent section and Appendix F.
[Von10] establishes that when v is XOS, the random variable v(S) has E[v(S)]-subgaussian lower tails. The proof follows by establishing that 1-Lipschitz XOS functions of independent random variables are self-bounding, and applying a concentration inequality of [BLM00]. We begin by showing that 1-Lipschitz q-partitioning functions are (⌈m/q⌉, 0)-self-bounding, and applying a concentration inequality of [MR06; BLM09] to yield the following: Theorem 5.0.1. Any 1-Lipschitz q-partitioning valuation v over [m] is (⌈m/q⌉, 0)-self-bounding; in particular, v(S) has O(⌈m/q⌉ · E[v(S)])-subgaussian upper and lower tails. We prove Theorem 5.0.1 in Appendix F. Theorem 5.0.1 matches [Von10] at q = m, and provides non-trivial tail bounds for q = ω(1). One should note, however, that the above inequality is useless when q is constant, as it only implies O(m · E[v(S)])-subgaussian behaviour, and it is well known that any 1-Lipschitz set function is m-subgaussian via McDiarmid's inequality. Our next inequality considers an alternate approach, based on state-of-the-art concentration inequalities for subadditive functions.
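As a qualitative illustration (not a proof, and with arbitrary parameters), the following Python snippet samples a 1-Lipschitz XOS valuation of a random set and shows its empirical upper tail hugging the mean and median:

```python
# Monte-Carlo illustration of concentration for a 1-Lipschitz XOS valuation.
import numpy as np

rng = np.random.default_rng(0)
m, n_clauses, trials = 200, 20, 20_000
W = (rng.random((n_clauses, m)) < 0.5).astype(float)  # 0/1 clause weights

def v(x):
    """1-Lipschitz XOS valuation: max over additive clauses."""
    return (W @ x).max()

samples = np.array([v(rng.random(m) < 0.3) for _ in range(trials)])
med = np.median(samples)
print(f"median={med:.1f} mean={samples.mean():.1f} "
      f"P[v(S) >= 1.2*median]={np.mean(samples >= 1.2 * med):.4f}")
```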
[Sch03] proves that whenever v is normalized, 1-Lipschitz, and subadditive, the following inequality holds for any real numbers a > 0, k > 0, and integer q ≥ 2: P[v(S) ≥ qa + k] ≤ P[v(S) ≤ a]^{−q} · q^{−k}. (2) In particular, setting a = Med[v(S)] and q = 2 and integrating over k from 0 to ∞ allows one to conclude the bound E[v(S)] ≤ 2 Med[v(S)] + O(1), which has proven useful, for example, in [RW18]. While this inequality establishes a very rapid exponential decay, the decay only begins at 2a, as we need q ≥ 2. What if we seek an upper tail bound of the form P[v(S) ≥ 1.1a + k] or, even more strongly, something of the form P[v(S) ≥ (1 + o_m(1))a + k]? Our next inequality accomplishes this.
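As a brief aside, the integration step just mentioned goes as follows (a sketch: apply Eq. (2) with q = 2 and a = Med[v(S)], so that P[v(S) ≤ a]^{−2} ≤ 4):

```latex
\mathbb{E}[v(S)] \;\le\; 2a + \int_0^\infty \Pr[v(S) \ge 2a + k]\,dk
\;\le\; 2a + \int_0^\infty \min\{1,\; 4\cdot 2^{-k}\}\,dk
\;=\; 2a + 2 + \int_2^\infty 4\cdot 2^{-k}\,dk
\;=\; 2a + 2 + \tfrac{1}{\ln 2}.
```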
Theorem 5.0.2. Suppose that v is a normalized, 1-Lipschitz, q-partitioning valuation over [m] and S ⊆ [m] is a random set in which each element appears independently. Then, for any a ≥ 0, k ≥ 0, and integers 1 ≤ s < r ≤ log_2 q, the probability P[v(S) ≥ (r/s)a + k] is at most P[v(S) ≤ a]^{−r} times a factor decaying exponentially in k, with the decay rate given by Theorem 5.1.1.
The interesting instantiation for q-partitioning valuations via Theorem 5.0.2 is that one may take s = log_2 q − 1 and r = log_2 q. From here, for example, one can again take a to be the median of v(S) and integrate over k to conclude E[v(S)] ≤ (1 + 1/log q) Med[v(S)] + O(log q). In the very special case of q = m, we can also replace r/s with any real 1 + δ > 1 and obtain the corresponding bound with threshold arbitrarily close to the median, which is the same extremely strong relationship implied by the E[v(S)]-subgaussian behaviour. We prove these simple corollaries of Theorem 5.0.2 in Appendix F, and now proceed to a proof of Theorem 5.0.2. To do so, we need to make a detour and generalize Talagrand's work on the method of "control by q points".
A Probabilistic Detour: A New Isoperimetric Concentration Inequality
Suppose that we have a product probability space Ω = ∏_{i=1}^N Ω_i with product probability measure P. Throughout, in order to highlight our new techniques instead of dealing with issues of measurability, we will assume that the probability spaces are discrete and are equipped with the discrete sigma-algebra. These conditions are not necessary and can be significantly relaxed (see [Tal01, Section 2.1]).
For subsets A_1, A_2, ..., A_q ⊆ Ω, an integer 1 ≤ s ≤ q, and a point x ∈ Ω, define f_s(A_1, A_2, ..., A_q; x) := min_{y^1∈A_1, ..., y^q∈A_q} |{i : x_i appears fewer than s times in the multiset {y^1_i, y^2_i, ..., y^q_i}}|. When A = A_1 = A_2 = ... = A_q, the function f_s(A_1, A_2, ..., A_q; x) intuitively defines a "distance" from x to A. The definition of the function f_s is motivated by, and generalizes, previous work of Talagrand [Tal01; Tal96]. Our main technical result, the proof of which is deferred to Appendix F, is the following. In it, A_1, A_2, ..., A_q are fixed while x is random, distributed according to P, the aforementioned product distribution over Ω.
Theorem 5.1.1. Suppose that α ≥ 1/s is a real number and t(α, q, s) is the larger root of an equation depending only on α, q, and s (stated, together with the proof, in Appendix F). Then E_{x∼P}[t(α, q, s)^{f_s(A_1, ..., A_q; x)}] ≤ ∏_{i=1}^q P[A_i]^{−α}.
Using this fact, we are ready to prove Theorem 5.0.2.
Proof of Theorem 5.0.2. Let x ∈ {0, 1}^m be the indicator vector of S and let A := {y ∈ {0, 1}^m : v(y) ≤ a}. We claim that whenever v(x) = v(S) ≥ (r/s)a + k, it must be the case that f_s(A, A, ..., A; x) ≥ k, where A appears r times. Indeed, suppose that this were not the case. Then, there must exist r vectors y^1, y^2, ..., y^r in A such that |{i ∈ [m] : x_i appears fewer than s times in the multiset {y^1_i, y^2_i, ..., y^r_i}}| < k.
Now, let T be the set corresponding to the characteristic vector x, and T_i the set corresponding to y^i. We also denote the sets M := {i ∈ T : x_i appears fewer than s times in the multiset {y^1_i, y^2_i, ..., y^r_i}} and M_i := T ∩ T_i for i ∈ [r]. Now, observe that each element of T \ M appears in at least s of the r sets M_1, M_2, ..., M_r. Furthermore, as log q ≥ r and v is q-partitioning, for the same reason as in Eq. (Partitioning with r sets), we know that v(T \ M) ≤ (1/s) Σ_{i=1}^r v(M_i) ≤ (r/s)a, where the last inequality uses M_i ⊆ T_i and v(T_i) ≤ a. By the choice of y^1, y^2, ..., y^r, we know that |M| < k. As all marginal values of v are in [0, 1], it follows that v(M) < k. By subadditivity, we reach the contradiction v(S) = v(T) ≤ v(T \ M) + v(M) < (r/s)a + k. Therefore, whenever v(x) = v(S) ≥ (r/s)a + k, it must be the case that f_s(A, A, ..., A; x) ≥ k. The statement follows from Theorem 5.1.1.
Conclusion
We introduce the partitioning interpolation to interpolate between fractionally subadditive and subadditive valuations. We provide an interpretation of the definition via a cost-sharing game (as in [Bon63; Sha67] for fractionally subadditive), and also show a relation to the subadditive MPH-k hierarchy via a dual definition. We apply our definition in canonical domains (posted price mechanisms and concentration inequalities) where fractionally subadditive and subadditive valuations are provably "far apart", and use the partitioning interpolation to interpolate between them. One technical nugget worth highlighting is Equation (Partitioning with r sets), which appears in the proofs of both Proposition 4.0.3 and Theorem 5.0.2; this idea may be valuable in future work involving the partitioning interpolation. We overview several possible directions for future work below.
Closeness and pointwise approximation. It is interesting to understand for which β the class Q(q, [m]) pointwise β-approximates Q(q + 1, [m]). Interestingly, a function is β-close to XOS if and only if it is pointwise β-approximated by XOS, but it is not clear whether these two properties are identical for q < m. Determining the precise relationship between the two properties is itself interesting. We note that one direction is easy: for any q, being pointwise β-approximated by Q(q, [m]) implies being β-close to Q(q, [m]). So the open question is whether the converse is true.
We also note that [BR11] resolves asymptotically the closeness of Q(2, [m]) to Q(m, [m]) at Θ(log m). In Appendix G, we show that simple modifications of their arguments imply that Q(2, [m]) is Θ(log_2 q)-close to Q(q, [m]) for any q.
Constructing "hard" q-Partitioning valuations. Our two main results, Theorems 4.0.1 and 5.0.2, both establish the existence of desirable properties for q-partitioning valuations. However, to demonstrate tightness, one would need "hard" constructions of q-partitioning valuations that are not (q + 1)-partitioning (recall that we have given constructions of valuation functions in Q(q, [m]) \ Q(q + 1, [m]), but they are not "hard"). It does not appear at all straightforward to adapt constructions of "hard" valuations that are subadditive but not fractionally subadditive (e.g. [BR11]) to create a valuation function in Q(q, [m]) \ Q(q + 1, [m]). Indeed, there is no previous construction of valuations that are subadditive MPH-k but not subadditive MPH-(k − 1) (even without restricting to "hard" constructions). Constructing such functions (for which, e.g., the arguments made in Sections 4 and 5 are tight) is therefore an important open direction.
Applications in Algorithmic Game Theory. Guarantees of posted-price mechanisms are perhaps the most notable domain within algorithmic game theory where fractionally subadditive and subadditive functions are far apart. Still, there are other settings where they are separated. For example, in best-response dynamics in combinatorial auctions, a dynamics leading to a constant fraction of the optimal welfare exists in the fractionally subadditive case, but there is an $O\!\left(\frac{\log \log m}{\log m}\right)$ impossibility result in the subadditive case [DK22]. There are also constant-factor gaps between approximations achievable in polynomial communication complexity for combinatorial auctions [DNS05; Fei09; Ezr+19], and for the price of anarchy of simple auctions [RST16]. Note that in the context of communication complexity of combinatorial auctions, Proposition 3.2.3 combined with the results for two players in [Ezr+19] already imply improved communication protocols across the interpolation, but no lower bounds stronger than what can be inherited from fractionally subadditive valuations are known.
Concentration Inequalities. In the proof of Theorem 5.1.1, we have followed the approach of Talagrand in [Tal96]. Dembo provides an alternative proof of the special case of this theorem for s = 1, using a more systematic approach for proving isoperimetry based on information inequalities, in [Dem97]. It is interesting to understand to what extent the inequality in Theorem 5.1.1 can be recovered using the information inequalities framework in [Dem97]. Moreover, the state of the art provides two different approaches to concentration inequalities at the two extremes of the partitioning interpolation, with one approach yielding sharper guarantees near q = m and the other near q = 2. It is important to understand the (asymptotically) optimal tail bounds across the partitioning interpolation, and it is interesting to understand whether there is a unified approach that yields (asymptotically) optimal tail bounds across a broad range of $q \in \{2, \ldots, m\}$.
A Strict Inclusion of Classes
Proof of Proposition 3.0.3. First, partition [m] arbitrarily into q nonempty parts $S_1, S_2, \ldots, S_q$. Then, by choosing the fractional cover $\alpha$ of $[q]$ for which $\alpha(I) = \frac{1}{q-1}$ whenever $|I| = q - 1$ and $\alpha(I) = 0$ otherwise, the q-partitioning inequality gives
$$v([m]) \le \sum_{I \subseteq [q]:\, |I| = q-1} \frac{1}{q-1}\, v\Big(\bigcup_{i \in I} S_i\Big) = q \cdot \frac{1}{q-1} = 1 + \frac{1}{q-1}.$$
We also need to prove that when $v([m]) = 1 + \frac{1}{q-1}$, the valuation v actually satisfies the q-partitioning property. To do so, take any $S \subseteq [m]$, any partition of S into q disjoint subsets $S_1, S_2, \ldots, S_q$, and any fractional cover $\alpha$ of $[q]$. We need to show that the inequality in Definition 3.0.1 is satisfied. First, note that unless S = [m], the inequality is trivial. Furthermore, we can assume that all q subsets $S_i$ are nonempty. Indeed, if exactly $r \le q$ of the sets are nonempty, then we essentially need to prove that v satisfies the r-partitioning property. This follows by induction on q, as $\frac{1}{q-1} \le \frac{1}{r-1}$. Finally, we can assume that $\alpha([q]) = 0$. Otherwise, there are two cases. Either $\alpha([q]) \ge 1$, in which case the inequality is trivial. Or, we can modify $\alpha$ to a fractional cover $\beta$ by setting $\beta([q]) = 0$ and increasing the weights of the remaining sets accordingly. Now, using that $\beta$ is a fractional cover of $[q]$, we conclude the desired inequality.
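As a quick check that the cover used above is valid (a worked verification, not part of the original argument): each $j \in [q]$ lies in exactly $q - 1$ of the $q$ sets of size $q - 1$, so
$$\sum_{I \subseteq [q],\ |I| = q-1,\ I \ni j} \alpha(I) = (q - 1) \cdot \frac{1}{q-1} = 1,$$
and $\alpha$ is indeed a fractional cover of $[q]$.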
B Smoothness of The Partitioning Interpolation
Here, we prove the smoothness property of the q-partitioning interpolation given in Theorem 3.0.5.
Proof of Theorem 3.0.5. Take any q-partitioning valuation v, subset $S \subseteq [m]$, partition $S_1, S_2, \ldots, S_{q+1}$ of S, and fractional cover $\alpha$ of $[q + 1]$. We want to show the corresponding $(q+1)$-partitioning inequality up to the stated factor, and we proceed as follows. The last inequality in the resulting chain holds since $\beta_J$, defined as $\beta_J(T) := \sum_{I \subseteq [q+1]:\, I \cap J = T} \alpha(I)$, is a fractional cover of J, the valuation v is q-partitioning, and $|J| = q$. Now, note that for any $\{i, j\} \subseteq [q + 1]$, we can bound v(S) via the partition of S into the q parts $S_i \cup S_j$ and $S_k$ for $k \notin \{i, j\}$. Taking the sum over all pairs $i \neq j$, we conclude the desired bound, from which the conclusion follows.
Remark B.0.1. We can show that the ratio $\frac{q-1}{q}$ in Theorem 3.0.5 cannot be improved beyond $\frac{q^2-1}{q^2}$ using the q-partitioning valuation constructed in Proposition 3.0.3. Namely, take v so that $v(\emptyset) = 0$, $v(I) = 1$ whenever $0 < |I| < m$, and $v([m]) = \frac{q}{q-1}$. Now, take any q + 1 disjoint non-empty sets $S_1, S_2, \ldots, S_{q+1}$ with union [m] and consider the fractional cover $\alpha$ of $[q + 1]$ assigning weight $\frac{1}{q}$ to all $J \subseteq [q + 1]$ of size q. Then, the fractional value is $\sum_{|J| = q} \frac{1}{q}\, v\big(\bigcup_{i \in J} S_i\big) = (q+1) \cdot \frac{1}{q} = \frac{q+1}{q}$, while $v([m]) = \frac{q}{q-1} = \frac{q^2}{q^2 - 1} \cdot \frac{q+1}{q}$. Closing the gap between $\frac{q-1}{q}$ and $\frac{q^2-1}{q^2}$ is an interesting open problem.
C Equivalence of the Dual and Primal Definitions
The proof that
E Incomplete Information Posted Price Mechanism
Here, we tackle the true Bayesian setting, where the posted price mechanism is not restricted to point-mass distributions. To do so, we first introduce some further notation from [DKL20]. Write v for the valuation vector coming from the distribution $D = D_1 \times D_2 \times \cdots \times D_n$. Respectively, write $\mathrm{OPT}(v)$ for (an) optimal allocation with valuations v, where $\mathrm{OPT}(v) = (\mathrm{OPT}_1(v), \mathrm{OPT}_2(v), \ldots, \mathrm{OPT}_n(v))$. Denote the social welfare under these valuations by $v(\mathrm{OPT}(v))$. Phrased in these terms, we need to design a posted price mechanism with expected social welfare $\Omega\!\left(\frac{\log \log q}{\log \log m}\right) \cdot \mathbb{E}_{v \sim D}[v(\mathrm{OPT}(v))]$. The set ∆ is defined as in Section 4. We also need the following extra notation, which extends ∆ to n-tuples of distributions.
Lemma E.0.1 ([DKL20]). If the corresponding condition holds for the class G with parameter α, then there exists an α-competitive posted price mechanism for the product distribution D.
As in Section 4, we prove that when G is the set of q-partitioning valuations, the above lemma holds with $\alpha = \Omega\!\left(\frac{\log \log q}{\log \log m}\right)$. The incomplete information case, however, is more technically challenging, so we break the proof into several lemmas. First, we need some more notation, defined for any valuation vector v. Again, we assume that q > 4 is a perfect power of 2, and $q = 2^r$, $s = \lceil \log_r \log m \rceil$.

Proof. Note that $f_v(16^{-1}) \ge \frac{1}{16} v(\mathrm{OPT}(v))$. Indeed, the vector of independent distributions $(\lambda_i)_{i=1}^{n}$ such that $S_i \leftarrow \lambda_i$ satisfies $S_i = \mathrm{OPT}_i(v)$ with probability 1/16 and $S_i = \emptyset$ with probability 15/16 is in $\Gamma(16^{-1})$. This is true because the sets $\mathrm{OPT}_i(v)$ are disjoint. Similarly, $f_v(1/m^2) \le \frac{1}{m}$ holds, as shown in [DKL20, Lemma A.2]. Therefore, using the telescoping trick in Proposition 4.0.3, it is enough to show that Eq. (4) holds for all $\ell \in L$. In fact, we will prove something stronger: for any $p \le \frac{1}{16}$ and any $\lambda \in \Gamma(p)$, $\mu \in \Delta(p)$, there exists some $\sigma \in \Gamma(p^{r/2})$ for which the corresponding domination inequality holds. Taking suprema on both sides yields Eq. (4). We prove this fact in a similar manner to Proposition 4.0.3. Let $T_{i,u}$ for $1 \le u \le r$ be rn independent sets from the distribution $\mu$. Now, let $A_i$ be the set of elements of $S_i$ that appear in more than $\frac{7r}{8}$ of the sets $T_{i,1}, T_{i,2}, \ldots, T_{i,r}$, and let $\sigma_i$ be the distribution of $A_i$. The same reasoning as in Proposition 4.0.3 gives the analogous per-coordinate bound for each $i \in [n]$; summing over i yields the claimed inequality. To conclude, we simply need to show that $\sigma \in \Gamma(p^{r/2})$. This holds for the following reason. Take some $j \in [m]$ and define $p_i = \Pr_{S_i \leftarrow \lambda_i}[j \in S_i]$. By the definition of $\Gamma(p)$, we know that $\sum_{i=1}^{n} p_i \le p$. On the other hand, as each $p_i$ satisfies $p_i \le p \le \frac{1}{16}$, we conclude, as in Proposition 4.0.3, that $\Pr[j \in A_i] \le p_i^{r/2}$. Thus, all we need to show is that $\sum_{i=1}^{n} p_i^{r/2} \le p^{r/2}$. This, however, is simple: the map $x \mapsto x^{r/2}$ is convex, so its maximum on the simplex defined by $p_i \ge 0$ for all $i \in [n]$ and $\sum_{i=1}^{n} p_i \le p$ is attained at a vertex, which exactly corresponds to a single $p_i$ equal to p and the rest equal to 0. The statement we want is a simple corollary of Lemma E.0.2.
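The convexity step at the end can be written out explicitly (a worked version of the argument; recall $r = \log_2 q > 2$ here, so $x \mapsto x^{r/2}$ is convex):
$$\sum_{i=1}^{n} p_i^{r/2} \le \max\Big\{ \sum_{i=1}^{n} z_i^{r/2} : z_i \ge 0,\ \sum_{i=1}^{n} z_i \le p \Big\} = p^{r/2},$$
since a convex function on a polytope is maximized at a vertex, and the vertices of this simplex have a single coordinate equal to p and the rest equal to 0.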
Lemma E.0.3. If D is a product distribution over q-partitioning valuations, there exists some deterministic $p \in [0, 1]$ (potentially depending on D) such that the guarantee required by Lemma E.0.1 holds.

Proof. Using the uniform distribution $\mathrm{Unif}(L)$ over L, we conclude from Lemma E.0.2 that the corresponding bound holds in expectation over $\ell \sim \mathrm{Unif}(L)$. Taking the expectation over v and interchanging the order of expectations, we obtain the averaged version of the bound. Therefore, as $g_v$ is clearly non-negative, the moment method implies the existence of some $\ell_1$ for which the bound holds. Using that $\frac{1}{s} = \Omega\!\left(\frac{\log \log q}{\log \log m}\right)$, we complete the proof.
The statement of Theorem 4.0.1 for the incomplete information case follows from Lemma E.0.1 and Lemma E.0.3.
F Concentration Inequalities
We discuss in full detail the concentration inequalities in Section 5 and their more general versions. The setup that we consider throughout the rest of this section is the following. A valuation v over [m] is given.
$S \subseteq [m]$ is a random set such that each item $j \in [m]$ appears independently in S (different items might appear with different probabilities). We study how concentrated v(S) is around a point of interest such as its mean or median. Before delving into the main content, we make one important note. Concentration inequalities and tail bounds depend on the scale of the valuations. That is, the valuation 2v has a "weaker" concentration than the valuation v (at least when considering an additive deviation from the mean of the form $|v(S) - \mathbb{E}[v(S)]| \ge t$). Throughout the rest of the section, by abuse of notation, we will write v(S) and $v(x_1, x_2, \ldots, x_m)$ interchangeably, where $(x_1, x_2, \ldots, x_m)$ is the characteristic vector of S in $\{0, 1\}^m$.
F.1 Concentration via Self-Bounding Functions
Vondrak shows that XOS and submodular functions exhibit strong concentration via self-bounding functions [Von10, Corollary 3.2]. We adopt this approach and show how the bounds generalize to MPH-k valuations with bounded marginal values and, in particular, to q-partitioning valuations (naturally, the concentration becomes weaker as q decreases). This approach relies on the method of self-bounding functions.
If f is (a, 0)-self-bounding and $X_1, X_2, \ldots, X_m$ are independent, then for $Z = f(X_1, X_2, \ldots, X_m)$ and $c = \frac{3a-1}{6}$, the corresponding Bernstein-type concentration bounds hold. Vondrak [Von10, Lemma 2.2] shows that XOS valuations (also m-partitioning or MPH-1) are (1, 0)-self-bounding. We generalize as follows to q-partitioning valuations. So, can we derive any tail bounds for small values of q (in particular, subadditive valuations) which are better than what is already true for any 1-Lipschitz valuation over [m]? It turns out that the answer to this question is "yes", and this is the subject of the next section. Before turning to it, however, we note that we cannot hope to do much better than Theorem 5.0.1 using the method of self-bounding functions. Consider the valuation from Proposition 3.0.3 with $v(\emptyset) = 0$, $v(I) = 1$ whenever $0 < |I| < m$, and $v([m]) = \frac{q}{q-1}$. Suppose, for the sake of contradiction, that v is (a, 0)-self-bounding for some $a < \frac{m}{q}$ and functions $v_i$. Then, for any S with characteristic vector x and $i \in [m]$, it must be the case that $v_i(x^{(i)}) \le v(S \setminus \{i\}) \le 1$ by the first inequality in Definition F.1.1. As a result, when we set x to be the all-ones vector in the second inequality, we conclude that $a \cdot \frac{q}{q-1} \ge \frac{m}{q-1}$. Thus, $a \ge \frac{m}{q}$, which is a contradiction.
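Spelling out the arithmetic of this contradiction (using the usual second self-bounding inequality $\sum_{i=1}^{m} (f(x) - f_i(x^{(i)})) \le a\, f(x)$, our reading of Definition F.1.1):
$$\frac{m}{q-1} = m\Big(\frac{q}{q-1} - 1\Big) \le \sum_{i=1}^{m} \big(v(x) - v_i(x^{(i)})\big) \le a \cdot v([m]) = a \cdot \frac{q}{q-1},$$
so $a \ge \frac{m}{q}$, contradicting $a < \frac{m}{q}$.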
F.2 Tail Bounds via an Isoperimetric Inequality
We now take a deeper look into the stated isoperimetric inequalities in Section 5. To do so, we first present some background on Theorem 5.1.1 and the proof of the statement.
F.2.1 The Isoperimetric Inequality
We consider the setup introduced in Section 5.1. In [Tal01, Section 3.1.1] the author considers the special case s = 1, and in [Tal96, Section 5.7] he considers the special case s = q − 1. In particular, the inequalities he proves are the following.
Theorem F.2.1 ([Tal01, Section 3.2]). Suppose that $\alpha > 0$ is a real number and $z(q, \alpha)$ is the larger root of the equation $z + q\alpha\, z^{-\frac{1}{\alpha}} = 1 + q\alpha$. Then the s = 1 case of the isoperimetric inequality holds with this constant; in particular, setting $A = A_1 = A_2 = \cdots = A_q$, one obtains the corresponding tail bound for a single set. An analogous statement, Theorem F.2.2 ([Tal96, Section 5.7]), covers the case s = q − 1; in particular, setting $A = A_1 = A_2 = \cdots = A_q$, one again obtains a tail bound for a single set. In this section, we provide a uniform view on Theorems F.2.1 and F.2.2.
Lemma F.2.3. For any $x_1, x_2, \ldots, x_q \in [0, 1]$, we have $\min\Big( t(\alpha, q, s),\ \min_{I \subseteq [q],\, |I| = s} \big( \prod_{i \in I} x_i \big)^{-\alpha} \Big) \le \alpha q + 1 - \alpha \sum_{i=1}^{q} x_i$. Furthermore, $t(\alpha, q, s)$ is the largest t with this property. We use the convention that $0^{-\alpha} = +\infty$.
As the proof of Lemma F.2.3 provides no insight and is just a computation, we postpone it to Appendix F.3. We also remark that a version of Theorem 5.1.1 holds for $\alpha < \frac{1}{s}$ as well, but the definition of $t(\alpha, q, s)$ becomes even more complicated (see Remark F.3.1). We now turn to the proof of Theorem 5.1.1.
Proof of Theorem 5.1.1. We follow closely the proof of Theorem F.2.2 due to Talagrand in [Tal96]. We proceed by induction over N, the dimension of the product space.

Base: When N = 1, let $g_i$ be the indicator function of $A_i$ for $1 \le i \le q$. Observe that
$$\int_\Omega t(\alpha, q, s)^{f_s(A_1, A_2, \ldots, A_q; x)}\, d\mathbb{P}(x) = \int_\Omega \min\Big( t(\alpha, q, s),\ \min_{I \subseteq [q],\, |I| = s} \prod_{i \in I} g_i(x)^{-\alpha} \Big)\, d\mathbb{P}(x).$$
Indeed, this is true because $f_s(A_1, A_2, \ldots, A_q; x) = 0$ when there exist s sets $A_{i_1}, A_{i_2}, \ldots, A_{i_s}$ to which x belongs, and $f_s(A_1, A_2, \ldots, A_q; x) = 1$ otherwise, simply by the definition of $f_s$. Using Lemma F.2.3, we conclude that
$$\int_\Omega \min\Big( t(\alpha, q, s),\ \min_{|I| = s} \prod_{i \in I} g_i(x)^{-\alpha} \Big)\, d\mathbb{P}(x) \le \alpha q + 1 - \alpha \sum_{i=1}^{q} \mathbb{P}[A_i].$$
Now, we will use twice the well-known inequality $1 + \log x \le x$, as follows:
$$\alpha q + 1 - \alpha \sum_{i=1}^{q} \mathbb{P}[A_i] = 1 + \alpha \sum_{i=1}^{q} \big(1 - \mathbb{P}[A_i]\big) \le \exp\Big( \alpha \sum_{i=1}^{q} \big(1 - \mathbb{P}[A_i]\big) \Big) \le \prod_{i=1}^{q} \mathbb{P}[A_i]^{-\alpha},$$
with which the base case is completed.
Inductive Step: Now, let $A_1, A_2, \ldots, A_q$ all belong to $\Omega = \Omega' \times \Omega_{N+1}$, where $\Omega' = \prod_{i=1}^{N} \Omega_i$. For each $i \in [q]$ and $w \in \Omega_{N+1}$, define the sets $A_i(w) := \{x \in \Omega' : (x, w) \in A_i\}$ and $B_i := \{x \in \Omega' : (x, w') \in A_i \text{ for some } w' \in \Omega_{N+1}\}$. Fix some $w \in \Omega_{N+1}$. For $I \subseteq [q]$ with $|I| = s$, denote $C^I_i = A_i(w)$ whenever $i \in I$ and $C^I_i = B_i$ whenever $i \notin I$. Then, we can make the following observations:
$$f_s(A_1, \ldots, A_q; (x, w)) \le 1 + f_s(B_1, \ldots, B_q; x) \quad \text{and} \quad f_s(A_1, \ldots, A_q; (x, w)) \le f_s(C^I_1, \ldots, C^I_q; x)$$
for every $I$ with $|I| = s$. Indeed, the first inequality follows from the following fact: if $(b^1, b^2, \ldots, b^q)$ witness the value $f_s(B_1, \ldots, B_q; x)$, we can lift each $b^i$ to a point of $A_i$, as the only extra coordinate that may appear less than s times in the required multiset in Eq. (3) is the (N+1)-th coordinate w. The second inequality follows from the same fact, except that we choose $a^i = (b^i, w)$ whenever $i \in I$.
Having those two inequalities, we can fix w and compute:
$$\int_{\Omega'} t^{f_s(A_1, \ldots, A_q; (x, w))}\, d\mathbb{P}(x) \le \int_{\Omega'} \min\Big( t \cdot t^{f_s(B_1, \ldots, B_q; x)},\ \min_{|I| = s} t^{f_s(C^I_1, \ldots, C^I_q; x)} \Big)\, d\mathbb{P}(x),$$
where $t = t(\alpha, q, s)$. Now, we just use the inductive hypothesis, as each $B_i$ and $C^I_i$ is in the space $\Omega'$, which is a product of N spaces. We bound the above expression by
$$\min\Big( t \prod_{i=1}^{q} \mathbb{P}[B_i]^{-\alpha},\ \min_{|I| = s} \prod_{i \in I} \mathbb{P}[A_i(w)]^{-\alpha} \prod_{i \notin I} \mathbb{P}[B_i]^{-\alpha} \Big) \le \prod_{i=1}^{q} \mathbb{P}[B_i]^{-\alpha} \Big( \alpha q + 1 - \alpha \sum_{i=1}^{q} \frac{\mathbb{P}[A_i(w)]}{\mathbb{P}[B_i]} \Big),$$
where in the last inequality we used Lemma F.2.3, since $\frac{\mathbb{P}[A_i(w)]}{\mathbb{P}[B_i]} \le 1$ for each i. However, as we are dealing with a product measure, by Tonelli's theorem for non-negative functions, it follows that
$$\int_\Omega t^{f_s(A_1, A_2, \ldots, A_q; (x, w))}\, d\mathbb{P}(x, w) = \int_{\Omega_{N+1}} \int_{\Omega'} t^{f_s(A_1, A_2, \ldots, A_q; (x, w))}\, d\mathbb{P}(x)\, d\mathbb{P}(w).$$
Using the same approach as in the base case, but this time for the functions $g_i(w) := \frac{\mathbb{P}[A_i(w)]}{\mathbb{P}[B_i]}$ with $\mathbb{E}_w[g_i(w)] = \frac{\mathbb{P}[A_i]}{\mathbb{P}[B_i]}$, we bound the integral by $\prod_{i=1}^{q} \mathbb{P}[B_i]^{-\alpha} \prod_{i=1}^{q} \Big( \frac{\mathbb{P}[A_i]}{\mathbb{P}[B_i]} \Big)^{-\alpha} = \prod_{i=1}^{q} \mathbb{P}[A_i]^{-\alpha}$, as desired.
F.2.2 Tail Bounds and Median-Mean Inequalities for q-Partitioning Valuations
We first begin with a generalization of Theorem 5.0.2, which has the same proof.
Theorem F.2.4. Suppose that v is a q-partitioning valuation over [m], and $S \subseteq [m]$ is a random set in which each element appears independently. Then the following inequality holds for any $a \ge 0$, $k \ge 0$, $s, r \in \mathbb{N}$ such that $1 \le s < r \le \log_2 q$, and $\alpha \ge \frac{1}{s}$:
$$\Pr\Big[ v(S) \ge \frac{r}{s}a + k \Big] \le t(\alpha, r, s)^{-k} \cdot \Pr[v(S) \le a]^{-\alpha r}.$$
In particular, choosing a to be the median, $\alpha = \frac{1}{s}$, $t(\alpha, r, s) = \frac{r}{s}$, we recover Theorem 5.0.2.
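As a sanity check on the specialization: with $\alpha = \frac{1}{s}$, the value $t = \frac{r}{s}$ indeed solves the defining equation $t + \alpha r\, t^{-1/(\alpha s)} = \alpha r + 1$, since the exponent becomes $-1$ and
$$\frac{r}{s} + \frac{r}{s}\Big(\frac{r}{s}\Big)^{-1} = \frac{r}{s} + 1 = \alpha r + 1.$$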
Note that in the proof of Theorem 5.0.2, we only needed $r \le \log_2 q$ to ensure that the sets $M_1, M_2, \ldots, M_r$ "split" [m] into at most q parts. This assumption, however, is unnecessary if v is XOS (or q = m), as one cannot "split" [m] into more than m parts. This gives rise to even more "fine-grained" inequalities in the XOS case. To present them, however, we will make a slight change of notation. Before, we defined $t(\alpha, r, s)$ as the larger root of $t + \alpha r\, t^{-\frac{1}{\alpha s}} = \alpha r + 1$. We can take r and s to be arbitrary integers satisfying r > s when v is XOS, as discussed. Thus, the ratio $\frac{r}{s}$ can approximate an arbitrary real number $1 + \delta > 1$. With this in mind, we make the following twist. Denote by $\xi(\psi, \delta)$ the larger root of the equation $\xi + \psi\, \xi^{-\frac{1+\delta}{\psi}} = \psi + 1$ for some $\psi \ge 1 + \delta > 1$. This is essentially the same equation after the substitution $\psi = \alpha r$, $1 + \delta = \frac{r}{s}$; the condition $\alpha \ge \frac{1}{s}$ is equivalent to $\psi \ge 1 + \delta$. We then have the following analogue for valuations that are β-close to XOS (Theorem F.2.5); in particular, choosing a to be the median and $\psi = 1 + \delta$, $\xi(\psi, \delta) = 1 + \delta$, the inequality simplifies accordingly. Proof. The proof is analogous to the one of Theorem F.2.4, except that this time the partitioning step incurs an extra factor of β, as v is β-close to being XOS.
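The substitution can be verified in one line: with $\psi = \alpha r$ and $1 + \delta = \frac{r}{s}$,
$$\frac{1}{\alpha s} = \frac{r}{s} \cdot \frac{1}{\alpha r} = \frac{1 + \delta}{\psi},$$
so $t + \alpha r\, t^{-1/(\alpha s)} = \alpha r + 1$ becomes $\xi + \psi\, \xi^{-(1+\delta)/\psi} = \psi + 1$, and the condition $\alpha \ge \frac{1}{s}$ becomes $\psi = \alpha r \ge \frac{r}{s} = 1 + \delta$.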
We end with a discussion of median-mean inequalities, which are part of the motivation for our endeavour in this section. Namely, in [RW18], the authors use a crucial tail property of 1-Lipschitz subadditive valuations: using Schechtman's bound (see Section 5), they obtain $\mathbb{E}[v(S)] \le 3\,\mathrm{Med}[v(S)] + O(1)$. We generalize this as follows.
Proposition F.2.6. Suppose that the non-negative random variable Z satisfies the inequality $\Pr\big[Z \ge (1+\delta)\,\mathrm{Med}[Z] + k\big] \le (1+\delta)^{-k}$ for some $0 < \delta \le 1$ and any $k > 0$. Then $\mathbb{E}[Z] \le (1+\delta)\,\mathrm{Med}[Z] + O\!\left(\frac{1}{\delta}\right)$.
Proof. Let k be some non-negative real number that we will choose later. Then,
$$\mathbb{E}[Z] \le (1+\delta)\,\mathrm{Med}[Z] + k + \int_{k}^{\infty} \Pr\big[Z \ge (1+\delta)\,\mathrm{Med}[Z] + u\big]\, du \le (1+\delta)\,\mathrm{Med}[Z] + k + \frac{(1+\delta)^{-k}}{\ln(1+\delta)}.$$
Choosing $k = \frac{1}{\ln(1+\delta)}$ and using the inequality $0 < \delta \le 1$, which also implies $\frac{\delta}{2} \le \ln(1+\delta) \le \delta$, we bound the above expression by $(1+\delta)\,\mathrm{Med}[Z] + O\!\left(\frac{1}{\delta}\right)$. Applying this statement to q-partitioning valuations, we obtain the following two corollaries.
F.3 Technical Details
We omitted the proof of Lemma F.2.3; we give this proof here. First, however, we need to show that the equation $t + \alpha q\, t^{-\frac{1}{\alpha s}} = \alpha q + 1$ has two positive roots and one of them is larger than 1. Clearly, t = 1 is a root. Denote $f(t) := t + \alpha q\, t^{-\frac{1}{\alpha s}} - \alpha q - 1$. Note that $f'(t) = 1 - \frac{q}{s}\, t^{-1 - \frac{1}{\alpha s}}$. As q > s, f′ has a single root $t_0 = \big(\frac{q}{s}\big)^{\frac{\alpha s}{\alpha s + 1}}$, and this root is larger than 1 (but smaller than $\frac{q}{s}$). Thus, f is decreasing in $(0, t_0)$ and increasing in $[t_0, +\infty)$. Therefore, $f(t_0) < f(1) = 0$. Since $\lim_{t \to +\infty} f(t) = +\infty$, f has just one more root, and this root is larger than 1. We can now proceed to the proof of Lemma F.2.3.
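Before that, a quick verification of the formula for $t_0$:
$$f'(t_0) = 0 \iff \frac{q}{s}\, t_0^{-\frac{\alpha s + 1}{\alpha s}} = 1 \iff t_0 = \Big(\frac{q}{s}\Big)^{\frac{\alpha s}{\alpha s + 1}},$$
and since $\frac{q}{s} > 1$ while $0 < \frac{\alpha s}{\alpha s + 1} < 1$, we indeed have $1 < t_0 < \frac{q}{s}$.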
Similarly, note that we can assume that $x_1 = x_2 = \cdots = x_{q-s+1}$. Now, keeping $x_{q-s+1}$ fixed and the product $x_{q-s+1} x_{q-s+2} \cdots x_q$ fixed, note that the sum $\sum_{i=q-s+1}^{q} x_i$ is maximized when there exists some $0 \le r \le s$ such that $x_{q-r+1} = \cdots = x_q = 1$. This is indeed the case since, when we keep the product of two numbers $a \le b$ fixed, their sum increases as they get further apart; formally, $a\gamma + b\gamma^{-1} \ge a + b$ for any $0 < \gamma < 1$. Under these assumptions, denote $y = x_{q-r}$ and $x_{q-r-1} = x_{q-r-2} = \cdots = x_{q-s+1} = x$. Using this notation, we want to maximize $h(x, y) = (y x^{s-r-1})^{-\alpha} + \alpha(r + y + (s - r - 1)x)$ in the set $K = \{(x, y) : 0 < x \le y \le 1 \text{ and } (y x^{s-r-1})^{-\alpha} \le t(\alpha, q, s)\}$. Note that K is compact and h is continuous; therefore, there exists a maximizer. We can easily see that this maximizer is not in the interior of K, as $\partial_y h = \alpha - \alpha y^{-\alpha - 1} x^{-\alpha(s-r-1)} < 0$ in the interior. Thus, the gradient is non-zero and, by moving in the direction of the gradient, the value of h will increase. Thus, all the maximizers are on the boundary. There are three cases to consider.

Case 1) y = x. Then, we need to prove that $x^{-\alpha(s-r)} + \alpha(r + (q - r)x) \le \alpha q + 1$ whenever $0 \le r \le s$ and $x^{-\alpha(s-r)} \le t(\alpha, q, s)$. First, note that if s = r, the inequality is trivial. For that reason, we assume that $0 \le r \le s - 1$ from now on. Consider the function $g(x) = x^{-\alpha(s-r)} + \alpha(r + (q - r)x) - \alpha q - 1$.
To prove this, we simply show that the function is decreasing. Equivalently, we want to show that its logarithm $k(x) = x \ln \frac{b + x}{a + b + x}$ is decreasing. This, however, is simple, since $\ln(y) + 1 - y \le 0$ for all $y > 0$. With this, case 1 is complete.
Case 2) y = 1. This is the same as case 1, except that we replace r with r − 1.
Finally, we want to prove that the choice of t is optimal. This follows simply by taking $x_1 = x_2 = \cdots = x_q = t^{-\frac{1}{\alpha s}}$.
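Indeed, for this choice every term can be evaluated explicitly: for any $I$ with $|I| = s$,
$$\Big(\prod_{i \in I} x_i\Big)^{-\alpha} = \big(t^{-\frac{s}{\alpha s}}\big)^{-\alpha} = t, \qquad \alpha \sum_{i=1}^{q} x_i = \alpha q\, t^{-\frac{1}{\alpha s}},$$
so the left-hand side of Lemma F.2.3 evaluates to $t + \alpha q\, t^{-1/(\alpha s)} = \alpha q + 1$, attaining the bound with equality; any larger value of t would violate the inequality at this point.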
With this in mind, the same proofs show that Lemma F.2.3 holds with the value $t_{\min}(\alpha, q, s)$ instead of $t(\alpha, q, s)$ for any $\alpha > 0$. Similarly, Theorem 5.1.1 holds with the value $t_{\min}(\alpha, q, s)$ instead of $t(\alpha, q, s)$ for any $\alpha > 0$. We did not state the result in this more general form earlier, as we specifically wanted to derive the inequalities for $\alpha = \frac{1}{s}$, $t(\alpha, q, s) = \frac{q}{s}$.
Proof. Let q′ be the largest integer such that $q' \le q$ and $q' = 2^a - 1$ for some natural number a. Note that $q' \ge \frac{q}{2}$. Then, one can construct as in [BR11, Appendix C] a subadditive valuation g over [q′] that is not γ-close to being XOS for any $\gamma < \frac{1}{2}\log_2 \frac{q}{2}$, as follows.
Construction in [BR11, Appendix C]: Identify [q′] with the set V of $2^a - 1$ non-zero vectors of the finite vector space $\mathbb{F}_2^a$. For each $v \in V$, set $S_v = \{u \in V : v \cdot u \equiv 1 \pmod 2\}$. Define g as the set-cover function over V with respect to the sets $S_v$. On the one hand, we can observe that $g(V) \ge a$. Indeed, for any $r < a$ vectors $v_1, v_2, \ldots, v_r$, we can find some non-zero u such that $v_i \cdot u \equiv 0 \pmod 2$ holds for all $i \in [r]$, simply because the matrix $(v_1\ v_2\ \cdots\ v_r)^T$ is not of full rank. On the other hand, note that for each v, the set $S_v$ contains $2^{a-1}$ elements, and each $u \in V$ belongs to $2^{a-1}$ sets of the form $S_v$. Since $g(S_v) = 1$, the fractional cover α assigning weight $\frac{1}{2^{a-1}}$ to each set $S_v$ satisfies
$$\sum_{I \subseteq V} \alpha(I)\, g(I) = \sum_{v \in V} \alpha(S_v)\, g(S_v) = (2^a - 1) \times \frac{1}{2^{a-1}} \le 2,$$
which shows that g is not γ-close to being XOS for any $\gamma < \frac{a}{2}$, where $\frac{a}{2} \ge \frac{1}{2}\log_2 \frac{q}{2}$. Now, we go back to the problem statement. Clearly, $q' \le q \le m$. First, we extend g to a subadditive valuation g′ on [m] by setting $g'(S) := g(S \cap [q'])$ for any $S \subseteq [m]$. It trivially follows that g′ is not γ-close to any q′-partitioning valuation for any such γ. As $q' \le q$, meaning that any q-partitioning valuation is also q′-partitioning, the result follows.
"year": 2023,
"sha1": "5efd87fb158918a67076a9f5b36e77737ac876cd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "5efd87fb158918a67076a9f5b36e77737ac876cd",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Utilization of Oncotype DX in an Inner City Population: Race or Place?
Oncotype DX, a 21-gene-array analysis, can guide chemotherapy treatment decisions for women with ER+ tumors. Of 225 ER+ women participating in a patient assistance trial, 23% underwent Oncotype DX testing: 31% of whites, 21% of blacks, and 14% of Hispanics (P = 0.04) were tested. Only 3 white women were treated at municipal hospitals and none was tested. 3% of women treated in municipal hospital as compared to 30% treated at tertiary referral centers were tested (P = 0.001). Within tertiary referral centers, there was no racial difference in testing: 32% of whites, 29% of blacks, and 19% of Hispanics (P = 0.25). Multivariate analysis (model c-statistic = 0.76; P < 0.0001) revealed that women who underwent testing were more likely to have stage 1B (RR = 1.70; 95% CI: 1.45–1.85) and to be treated after 2007 (RR = 1.34; 95% CI: 1.01–1.65) and less likely to be treated at a municipal hospital (RR = 0.20; 95% CI: 0.04–0.94). Women treated at municipal hospitals were less likely to undergo testing resulting in a misleading racial disparity that is driven by site of care. As Oncotype DX can reduce overuse of chemotherapy, it is imperative to expand testing to those who could benefit from yet experience underuse of this test, namely, women treated at safety net hospitals. This trial is registered with NCT00233077.
Introduction
While women with early stage breast cancer frequently receive adjuvant chemotherapy to prevent recurrence, not all patients require or benefit from it.
Oncotype DX (ODX) (Genomic Health, Inc., Redwood City, CA) is a validated genomic predictor of outcome and response to chemotherapy in estrogen-receptor- (ER-) positive and node-negative breast cancer. ODX analyzes the expression of 21 genes within a tumor to determine a recurrence score that corresponds to a specific likelihood of breast cancer recurrence within 10 years of the initial diagnosis, as well as response to adjuvant treatment. Results are reported as a numeric recurrence score (RS) divided into low, intermediate, and high risk groups. Patients with a high RS derive additional benefit from adding chemotherapy to hormonal therapy, while those with a low score do not. Thus, the assay has the potential to enable women to avoid unnecessary chemotherapy.
The assay, costing approximately $4500, has been available since 2004, with Medicaid coverage becoming available in 2007. The American Society of Clinical Oncology and the National Comprehensive Cancer Network include the 21-gene signature assay in their guidelines for the management of lymph node-negative and ER-positive breast cancer [1, 2]. The cost-effectiveness of this assay in directing adjuvant systemic therapy has been established as well [3].
Most reports on ODX have examined the influence of recurrence score results on the use of adjuvant chemotherapy [3]. There is little data available on its use in minority and economically disadvantaged women. Ethnic differences in ODX have also not been well elucidated. This study describes our experience with the use of ODX in an inner city population.
Materials and Methods
374 women with early stage breast cancer surgically treated at 4 municipal hospitals and 4 tertiary referral centers in NYC between 2006 and 2009 participated in a randomized controlled trial of patient assistance to reduce disparities in treatment [4,5]. Their charts were abstracted to obtain pathology and treatment data. Of the 374 women, 225 had ER+ tumors of stage 1B or greater and were eligible to undergo ODX testing. Bivariate comparisons were done with chi-square and t-tests, and logistic regression models were used for multivariable comparisons. Race and municipal hospital were highly correlated, as only 3 white women went to municipal hospitals. Because there was no racial difference in testing within tertiary centers and only one woman in municipal hospitals got tested, we used hospital type rather than race in the final model.
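For illustration, the bivariate and multivariable analyses described here could be reproduced along the following lines (a sketch with hypothetical file and variable names, not the authors' actual code):

```python
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

# Hypothetical per-patient table: one row per ER+ woman eligible for ODX,
# with 0/1 columns tested, stage_1b, treated_after_2007, municipal.
df = pd.read_csv("odx_cohort.csv")  # hypothetical file name

# Bivariate comparison: testing rate by hospital type (chi-square test)
table = pd.crosstab(df["municipal"], df["tested"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square p-value: {p:.4f}")

# Multivariable logistic regression for receipt of ODX testing
model = smf.logit("tested ~ stage_1b + treated_after_2007 + municipal", data=df).fit()
print(model.summary())
```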
Results
23% (52/225) of women underwent ODX. Women who got tested had earlier stage cancer and higher income, and were more likely to be white and treated at tertiary referral centers (see Table 1). There was no racial or insurance difference in receipt of ODX among women treated at tertiary referral centers. Of note, black and Hispanic women as compared to white women were more likely to be treated at municipal hospitals (29% versus 32% versus 3%; P < 0.0001). Clinical factors such as age and comorbidity and demographic factors such as insurance did not affect the likelihood of testing.
Multivariate analysis revealed that tested women were more likely to have stage 1B as compared to stage 2 (RR = 1.70; 95% CI: 1.45–1.85), more likely to be treated after 2007 (RR = 1.34; 95% CI: 1.01–1.65), and less likely to be treated at a municipal hospital (RR = 0.20; 95% CI: 0.04–0.94). Of women tested, 24/52 (46%) had a low recurrence risk, 19/52 (37%) had an intermediate score, and 9/52 (17%) had a high recurrence risk score, with no racial difference. All 9 patients at high risk (100%) were treated with chemotherapy; 58% at intermediate risk and only 4% at low risk were treated with chemotherapy.
Discussion
Despite advances in breast cancer care, disparities persist. Differences in tumor biology, health care access, and disease management all play a role [6]. For example, in a study of Medicare beneficiaries, Bach reported that black patients and white patients are to a large extent treated by different physicians, and the physicians treating black patients reported greater difficulties in obtaining access for their patients to high-quality subspecialists, high-quality diagnostic imaging, and nonemergency admission to the hospital [7].
We found that the two most important factors influencing ODX testing were tumor stage and receiving care at a municipal hospital. Importantly, site of care influenced testing. Women treated at municipal hospitals were unlikely to get ODX even after Medicaid would have covered the costs. In 2007, when Medicaid coverage became available for ODX, minority women at tertiary centers began to undergo testing; this was not true at municipal hospitals.
Our data suggest that the adoption of ODX was slower in municipal than in tertiary referral hospitals, resulting in an apparent racial disparity even after insurance coverage would have ensured equal access and could have saved unnecessary chemotherapy costs to patients and hospitals. Thus, the racial disparity in ODX testing reflected the site of care and not race, since women were treated equally at the tertiary referral centers, with no racial disparity in those sites. These results reflect that diffusion of innovation lags at hospitals treating minority and underprivileged patients compared to tertiary referral centers.
In this study, chemotherapy decisions appear to be affected by ODX results. Responding to the biologic characteristics elucidated by the ODX, physicians were able to identify women with early stage tumors who could forego chemotherapy. Unfortunately only a small percentage of eligible women who could benefit were tested, and testing was limited to women treated at tertiary referral centers.
Our study is limited by the small sample size and its New York City locale which offers generous Medicaid benefits and may differ from states with more limited coverage.
Conclusion
As Oncotype DX influences treatment and can reduce overuse of chemotherapy, it is imperative to implement innovations that improve quality care at safety net hospitals. Such approaches have the potential to reduce racial and socioeconomic disparities in cancer care.
Conflict of Interests
The authors have no conflict of interests to report.
"year": 2013,
"sha1": "019fe68d37b82f03bb5302e6c9bf641c742f99dc",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ijbc/2013/653805.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d77f119661a6dcdcde1d3ed3d86c7915a2aa08a7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Outcome of supplementation of vitamin D on intact parathyroid hormone level in chronic kidney disease patients
Background: Secondary hyperparathyroidism is present in the majority of patients with an estimated glomerular filtration rate less than 60 mL/min/1.73 m². Sustained elevated parathyroid hormone levels can cause osteitis fibrosa cystica, fracture, hypercalcemia, hyperphosphatemia, and calciphylaxis. The Kidney Disease: Improving Global Outcomes (KDIGO) 2017 guidelines for Chronic Kidney Disease-Mineral and Bone Disorder recommend treatment with calcitriol or a vitamin D analogue if the parathyroid hormone level is progressively increasing and remains persistently above the upper limit despite correction of modifiable factors. Objectives: The objective of this study was to determine the mean change in intact parathyroid hormone after calcitriol supplementation in patients with chronic kidney disease (stage 3 to 5). Methodology: This prospective observational study enrolled 92 patients with chronic kidney disease stage 3 to 5, not under maintenance hemodialysis. Patients who had an intact parathyroid hormone level more than 200 pg/ml, serum phosphate level less than 4.5 mg/dl, and corrected serum calcium less than 9.5 mg/dl were selected for the study. They were supplemented with oral calcitriol 0.25 μg thrice weekly for three months, and the intact parathyroid hormone level was measured after three months. Results: The mean intact parathyroid hormone level before supplementation was 332.91 ± 96.046 pg/ml and after three months of supplementation with calcitriol was 176.49 ± 53.764 pg/ml. This finding was statistically significant (correlation: 0.471, p-value less than 0.05). Thus, supplementation of calcitriol reduced the mean intact parathyroid hormone level in the chronic kidney disease patients in our study. Conclusion: Calcitriol supplementation seems to be an effective measure to reduce the intact parathyroid hormone level in chronic kidney disease patients when it remains persistently high despite correction of modifiable factors.
INTRODUCTION
Chronic kidney disease (CKD) is defined as the presence of kidney damage (usually detected as urinary albumin excretion of ≥30 mg/day or equivalent) or decreased kidney function (defined as estimated glomerular filtration rate [eGFR] <60 mL/min/1.73 m²) for three months or more, irrespective of the cause 1 .
Chronic kidney disease (CKD) is associated with mineral and bone disorder (CKD-MBD). Initially, serum fibroblast growth factor 23 (FGF23) and parathyroid hormone (PTH) increase, while serum calcium and phosphate remain normal. However, as the disease progresses, hyperphosphatemia occurs and serum vitamin D decreases 2 . Secondary hyperparathyroidism is present in the majority of patients with eGFR <60 mL/min/1.73 m² and occurs as an adaptive response to deteriorating renal function 3 . Sustained elevated parathyroid hormone (PTH) levels can cause osteitis fibrosa cystica, fracture, hypercalcemia, hyperphosphatemia, and calciphylaxis. Different observational studies have reported an increased relative risk of death in CKD stage 5 patients who have PTH values at the extremes, that is, less than two or greater than nine times the upper normal limit of the assay 4 . The primary objective of the study was to determine the mean change in intact parathyroid hormone after calcitriol supplementation in patients with CKD (stage 3 to 5).
Sample Selection
The following selection criteria were used for patients' enrollment:
1. Patients with chronic kidney disease (CKD stage 3 to stage 5).
2. Intact parathyroid hormone level more than 200 pg/ml (3 times the normal upper limit of 65 pg/ml).
3. Serum phosphate level less than 4.5 mg/dl.
4. Corrected serum calcium level of less than 9.5 mg/dl.
Patients under maintenance hemodialysis were excluded from the study. Patients were enrolled in the study using non-probability consecutive sampling technique.
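Criterion 4 relies on an albumin-corrected total calcium. As an illustration (the paper does not specify which correction was used), the commonly used Payne correction can be computed as follows:

```python
def corrected_calcium(total_ca_mg_dl: float, albumin_g_dl: float) -> float:
    """Albumin-corrected total serum calcium (Payne formula), in mg/dl.

    Adds 0.8 mg/dl of calcium for every 1 g/dl that serum albumin
    falls below a normal reference value of 4.0 g/dl.
    """
    return total_ca_mg_dl + 0.8 * (4.0 - albumin_g_dl)

# Example: measured calcium 8.6 mg/dl with albumin 3.0 g/dl
print(corrected_calcium(8.6, 3.0))  # 9.4 mg/dl, below the 9.5 mg/dl cutoff
```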
Estimation of GFR
GFR was estimated using the 4-variable MDRD Study equation 7 . CKD was classified into five stages defined by the GFR and/or evidence of kidney damage, as recommended by the National Kidney Foundation 7 .

RESULTS
According to CKD stage, 72.8% of patients had CKD stage 5, 20.7% had CKD stage 4, and 6.5% had CKD stage 3 (Figure 1).
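A minimal sketch of the 4-variable MDRD computation (the IDMS-traceable form with the 175 coefficient, matching calibrated creatinine assays; shown here for illustration only):

```python
def egfr_mdrd4(scr_mg_dl: float, age_years: float, female: bool, black: bool) -> float:
    """4-variable MDRD Study eGFR in mL/min/1.73 m^2 (IDMS-traceable creatinine).

    eGFR = 175 x Scr^-1.154 x age^-0.203 x 0.742 (if female) x 1.212 (if Black).
    """
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Example: a 60-year-old non-Black woman with serum creatinine 2.0 mg/dl
print(round(egfr_mdrd4(2.0, 60, female=True, black=False), 1))  # ~25, CKD stage 4
```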
Comparison of means between initial IPTH and IPTH after three months of supplementation with vitamin D is presented in Table 3. The mean change in IPTH level after three months of supplementation of oral calcitriol was tested using a paired t-test. Mean IPTH before supplementation was 332.91 ± 96.046 pg/ml and after supplementation of calcitriol was 176.49 ± 53.764 pg/ml. The p-value was less than 0.05, which revealed a significant association between IPTH and supplementation of oral calcitriol.
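The paired comparison could be reproduced along these lines (illustrative synthetic data mimicking the reported means and SDs; the study's raw patient-level data are not public):

```python
import numpy as np
from scipy.stats import ttest_rel

# Illustrative paired IPTH values (pg/ml) for 92 patients
rng = np.random.default_rng(0)
ipth_before = rng.normal(332.91, 96.046, size=92)
ipth_after = rng.normal(176.49, 53.764, size=92)

t_stat, p_value = ttest_rel(ipth_before, ipth_after)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4g}")
```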
DISCUSSION
The study enrolled 92 patients who fulfilled the selection criteria. They were supplemented with calcitriol 0.25 µg thrice weekly. The mean IPTH level before and after supplementation of calcitriol was analyzed. Mean IPTH before supplementation was 332.91 pg/ml, and after three months of supplementation with calcitriol it was 176.49 pg/ml. This finding was statistically significant (p < 0.001). Thus, supplementation of calcitriol reduced the mean IPTH level in the patients.
PTH is a major uremic toxin and is related to long-term complications like renal osteodystrophy, vascular calcifications, alterations in cardiovascular structure and function, immune dysfunction, and anemia 8 . Thus, timely intervention with measures to lower IPTH is of high priority to reduce the complications of secondary hyperparathyroidism. The study hereby suggests that oral vitamin D supplementation could be one of the interventions that can lower the IPTH level in CKD patients. This study, however, was carried out in one treatment facility with a small number of patients. Multicenter studies with large sample sizes are needed to conclusively establish the relation of vitamin D supplementation with reduction of IPTH level and thereby decrease progression of CKD.
CONCLUSION
The finding of the study suggests calcitriol supplementation may be an effective measure to reduce IPTH level in chronic kidney disease (stage 3 to 5) patients. This is in congruence with the KDIGO CKD-MBD guidelines 2017, which recommend calcitriol supplementation if IPTH remains persistently high despite correction of modifiable factors (serum calcium, phosphorus, anemia, acidosis, diabetes).
"year": 2019,
"sha1": "248a0f9c284548b510fa79d9c2b60d0529189faf",
"oa_license": null,
"oa_url": "https://www.nepjol.info/index.php/JKMC/article/download/28164/23210",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "248a0f9c284548b510fa79d9c2b60d0529189faf",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Variability in personal protective equipment in cross-sectional interventional abdominal radiology practices
Purpose To determine institutional practice requirements for personal protective equipment (PPE) in cross-sectional interventional radiology (CSIR) procedures among a variety of radiology practices in the USA and Canada. Methods Members of the Society of Abdominal Radiology (SAR) CSIR Emerging Technology Commission (ETC) were sent an eight-question survey about what PPE they were required to use during common CSIR procedures: paracentesis, thoracentesis, thyroid fine needle aspiration (FNA), superficial lymph node biopsy, deep lymph node biopsy, solid organ biopsy, and ablation. Types of PPE evaluated were sterile gloves, surgical masks, gowns, surgical hats, eye shields, foot covers, and scrubs. Results 26/38 surveys were completed by respondents at 20/22 (91%) institutions. The most common PPE was sterile gloves, required by 20/20 (100%) institutions for every procedure. The second most common PPE was masks, required by 14/20 (70%) institutions for superficial and deep procedures and 12/12 (100%) institutions for ablation. Scrubs, sterile gowns, eye shields, and surgical hats were required at nearly all institutions for ablation, whereas approximately half of institutions required their use for deep lymph node and solid organ biopsy. Compared with other types of PPE, required mask and eye shield use showed the greatest increase during the SARS-CoV-2 pandemic. Conclusion PPE use during common cross-sectional procedures is widely variable. Given the environmental and financial impact and lack of consensus practice, further studies examining the appropriate level of PPE are needed. Graphical abstract
Graphical abstract
[Graphical abstract: Variability in personal protective equipment in cross-sectional interventional abdominal radiology practices]

Keywords Biopsy · Personal protective equipment · Institutional practice
Introduction
Any invasive procedure carries at least a theoretical risk of introducing infection. Surgical site infections (SSIs) are significant clinical problems associated with increased patient morbidity and mortality [1,2]. Efforts have been made in the operating room to counteract SSIs by increasing the physical barriers between the incision site and pathogens introduced from the air or body surfaces. This has led to stringent requirements for use of personal protective equipment (PPE): scrubs, sterile gloves, sterile gowns, surgical hats, ear covers, beard covers, and disposable shoe covers, albeit based primarily on theoretical benefits rather than evidence-based practices [1][2][3][4][5]. Many of the PPE recommendations have filtered into the practice of procedures performed outside of the operating room, such as US- and CT-guided procedures, commonly referred to as cross-sectional interventional radiology (CSIR) procedures [6]. The infection rate associated with CSIR procedures is far less than is cited for SSIs, likely due to the minimal invasiveness of a skin puncture a few millimeters in diameter; however, the financial cost and environmental impact associated with the use of PPE are considerable [7,8]. Multiple societies have published practice guidance for CSIR procedures, although there is no consensus standard addressing required PPE, and differing practices are observed anecdotally among various institutions [6,[9][10][11][12][13].
In 2020, Society of Abdominal Radiology created the Cross-sectional Interventional Radiology Emerging Technology Commission (CSIR ETC) to support radiologists performing cross-sectional procedures by researching best practices and developing practice guidelines to optimize patient outcomes. CSIR ETC currently includes 38 members from 22 institutions with geographic and practice-type diversity across the USA and Canada. The CSIR ETC members were surveyed to assess currently utilized PPE for CSIR procedures.
The main goal of this study was to determine institutional PPE requirements for common CSIR procedures. The secondary goal was to evaluate changes in PPE requirements during the SARS-CoV-2 pandemic.
Methods
This Health Insurance Portability and Accountability Act-compliant study was exempt from institutional review board approval. An eight-question survey about PPE during CSIR procedures performed by abdominal radiologists was created using SurveyMonkey (www.surveymonkey.com). The survey was sent by email to all 38 members of the SAR CSIR ETC from 22 institutions. Data regarding PPE were evaluated on a per-institution basis (one respondent per institution) to mitigate undue weighting of particular institutional practices.
Most of the survey items were checkbox questions evaluating required PPE, skin preparation agents, and the use of sterile towels, paper drape, and sterile back table cover during paracentesis/thoracentesis, thyroid fine needle aspiration (FNA), superficial lymph node biopsy, deep lymph node biopsy, solid organ biopsy, and ablation. The survey asked if these procedures were performed and what types of PPE were required (sterile gloves, scrubs, sterile gown, non-sterile gown, surgical mask, surgical hat, disposable shoe covers, and eye shield). In anticipation of changes in PPE use during the SARS-CoV-2 pandemic, survey items evaluated proceduralist PPE requirements prior to and during the pandemic. The complete survey can be found in the Appendix.
CSIR ETC members received the survey on July 14, 2021. A follow-up reminder email to complete the survey was sent on July 28, 2021, and the survey was closed on August 4, 2021. The results were managed using Microsoft Excel for Mac and summarized using descriptive statistics.
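As an illustration, the per-institution tabulation could be done as follows (a sketch with hypothetical column names; the authors used Excel rather than code):

```python
import pandas as pd

# Hypothetical export of the survey: one row per respondent, with an
# "institution" column and 0/1 columns for each required-PPE item.
df = pd.read_csv("ppe_survey.csv")

# Keep one respondent per institution to avoid over-weighting any practice
per_institution = df.drop_duplicates(subset="institution")

# Percent of institutions requiring each PPE item for solid organ biopsy
ppe_cols = ["sterile_gloves", "mask", "gown", "hat", "eye_shield", "foot_covers"]
rates = per_institution[ppe_cols].mean().mul(100).round(1)
print(rates)
```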
Results are graphically represented in Figs. 1, 2, and 3. Prior to the SARS-CoV-2 pandemic, the most common PPE was sterile gloves, which were required by all institutions for every procedure. The second most common PPE was surgical masks, which were required by 14/20 (70%) institutions for superficial and deep procedures and 12/12 (100%) institutions for ablation. Scrubs, sterile gowns, eye shields, and surgical hats were required at nearly all institutions for ablation, whereas approximately half of institutions required their use for deep lymph node and solid organ biopsy. During the pandemic, eye shield and mask requirements increased far more than other PPE.
Foot covers were rarely required and only reported by 1/20 (5%) institutions for deep lymph node and solid organ biopsy and 2/12 (17%) institutions for ablation. Required non-sterile gowns were reported by 1/20 (5%) institutions for all superficial procedures and deep lymph node and solid organ biopsy. During the pandemic, 2/20 (10%) institutions added non-sterile gowns to their PPE requirements during deep lymph node and solid organ biopsy and 3/20 (15%) institutions added required non-sterile gowns during superficial procedures.
The different combinations of required PPE (e.g., "gloves and mask," "gloves, mask, gown, and hat") varied widely among the institutions and among procedures. For example, 13 different combinations of PPE were reported for solid organ biopsy, ranging from required use of only sterile gloves to required use of sterile gloves, scrubs, sterile gown, mask, hat, and eye shield. No more than 3 institutions shared the same combination of required PPE for solid organ biopsy. For paracentesis and thoracentesis, 16 different combinations of PPE were reported, ranging from only sterile gloves to a combination of sterile gloves, scrubs, sterile gown, mask, and hat. No more than 2 institutions shared the same combination of required PPE for paracentesis/thoracentesis. For the remaining procedures, the numbers of different PPE combinations were 14 for thyroid FNA, 13 for superficial LN biopsy, 14 for deep LN biopsy, and 5 for ablation. No similarities in required PPE use were observed between practices in the same geographic region or the same practice type (academic vs. private). Table 1 shows the institutions reporting PPE practice that complies with recommendations from the joint practice guidelines from the Society of Interventional Radiology (SIR), the Association of periOperative Registered Nurses (AORN), and the Association for Radiologic and Imaging Nursing (ARIN), which include wearing scrubs, hat, mask, sterile surgical gown, and sterile gloves during percutaneous biopsy and ablation [6].
Chlorhexidine agents were preferred for skin preparation by the majority of individual respondents for all procedures, ranging from 87 to 92%. Ninety-four percent of individual respondents reported use of a sterile table cover for the back table during ablation, but this was less commonly reported in other procedures, ranging from 48 to 64%. Preference for sterile towel and sterile paper drape use also varied (Fig. 3).
Discussion
Preventing infection is important when performing any procedure and follows the dictum primum non nocere: first, do no harm. Use of PPE is viewed as a way to protect both the patient from infection and the proceduralist from body fluid and tissue exposure, but the extent to which PPE is used during CSIR procedures is variable. These survey results demonstrate tremendous variation in PPE practices and are in keeping with a similar past survey of interventional radiologists that also showed varied use of PPE [14]. In the current survey, the majority of institutions required sterile gloves and masks for CSIR procedures prior to the SARS-CoV-2 pandemic, but other PPE requirements were largely inconsistent across the group. During the pandemic, required mask and eye shield use increased for all procedures (65-100% to 95-100% for masks and 45-67% to 80-92% for eye shields), although other PPE requirements continued to vary. Sterile gloves and masks seemingly represent the minimum requirements of PPE for CSIR procedures, along with eye shields during the pandemic, although a majority consensus on other elements of PPE was not evident from the survey.
The lack of consensus and the paucity of data evaluating PPE in CSIR procedures likely contribute to the practice variation observed in this survey. Several societies in radiology and other medical specialties have published practice guidelines for these procedures but make differing recommendations or do not make specific recommendations for PPE use [6,[9][10][11][12][13]. Joint practice guidelines from the SIR/ AORN/ARIN recommend to mirror the operating room setting during all percutaneous biopsies and tumor ablations, requiring proceduralists to wear scrubs, hair coverings, sterile gowns, sterile gloves, and masks, although only a minority of institutions were noted in the survey to comply with this recommendation for biopsies [6]. American Institute of Ultrasound in Medicine (AIUM) Practice Parameters recommend to follow facility infection control practices, but do not provide specific guidance regarding PPE for most procedures [9]. In the literature, PPE use is usually not specified or addressed in articles describing the technique and complications of CSIR procedures, including those focused specifically on post-procedural infection. A large retrospective series evaluating infection after more than 13,000 ultrasound-guided CSIR procedures found an overall incidence of 0.1% for post-procedural infection, but the details of PPE were not included [15].
The relationship between PPE use in the operating room and the prevalence of SSIs is unclear, and the rate of infection during CSIR procedures is exceedingly low (0 to < 1%), less than the rate cited for SSIs [3][4][5][15][16][17][18][19][20][21]. Adopting the same standards of an operating room for CSIR procedures may be unnecessary when considering an analogous comparison in the surgical literature: minor hand and skin surgery. In Canada, the most common procedural setting for carpal tunnel surgery is an ambulatory procedure room using "field sterility," defined by the use of a surgical mask, sterile gloves, and small sterile drape [22,23]. No gown or hat is worn. In this setting, multiple groups have shown no difference in clinical outcomes or postoperative infections when compared to carpal tunnel surgeries performed in the traditional operating room setting [24,25]. A similar trend has also been observed in Mohs micrographic skin surgery, where prospective trials have shown no differences in the prevalence of SSIs between Mohs surgeries performed with non-sterile and sterile gloves [26,27].
PPE guidelines need to consider the protection of the proceduralist from exposures to blood, tissue, and other bodily fluid. Such concerns may be more attributable to procedures involving high-pressure systems (such as arterial access) in which fluid splashes may be more common. For example, in a series of 100 angiographic procedures, 23 blood splashes occurred during 7 procedures, and the authors concluded that while the risk was low, face and eye protection were warranted [28].
Considering rising healthcare costs and the production of approximately four billion pounds of medical waste annually in the USA, it behooves proceduralists to weigh the theoretical benefit of infection rate reduction by PPE against the costs, both financial and environmental. Increased healthcare costs associated with more stringent requirements for operating room attire have been extensively published in the surgical literature [23,25,[29][30][31][32][33]. The healthcare industry is estimated to be responsible for 8% of the greenhouse gas emissions in the USA [7]. A recent analysis of greenhouse gas emissions from a tertiary care interventional radiology service found that the production and transportation of single-use supplies, including personal protective equipment, was the second largest contributor to carbon dioxide emitted by the service [8]. Not unexpectedly, the survey results showed increased PPE use during the SARS-CoV-2 pandemic. Global shortages in PPE during the beginning of the pandemic further echo the need for prudent and judicious use of these medical resources [34]. Most PPE is designed as single use and intended to be subsequently disposed of, but preservation strategies for decontamination and reuse of PPE have been critical during supply shortages [35,36]. These strategies may be useful for decreasing waste and cost when applied to CSIR procedures.
The survey also found that chlorhexidine agents are used by the vast majority of respondents for all procedures for skin site antisepsis, in keeping with the widespread adoption after superior performance of chlorhexidine-alcohol over povidoneiodine was demonstrated [37]. All respondents reported use of sterile towels and/or sterile paper drapes for all procedures.
There are several limitations to this study. This survey was sent to a subset of abdominal radiologists, the vast majority working in academic practices, and the observations may thus vary from other types of practice groups. Nonresponse bias may also affect the results, although members from 20 out of 22 institutions represented in the ETC completed the survey. Additionally, institutional and individual post-procedural infection rates were not assessed and therefore the true relationship between PPE and the risk of infection cannot be determined on the basis of this survey.
Further investigation is warranted to examine the appropriate level of PPE for CSIR procedures and elucidate the true role of PPE in protecting both the patient and proceduralist. Given the extremely low risk of infection and the wide range of current practices evident in the survey, prospective studies comparing procedures performed with and without certain types of PPE can be ethically conducted. Assessment of cost and waste reduction would also be necessary, as this information would be of interest to institutions seeking to reduce their carbon footprint or to maximize profits by decreasing costs.
In conclusion, this survey shows the variation of PPE practices among abdominal radiologists performing CSIR procedures. Considering the lack of strong evidence to support increased PPE use and the financial and environmental impact, it is time to re-examine the theoretical but not proven benefit of PPE in CSIR procedural settings and establish consensus standards.
"year": 2022,
"sha1": "4f288c1b7b6e94792c17799818151c85a91b5b48",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00261-021-03406-z.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "4f288c1b7b6e94792c17799818151c85a91b5b48",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Reproductive sequelae of parental severe illness before the pandemic: implications for the COVID-19 pandemic
Objective To investigate, with pre–COVID-19 data, whether parental exposure to severe systemic infections near the time of conception is associated with pregnancy outcomes. Design Retrospective cohort study. Setting Population-based study covering births within the United States from 2009 to 2016. Participants The IBM MarketScan Research database covers reimbursed health care claims data on inpatient and outpatient encounters that are privately insured through employment-sponsored health insurance. Our analytic sample included pregnancies to paired fathers and mothers. Intervention(s) Parental preconception exposure (0–6 months before conception) to severe systemic infection (e.g., sepsis, hypotension, respiratory failure, critical care evaluation). Main Outcome Measure(s) Preterm birth (i.e., live birth before 37 weeks) and pregnancy loss. Result(s) A total of 999,866 pregnancies were recorded with 214,057 pregnancy losses (21.4%) and 51,759 preterm births (5.2%). Mothers receiving intensive care in the preconception period had increased risk of pregnancy loss, as did fathers. Mothers with preconception sepsis had higher risk of preterm birth and pregnancy loss, and paternal sepsis exposure was associated with an increased risk of pregnancy loss. Similar results were noted for hypotension. In addition, a dose response was observed for both mothers and fathers between preconception time in intensive care and the risk of preterm birth and pregnancy loss. Conclusion(s) In a pre–COVID-19 cohort, parental preconception severe systemic infection was associated with increased odds of preterm birth and pregnancy loss when conception was soon after the illness.
S ince its emergence in December 2019, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has caused at least 4 million infections with more than 250,000 deaths globally (1). As the pandemic unfolds, the medical community continues to explore the sequelae of infection. The short-and long-term impact of coronavirus disease 2019 (COVID-19) on individuals, particularly regarding fertility, is currently unknown in terms of both direct damage done by the virus and related systemic illness. Case reports have shown that COVID-19 infection during pregnancy may lead to adverse birth outcomes, and studies suggest that coronaviruses may adversely affect pregnant women, but any effects of severe systemic infection unrelated to the virus are unknown (2,3). Indeed, while some data suggest fever may affect fertility, less is known about the potential for reproductive harm from sepsis, hypotension, and respiratory failure (4).
Knowledge of the reproductive sequelae of severe systemic illness is needed beyond the COVID-19 pandemic and could be applied to other infections. Severe systemic illness can lead to effects through a variety of mechanisms, including generalized deconditioning, cognitive decline, increased postdischarge mortality, and cardiovascular disease (5)(6)(7)(8).
The mechanisms of reproductive impairments may include a direct toxic effect of the infection or treatment, ischemic effect through hypotension, or disruption in endocrine signaling critical to conception. Such effects could impair gamete quality in both parents or uterine competences in mothers.
Thus, aspects surrounding severe systemic infection and respiratory failure present unknown reproductive risks during the current pandemic. We therefore sought to examine the impact that severe systemic illness may have on pregnancy outcomes (e.g., preterm birth and pregnancy loss) when parents are exposed during the preconception period. We hypothesized that parents who suffered from recent severe systemic illness before conception may have adverse pregnancy outcomes.
Study Cohort
The IBM MarketScan Research database was used for our study cohort. This database provides reimbursed health care claims data on inpatient and outpatient encounters covering more than 150 million individuals who are privately insured through employment-sponsored health insurance and supplemental Medicare coverage. We analyzed claims data from the years 2007-2016. Institutional review board approval was not required for the analysis, because this dataset contains deidentified patient information.
Cohort assembly and outcome ascertainment were based on the previously described methodology of Ailes et al. and Wall-Weiler et al. (9,10). Briefly, pregnant women aged 20-45 years were identified from inpatient and outpatient files. Mothers, fathers, and infants were linked by means of a family ID. Through member enrollment files, we verified babies' records using the estimated birth date and enrollment start date. To determine adjudicated gestational age, we used International Classification of Diseases (ICD), Current Procedural Terminology (CPT), and Diagnosis-Related Group (DRG) codes from the inpatient and outpatient files of both mothers and newborns, according to the aforementioned methodology of Ailes et al. and Wall-Weiler et al. (9,11,12). The relevant codes are listed in Supplemental Table 1 (available online at www.fertstert.org). The medical records of mothers and fathers were obtained from inpatient and outpatient claims files. We included only those infants with one male and one female parent at birth. Mothers and fathers had to be enrolled in insurance plans associated with the database for at least 1 year before conception. Outcomes were identified via ICD-9/10 diagnosis and DRG codes from both in- and outpatient claims, as well as CPT codes from outpatient claims for the mother (see Supplemental Table 1).
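As a minimal sketch of the trio linkage and enrollment filter described above (not the authors' actual code; the column names family_id, enroll_start, and conception_date are assumptions standing in for the real MarketScan fields):

import pandas as pd  # inputs below are pandas DataFrames with datetime columns

def link_trios(infants, mothers, fathers, min_days=365):
    # Join infants to one mother and one father through the shared family ID.
    mo = mothers.rename(columns={"enroll_start": "mo_enroll_start"})
    fa = fathers.rename(columns={"enroll_start": "fa_enroll_start"})
    trio = infants.merge(mo, on="family_id").merge(fa, on="family_id")
    # Require at least 1 year of parental enrollment before estimated conception.
    pre_mo = (trio["conception_date"] - trio["mo_enroll_start"]).dt.days
    pre_fa = (trio["conception_date"] - trio["fa_enroll_start"]).dt.days
    return trio[(pre_mo >= min_days) & (pre_fa >= min_days)]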
Pregnancy Outcomes
Pregnancy outcomes analyzed in the study included live birth, stillbirth, ectopic pregnancy, induced abortion, spontaneous abortion, and preterm birth (<37 weeks). Pregnancy loss included ectopic pregnancies, abortions (induced and spontaneous), and stillbirths.
Parental Exposures
We initially identified parental (mother or father) exposure to severe illness related to infection (e.g., sepsis, hypotension, respiratory failure) in the 3 months before estimated conception. This time period was chosen because spermatogenesis takes approximately 3 months, and therefore outcomes related to insults that occur during this time may be captured. A sensitivity analysis of up to 6 months before conception was also performed. Exposures related to severe systemic infection and respiratory failure were chosen based on those that have been reported for COVID-19 or influenza. Inpatient variables were examined concerning these outcomes using ICD-9/10, CPT, and DRG codes from 0 to 6 months before conception. The relevant exposure codes are listed in Supplemental Table 1 and included illness associated with sepsis/systemic inflammatory response syndrome, respiratory failure/acute respiratory distress syndrome, hypotension/shock, influenza, and critical care evaluation and management. Reference groups for relative risk (RR) were those individuals with no exposures.
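As an illustration of the 3-month exposure window (the helper below is hypothetical; it assumes the claim dates have already been restricted to the qualifying ICD-9/10, CPT, and DRG codes of Supplemental Table 1):

from datetime import date

def exposed_preconception(claim_dates, conception_date, window_days=90):
    # Flag exposure if any qualifying claim falls 0-90 days before conception;
    # the sensitivity analysis simply widens window_days to about 180.
    return any(0 <= (conception_date - d).days <= window_days
               for d in claim_dates)

print(exposed_preconception([date(2014, 1, 10)], date(2014, 3, 1)))  # True (50 days prior)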
Statistical Analysis
Descriptive statistics were presented as mean ± SD. Categoric variables were expressed as n (%). Differences in illness in both parents were examined with the use of the chi-square or Fisher exact test as appropriate. Generalized estimating equation and generalized logit models estimated the RRs and corresponding 95% confidence intervals (CIs) of each outcome, to allow for some families contributing subsequent births, for binary and multinomial outcomes, respectively. All models were adjusted for birth year, region of care, and maternal factors including age, obesity, diabetes mellitus, hypertension, hyperlipidemia, and smoking. To evaluate unmeasured confounding effects, we calculated E-values, which estimate the minimum strength of association on the RR scale that an unmeasured confounder would need to have with both the exposure and the outcome to fully explain away a specific exposure-outcome association (https://www.evaluecalculator.com) (13). All tests were two sided and P<.05 was considered to be statistically significant. Analyses were done in SAS software version 9.4.
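The point-estimate E-value has a published closed form, E = RR + sqrt(RR x (RR - 1)) for RR >= 1 (protective RRs are inverted first), which the online calculator implements; as a check, the maternal ICU/pregnancy-loss RR of 1.99 reported below reproduces the 3.39 upper bound quoted for women. A minimal sketch:

import math

def e_value(rr):
    # VanderWeele-Ding E-value for a risk ratio point estimate.
    if rr < 1:
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

print(round(e_value(1.99), 2))  # 3.39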
Study Cohort Demographics
In total, 999,866 pregnancies were observed during the study period, with 214,057 pregnancy losses (21.4%) and 51,759 preterm births (5.2%) (Table 1). The mean paternal age was 35.4 years (SD 5.4) and the mean maternal age was 33.2 years (SD 4.4).
Severe Systemic Infection and Preterm Birth/ Pregnancy Loss
Preconception respiratory/severe systemic infection in fathers and mothers was associated with preterm birth and pregnancy loss (Table 2). Any intensive care unit (ICU) admission in the 3 months before conception was associated with an increased risk of pregnancy loss for both mothers (RR 1.99, 95% CI 1.69-2.34) and fathers (RR 1.67, 95% CI 1.43-1.96). A sensitivity analysis for unmeasured confounding (i.e., E-value) determined that the minimum strength of association for an unmeasured confounder to explain away the identified associations between severe systemic infection and adverse pregnancy outcomes varied from 1.39 to 2.6 for men and from 1.51 to 3.39 for women. We next examined the RRs according to abortion type and did not identify differences between the two types (Table 3). In addition, a longer stay in the ICU was associated with a higher risk of preterm birth and pregnancy loss for both mothers and fathers (Table 4).
Mothers and fathers with preconception sepsis were at higher risk of having a child with preterm birth (RR 1.61, 95% CI 1.02-2.54 and RR 1.38, 95% CI 1.07-1.78, respectively), and fathers with preconception sepsis had a higher risk of pregnancy loss (RR 1.55, 95% CI 1.22-1.98). Mothers with respiratory failure during the preconception period had a higher risk of pregnancy loss (RR 1.31, 95% CI 1.13-1.53). Mothers diagnosed with hypotension or shock in the preconception period had a higher risk of pregnancy loss (RR 1.99, 95% CI 1.69-2.34), as did fathers (RR 1.67, 95% CI 1.43-1.96). Furthermore, parents with multiple diagnoses of severe systemic illness (i.e., sepsis, critical respiratory failure, hypotension/shock, and critical care evaluation) had a higher risk of pregnancy loss, though the sample size was small (Supplemental Table 2). Expanding the exposure interval up to 6 months did not meaningfully alter the results (Supplemental Table 3, available online at www.fertstert.org). The diagnosis of influenza during the preconception period for both mothers and fathers was not associated with a higher risk of preterm birth or pregnancy loss.
DISCUSSION
The reproductive sequelae of severe systemic illness are unknown, and understanding them is of particular importance during the COVID-19 pandemic. With the use of a U.S. claims cohort, the present report found that profound systemic infection before conception, in both fathers and mothers who were able to conceive soon after severe illness, was associated with higher RRs of pregnancy loss and preterm birth. Moreover, the higher the number and the greater the severity of the illnesses (e.g., respiratory failure, sepsis, ICU care), the higher the risk of these two adverse pregnancy outcomes.
Because SARS-CoV-2 can infect those of reproductive age, with most recovering, understanding the reproductive sequelae of the disease is important for counseling and aftercare of these patients as well as others with severe systemic infections. An estimated 5%-10% of COVID-19-positive patients require ICU admission and mechanical ventilation, which includes reproductive-age men and women (14)(15)(16). Severe systemic infection and its sequelae can put tremendous strain on an individual's body, which may affect health long after discharge, with deconditioning, muscle atrophy, cognitive decline, and increased mortality having been observed (5-7). In addition, those who survive sepsis may have an increased risk of early mortality, rehospitalization, emotional distress leading to anxiety and depression, and cardiovascular disease (8). However, the reproductive sequelae of recent preconception exposure to severe illness are unknown and also may apply to a subset of individuals who are able to conceive soon after such an illness.
We found that recent exposure to severe systemic infection in mothers or fathers is associated with adverse pregnancy outcomes such as preterm birth and pregnancy loss. The potential mechanisms that underlie these outcomes are unknown but may include a direct effect of the infection and its consequences on reproductive organs or gametes (e.g., toxic or ischemic) or the side effects of treatments during the illness. In addition, a disruption in endocrine signaling, which is critical for conception, may play a role. The underlying mechanisms are likely different in fathers versus mothers, because in mothers these effects likely carry over into the pregnancy itself and may affect uterine or placental function. Regarding fathers, the mechanisms by which severe systemic illness translates to adverse pregnancy outcomes likely involve a combination of direct pathogenic effects on the testes from either infection or treatment, ischemia, or disruption in the hypothalamic-pituitary-gonadal (HPG) axis. Indeed, acute illness may lead to disruptions in the HPG axis that can affect fertility and thereby birth outcomes (17,18). Systemic inflammation itself may also cause disruption in endocrine signaling or pathogenesis. Indeed, conditions with systemic inflammation have demonstrated increased risk of preterm birth during active disease (19). Moreover, high fevers from acute infection in men are also known to harm spermatogenesis (4). In addition, the epigenetic profile of sperm (e.g., DNA methylation, histone modification, and microRNA expression) may be altered by toxic exposures or illness (20)(21)(22). However, the increased risks of the adverse pregnancy outcomes observed in the present study may be due to the fact that those admitted to an ICU are less healthy at baseline, and prior literature has suggested that poor preconception health can negatively affect perinatal outcomes (12).
Regarding mothers, the underlying mechanisms driving adverse pregnancy outcomes from a preconception exposure to severe illness are likely complex, because the exposures may extend into pregnancy itself. A direct toxic effect on gametes may also drive these effects in mothers, similarly to fathers. Preconception stress in forms other than an acute insult, such as maternal underweight and psychosocial stress, has been documented to adversely affect pregnancy outcomes (23,24). Stressful environments before conception may be independent of traditional social (e.g., alcohol, drugs) and medical (e.g., placental abnormalities, gestational hypertension) stresses, as suggested by these studies. Because the mechanisms through which maternal stress affects pregnancy are unknown, one can only postulate that these events may be driven by epigenetic changes induced by the event (25)(26)(27). In addition, preconception micronutrient deficiencies that may be induced by the event can lead to adverse birth outcomes (28).

A few additional limitations warrant mention. As with any database that relies on diagnosis and procedural codes, errors in coding may influence the results and, as such, specific details underlying a patient's comorbidities and treatments could not be ascertained. Furthermore, undersampling of diagnoses such as influenza may occur owing to rule-out diagnoses in uninfected individuals and true cases never seen in the health care setting, which would likely bias findings toward the null. However, others have used similar techniques to identify influenza cases within the MarketScan database (29)(30)(31)(32)(33). The database is composed of individuals with commercial employment-based health insurance and thus may not be generalizable to other populations (34). For example, the frequency of preterm birth (preterm deliveries/all deliveries) was only 5.2%, which is significantly lower than the reported rate for the general population (35). In addition, early pregnancy losses from undetected pregnancies are not captured and therefore may alter the observed results. Thus, the analysis applies only to couples who were able to have a recognized pregnancy following severe systemic illness. Next, we examined only couples who achieved pregnancy, and some exposed men and women may have been unable to conceive at all. In addition, many social determinants of health, which may represent confounders, were not available in the database (e.g., education, race/ethnicity, income, parity) and may influence the pregnancy-related outcomes. Similarly, several lifestyle factors (e.g., substance abuse) that also may influence outcomes were not available. Furthermore, conception dates were estimated, so the exact timing of conception may not be precise. Finally, unmeasured confounding (e.g., the presence of an underlying chronic condition) may persist and can influence reproductive outcomes.
Nonetheless, parental severe systemic illness near the time of conception may increase the risks of preterm birth and pregnancy loss. By examining reproductive sequelae among subjects after exposure to severe systemic infection, the present study may be used to consider the timing of pregnancy after recovery. However, the findings must be regarded cautiously and considered hypothesis generating until they are further investigated prospectively. Although the RR is modest for most exposures (<1.5), it is significant, and future prospective studies should determine strategies to mitigate the observed risks. | 2020-09-24T13:04:58.151Z | 2020-09-23T00:00:00.000 | {
"year": 2020,
"sha1": "3472869529ead85b5acb1ba39d2c0d5c22b7a4f4",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "http://www.fertstert.org/article/S0015028220323955/pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "08467dcf055822c25a332f345e95cc729a73a438",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
202000232 | pes2o/s2orc | v3-fos-license | Acute Pancreatitis in Pancreas Divisum Secondary to an Impacted Stone in the Minor Papilla
Pancreas divisum is reported to occur in up to 14% of the population. The majority of patients with this congenital anomaly remain asymptomatic. Pancreas divisum can be associated with recurrent pancreatitis due to inadequate drainage of pancreatic secretions through the dorsal pancreatic duct and the minor papilla. We present a patient with a six-month history of recurrent acute pancreatitis due to an impacted pancreatic duct stone in the minor papilla and an unrecognized pancreas divisum. This situation has only been reported in two other cases in the literature.
Introduction
Pancreas divisum (PD) is a common congenital pancreatic duct anomaly occurring in 4%-14% of the population [1]. It results from a failure of fusion between the dorsal and ventral pancreatic ducts during the seventh week of embryogenesis [1]. Three variants of PD have been described: type 1 is the total failure of fusion of the ventral and dorsal pancreatic ducts; type 2 is the complete absence of the ventral duct; and type 3, incomplete divisum, is where a small communication is present between the ventral and dorsal pancreatic ducts [2]. In approximately 5% of patients, the anomaly is associated with recurrent pancreatitis because of the inadequate drainage of pancreatic secretions through the dorsal duct via the minor papilla [2]. In PD, the ventral duct drains the inferior and posterior parts of the head of the pancreas through the major papilla. The dorsal duct drains the superior and anterior parts of the head as well as the body and tail of the pancreas through the minor papilla [3]. We present a patient with recurrent acute biliary pancreatitis in the setting of PD with a dorsal pancreatic duct stone impacted in the minor papilla, which went unrecognized until endoscopic ultrasound (EUS) was performed.
Case Presentation
A 65-year-old male with essential hypertension and a history of heavy alcohol use presented to the hospital with dull, unremitting epigastric pain radiating to the back for the past three weeks. He had associated nausea, early satiety, and anorexia. He had no fever, chills, emesis, jaundice, or changes in bowel movements. On examination, his blood pressure was 148/83 mmHg, heart rate was 83 beats per minute, and he was afebrile. No scleral icterus was noted. He had a soft abdomen with mild epigastric tenderness, no palpable organomegaly, and bowel sounds were present. The remainder of his physical examination was noncontributory. Pertinent laboratory tests at that time included a leukocyte count of 4,700/uL, hemoglobin 13.2 g/dl, hematocrit 39%, creatinine 0.72 mg/dl, blood urea nitrogen 4 mg/dl, lipase 250 U/L (range 16-61 U/L), and alkaline phosphatase 168 U/L (range 32-117 U/L), with normal bilirubin and transaminase levels. Computed tomography (CT) of the abdomen reported a calcified stone in the pancreatic duct with a dilated duct in the body and tail of the pancreas and an additional stone in the duct at the proximal body of the pancreas. An abdominal ultrasound revealed cholelithiasis. The patient underwent endoscopic retrograde cholangiopancreatography (ERCP) with biliary sphincterotomy, but pancreatic duct cannulation was unsuccessful. Laparoscopic cholecystectomy was performed for presumed gallstone pancreatitis and the patient was discharged home.
Abdominal pain recurred, and five weeks later the patient was readmitted with acute pancreatitis. Magnetic resonance cholangiopancreatography (MRCP) demonstrated a 5-mm filling defect in the mid pancreatic duct. An ERCP was repeated but attempts at pancreatic duct cannulation were unsuccessful. Pancreaticoduodenectomy was recommended to the patient, and he was transferred to Cleveland Clinic for a second opinion. Endoscopic ultrasound (EUS) was performed, which demonstrated Type 1 PD, an impacted stone at the minor papilla with dorsal duct dilation, and changes of chronic pancreatitis. ERCP was performed next and the minor papilla was found to be bulging (Figure 1).
FIGURE 1: Bulging minor papilla (mi) before sphincterotomy and normal-appearing major papilla (MJ)
Dorsal pancreatic sphincterotomy was performed, and prompt egress of a single 4-mm stone occurred (Figure 2).
FIGURE 2: The arrow depicts minor papilla stone extraction after sphincterotomy
A temporary stent was placed into the dorsal pancreatic duct. The patient reported the resolution of his abdominal pain after the procedure. There were no procedure-related complications.
Discussion
Over 95% of patients with PD are asymptomatic, while the remaining 5% develop symptoms of acute pancreatitis [4]. In the absence of ductal stones, pancreatitis develops in PD because of increased ductal pressure secondary to insufficient drainage of pancreatic secretions through the minor papilla [4]. Our patient, unfortunately, was in the 5% category who develop recurrent symptoms of acute pancreatitis. The reason why only a select few patients develop symptoms is not clear. The diagnosis of PD is often delayed due to the inadequate sensitivity of conventional radiographic cross-sectional imaging such as magnetic resonance imaging (MRI) of the abdomen. The reported accuracy of EUS, MRCP, and multi-detector computed tomography (MDCT) to detect PD is variable and appears to relate to imaging protocols, the expertise of the radiologists interpreting the study, and the skill of the endoscopist. In one study, the sensitivity of EUS (86.7%) was higher than the sensitivities of MDCT (15.5%) and MRCP (60%) [5]. In a detailed systematic review and meta-analysis, MRCP, secretin-enhanced MRCP, and EUS were compared to address the diagnostic accuracies in the detection of pancreas divisum. It was concluded that EUS was more sensitive at 85% when compared with MRCP (59%) and secretin-enhanced MRCP (83%). All three imaging modalities had specificities above 97% [6].
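For reference, the quoted figures follow from the usual 2x2 comparison against the reference standard; the sketch below is purely illustrative, with the 26/30 split chosen only because it reproduces the 86.7% sensitivity cited above:

def sensitivity(true_pos, false_neg):
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    return true_neg / (true_neg + false_pos)

# e.g., EUS detecting 26 of 30 reference-confirmed PD cases:
print(round(sensitivity(26, 4) * 100, 1))  # 86.7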
This is a unique case with many important clinical points to consider. This is one of three reported cases of acute pancreatitis in a patient with PD due to an impacted stone at the minor papilla [7][8][9]. The two previously published cases were in 1999 from Switzerland and in 2016 from Japan [8][9]. This challenging case is interesting, as our patient experienced recurrent acute pancreatitis due to an impacted stone in the minor papilla in the setting of PD, which went unrecognized until EUS demonstration of the congenital anomaly. His symptoms resolved after minor papilla pancreatic sphincterotomy with the retrieval of the stone at ERCP. He was symptom-free at the six-month follow-up post-discharge.
Conclusions
The two most common causes of acute pancreatitis in the United States are alcohol abuse and cholelithiasis. Other causes include trauma, medications, infections, hypertriglyceridemia, pancreas divisum, and hereditary and autoimmune conditions. For internists and specialists, it is important to consider different etiologies of acute pancreatitis and to order tests cost-effectively to narrow the differential. Our patient had quit alcohol several years earlier and was status post-cholecystectomy for presumed gallstone pancreatitis prior to presenting to Cleveland Clinic. However, he continued to have recurrent symptoms of acute pancreatitis. In such cases, the diagnosis of pancreas divisum must be considered, for which EUS would be the diagnostic modality of choice based on its sensitivity and specificity.
Additional Information Disclosures
Human subjects: Consent was obtained by all participants in this study.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2019-09-09T18:39:15.284Z | 2019-08-01T00:00:00.000 | {
"year": 2019,
"sha1": "79920b0eee676fe3cf7375249a5fded8e9c81d8a",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/20445-acute-pancreatitis-in-pancreas-divisum-secondary-to-an-impacted-stone-in-the-minor-papilla.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4d5b5d5e240673f88ae793a8da4f70d02d28b073",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
230536385 | pes2o/s2orc | v3-fos-license | Variation of Erbil Municipal Wastewater Characteristics Throughout 26 Years (1994-2020) with Possible Treatments and Reusing: A Review
This research aimed to study the variation in the characteristics of Erbil municipal wastewater (EMWW) over 26 years (1994-2020), its appropriate treatment using different methods, and the suitability of the treated wastewater (WW) for disposal to the natural environment or use for irrigation. Forty-seven WW quality parameters were studied. A number of EMWW characteristics, such as five-day biochemical oxygen demand (BOD5), chemical oxygen demand (COD), ammonia nitrogen (NH3-N), and total suspended solids (TSS), exceeded the WW discharge standards. Consequently, EMWW needs a treatment process prior to disposal to the environment. Primary treatment units combined with lagoons, an oxidation ditch, or a wetland were applied as the first scenario for treatment of EMWW, while using only lagoons, an oxidation ditch, or a wetland directly was the second scenario. EMWW is normally regarded as a weak/low- to medium-strength WW type, and it is classified as a good to injurious kind of irrigation water. Commonly, time had no great effect on EMWW characteristics. Life style, climate, sewerage system (combined or separate systems), and areas/zones had an effect on the quality of the municipal WWs. Primary units plus a wetland led to removal efficiencies of 94.75 %, 93.07 %, 89.47 %, 96.72 %, and 57.68 % for BOD5, COD, NH3-N, TSS, and PO4, respectively. Treatment of EMWW using both primary units and a wetland resulted in effluents that complied with the standards for disposal of WW. Generally, treated EMWW can be used for cooked vegetables and for irrigating green areas.
Introduction
Municipal wastewater (MWW) denotes domestic wastewater (WW) in addition to WW discharged from commercial, institutional, and similar services. MWW consists of WW produced by residences, businesses such as restaurants and shopping centres, institutions such as schools, universities, hospitals, prisons, and rest homes, recreational facilities, storm water, infiltration, and industries in a given community (Al-Zboon and Radaideh, 2012). MWW is the most abundant kind of WW and falls into the category of low-strength WWs, characterized by low organic strength and high particulate organic matter content (Sikosana et al., 2019). The sewerage system in Erbil City, Kurdistan Region-Iraq covers both storm water and grey water. Commonly, black water from toilets is treated using cesspools or, in some cases, septic tanks with cesspools. Therefore, neither a fully combined nor a fully separate sewerage system is available in Erbil City. In the recently constructed towns and villages of Erbil City, and due to the investment laws, a small-scale WW treatment plant (WWTP) is compulsory for treatment of the produced WWs, and in some areas the effluent is used for irrigating green areas. Erbil MWW (EMWW) is sometimes used for irrigation directly, and in some cases it reaches the Greater-Zab River water at the Gwer area without treatment (Mustafa and Sabir, 2001; Amin and Aziz, 2005). To date, a centralized WWTP is not available in Erbil City. Accordingly, treatment of EMWW is necessary before irrigation and prior to disposal to the natural environment or water sources. In this work, two options were presented for treatment of EMWW. In the first option, EMWW is treated using primary treatment units plus aerated lagoons, an oxidation ditch, or a wetland, while in the second scenario, EMWW is treated by lagoons, an oxidation ditch, or a wetland alone. In the literature, primary units, lagoons, oxidation ditches, and wetlands have been applied widely for treatment of MWWs (Asano and Tchobanoglous, 1987; Aziz and Ali, 2018). However, a study of the variation in EMWW characteristics, their treatment via different scenarios, and the reuse of the treated water over a period of 26 years has not been published yet. Consequently, the objectives of the current research were to study: 1) the characteristics of EMWW during 26 years (1994-2020), 2) the treatment of EMWW using different options, and 3) the suitability of the treated EMWW for reuse for irrigation purposes.
EMWW (Erbil Municipal wastewater)
The main wastewater (WW) channel in Erbil City is located at the left side of the Erbil-Mosul Main Road at Tooraq Q. (Fig. 1). EMWW at Tooraq Q. commonly consists of WWs produced at residential areas, shops and supermarkets, restaurants, hotels and motels, car washing places, the north industrial area, universities and schools, worship places, governmental and administration buildings, private-sector houses and buildings, washings, infiltration, and losses from the water supply system. Additionally, storm water is mixed with the MWW during rainy seasons and dilutes the concentration of pollutants.
Characteristics of EMWW
Collected data were arranged into three parts (Tables 1 to 3). Additionally, the ranges and WW discharge standards are shown in the tables as well. pH, temperature, chloride, NO3-N, and SO4 were found within the allowable limits for disposal of WW, while TSS, BOD5, COD, NH3-N, NO2-N, color, oil and grease, Mn, PO4, Mg, Cd, Cu, Zn, Pb, and phenols surpassed the WW disposal standards (EPA, 2003; Iraqi Environmental Standards, 2011). Consequently, EMWW requires treatment processes so as to meet the WW disposal standards and be suitable for reuse. EMWW was observed to be a weak/low- to medium-strength WW type according to Henze and Comeau (2008) and Sperling (2007). Life style, season, sewerage system (combined or separate systems), climate, and areas/zones had an effect on the quality of the MWW. Fluctuations were noticed in the EMWW quality; commonly, time had no great effect on EMWW characteristics. Mustafa and Sabir (2001) reported that the EMWW discharge was 0.85 m3/s, while the discharge for 2020 was calculated as 5.56 m3/s (Metcalf and Eddy, 2014; GDWS, 2020). The discharge increase may be attributed to the increase in population, losses in the water supply, the expansion of Erbil City and its sewerage system, and extra storm water. Mixing with storm water and wasteful use of the water supply led to dilution of EMWW, especially in rainy seasons (Aziz, 2004). Biological contaminants are present in EMWW; accordingly, treatment and disinfection are necessary for EMWW. Nutrients (organic matter, nitrogen compounds, and phosphate) are present in EMWW, which is useful for agriculture and irrigation purposes. Heavy metals such as Cd, Cu, Zn, and Pb were reported in EMWW and exceeded the WW discharge standard limits; they normally affect the biological treatment processes and require extra treatment (Aziz et al., 2011). Data from Tables 1 and 2 were used for these assessments.
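The roughly six-fold rise in discharge can be sanity-checked with the usual first-order estimate, flow = population x per-capita generation; the inputs below are illustrative assumptions only, not figures taken from the cited sources:

def discharge_m3_per_s(population, per_capita_l_per_day):
    # Convert L/day to m3/s (1 m3 = 1000 L; 86,400 s per day).
    return population * per_capita_l_per_day / 1000.0 / 86400.0

# e.g., about 2 million people at about 240 L/capita/day (assumed values):
print(round(discharge_m3_per_s(2_000_000, 240), 2))  # ~5.56 m3/s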
First Scenario
A conventional WWTP consists of primary, secondary/biological, and advanced treatment units. Primary treatment units involve screens and comminution, grit removal, flow equalization, and primary sedimentation tanks (Metcalf and Eddy, 2014; Jasim, 2020). Removal efficiencies for some pollutants, such as BOD5, COD, NH3-N, TSS, and PO4, in the primary treatment units are shown in Table 4 (Asano and Tchobanoglous, 1987; Teleman et al., 2004; Metcalf and Eddy, 2014). The effluent of the primary units becomes the influent for the further treatment processes, such as aerated lagoons, an oxidation ditch, or a wetland. Removal efficiencies for BOD5, COD, NH3-N, TSS, and PO4 using aerated lagoons, an oxidation ditch, and a wetland are illustrated in Table 5. It can be noticed from the results shown in Table 5 that the wetland was generally more efficient than the other methods. Treatment of EMWW using both primary units and a wetland resulted in effluent parameters that satisfied the WW disposal standards of EPA (2003) and the Iraqi Environmental Standards (2011).
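For treatment units operating in series, the overall removal combines as R_total = 1 - (1 - R1)(1 - R2); the stage split below is an assumption chosen only so that it reproduces the 94.75 % overall BOD5 removal reported above for primary units plus the wetland:

def overall_removal(stage_efficiencies):
    # Each stage removes a fraction of what the previous stage passed on.
    remaining = 1.0
    for r in stage_efficiencies:
        remaining *= 1.0 - r
    return 1.0 - remaining

# e.g., 30% removal in the primary units followed by 92.5% in the wetland:
print(round(overall_removal([0.30, 0.925]), 4))  # 0.9475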
Second Scenario
In this scenario, EMWW is treated directly by aerated lagoon, oxidation ditch, or wetland methods. Table 6 illustrates the performance calculations of the mentioned methods. In general, the wetland technology offered better removal efficiencies than the aerated lagoon and oxidation ditch techniques. Treated EMWW remained within the standards for WW disposal, except for TSS, which was slightly higher than the standard (35 mg/L) (EPA, 2003; WHO, 2006; Iraqi Environmental Standard, 2011). The first scenario produced better effluent quality than the second scenario (Tables 4 to 6).
Reusing
In the current study, and based on the pH, EC, TDS, and SAR results, the degree of restriction on use for EMWW is slight to moderate (WHO, 2006; Aziz et al., 2019). According to the EC, total salts, and Na% values, EMWW is of a good to injurious type (Amin and Aziz, 2005). In another classification, and with regard to the total salts, Cl, SAR, and alkalinity % figures, EMWW is considered intermediate for certain crops (Amin and Aziz, 2005). Researchers reported that WW in Erbil City was not safe for all kinds of irrigation, while EMWW is suitable for irrigating green areas and for cooked vegetables (Amin and Aziz, 2005). Other authors stated that WW in Erbil cannot be used for irrigation directly (Aziz et al., 2019). | 2020-12-17T09:08:42.630Z | 2020-12-10T00:00:00.000 | {
"year": 2020,
"sha1": "937678582e73689257ecae7458379ca9b630cb64",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/978/1/012044",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "84efc1d5acc7a344f6a046621d19276e3c62a507",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Environmental Science"
]
} |
251120368 | pes2o/s2orc | v3-fos-license | Concurrent Acute Appendicitis and Type III Appendiceal Intussusception: A Case Report
Appendiceal intussusception is exceedingly rare. Although there are few case reports of concurrent ileocecal intussusception and acute appendicitis, to our knowledge, this is the first reported case of concurrent Type III appendiceal intussusception and acute appendicitis. We present the case of an 11-year-old male who underwent appendectomy with partial cecectomy for a Type III appendiceal intussusception with concurrent acute appendicitis.
Introduction
Appendiceal intussusceptions are exceedingly rare with an estimated incidence of 0.01% [1]. Initial reports were more common in children; however, a more recent study found increased rates in adults [1,2]. Interestingly, incidence broken down by age and gender reveals that appendiceal intussusception is more common in females in the adult population, while it is more common among males in the pediatric population.
Appendiceal intussusceptions are rarely diagnosed preoperatively, with one review reporting only 12 cases described in the medical literature [3]. Preoperative diagnosis is challenging and typically requires multiple diagnostic imaging studies, including abdominal ultrasound, barium contrast enema, and abdominal computed tomography (CT). However, even with all these imaging studies, dilated bowel loops may hide sonographic signs of an intussuscepted appendix [4].
Case Presentation
An 11-year-old male presented with a one-week history of worsening generalized abdominal pain that subsequently localized to the right lower quadrant. Physical examination was pertinent for a soft, nondistended abdomen which was tender to palpation in the right lower quadrant with a positive Rovsing sign and obturator sign. Incidentally, a cardiac murmur was also noted on the examination. Workup revealed leukocytosis with a white blood cell count of 18,000, and coronavirus disease 2019 (COVID-19) polymerase chain reaction (PCR) testing was negative. He underwent a CT of the abdomen/pelvis that demonstrated a dilated fluid-filled appendix visualized in the right lower quadrant measuring up to 12 mm in diameter with mucosal hyperenhancement consistent with uncomplicated appendicitis. There were no appendicoliths or signs suggestive of perforation or abscess formation. Cardiology was consulted and the patient was cleared for surgery. The patient received ceftriaxone and metronidazole per our institution's appendicitis management protocol.
The patient was taken to the operating room for routine laparoscopic appendectomy for acute appendicitis. As we attempted to identify the appendix and define the anatomy, we encountered the base of the appendix intussuscepted into the cecum (Figure 1). Multiple varied attempts were employed to reduce the intussuscepted appendix; however, they were unsuccessful and aborted due to concerns of serosal injury to the cecum, as well as traumatically fracturing the inflamed appendix. We were able to determine that the appendectomy would be feasible by sacrificing a small segment of the lateral distal cecum while preserving the medial portion of the cecum and the ileocecal valve. Careful attention was given to ensure no residual appendiceal stump was left behind because residual appendiceal stumps have been found to cause recurrent intussusception or can lead to appendicitis recurrence. The appendix and the intussuscepted portion found in the cecum were sent to pathology. The final pathology report showed focal transmural acute inflammation of the appendix with the proximal appendix showing edema and serosal fibrosis consistent with intussusception. The patient tolerated the procedure well and the postoperative course was uneventful.
The patient was discharged on postoperative day one. The patient did well in the follow-up visit with no complications.
Discussion
In 1941, McSwain described an anatomical classification based on the region of the appendix that is intussuscepted [7]. There are five anatomical classifications of appendiceal intussusception: Type I -mild invagination of the appendiceal tip; Type II -moderate invagination of the appendiceal tip within the proximal appendix; Type III -intussusception of the appendiceal base only; Type IV -a retrograde intussusception; and Type V -complete appendiceal intussusception into the cecum (Figure 2).
FIGURE 2: McSwain appendiceal intussusception classification.
Type I -mild invagination of the appendiceal tip; Type II -moderate invagination of the appendiceal tip within the proximal appendix; Type III -intussusception of the appendiceal base only; Type IV -a retrograde intussusception; Type V -complete appendiceal intussusception into the cecum.
Image drawn by Lily Chen.
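For reference, the five variants in the figure legend above can be captured as a simple lookup table (a restatement only, adding nothing beyond the legend):

MCSWAIN_TYPE = {
    1: "mild invagination of the appendiceal tip",
    2: "moderate invagination of the tip within the proximal appendix",
    3: "intussusception of the appendiceal base only",
    4: "retrograde intussusception",
    5: "complete appendiceal intussusception into the cecum",
}

print(MCSWAIN_TYPE[3])  # the variant encountered in the present case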
A few cases of ileocecal intussusception and concurrent appendicitis have been described in the medical literature. However, to our knowledge, this is the first report of a Type III appendiceal intussusception and concurrent acute appendicitis. Due to concerns about damaging the already inflamed appendix and/or causing injury to the cecum, traditional approaches to performing a laparoscopic appendectomy were not feasible. Fortunately, in our case, the appendix was partially intussuscepted at the base, and we were able to remove a small portion of the cecum while preserving the ileocecal valve. The optimal surgical approach depends on appendiceal intussusception classification. For example, for those with Type V appendiceal intussusceptions, other surgical options may include ligation of the mesoappendix which will lead to necrosis and sloughing of the fully intussuscepted appendix into the bowel lumen [5].
Typically, appendiceal intussusceptions are either idiopathic or due to predisposing factors such as anatomical variation, duplications, neoplasm, angiodysplasia, fecalith, foreign bodies, lymphoid hyperplasia, or underlying disorders such as cystic fibrosis [8]. It is unclear whether the inflamed appendix acted as a lead point resulting in the intussusception or the intussusception resulted in appendiceal obstruction and inflammation.
Conclusions
Appendiceal intussusceptions are rare and can also lead to acute appendicitis. They are typically discovered intraoperatively. Here, we present a concurrent Type III appendiceal intussusception with acute appendicitis managed laparoscopically with appendectomy with partial cecectomy. Laparoscopic management is feasible and safe.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2022-07-28T15:09:30.375Z | 2022-07-01T00:00:00.000 | {
"year": 2022,
"sha1": "a973923ceb62c46ae5564441cd9d05790e656664",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/97041-concurrent-acute-appendicitis-and-type-iii-appendiceal-intussusception-a-case-report.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "71850ec25524d5d671a35dd7df02e12ce912e035",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
247769109 | pes2o/s2orc | v3-fos-license | Chronic Stress Does Not Influence the Survival of Mouse Models of Glioblastoma
The existence of a clear association between stress and cancer is still a matter of debate. Recent studies suggest that chronic stress is associated with some cancer types and may influence tumor initiation and patient prognosis, but its role in brain tumors is not known. Glioblastoma (GBM) is a highly malignant primary brain cancer, for which effective treatments do not exist. Understanding how chronic stress, or its effector hormones glucocorticoids (GCs), may modulate GBM aggressiveness is of great importance. To address this, we used both syngeneic and xenograft in vivo orthotopic mouse models of GBM, in immunocompetent C57BL/6J or immunodeficient NSG mice, respectively, to evaluate how different paradigms of stress exposure could influence GBM aggressiveness and animals’ overall survival (OS). Our results demonstrated that a previous exposure to exogenous corticosterone administration, chronic restraint stress, or chronic unpredictable stress do not impact the OS of these mice models of GBM. Concordantly, ex vivo analyses of various GBM-relevant genes showed similar intra-tumor expression levels across all experimental groups. These findings suggest that corticosterone and chronic stress do not significantly affect GBM aggressiveness in murine models.
INTRODUCTION
The role of stress in cancer initiation and progression remains unclear, but it is known that stress can alter neuroendocrine and immune functions, along with having several implications in pathophysiological processes that are also fundamental to cancer growth and progression (1)(2)(3). Epidemiological studies have suggested that the combination of chronic stress and low social support is associated with a nine-fold increase in breast cancer incidence (4). On the other hand, experimental animal studies have provided evidence of the effects of stress on tumor progression. For instance, chronic restraint stress has been shown to promote colorectal tumor growth in nude mice via stimulation of colorectal carcinoma cell proliferation (1). Additionally, in a mouse model of breast cancer, chronic stress restructured the lymphatic networks within and around tumors to provide pathways for tumor cell dissemination (5). Similarly, several other studies have suggested that stress is associated with some types of cancers (e.g., pancreatic, prostate, ovarian, oral cancer), and may be a risk factor for cancer development and progression (1)(2)(3)(5)(6)(7). By contrast, it has been shown that the brain's reward system can modulate an anti-tumor immune response in tumor-bearing mice (8). However, nothing is known about the putative role of stress in glioblastomas (GBMs).
GBMs are the most frequent and malignant primary brain tumors in adults (9,10), being characterized by high levels of cellular proliferation, invasion, and necrotic regions, while presenting a remarkable inter-and intra-tumor heterogeneity (10,11). Despite treatment advances, GBM remains among the top deadliest cancers with very poor prognosis (12)(13)(14). The median survival is approximately 15 months, and the 5-year survival rate of GBM patients is still less than 5% after diagnosis (15)(16)(17)(18). Despite considerable progress in the understanding of the biological characteristics of GBM, their etiology has not been fully elucidated. Established risk factors only include exposure to high dose ionizing radiation that is believed to increase the likelihood of developing GBM (19). The full knowledge of the involvement of other risk factors that can have an impact in patient's prognosis is of great importance towards more preventive measures in the future.
Stress is generally defined as an actual or anticipated threat to well-being or disruption of the organism's homeostasis. The activation of the stress response is critical to improve an individual's chance of survival, and to promote adaptation when facing threatening or aversive situations (20). Chronic stress is characterized by a maladaptive response to long-term exposure to stressors that initiate a cascade of reactions, including activation of the sympathetic nervous system (SNS) and the hypothalamic-pituitary-adrenal (HPA) axis (21). This leads to local and systemic elevated levels of catecholaminergic neurotransmitters and involves an endocrine response with an increased release of stress hormones, such as glucocorticoids (GCs) (20,22). GCs execute a wide range of biological functions, including modulation of the immune, endocrine, and inflammatory responses (23). Accumulating evidence supports the role of GC signaling in the progression of cancer through increased cell proliferation, inhibition of apoptosis, and impairment of DNA repair (24). GC serum levels were associated with reduced patient survival in breast and lung cancers, as well as with acquired chemotherapy resistance through impairment of tumor cell death (23,25,26).
In this work, we studied the effects of elevated GC levels and chronic stress on GBM aggressiveness and survival. For that, we used multiple orthotopic mouse models of GBM, including syngeneic mouse and xenograft human GBM cell lines, to determine the prognostic impact of GC administration and chronic stress paradigms in GBM. This study provides the first insights into understanding whether a previous exposure to chronic stress may influence the prognosis of GBM.
Corticosterone Administration Does Not Affect Overall Survival of a Syngeneic Mouse GBM Model
Chronic stress activates the HPA axis, leading to the secretion of GCs from the adrenal glands, namely corticosterone (CORT).
CORT is an important effector of the stress response, inducing diverse genomic and non-genomic effects in most cells of the organism. Importantly, GC signaling has been suggested as a putative pathway through which chronic stress can impact tumor progression (3). With this in mind, we decided to first use a paradigm of chronic exogenous CORT administration in a mouse model of GBM and determine the prognostic impact of hypercortisolemia on the animals' OS. C57BL/6J male mice were subjected to 4 weeks of subcutaneous injections of 20 mg/kg CORT (CORT-GBM group), while the control group was subcutaneously injected with vehicle (GBM group), before the orthotopic implantation of the GL261 mouse GBM cell line (Figures 1A, B).
This chronic exogenous CORT administration paradigm has been described to induce abrogated weight gain and dysregulation of the HPA axis (27)(28)(29). To confirm the efficacy of this paradigm, the animals' body weight and adrenal gland weights upon sacrifice were recorded, and CORT circulating levels were measured at the beginning and at the end of the CORT administration protocol.
CORT administration significantly decreased mice body weight from day 9 until the end of the protocol (Figure 1C; 2-way ANOVA, F(13, 286) = 11.50, p < 0.0001). At sacrifice, animals exposed to chronic CORT administration (CORT-GBM) did not present significant differences in their adrenal gland weights compared with control animals (GBM; Figure 1D). After the full protocol of CORT administration, there was an increase of CORT levels at the ante meridiem (AM) measurement and a decrease at the post meridiem (PM) measurement in the CORT-GBM group (Figure 1E). This reflects a dysregulation of the HPA axis in the CORT-GBM group, with a significant decrease of the CORT PM/AM ratio (Figure 1F; t(10) = 8.015, p < 0.0001).
After GL261 orthotopic injection, the two groups were able to recover their weight from surgery, until the appearance of GBM-related symptoms and consequent weight loss (Figure 1G). No significant differences were found regarding the survival of the CORT-GBM and GBM groups (Figure 1H; log-rank test, p = 0.3399).
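The OS comparisons reported here and in the later experiments rest on standard Kaplan-Meier/log-rank machinery; the paper does not state its survival software, so the sketch below uses the Python lifelines package as one possible implementation, with placeholder group labels:

from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_os(days_a, events_a, days_b, events_b):
    # Fit and plot both survival curves, then return the log-rank p-value.
    kmf = KaplanMeierFitter()
    kmf.fit(days_a, event_observed=events_a, label="GBM")
    ax = kmf.plot_survival_function()
    kmf.fit(days_b, event_observed=events_b, label="CORT-GBM")
    kmf.plot_survival_function(ax=ax)
    return logrank_test(days_a, days_b,
                        event_observed_A=events_a,
                        event_observed_B=events_b).p_value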
Chronic stress has also been suggested to affect cell proliferation and differentiation (1). Thus, we evaluated the expression pattern of Ki67, a proliferation marker commonly overexpressed in GBM, and GFAP, an astrocytic marker, using immunohistochemistry analyses. GL261-derived tumor cells stained positively for Ki67, while GFAP-positive cells were mainly found in the periphery of the tumor, which is suggestive of astrogliosis. No major differences between the CORT-GBM and GBM groups were found regarding the qualitative expression of these two proteins (Figure 1I).
Previous Exposure to Chronic Restraint Stress Does Not Affect Overall Survival of a Xenograft Mouse GBM Model
Since exogenous CORT administration mimics only part of the stress response, a paradigm that reproduces the full stress response may be more appropriate. To understand if chronic stress could impact GBM aggressiveness, we studied an in vivo model using a human GBM cell line, which more closely mimics the human disease. Since most studies associating chronic stress with cancer progression in animal models are based on the restraint stress paradigm with immunocompromised mice (1, 2, 5, 30), we used a similar approach. NOD scid gamma (NSG) mice were exposed to a chronic restraint stress (CRS) or control protocol for 3 weeks before the orthotopic implantation of the human GBM cell line U87-MG (Figures 2A, B). This CRS paradigm has been associated with abrogated weight gain and/or weight loss and increased adrenal gland weight, and has been reported to induce a dysregulation of the HPA axis (1,30,31). We recorded the animals' body weight along the protocol and their adrenal gland weights upon sacrifice, and measured CORT circulating levels at the beginning and at the end of the CRS protocol to control the efficacy of this paradigm. CRS had a significant impact on mice body weight variation from day 4 until the end of the protocol (Figure 2C; 2-way ANOVA, F(8, 200) = 12.44, p < 0.0001). As expected, the adrenal glands of mice exposed to the CRS protocol were significantly heavier than those of control animals (Figure 2D; t(28) = 2.263, p = 0.0316). The CRS protocol also led to a disruption of the HPA axis, with a significant decrease of the CORT PM/AM ratio in the CRS-GBM group (Figure 2F; t(16) = 3.136, p = 0.0064). All of these measures indicate that the stress protocol was effective.
After implantation of U87-MG cells, both groups recovered their body weight, until the appearance of GBM-related symptoms and consequent weight loss (Figure 2G). No significant differences regarding the OS of the CRS-GBM and GBM groups were found (Figure 2H; log-rank test, p = 0.5847). U87-MG-derived tumor cells stained positively for Ki67, but the majority of the tumor cells were negative for GFAP (Figure 2I). The periphery of the tumor (reactive border) presented strong staining for GFAP (Figure 2I). This pattern of expression was similar between the control and CRS groups.
Previous Exposure to Chronic Unpredictable Stress Does Not Affect Overall Survival of a Mouse GBM Model
The stress response is known to produce remarkable changes in the immune system, which can compromise cellular immunity and, thus, contribute to tumor initiation, progression, and aggressiveness (32). Both SNS and HPA axis mediators regulate distinct aspects of immune function, including antigen presentation, T cell proliferation, and cell-mediated and humoral immunity (3,32). Thus, using an immunocompetent GBM mouse model with an aggressive stress paradigm was mandatory to evaluate a possible immunomodulatory effect that could impact GBM.
The chronic unpredictable stress (CUS) protocol is commonly used to study the impact of stress in animal models and involves daily exposure to a variety of stressors presented in a random, intermittent, and unpredictable form over several weeks. The stressors include social defeat, restraint, overcrowding, hot drier, shaking, inverted light cycle, and overnight illumination. C57BL/6J mice were subjected to a CUS protocol for 8 weeks (CUS-GBM group) before orthotopic injection of a mouse GBM cell line (GL261) (Figures 3A, B). This CUS protocol has been described to induce abrogated weight gain and alterations in the adrenal glands, because the stressors continuously stimulate the synthesis of stress hormones, which can lead to morphological alterations (33,34). Moreover, previous works have reported higher adrenal gland weights after a stress protocol in mice (33,35,36). Therefore, to confirm the efficacy of the CUS protocol, the animals' body weight was recorded, and for the GBM and CUS-GBM groups we determined the adrenal gland weights upon sacrifice and the CORT circulating levels at the beginning and at the end of the CUS protocol.
CUS significantly decreased the body weight of the CUS-GBM group when compared with the GBM group from day 6 until the end of the CUS protocol (Figure 3C; 2-way ANOVA, F(17, 504) = 18.64, p < 0.0001). Animals exposed to the CUS protocol before the orthotopic implantation of GBM presented a statistically significant increase in adrenal gland weight (Figure 3D; t(27) = 3.319, p = 0.0026), suggesting an effective stress protocol. After the CUS protocol, there was a dysregulation of the HPA axis in the CUS-GBM group, with a significant decrease of the CORT PM/AM ratio (Figure 3F; t(20) = 5.449, p < 0.0001). At later timepoints, coincident with the appearance of GBM-related symptoms, significant weight loss was observed for all groups (Figure 3G). No significant differences regarding OS were found (Figure 3H; log-rank test, p = 0.8026). The immunohistochemistry for Ki67 and GFAP proteins did not reveal major differences between groups (Figure 3I).
To further understand if the CUS protocol could affect GBM aggressiveness at the molecular level, we performed qRT-PCR analyses in ex vivo tumor tissues collected from these mice for genes associated with GBM aggressiveness, such as Cxcr4, Gfap, Akt1, Mapk1, Mapk3, Stat3, Egfr, Pdgfra, and Trp53. No significant differences were found in gene expression levels between the experimental groups (Figure 4; unpaired t-test).
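The paper does not state its qRT-PCR normalization scheme, but the conventional Livak 2^-ddCt fold-change calculation for such group comparisons is sketched below with placeholder Ct values:

def fold_change_2ddct(ct_target_stress, ct_ref_stress, ct_target_ctrl, ct_ref_ctrl):
    # Normalize each group's target Ct to its reference (housekeeping) gene,
    # then express the stress group relative to control.
    dd_ct = (ct_target_stress - ct_ref_stress) - (ct_target_ctrl - ct_ref_ctrl)
    return 2 ** (-dd_ct)

# Identical normalized Ct values give a fold change of 1.0, consistent with
# the absence of group differences reported above:
print(fold_change_2ddct(24.0, 18.0, 24.0, 18.0))  # 1.0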
DISCUSSION
GBM accounts for 80% of malignant primary brain tumors in adults and remains the most lethal, with a median OS of 14.6 months after diagnosis (18,37). Despite considerable progress in the understanding of the biological characteristics of GBM, this cancer is still associated with a very poor prognosis (18,38). A complete understanding of the involvement of bio-behavioral factors in cancer is relevant for preventive measures and for awareness of risk factors that can impact patients' prognosis.
Evidence from animal and human studies suggests the implication of chronic stress in the aggressiveness of cancer (5,(39)(40)(41)(42). Nevertheless, the association between psychological stress and cancer remains enigmatic, with some possible biological mechanisms, such as dysregulation of the neuroendocrine axis and impairment of immune functions, being proposed and linked with some cancer types (3, 23, 43, 44). There are multiple biological mechanisms that underlie the link between stress and cancer, and, as a result, the effects of stress may vary across cancer types (3). Evidence of the influence of bio-behavioral factors on GBM has been previously documented. Previous exposure to environmental enrichment in mice before orthotopic implantation of a mouse GBM cell line leads to prolonged survival and reduced glioma growth (45). This is evidence that the brain microenvironment can be modulated by environmental factors, such as prolonged sensory, social, and physical experiences, ultimately influencing the aggressiveness of brain cancer (45). In fact, some paracrine interactions between glioma cells and the brain microenvironment have been indicated to influence glioma pathophysiology, with microglial cells contributing to GBM cell invasion and non-neoplastic astrocytes being converted into a reactive phenotype by the glioma microenvironment (46,47). Mechanistic investigations have documented a possible mechanism through which the tumor microenvironment modulates GBM pathophysiology, identifying a crosstalk between GBM and glial cells (48). The reward system can also manipulate tumor growth. A recent study was able to establish a causal link between manipulation of the brain's reward system and tumor growth that is dependent on SNS activity, with an anti-tumor immune response (8). These findings elucidate how positive stimuli and a patient's psychological state can impact cancer progression. Still, the impact of GCs and/or chronic stress on GBM remains uncertain.
Our study provides novel insights into the putative effects of a pre-exposure to chronic stress on GBM aggressiveness. We demonstrated through in vivo approaches with mouse GBM models that CORT and chronic stress, both in immunocompetent and immunodeficient contexts, do not affect GBM prognosis and aggressiveness.
Chronic stress results in systemically elevated levels of catecholaminergic neurotransmitters and GCs, which are able to regulate cellular processes such as inflammation, apoptosis, and the cellular immune response (49,50). Previous studies suggested that chronic stress can contribute to increased tumor growth through GC signaling, since GCs regulate a wide variety of cellular processes and physiologic functions through genomic and non-genomic actions (50,51). A study with clinical and mouse experimental data suggested that dexamethasone, a synthetic GC with potent anti-inflammatory activity, may decrease the effectiveness of treatments and shorten survival in GBM patients (52). Furthermore, dexamethasone treatment of human GBM primary cells fostered a glioma stem cell-like phenotype, typically associated with more aggressive and malignant features (53). Our findings suggest that exogenously administered CORT does not affect the OS of a mouse GBM model. This is of interest, because GCs have been described to play different roles in cancer (54). For example, dexamethasone induced proliferation of tumor cells in a preclinical lung carcinoma mouse model (55). On the other hand, low-dose dexamethasone suppressed ovarian cancer progression and metastasis in an immunocompetent syngeneic mouse model (56). It is important to note that this protocol only mimics part of the stress response, not completely replicating all the physiological changes induced by stress. In this regard, it has been described that both catecholamines and GCs can act in a synergistic fashion to facilitate cancer growth (3,20,50). For example, cortisol increased beta-adrenergic receptor density with increased cAMP accumulation in lung carcinoma cells (57). Interestingly, previous studies suggested the existence of direct effects of beta-adrenergic signaling in models of GBM, particularly in in vitro contexts, where both propranolol and isoproterenol suppressed the proliferation of human glioblastoma cell lines (58), and treatment of cancer cells with propranolol counteracted epidermal growth factor receptor (EGFR) oncogenic traits (59), which are associated with GBM aggressiveness features. So, it is plausible that a chronic stress paradigm that mimics more closely the stress response, with the increase of both catecholamine and GC levels, may lead to a greater impact on cancer (44). Also, the effects of CORT injection are time-dependent, and as soon as the last injection is administered, the cumulative effect can start to be lost. Therefore, future studies are warranted to properly address the impact of GC and adrenergic signaling in different tumor types in vivo.
A wide variety of stress paradigms in animal models have been used to study the causal effect of stress on cancer aggressiveness. The majority of these studies used xenograft cancer models with a CRS protocol (1,2,5). We demonstrated that previous exposure to a CRS protocol did not impact GBM aggressiveness in an immunocompromised xenograft model of GBM. Conversely, the CRS paradigm has been shown to promote colorectal cancer growth in a xenograft mouse model (1). However, it has also been reported that restraint stress alone did not significantly promote colorectal cancer growth in a similar xenograft mouse model (60). Another study showed that CRS did not decrease the survival of an oral squamous cell carcinoma mouse model (61).
The stress response is also known to produce remarkable changes in the immune system, which can compromise cellular immunity through down-regulation of the cellular immune response (3,32,44,62). Malignant tumors also develop multiple escape mechanisms through which they evade recognition and destruction by the immune system (63). Considering that NSG mice are severely immunocompromised, which could affect tumor aggressiveness and interfere with survival, we also explored immunocompetent models, in which we could account for the contribution of the immune system to tumor progression. In this setting, the use of a paradigm that comprises all physiological parameters of the stress response is of extreme importance. The CUS protocol is a long-duration paradigm commonly used to study the impact of stress in animal models; it is characterized by random, intermittent, and unpredictable exposure to a variety of different stressors, ultimately producing a more robust stress phenotype (33,34). Mice exposed to the CUS protocol before GBM implantation did not present any significant differences in OS. This suggests that, in this GBM model, previous exposure to chronic stress does not affect tumor aggressiveness.
Consistent with our survival results, histological and molecular analyses did not show any significant differences between groups in the different stress paradigms we tested. A more aggressive GBM phenotype would be expected to present increased proliferative activity or increased expression of genes related to GBM aggressiveness. We should note that GBM is a highly heterogeneous cancer, and different degrees of aggressive phenotype can occur even between individuals of the same group. Also, since these samples were obtained at the final endpoint of mouse survival, the comparison between animals could be affected, as each sample was collected at a different time-point. Since the tumors were all at the same final stage, independently of the time they took to reach it, an established fixed time-point for sacrificing animals could address this question in future studies. However, the outcome of survival is of the utmost importance for answering this hypothesis, and increased expression of proliferation markers or tumor size cannot always predict the outcome.
Our findings were surprising in light of other studies suggesting that stress/GCs can impact cancer initiation and progression (1-3, 5, 6, 42). Though several factors could influence the outcome of these experiments, it is important to note that stress may affect very distinct cancer types differently, and the GBM models we used in this work are very aggressive and fast-progressing, leading to a short OS that may limit the temporal window in which to observe a putative impact of stress, particularly if that effect is not very pronounced. Nonetheless, these validated mouse models recapitulate the extremely malignant behavior and clinical presentation of GBM, one of the most aggressive human cancers, in which patients present extremely poor survival. In addition, we used one mouse and one human GBM cell line, so the specificity of each model must be taken into consideration, along with the fact that GBM is highly heterogeneous. On the other hand, different strains have different susceptibilities to stress (64)(65)(66). From this perspective, less aggressive GBM models would be interesting to study in order to complement these findings. For example, one could use a genetic model in which there is already a predisposition for GBM formation (67): the Cre/Lox mouse model hGFAP-Cre+;p53lox/lox;Ptenlox/+ of glioma, in which 73% of mice develop grade III and grade IV gliomas at a median latency of seven months (68), or the RCAS/Ntv-a mouse model Chk2+/− of glioma, which presents an average survival of 55 days, with 40% of mice developing grade IV gliomas (69). These models would be very interesting for identifying the effects of stress on GBM initiation, degree of malignancy, penetrance, and survival.
In this study, we provide evidence regarding the prognostic impact of previous exposure to chronic stress and GCs in GBM. Using in vivo approaches, we demonstrate that prolonged pre-exposure to chronic stress/GCs does not impact mouse OS, in the context of both a human and a mouse GBM cell line model. Nonetheless, additional studies are needed, using other models, to fully exclude a putative contribution of stress to GBM pathophysiology at different stages and dimensions of the disease, including tumor initiation, progression, and aggressiveness.
Animals
Ten-week-old male C57BL/6J mice were obtained from Charles River Laboratories (027), and female NOD.Cg-Prkdcscid Il2rgtm1Wjl/SzJ (NOD scid gamma, NSG) mice were obtained from Charles River Laboratories (005557). Mice were housed 4-5 per cage under standard environmental conditions: a 12/12-hour light/dark cycle with lights on at 8 AM, a room temperature (RT) of 22°C, a relative humidity of 55%, and ad libitum access to food and water. Animals were handled twice per day for 2 weeks before the experiments. CD-1 IGS male mice used in the CUS protocol were purchased from Charles River (022) at 12 weeks of age and housed individually under the same conditions. All experiments were performed in agreement with the European Union Directive 2010/63/EU and approved by the national ethical committee DGAV (Direção Geral de Alimentação e Veterinária, reference no. 008516). Sentinel mice housed in the same room were used to confirm the specified pathogen-free health status of the mice, as recommended by the FELASA guidelines.
Cell Culture
The established human GBM U87-MG cell line (kindly provided by Dr. Joseph Costello, University of California, San Francisco) and the mouse GBM GL261 cell line (kindly provided by Dr. Maria Conceição de Lima, University of Coimbra) were used in this study. Cells were cultured in Dulbecco's Modified Eagle Medium (DMEM; Biochrom GmbH, Berlin, Germany) supplemented with 10% Fetal Bovine Serum (FBS; Biochrom GmbH, Berlin, Germany) and maintained in a humidified atmosphere at 37°C and 5% CO2. For the in vivo orthotopic injections, GBM cells were trypsinized and viable cells were counted using Trypan Blue (Gibco) in a Neubauer chamber. The cell suspension was centrifuged and resuspended in the appropriate volume of cold phosphate buffered saline (PBS 1x) for orthotopic injection (5 µL/animal).
Intracranial Surgery and In Vivo Assays
For the orthotopic injection of GBM cells, animals were anesthetized with a mixture of ketamine (Imalgene, Merial, USA; 75 mg/kg) and medetomidine hydrochloride (Dorbene, Zoetis, Spain; 1 mg/kg) injected intraperitoneally, and analgesia was achieved with butorphanol (5 mg/kg, injected subcutaneously). Mice were placed on a stereotaxic head frame (Stoelting, USA), a small incision was made in the skin, and a burr hole was drilled in the skull. 2x10^4 GL261 or 2x10^5 U87-MG cells were injected, using a point style 4 beveled 26s-gauge needle on a 10 µL Hamilton syringe, at 1.7 µL/min into the right striatum (1.8 mm mediolateral, 0.1 mm anteroposterior, and 2.5 mm dorsoventral from the bregma). After injection, the needle was left in place for 2 min to avoid any backflow along the needle tract. Mouse body weight was measured regularly to assess stress efficacy and, later, tumor-related symptoms, and behavior and symptomatology were monitored daily. For the evaluation of OS, humane endpoints for sacrifice were applied when severe weight loss (>30% of maximum body weight) or a moribund condition was observed. Mice were sacrificed with a lethal dose of anesthesia injected intraperitoneally. Animals assigned to histological analysis were perfused with saline solution followed by 4% paraformaldehyde (PFA), and brains were collected immediately and stored in 4% PFA until embedding in paraffin. Animals assigned to molecular analysis were decapitated after anesthesia overdose and the head was immersed for 5 s in liquid nitrogen (snap-freeze technique), followed by macrodissection of the tumor tissue. Adrenal glands were collected and weighed on an analytical balance immediately after sacrifice.
Stress Protocols
CORT Administration: The chronic CORT administration protocol consists of daily subcutaneous injections of CORT for 4 weeks at 20 mg/kg in 1% ethanol, delivered with sesame oil as vehicle (28,29,31). The efficacy of the stress protocols was confirmed by body weight alterations, adrenal gland weight measurements, and determination of circulating CORT levels. CRS Protocol: The CRS protocol consists of 3 weeks of restraint of mice for 2 h in the morning in a 50 mL plastic tube (Falcon) with holes, as previously described (1,35). CUS Protocol: The CUS protocol consists of 8 weeks of daily exposure to several different stressors presented in a random order and in an unpredictable form. The different types of stressors are: shaking (groups of 4/5 mice are placed in a plastic box container on an orbital shaker for 2 h at 150 rpm); overcrowding (groups of 8/9 mice are placed in a plastic box container for 3 h); restraint (the mouse is placed in a 50 mL plastic tube (Falcon) with openings in the front and sides to allow breathing, for 3 h); hot drier (mice are exposed to a hot airstream from a hair dryer for 15 min); social defeat (mice are introduced into the cage of an aggressive mouse (CD-1 IGS) and, after being defeated, are placed inside a transparent, perforated plastic container within the resident home cage for 5-20 min to avoid further physical contact); overnight illumination (mice are exposed to regular room light during the night period); and inverted light cycle (regular room light is off during the day and on during the night for 2 days) (33).
Blood Collection and Serum CORT Analysis
For measuring circulating CORT levels, tail blood was collected from a subset of animals before and after the stress paradigms (two collections were performed: morning (8 AM) and evening (8 PM)). Collections were made in less than 2 min after taking the animal from its homecage. After collection, the blood was centrifuged for 10 min at 13,000 × g and the serum (supernatant) was stored at -80°C until analysis. Serum CORT concentration was determined using a commercially available immunoassay kit (DetectX Corticosterone Enzyme Immunoassay Kit, Arbor Assays, Ann Arbor, MI, USA; #K014-H5) according to the manufacturer's instructions. Assay sensitivity was 18.6 pg/mL.
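Kit readouts of this type are usually converted to concentrations by fitting a four-parameter logistic (4PL) standard curve and inverting it for each sample optical density. The sketch below illustrates that step; it is an assumed analysis, not taken from the kit manual, and every standard concentration and reading is invented for illustration.

```python
# Hypothetical 4PL standard-curve fit for a competitive CORT immunoassay.
# All concentrations (pg/mL) and optical densities below are invented.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero dose, d: response at infinite dose,
    # c: inflection point, b: slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([78.125, 156.25, 312.5, 625.0, 1250.0, 2500.0, 5000.0])
std_od = np.array([1.85, 1.60, 1.25, 0.90, 0.58, 0.35, 0.20])  # OD falls as dose rises

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[2.0, 1.0, 600.0, 0.1])

def od_to_conc(od, a, b, c, d):
    # Invert the fitted 4PL curve to recover a sample concentration
    return c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

print(f"Estimated CORT: {od_to_conc(0.72, *params):.1f} pg/mL")
```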
Immunohistochemistry
Tissues were formalin-fixed and paraffin-embedded, and cut into 4 µm sections. Paraffin wax was removed and the samples were rehydrated in an autostainer (Leica XL) by immersing the slides in a sequence of xylene, absolute ethanol, 96% ethanol, 70% ethanol and water. Before antigen retrieval, the Ki67 slides were washed with TBS-Tween 0.5% for 10 min followed by TBS 1x. Antigen retrieval was carried out by Heat Induced Epitope Retrieval (HIER), through immersion of the slides in sodium citrate buffer (10 mM sodium citrate, 0.05% Tween 20, pH 6.0) for 20 min. Slides were then incubated in 3% hydrogen peroxide (H2O2) for 10 min. The UltraVision Large Volume Detection System Anti-Polyvalent HRP (LabVision Corporation, Thermo Scientific) was used. The blocking solution (LabVision kit) was applied for 30 min; then the respective primary antibody, Ki67 (#550609, BD Bioscience, 1:200) or GFAP (#Z0334, DAKO, 1:2000), diluted in the LabVision kit primary antibody diluent, was applied and incubated overnight at 4°C. A biotinylated goat secondary antibody (LabVision kit) was applied, followed by streptavidin peroxidase (LabVision kit), with 3,3'-diaminobenzidine (DAB) substrate used as chromogen (1 mL of DAB substrate buffer + 1 drop of DAB chromogen, DAKO). After rinsing in TBS 1x and running water, contrast and counterstaining were performed in the autostainer (Leica XL) by immersing the slides in a sequence of running water, Harris Hematoxylin (25% for Ki67 and 50% for GFAP), running water, 0.5% ammoniacal water, running water, 96% ethanol, absolute ethanol and xylol. Slides were mounted using Entellan. Immunohistochemistry photographs were taken with an Olympus BX61 microscope using the CellSens Dimension software at 100x magnification.
Quantitative Reverse Transcriptase-Polymerase Chain Reaction (qRT-PCR)
Total RNA from tumor tissue (collected when animals were sacrificed) was extracted using Trizol Reagent (Invitrogen). One µg of total RNA (quantified with a NanoDrop ND-1000 spectrophotometer) was reverse transcribed into complementary DNA (cDNA) using the High Capacity cDNA Reverse Transcription Kit (Applied Biosystems).
The expression levels of the mouse mRNA transcripts C-X-C motif chemokine receptor 4 (Cxcr4, GeneID: 12767), glial fibrillary acidic protein (Gfap, GeneID: 14580), AKT serine/threonine kinase 1 (Akt1, GeneID: 11651), mitogen-activated protein kinase 1 (Mapk1, GeneID: 26413), mitogen-activated protein kinase 3 (Mapk3, GeneID: 26417), signal transducer and activator of transcription 3 (Stat3, GeneID: 20848), epidermal growth factor receptor (Egfr, GeneID: 13649), platelet derived growth factor receptor alpha (Pdgfra, GeneID: 18595), and transformation related protein 53 (Trp53, GeneID: 22059) were assessed by qRT-PCR. TATA-binding protein (Tbp, GeneID: 21374) was used as the reference gene. Primer set sequences are detailed in Table S1. The KAPA SYBR® FAST qPCR Master Mix (2X) Universal kit was used. Reactions were performed in duplicate and run on a CFX96 thermal cycler using the Bio-Rad CFX Manager software. The PCR conditions were as follows: 3 min at 95°C; followed by 40 cycles of denaturation for 3 s at 95°C, annealing for 30 s at the respective melting temperature (Tm; Table S1), and extension for 30 s at 72°C; dissociation was performed in 5 s steps at 65°C, increasing the temperature in 1°C increments from 65°C to 95°C. PCR product sizes were confirmed on 2% agarose gels. Gene expression was evaluated by relative quantification using the delta Ct method (ΔCt), with each gene normalized to the Tbp reference gene.
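The ΔCt quantification described above reduces to normalizing each target Ct against the Tbp reference; a minimal sketch follows (the Ct values are invented for illustration):

```python
# Delta Ct (ΔCt) relative quantification, normalized to the Tbp reference gene.
# Ct values are invented duplicates for illustration.
import numpy as np

def relative_expression(ct_target, ct_reference):
    """Return 2^-(mean Ct_target - mean Ct_reference)."""
    delta_ct = np.mean(ct_target) - np.mean(ct_reference)
    return 2.0 ** (-delta_ct)

ct_tbp = [24.8, 24.9]   # reference gene, duplicate wells
ct_gfap = [21.3, 21.5]  # target gene, duplicate wells

print(f"Gfap relative expression: {relative_expression(ct_gfap, ct_tbp):.2f}")
```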
Statistical Analysis
Statistical analysis was performed using IBM SPSS Statistics version 24, and graphs were produced using GraphPad Prism version 6. To determine statistical differences between groups in adrenal gland weight and in the CORT PM/AM ratio, a two-sided unpaired t-test was applied. Overall survival was analyzed using the log-rank test. Body weight variation between groups was analyzed using two-way analysis of variance (ANOVA) followed by the post-hoc Bonferroni test for multiple comparisons. Results are expressed as group means ± SD (standard deviation), and the level of significance in all statistical analyses was set at p < 0.05.
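For readers reproducing the survival comparison outside SPSS, the log-rank test can be sketched as below. The lifelines library is our assumed choice (the authors used SPSS), and the survival times are invented for illustration.

```python
# Log-rank comparison of overall survival between two groups.
# Survival times (days to humane endpoint) are invented for illustration.
from lifelines.statistics import logrank_test

control_days = [22, 25, 27, 28, 30, 31]
stress_days = [21, 24, 26, 29, 30, 33]
events_control = [1] * len(control_days)  # 1 = endpoint reached (event observed)
events_stress = [1] * len(stress_days)

result = logrank_test(control_days, stress_days,
                      event_observed_A=events_control,
                      event_observed_B=events_stress)
print(f"log-rank p-value: {result.p_value:.3f}")  # p >= 0.05 -> no OS difference
```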
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The animal study was reviewed and approved by the Direção Geral de Alimentação e Veterinária.
FUNDING
(… 2018 to BC). BC was also funded by Fundação Calouste Gulbenkian and Liga Portuguesa Contra o Cancro.
ACKNOWLEDGMENTS
The authors would like to acknowledge the support of the ICVS scientific platforms, including animal facility, histology and microscopy departments.
On the Construction of Rule of Law Culture Centered on Harmonious Society
Harmonious society is an important development goal of China and a concept for governing the country. The construction of the culture of rule of law intrinsically unites with the connotation of harmonious society; the culture of rule of law we want to construct not only shares the common features of the development of rule-of-law culture, but also bears the nationality and regionality of China; the construction of the culture of rule of law should adhere to the rational thinking of combining history and reality, the local and the extra-territorial, transplantation and innovation, with special attention to the promotion and popularization of the culture of rule of law.
Some scholars hold that legal culture should be understood as "a position and method," applying the cultural interpretation method to legal research. Due to the polysemy of the concept of culture, scholars differ on the meaning of legal culture, but their views can be divided into two types: legal culture as a method of legal research, and legal culture as an entity with research objects. The latter can be further divided into two viewpoints: culture in conceptual form only, and culture in both conceptual and substantial form. Each viewpoint places a different emphasis on the "concept" or the "substance."
What we call the culture of rule of law is often expressed as legal culture in legal scholarship; in political science, it is often expressed as a culture of rule of law. Legal culture stands opposite political culture and religious culture; the culture of rule of law stands opposite the culture of rule by individuals. Legal culture can involve the past, the present and the future; the culture of rule of law focuses on the modern form of legal culture. In general use the two concepts largely coincide and are often used interchangeably. From the cultural perspective, the culture of rule of law can be divided into an explicit structure and an implicit structure. The culture of rule of law in its explicit structure comprises laws and regulations, legal systems and legal facilities. The culture of rule of law in its implicit structure comprises legal psychology, legal awareness (legal concepts) and legal thinking. In theory, any kind of legal culture, whatever its content or structure, should be harmonized and unified across the explicit and implicit levels; that is, harmonization among the tangible legal norms, legal systems and legal facilities and the intangible legal ideology that is compatible with the tangible structure [1].
However, there are differences between the culture of rule of law and legal culture. First, from the perspective of historical development, legal culture is the condensation of history and reality, including the achievements of legal culture past and present. It is a comprehensive reflection of ancient, modern and contemporary legal thoughts, theories and concepts, while the culture of rule of law is the product of legal culture developed to a certain historical stage. The culture of rule of law is a concentrated reflection of modern and contemporary legal thoughts, theories and concepts, and is inseparable from the rise and development of modern democratic politics. Throughout the history of human society, the culture of rule of law has always been closely linked to the ideals pursued by human beings, such as law, rights, freedom, democracy, equality, and order. Legal culture can involve the past, the present and the future; the culture of rule of law focuses on the modern form of legal culture. Second, as to cultural basis, legal culture has no special requirements for culture, while the culture of rule of law requires rational culture as its foundation. The cultural foundations required by rule of law include the scientific spirit, human rights ideology, civic awareness, and concepts of rights. Without these rational cultures, the culture of rule of law can be neither established nor maintained. Third, from the perspective of concept, legal culture is the opposite of political culture, religious culture and so on, whereas the culture of rule of law is the opposite of the culture of rule by individuals and the culture of rule by rites. Since rule of law is by no means simply about law, but a kind of legal governance established on the negation of rule by individuals, rule of law and rule by individuals have very different ideological and theoretical foundations. In this sense, therefore, the culture of rule of law is an important symbol in human society of the transition from rule by individuals to rule of law; it is the concentrated expression of the modernization of legal culture [2]. The culture of rule of law exists at the "spiritual" level, as opposed to the "artifact" level. It is a concept of rational rather than irrational culture, a type of culture reflected in industrial rather than agricultural society.
Therefore, the culture of rule of law is the intrinsic spiritual part of the legal phenomenon, distinguished from external explicit elements such as the legal norm system, legal facilities, and the operation of the legal system. It mainly includes thoughts, consciousness, feelings, beliefs, knowledge and theory concerning the current law, and so on. In summary, there are three levels [3]. The first is the level of legal psychology, mainly manifested as psychological feelings and reactions as well as long-formed habits and customs; in general, whether one is willing to let the law adjust or standardize daily life, for example, whether one despises the law and stays away from it, or attaches importance to the law and stays close to it. The second is the level of legal awareness. Because consciousness is complex, legal consciousness can only be a loose concept; it mainly includes, under given social conditions, people's understanding, evaluation and emotional experience of the current law and legal phenomena, the adjustment of their own behavior, and the resulting concepts of rule of law, such as the primacy of rights and legal supremacy. The third is the level of legal thought. Legal ideology is a high-level legal cognition; it is the sum of systematized and theorized understandings of law and legal phenomena, and of systematic legal consciousness and legal values. This level should reflect the supremacy of legal authority and the rule of just law.
In summary, the author believes that the culture of rule of law derives from a particular political, economic and cultural history and realistic environment, and has been relatively stably accumulated in a country or region after a long-term process of socialization; that is, it is the way of thinking and behaving of a country, region, nation and society toward legal life, with values at its core, including people's awareness of rule of law, the concept of rule of law, the thinking of rule of law and their values regarding law.
Organic integration of the culture of rule of law and harmonious society
Harmonious society has a profound connotation and can be expressed as "a stable, democratic and vigorous socialism society, where people can live in harmony with nature and in friendship with each other, where a rational social order and legal system are established, and where fairness and justice are upheld." Building a socialism culture of rule of law and establishing a harmonious society are inherently integrated.
First of all, the construction of harmonious society strongly promotes the culture of rule of law.
This is mainly manifested in the following aspects: i. Harmonious society itself is the value pursuit of the culture of rule of law. Harmonious society is not only a beautiful vision of our ideal society, but also a realistic goal of building a well-off society in an all-round way in the primary stage of socialism. Not only does the concept of harmonious society constitute the overall value orientation of the culture of rule of law in China, but the six elements of the harmonious-society connotation will also become an important focus of the value and value criteria of legal culture in China.
ii. The construction of harmonious society creates an ecological environment for the culture of rule of law. The formation of the culture of rule of law depends on a certain social and historical background and specific formative conditions. Since the reform and opening up, the gradual establishment of China's market economic system, the orderly advancement of democratic politics and the obvious progress of rational culture have laid a solid foundation for the formation of the culture of rule of law in China. The proposal and development of harmonious society will further promote the coordinated development of China's material, political and spiritual civilizations, and will certainly create more favorable conditions for the formation of the culture of rule of law in China.
iii. The practice of harmonious society provides a broad platform for the construction of the culture of rule of law. The construction of harmonious society is not only an important value goal, but also a profound social practice. This practical process not only tests and enriches the theory of socialism with Chinese characteristics, but also provides a broad platform for the culture of rule of law in China. This platform not only allows the culture of rule of law with Chinese characteristics to fully play its role, but also allows it to continually withstand the test of harmonious social practice and promote its continuous development.
Secondly, the construction of the culture of rule of law responds positively to harmonious society.
This can be explained along the following dimensions: i. The culture of rule of law constitutes the soul and essence of the construction of harmonious society. The harmonious development of social modernization requires not only a material basis, but also the power of spiritual culture as its soul. The culture of rule of law, as an advanced human culture, must become an organic part of the soul of harmonious society and an important spiritual orientation of socialism harmonious society. This is confirmed by the fact that democracy and rule of law are placed first in the connotation of harmonious society, and that the spirit of the culture of rule of law is contained in every element.
ii. The culture of rule of law provides effective institutional support for the construction of harmonious society. The culture of rule of law not only constitutes the soul of harmonious society as spiritual consciousness, but also serves as a normative institutional form; a good institutional arrangement is more fundamental, overall, stable and long-term, and will provide lasting institutional support for harmonious society. It can be said that each of the six aspects of socialism harmonious society requires good institutional arrangements, and all need to be brought under the rule of law to develop in a harmonious and orderly manner. iii. The culture of rule of law is the necessary way to reshape the legal quality of subjects in harmonious society. The culture of rule of law is not only expressed as the spiritual concepts and normative systems of society; it also transforms and internalizes these concepts and systems into the value beliefs and behaviors of social subjects, making them part of the legal quality of the subject. Since culture is a trait of human society, people are not only the creators and communicators of culture, but also its doers and carriers. Therefore, in a unified social subject, culture shapes the quality of people and quality reflects human culture; and because all aspects of social modernization ultimately depend on the modernization of people's own qualities, the modern citizen's legal quality, conserved and shaped by the culture of rule of law, will be a fundamental decisive factor in ensuring the success of the modernization of China's legal system and of harmonious society [4].
In this way, harmonious society and the culture of rule of law are closely related, mutually reinforcing. The two are unified in the great cause of building and developing socialism with Chinese characteristics, and unified in the grand plan of the great rejuvenation of the Chinese nation. We should also carefully consider the connotation of the construction of the culture of rule of law from the perspective of harmonious society.
The Interpretation of the Connotation of the Culture of Rule of Law
The academic community agrees that the culture of rule of law is the product of legal culture developed to the modern stage. The culture of rule of law is based on a prosperous commodity economy, developed democracy and civil society; takes rights, freedom, equality, fairness, justice and rationality as essential elements; takes popular sovereignty, the supreme authority of the constitution and law, the protection of human rights, the supervision and control of public power, administration according to law, and a fair and independent judiciary as core values; and includes a stable social legal psychology of abiding by the law, believing in the law, and safeguarding the law. The interpretation of the connotation of the culture of rule of law can follow the lines of thinking below.
The advanced nature of the times
The culture of rule of law is a way of thinking and behaving toward legal life, with values at its core, held by a country or a nation. Historically, rule of law has always been linked to democracy. The fundamental spirit of rule of law is that the people are the masters of the country; the fundamental strength of rule of law lies in the support of the people; the core value of rule of law lies in the democratic system. Socialism democracy is people's democracy. The people's basic right to participate in the management of the country's social affairs through the election of representatives must be guaranteed, and this basic right can be realized only under the leadership of the Party. People's democracy under the leadership of the Party is the political premise and political guarantee of socialism rule of law. Therefore, the basic connotation of the culture of rule of law that we advocate is people's democracy under the leadership of the Party, a high condensation of the people's democratic spirit. The culture of rule of law with people's democracy as its connotation reflects, in terms of values, the equality of subjects, the concept of honesty and the supremacy of law, and reflects freedom, equality and human rights at the level of consciousness. This content-form relationship between people's democracy and the culture of rule of law reflects the class interests and democratic characteristics of an advanced culture.
Historical necessity
With the continuous deepening of reform, China has not only made great achievements in the economic field; the entire society has also experienced tremendous changes, and the people's material and cultural living standards have risen rapidly. Over the past 40 years, people's thoughts, concepts and understandings have changed greatly, with particularly direct, obvious and strong reflections in the legal sphere. As the market economy has expanded in breadth and depth, laws and regulations appropriate to market development have been enacted, and a relatively complete legal system has now been formed. As a means of regulating market economic activities, the function of law has become increasingly prominent. Citizens' demands on the law, and their consciousness of using the law to protect their various rights and interests, are constantly increasing. Rational expectations of using the law to restrain, control and crack down on abuses of power are becoming more and more intense, and more and more social conflicts are resolved through legal proceedings. This change from concepts and thoughts to systems and behavior, brought about by the development of the market economy, profoundly reflects the rapid development of the market-economy culture of rule of law in China, and also strongly illustrates that the market economy must construct a culture of rule of law compatible with its own development. Therefore, the development of China's market economy constitutes the most direct and fundamental need for the culture of rule of law.
Support of institution
Socialism rule of law is an objective requirement and guarantee for promoting the healthy and stable development of socialism democratic politics. Socialism democracy is the inherent requirement of socialism and an important part of the fundamental rights and interests of the overwhelming majority of the Chinese people. Without rule of law, anarchy follows. Only by institutionalizing democracy can we guarantee the authority and stability of democracy and ensure that the political life of the country operates in accordance with legal procedures; only under the guidance, regulation and constraint of the law can citizens' political participation be rational and orderly and democratic procedures operate effectively; only by using legal means to define the rights and obligations of various social subjects can we truly guarantee the realization of the people's democratic rights. Therefore, democracy implies rule of law, and democratic politics requires the culture of rule of law. In addition, socialism rule of law is an objective requirement for achieving social stability and the long-term stability of the country. Alongside China's rapid socio-economic development, many unfavorable factors affect the stability of order and social harmony, and the culture of socialism rule of law should also embody positive solutions to these factors. It develops together with the construction of the market economy, democratic politics and spiritual civilization, and is an important part of the contemporary Chinese modernization movement. Its openness is manifested in that it never confines itself to a comfort zone and is adept at absorbing the beneficial legal and cultural achievements of humanity, especially transplanting and drawing on Western "rule of law" ideas and theories and Western legal systems, and communicating and responding in Chinese and Western legal education. With the arrival of economic globalization and the emergence of legal convergence, socialism legal culture and Western legal culture will coexist, and the whole of human society is developing in the direction of harmonious rule of law.
National locality
The socialism culture of rule of law is an extremely important component of the construction and development of the socialism cause with Chinese characteristics. Its spiritual connotation should be deeply rooted in the rich soil of the Chinese nation, consistent with the cause of socialism construction, and able to reflect China's national conditions and national characteristics. Mr. Cai Shuheng believes that law is the form of state social organization and that, in the future, the construction of China's legal culture should be based on a national consciousness of law [5]. It is foreseeable that the culture of rule of law we want to build is neither continental nor Anglo-American; it should be a socialism culture of rule of law with Chinese national characteristics. Its basic connotation is people's democracy under the leadership of the Party, the condensation of the people's democratic spirit. It is a culture of rule of law vividly interpreted through the reality and ideals of rule of law in China, rather than one that simply uses the culture of the West or other developed countries to shape the life of the Chinese people.
The Construction of Culture of Rule of Law in China
In the new era, guided by the fundamental idea of building a harmonious society, the construction of the culture of rule of law in China should be firmly based on the reality of China's social and economic development, continue to maintain a world perspective, focus on the continuous innovation of legal theory, and gradually construct a socialism culture of rule of law with Chinese characteristics.
The basic dimensions of the construction of the culture of rule of law
Dimension 1: The persistence of the national spirit
Building the culture of rule of law must be based on the promotion of the national spirit. The culture of rule of law is a new, modernized form of China's excellent traditional culture [6]. Historically, different civilizations have produced legal and cultural traditions of different character and style in their evolution. They constitute the carrier of the national spirit, embody national value norms and value pursuits, contain the rich experience of national legal regulation, and serve as a mirror for national observation and self-reflection. China's legal culture is an important part of the advanced culture of socialism with Chinese characteristics, a fruit of the progress of social civilization, and the ideological, cultural and spiritual foundation driving China's legal system forward. It is deeply rooted in China's fertile soil, condenses the essence of national culture, has a unique spiritual character, formal characteristics and strong vitality, and is a concentrated expression of the national spirit. In the great process of building a well-off society in an all-round way and accelerating socialism modernization, cultivating and promoting this cultural spirit is an important foundation for governing the country according to law [7]. The problem for today's construction of the culture of rule of law is how to effectively use local cultural and ethical resources to serve the modernization of rule of law. Taking traditional Chinese culture and ethics as an example, the concept of tolerance contained in the "Doctrine of the Mean," the integrity contained in "good intention and friendly attitude," the requirements of the "people-oriented" idea for rule of law, the requirement of "great harmony" for a rational rule-of-law order, and the dispute-reducing mechanism reflected in "peace is most precious" can all become effective local resources for the construction of the culture of rule of law [8]. Therefore, in constructing the culture of rule of law, we must build on traditional Chinese legal culture, inherit and carry forward the outstanding achievements of Chinese traditional culture, and maintain the national character of the culture of rule of law.
Dimension 2: Persistence of open attitude
To build a culture of rule of law, we must maintain an open attitude. With the trend of world economic integration, the integration of world culture is also proceeding in subtle ways. The excellent concepts of rule of law in the West collide and integrate with China's long history and culture, which is of great significance for promoting the construction of rule of law, the spirit of rule of law and the practice of rule of law in China. Every country has its own characteristics, and the right way to treat Western culture is to absorb its essence and discard its dross. Some developed Western countries have a long and developed legal history, have led the modernization of rule of law in the world, and have accumulated a large number of outstanding legal and cultural achievements. In our development, we must learn from and absorb these achievements, transform them into a legal culture adapted to China's national conditions, and use them to promote the building of the culture of rule of law in China. Since the reform and opening up, we have paid attention to absorbing the excellent and beneficial legal and cultural achievements of the West, especially borrowing and transplanting Western theories of rule of law, legal systems and modes of rule-of-law education, and have applied them effectively to the construction of the culture of rule of law, with remarkable results, particularly in the civil and commercial, economic and environmental fields. The openness and internationalization of the culture of rule of law should be guided by the concept of "harmony without uniformity" and based on mutual respect and understanding, seeking common ground while reserving differences, in order to achieve common progress in the construction of the culture of rule of law. This absorption and borrowing is not the same as full Westernization of the modernization of rule of law. Our country has historical traditions and social systems different from those of Western society. If Western legal culture were completely copied, the superstructure would inevitably be incompatible with the whole society, causing the malformation of rule of law in China. Therefore, the essence of the open attitude in the construction of the culture of rule of law is reforging Western rule-of-law civilization with Chinese characteristics, rather than mechanically splicing it on.
Dimension 3: The ultimate pursuit of legal modernization
Modernizing the law must be the ultimate pursuit of building the culture of rule of law. China's culture of rule of law has a close relationship with the socialism market economy with Chinese characteristics. The competition of comprehensive national strength in today's world has become a focus of global attention; culture and politics are blended, and their role and status in this competition are becoming more and more prominent. For China, which is developing at a rapid pace, the modernization of the economy inevitably requires the modernization of the superstructure to match it, and the construction of rule of law plays a pivotal role here. The market economy needs the culture of rule of law to maintain it; at the same time, the culture of rule of law is constantly improving and developing in the process of maintaining and adapting, thus promoting the birth of a new culture of legal administration. The culture of rule of law should continue to carry out theoretical innovation on the principles of the market economy and apply these new theoretical methods to the development and operation of the market economy to promote the rapid development of the modern economy. Building a country ruled by law requires constantly updating legal values, because correct and good legal values provide direction and guidelines for enacting laws, enforcing them strictly, and correcting bad laws. The culture of rule of law plays an important role in the establishment of legal theory and the operation of legal practice, and provides a theoretical and spiritual basis for the progressive development of law. Only by forming a good culture of rule of law can we truly realize the modernization of legal norms at the "artifact" level, provide broader space for the development of China's economy, and realize the great rejuvenation of the Chinese nation.
Some Rationales for the Construction of the Culture of Rule of Law
Therefore, in the new era, the culture of rule of law at the conceptual level focuses on inner recognition of, advocacy for and belief in rule of law. We need to construct the culture of rule of law and form, throughout society, a culture and legal spirit of believing in and honoring the law and respecting rights and interests. At the same time, improving the quality of citizens will foster and promote the formation of universal concepts of freedom, equality, fairness, democracy and rights.
Incorporating modern humanistic spirit
Integrating the humanistic spirit into the construction of rule of law is a necessary path for constructing the culture of rule of law. The so-called humanistic spirit is, in short, the spirit of respecting and caring for people. This spiritual character is not innate, but a product of market-economic relations based on freedom, democracy, and equal rights. Incorporating the humanistic spirit into all aspects and links of the construction of spiritual civilization in China will not only greatly enrich the content of that construction, but will also, through its influence, cultivate in market participants the rational spirit needed to correctly recognize and deal with market-economy relations [9].
Pay attention to local cultural resources
Paying full attention to local cultural resources and actively exploring and utilizing their beneficial ingredients is a key link in the construction of a culture of rule of law. In the long historical process of their survival and development, each country and nation has created and formed a unique cultural system and regards it as its most precious wealth. This wealth cannot be exported or imported. In recent years, China has transplanted and introduced advanced Western legal technical systems, which has resulted in an imbalance between the total supply of and demand for legal resources. Why do the legal and technical systems we transplant and import, highly efficient in Western countries, no longer shine once introduced to China? The profound reason is that they are separated from the local cultural resources on which they depend. This cultural resource, as part of the national spirit, cannot be transplanted or introduced. As the accumulation of national wisdom and the spiritual culture of the nation's long-term development, it cannot be ruled out that China's native cultural resources still contain positive and reasonable elements needed in the present era.
Enrich the communication mechanism of legal culture
The legal communication culture is one of the basic components of the culture of rule of law. Without the full development of the legal communication culture, there will be no full development of the culture of rule of law.
First of all, enriching the communication mechanism of legal culture requires enriching the communication channels. Changing legal education methods, breaking narrow disciplinary restrictions, and exploring and establishing a communication mechanism of legal culture education across disciplines and media are important measures for constructing the ethical foundation of the culture of rule of law. The construction of the culture of rule of law is a ground-breaking project in China's system of rule of law. It is not just a matter of law, but a matter for the whole society, because modern social life is an organic whole of mutual connection and interaction, and the construction of the culture of rule of law involves modes of social integration and behavior regulation from the central level to the local level and even grassroots communities. It is a way of retaining the useful, discarding the useless, and renewing traditional means of governance. Obviously, it is difficult for the discipline of law to work alone. It is necessary to adapt to the needs of contemporary Chinese practice of rule of law, update the existing concept of legal education, realize the transformation from single-discipline legal education to multidisciplinary and interdisciplinary "big law" education and from legal knowledge education to legal quality education, and establish a reasonable model of legal education combining the "school system" with the "apprenticeship system".
Second, enriching the legal culture communication mechanism requires enriching the scope of communication. The emphasis of the construction of the culture of rule of law is to make it popular with the people through effective dissemination [10]. As one of the important carriers of contemporary legal communication culture, the legal communication activities represented by the "Sixth Entry of Law" have a prominent long-term effect on improving the legal quality of all members of society. The socialism culture of rule of law reflects a set of concepts, beliefs, ideals and values inherently required by socialism rule of law; it is the guideline and principle for guiding and adjusting socialism legislation, enforcement, justice, law-abiding behavior and legal supervision. Through the effective dissemination of these legal-advancement activities, more members of society can understand the socialism culture of rule of law, put forward on the basis of summing up the practical experience of the construction of rule of law and drawing on the world's rule-of-law achievements, and can understand the inherent requirements, basic laws and value orientation of socialism rule of law. This will carry the socialism culture of rule of law to all levels of social life, expand its coverage and influence, and contribute to improving the legal spirit of the whole nation. Of course, we must vigorously explore new and more diverse models of the culture of rule of law. Through the Internet+, we should strengthen publicity for the culture of rule of law, encourage literature, art, film and television works on the theme of the socialism culture of rule of law, and enhance the appeal of rule of law.
We should also publicize the culture of rule of law on important commemorative days and special festivals, so that people can experience its charm in their leisure time.
Conclusion
The rule of law is the support for the country to achieve effective governance, and the culture of rule of law is the soul of the construction of rule of law [11]. Harmonious society is an important historical and epochal proposition put forward by the Party and the state in the process of building and developing the socialism cause with Chinese characteristics. The construction of the culture of rule of law and the connotation of harmonious society are inherently united. The culture of rule of law we want to build not only shares the common features of the development of human rule-of-law culture, but also bears the nationality and regionality of China. Therefore, the construction of the culture of rule of law should adhere to the combination of history and reality, the local and the extra-territorial, transplantation and innovation. In the new era, we must pay special attention to the promotion and popularization of the culture of rule of law.
"year": 2019,
"sha1": "1f035da316bc6f91b1ee7c55035fe56fb360d714",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.24966/flis-733x/100031",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ae70543a9d5487ec10e3eb00719db8e517b830f3",
"s2fieldsofstudy": [
"Law"
],
"extfieldsofstudy": [
"Political Science"
]
} |
Organizational justice and motivation on Organizational Citizenship Behavior (OCB)
This research examines the influence of organizational justice and motivation on employees' organizational citizenship behavior (OCB). The research was conducted at Bank BPD Tabanan Branch, Bali, Indonesia, with a sample of 75 employees taken using the saturated sampling method. Data were collected by distributing questionnaires that used a 5-point Likert scale to measure 11 statements. The analysis technique used is multiple linear regression. The results show that organizational justice has a positive and significant effect on organizational citizenship behavior (OCB) and that motivation has a positive effect on organizational citizenship behavior (OCB). The results support all hypotheses and indicate a positive influence of organizational justice and motivation on organizational citizenship behavior (OCB).
Introduction
Human resources are valuable assets of an organization, because the success of an organization is determined by the human element. Without employees, an organization cannot realize the plans it has made, because it is in the hands of employees that all of this can develop. If an employee acts not because it is personally profitable, but because he or she feels satisfied in doing or helping with something beyond his or her formal role, this condition can be called organizational citizenship behavior (OCB).
Organizational citizenship behavior (OCB) is behavior that arises at an employee's discretion and is carried out voluntarily and without coercion (Andriani, 2012). A number of studies show a strong relationship between organizational justice and OCB. The organizational justice that companies need to prioritize is that employees must feel they are treated fairly and that procedures and outcomes are fair. This concept of fairness covers several matters of concern to companies, including the division of work, wages, rewards, treatment, and the factors that determine the quality of interactions within the company.
In the process of developing fair behavior, it is important to understand how to influence the scales based on justice, satisfaction, staff motivation and commitment (Ghaziani et al., 2012). Apart from organizational justice, OCB is also influenced by motivation, in line with the statement by George and Jones (2005) that high motivation greatly influences the emergence of OCB in companies: employees who behave well, are willing to try and work hard, and do not give up easily display the characteristics of OCB.
Motivation is an indicator that can make a worker more satisfied in carrying out his or her activities. According to Luthans (2006), motivation is a process that serves as the first step for someone to take action due to physical and psychological deficiencies, namely a drive directed toward fulfilling certain goals. Murti and Srimulyani (2013), in research on motivation, found that unmet needs can themselves motivate employees to fulfill them.
Providing motivation to employees in a trading company is very important because employees have a very large responsibility in providing the best service to customers in order to achieve the profit targets set by the company. This research was conducted on employees of Bank BPD Tabanan Branch, Bali, Indonesia, a trading company operating in the textile and fashion sector.
Literature review and hypothesis development
Research conducted by Nwibere (2014) proves that organizational justice has a positive and significant influence on organizational citizenship behavior (OCB). Sani's (2013) research also states that organizational justice has a significant positive effect on OCB. Research conducted by Ince and Gul (2011) proves that there is a definite relationship between perceptions of organizational justice and OCB: employees behave positively, contributing to organizational development and paying attention to their work, when they hold positive perceptions of organizational justice. Sportsmanship and helpfulness, as dimensions of OCB, have the smallest influence on positive justice perceptions; the type of justice that most determines OCB is distributive justice. Widyaningrum (2010) said that the influence of organizational justice on OCB becomes stronger if fair treatment can increase job satisfaction and employee commitment to the company. Meanwhile, research by Iqbal, Aziz, and Tasawar (2012) states that procedural justice has a strong positive influence but distributive justice has a weak positive influence on OCB. Based on the description above, the following hypothesis can be formulated: H1: Organizational justice has a positive and significant effect on Organizational Citizenship Behavior (OCB). Nawawi (2003) said that motivation is a condition that encourages or causes someone to take an action. Antonio and Sutanto (2014) say that successful companies need employees who are able and willing to do tasks that are not part of their formal duties. Robbins and Judge (2007) say that there is a drive that makes someone achieve maximum performance, taking the form of the need for achievement, the need for socialization and the need for power or influence over other people. Research conducted by Panggalih and Zulaicha (2012) shows that motivation has a significant and positive influence on organizational citizenship behavior (OCB): the higher the motivation, the higher the organizational citizenship behavior (OCB). Based on the theoretical basis and various previous studies, the following hypothesis can be put forward: H2: Motivation has a positive and significant effect on Organizational Citizenship Behavior (OCB).
Methods
This research uses a quantitative approach, in which a particular population or sample is studied and the data are analyzed statistically with the aim of testing predetermined hypotheses. The objects studied in this research are organizational justice, motivation, and organizational citizenship behavior (OCB) among employees of Bank BPD Tabanan Branch, Bali, Indonesia. The population consisted of the 75 employees of Bank BPD Tabanan Branch, Bali, Indonesia, and a saturated (census) sampling technique was used. Data were collected with a questionnaire. The analysis technique used in this research is multiple linear regression, which estimates the influence of the independent variables on the dependent variable. The independent variables in this research are organizational justice (X1) and motivation (X2), while the dependent variable is organizational citizenship behavior (OCB) (Y).
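For concreteness, the analysis described above can be sketched as follows (an illustration only, not the authors' actual computation; the simulated questionnaire scores and variable names are hypothetical):

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical questionnaire data for the N = 75 respondents:
rng = np.random.default_rng(0)
justice = rng.normal(4.0, 0.5, 75)      # mean organizational-justice score (X1)
motivation = rng.normal(3.8, 0.5, 75)   # mean motivation score (X2)
ocb = 0.4 * justice + 0.3 * motivation + rng.normal(0, 0.3, 75)  # OCB score (Y)

X = sm.add_constant(pd.DataFrame({"justice": justice, "motivation": motivation}))
model = sm.OLS(ocb, X).fit()            # Y = a + b1*X1 + b2*X2 + e
print(model.summary())                  # the t-tests on b1 and b2 address H1 and H2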
Results and discussion
The analysis in this research uses multiple linear regression. Multiple linear regression estimates the value of the dependent variable from more than one independent variable. The results of the multiple linear regression analysis can be seen in Table 1.
Organizational Justice on Organizational Citizenship Behavior (OCB)
For the hypothesis on the effect of organizational justice on organizational citizenship behavior (OCB), H0 was rejected and H1 was accepted. This means that organizational justice has a significant positive effect on OCB. This result supports the research conducted by Nwibere (2014), which proved that organizational justice has a positive and significant influence on OCB. Sani's (2013) research also states that organizational justice has a significant positive effect on OCB, and Ince and Gul (2011) proved that there is a definite relationship between perceptions of organizational justice and OCB.
Employees behave positively, contribute to organizational development and pay attention to their work when they have positive perceptions of organizational justice. Sportsmanship and helpfulness are the dimensions of OCB with the smallest influence on positive perceptions of justice, and the type of justice that most determines OCB is distributive justice. Widyaningrum (2010) said that the influence of organizational justice on OCB becomes stronger if fair treatment increases job satisfaction and employee commitment to the company. Meanwhile, Iqbal, Aziz, and Tasawar (2012) state that procedural justice has a strong positive influence on OCB, whereas distributive justice has a weak positive influence. The results of this research show that the interactional justice indicator has the highest average value, which means that the main tasks assigned by the company match the employee's field of work.
Motivation on organizational citizenship behavior (OCB)
For the hypothesis on the effect of motivation on organizational citizenship behavior (OCB), H0 was rejected and H1 was accepted. This means that motivation has a significant positive effect on OCB. The result is consistent with Nawawi (2003), who states that motivation is a condition that encourages or causes someone to act, and with Antonio and Sutanto (2014), who say that successful companies need employees who are able and willing to do tasks beyond their formal duties. Robbins and Judge (2007) describe a drive that pushes people to their maximum performance, taking the form of the need for achievement, the need for socialization and the need for power or influence over other people. Panggalih and Zulaicha (2012) likewise show that motivation has a significant positive influence on OCB: the higher the motivation, the higher the OCB. The results of this research show that the behavioral direction indicator has the highest average value, which means that employees have good relationships with their co-workers.
Conclusion
Based on the results of the discussion above, it can be concluded that organizational justice has a positive and significant effect on organizational citizenship behavior (OCB). This means that employees who are treated fairly at their workplace will display a high level of OCB.
Motivation also has a positive and significant effect on OCB. This means that employees who are highly motivated at their workplace will display a high level of OCB.
Based on the results of the analysis and the conclusions, the following suggestions can be given. First, Bank BPD Tabanan Branch, Bali, Indonesia, should improve the distributive justice indicator, so that the rewards received match the tasks assigned, in order to strengthen the relationship between employees and the organization; it should also improve the behavioral direction indicator concerning good relationships with co-workers in order to increase organizational citizenship behavior (OCB). Second, Bank BPD Tabanan Branch, Bali, Indonesia, should strengthen the interactional justice indicator, so that the tasks assigned by the company match employees' abilities, and should raise the business level indicator, encouraging employees to take the initiative to complete work in accordance with organizational standards.
Future researchers who wish to conduct related research should consider other variables related to organizational justice, motivation, and organizational citizenship behavior (OCB), such as job satisfaction, and should study different types of work in several other large companies, so that the results can vary and enrich the literature on organizational justice, motivation and OCB. In addition, future researchers can vary the data analysis techniques, for example by using path analysis or other techniques.
Disclosure of conflict of interest
No conflict of interest to be disclosed.
Table 1
Results of Multiple Linear Regression Test Analysis | 2023-11-29T16:26:49.513Z | 2023-11-30T00:00:00.000 | {
"year": 2023,
"sha1": "423ac2965755b481ddb133b44e647abafe37cfcf",
"oa_license": "CCBYNCSA",
"oa_url": "https://wjarr.com/sites/default/files/WJARR-2023-2391.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "5058328f6f90665c4dd520b5c35f07cca3730d41",
"s2fieldsofstudy": [
"Business",
"Psychology"
],
"extfieldsofstudy": []
} |
220970057 | pes2o/s2orc | v3-fos-license | Exercise Programs for Muscle Mass, Muscle Strength and Physical Performance in Older Adults with Sarcopenia: A Systematic Review and Meta-Analysis
Sarcopenia is an age-related condition that is characterized by progressive and generalized loss of muscle mass and function. Exercise treatment has been the most commonly used intervention among elderly populations. We performed a systematic review and meta-analysis to evaluate the available literature related to the effects of exercise interventions/programs on muscle mass, muscle strength and physical performance in older adults with sarcopenia. We searched PubMed, EMBASE, MEDLINE and the Web of Science for randomized controlled trials and controlled clinical trials exploring exercise in older adults with sarcopenia published through July 2019 without any language restrictions. Pooled analyses were conducted using Review Manager 5.3, with standardized mean differences (SMDs) and fixed-effect models. A total of 3898 titles and abstracts were initially identified, and 22 studies (1041 individuals, 80.75% females, mean age ranged from 60.51 to 85.90 years) were included in the meta-analysis. The exercise programs in the studies consisted of 30 to 80 min of training, with 1 to 5 training sessions weekly for 6 to 36 weeks. Muscle strength (grip strength [SMD 0.57, 95 % CI 0.42 to 0.73, P <0.00001] and timed five chair stands [SMD -0.56, 95 % CI -0.85 to -0.28, P < 0.0001]) and physical performance (gait speed [SMD 0.44, 95 % CI 0.26 to 0.61, P < 0.00001] and the timed up and go test [SMD -0.97, 95 % CI -1.22 to -0.72, P < 0.00001]) showed significant improvement following exercise treatment, while no differences in muscle mass (ASM [SMD 0.15, 95 % CI -0.05 to 0.36, P = 0.15] and ASM/height2 [SMD 0.21, 95 % CI -0.05 to 0.48, P = 0.12]) were detected. Exercise programs showed overall significant positive effects on muscle strength and physical performance but not on muscle mass in sarcopenic older adults.
Appendicular skeletal muscle mass (ASM), total body skeletal muscle mass (SMM) and muscle mass adjusted for body size (height squared, weight and body mass index) were used as indicators of muscle mass to assess sarcopenia [8]. For example, ASM <20 kg for men and <15 kg for women, and ASM/height² <7.0 kg/m² for men and <6.0 kg/m² for women, were used as cut-off points for diagnosing sarcopenia according to the EWGSOP2. Dual-energy X-ray absorptiometry (DXA), bioelectrical impedance analysis (BIA), computed tomography (CT) and magnetic resonance imaging (MRI) are common techniques to measure muscle mass. In addition, grip strength and chair stand tests are simple and effective measurements in clinical practice [9,10] and are routine methods to evaluate muscle strength to identify sarcopenia: grip strength <27 kg for men and <16 kg for women, and chair stand test >15 s for five rises (as defined by the EWGSOP2). Finally, physical performance has previously been assessed by gait speed and the timed up and go (TUG) test [11], contributing to the assessment of the severity and prognosis of sarcopenia among elderly individuals [12]: gait speed ≤0.8 m/s and TUG test ≥20 s (as defined by the EWGSOP2).
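To make the quoted cut-offs concrete, a minimal sketch applying the EWGSOP2 thresholds above is given below (illustrative only, not a clinical tool; the function name and example values are hypothetical):

def sarcopenia_assessment(sex, grip_kg, chair_stand_s, asm_height2, gait_ms, tug_s):
    # Apply the EWGSOP2 cut-off points quoted in the text.
    low_strength = grip_kg < (27 if sex == "M" else 16) or chair_stand_s > 15
    low_mass = asm_height2 < (7.0 if sex == "M" else 6.0)  # ASM/height^2, kg/m^2
    low_performance = gait_ms <= 0.8 or tug_s >= 20        # gait in m/s, TUG in s
    # EWGSOP2 logic: probable = low strength; confirmed = plus low mass;
    # severe = plus low physical performance.
    return {"probable": low_strength,
            "confirmed": low_strength and low_mass,
            "severe": low_strength and low_mass and low_performance}

print(sarcopenia_assessment("F", grip_kg=14, chair_stand_s=17,
                            asm_height2=5.5, gait_ms=0.7, tug_s=22))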
It is widely accepted that physical exercise, especially resistance training [13], is effective for improving muscle function and functional ability among older adults. Habitual exercise has been confirmed to be beneficial for preventing sarcopenia in elderly individuals regardless of exercise type and intensity [14]. However, the implementation of exercise treatment for sarcopenia has only just begun, and the correlation between exercise programs and sarcopenia-related symptoms remains unclear. Previous literature on exercise interventions suggests that muscle strength and physical performance, and possibly even muscle mass, increase in older adults with sarcopenia, whereas no consensus recommendations on physical exercise for the prevention of sarcopenia have been made owing to the existence of multiple contributing variables [15]. Therefore, this study aims to perform a meta-analysis of randomized controlled trials and controlled clinical trials and systematically assess the effects of exercise programs on muscle mass, muscle strength and physical performance in older adults with sarcopenia.
Data sources and search strategy
This systematic review and meta-analysis were performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards [16]. The protocol of this study was registered at PROSPERO (Center for Reviews and Dissemination, University of York: CRD42019141658).
A systematic literature search for randomized controlled trials and controlled clinical trials published from January 1990 to July 2019 was conducted using PubMed, EMBASE, MEDLINE and the Web of Science. The search included the keywords 'sarcopenia', 'sarcopenic', 'exercise', 'physical' and 'training' (the PubMed search strategy, which was used for all the databases, is available in the supplementary files). The electronic search was then supplemented with a manual search of the bibliographies of the identified studies. No restrictions on the language of publication were applied during the database searches.
Inclusion and exclusion criteria
The reference lists obtained were independently screened by two investigators (BWX and SY) in accordance with the inclusion and exclusion criteria, and disagreements regarding study eligibility were resolved by a third investigator (ZTF). After screening the titles and abstracts, the initially eligible articles were selected for a full text review.
Articles were included if they met all of the following criteria: 1) participants were diagnosed with sarcopenia based on any established definitions (by a working group, a certain article or clinical experience); 2) mean or median age ≥60 years; 3) physical exercise training was performed, without a limitation regarding exercise type; and 4) the assessment of muscle mass, muscle strength or physical performance was reported.
Studies were excluded if 1) no original data were included (review, protocol, abstract, etc.); 2) they were animal studies; 3) they were performed with young or middle-aged populations; 4) the participants had other accompanying diseases (e.g., cancer, liver cirrhosis, diabetes, stroke, depressive disorder, and metabolic syndrome); 5) there was no comparison group; 6) no outcome of muscle mass, muscle strength or physical performance was included; or 7) the exercise intervention was combined with other interventions (e.g., nutrition).
Data extraction and quality assessment
The following information was extracted: authors, year, number of participants, age, sex, body mass index (BMI), diagnostic criteria, training period, training frequency, exercise intensity or workload, exercise modality, program design, ASM, ASM/height², SMM, SMM/height², grip strength, five chair stand time, gait speed, TUG test and other pre/post-intervention performance indicators. The risk of bias of the included trials was assessed by two independent investigators (BWX and ZTF) using Review Manager 5.3 software (Cochrane Collaboration, UK), and disagreements regarding the methodological quality were resolved by discussion. The quality assessment was performed according to the Cochrane criteria, including selection bias, performance bias, detection bias, attrition bias, reporting bias, and other potential biases, which were categorized into three grades: low risk, unclear risk and high risk. The percentages of the three grades were then calculated.
Outcomes and effect size calculation
All the outcomes were continuous variables, and pooled analyses were conducted using Review Manager 5.3, with standardized mean differences (SMDs) and fixed-effect models. The standardized effect sizes and 95% confidence intervals (CIs) were calculated to test the results. The degree of heterogeneity of the effect sizes was quantified with the I² statistic, ranging from 0% to 100%. Possible sources of heterogeneity within the study were investigated using subgroup analyses stratified by different exercise programs. Further, a sensitivity analysis was conducted to determine the robustness of our results.
To assess the risk of publication bias, funnel plots and Egger's test were conducted using StataSE V.13 (StataCorp, College Station, Texas, USA). A P value less than 0.05 was considered significant for all analyses.
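As an illustrative sketch of the pooling step (not the authors' actual RevMan workflow), an inverse-variance fixed-effect pooled SMD, its 95% CI and the I² statistic can be computed as follows; the three SMD/standard-error pairs below are hypothetical:

import numpy as np

def fixed_effect_pool(smd, se):
    # Inverse-variance fixed-effect pooling of per-study SMDs.
    smd = np.asarray(smd, dtype=float)
    se = np.asarray(se, dtype=float)
    w = 1.0 / se**2                         # inverse-variance weights
    pooled = np.sum(w * smd) / np.sum(w)    # pooled effect size
    se_pooled = np.sqrt(1.0 / np.sum(w))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    q = np.sum(w * (smd - pooled) ** 2)     # Cochran's Q
    i2 = max(0.0, (q - (len(smd) - 1)) / q) * 100.0 if q > 0 else 0.0
    return pooled, ci, i2

# Hypothetical grip-strength SMDs and standard errors from three trials:
print(fixed_effect_pool([0.4, 0.7, 0.5], [0.15, 0.20, 0.18]))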
The effect on muscle mass
ASM and ASM/height² were selected to evaluate the efficacy of the exercise programs on muscle mass in older adults with sarcopenia (Fig. 2). Seven trials included information regarding ASM, and five trials included information regarding ASM/height², which were pooled by the method of inverse variance using a fixed-effect model. The overall effect sizes indicated that no significant effect of exercise was shown for ASM (SMD 0.15, 95 % CI -0.05 to 0.36, P = 0.15, I² = 34 %) or ASM/height² (SMD 0.21, 95 % CI -0.05 to 0.48, P = 0.12, I² = 66 %). In addition, the outcomes of SMM (SMD 0.21, 95 % CI -0.13 to 0.55, P = 0.23, I² = 0 %) and SMM/height² (SMD 0.29, 95 % CI -0.01 to 0.59, P = 0.06, I² = 0 %) showed no significant difference (Supplementary Fig. 2).
The effect on muscle strength
Grip strength and five chair stand time were selected to evaluate the efficacy of the exercise programs on muscle strength in older adults with sarcopenia (Fig. 3). Sixteen trials included information for grip strength, and four trials included information regarding five chair stand time, which were pooled using the method of inverse variance with a fixed-effect model. The overall effect sizes indicated that the efficacy of exercise was statistically significant for grip strength (SMD 0.57, 95 % CI 0.42 to 0.73, P < 0.00001, I² = 84 %) and five chair stand time (SMD -0.56, 95 % CI -0.85 to -0.28, P < 0.0001, I² = 21 %). The sensitivity analysis of grip strength indicated that some trials [18,32] might be possible sources of heterogeneity; the degree of heterogeneity decreased markedly after excluding them (SMD 0.37, 95 % CI 0.21 to 0.54, P < 0.00001, I² = 37 %). The subgroup analysis demonstrated that the association between exercise and grip strength was independent of the exercise program (Fig. 4). Grip strength was significantly improved by resistance training (SMD 0.64, 95 % CI 0.46 to 0.83, P < 0.00001, I² = 87 %), weight training (SMD 0.30, 95 % CI 0.09 to 0.51, P = 0.005, I² = 12 %), and aerobic training (SMD 0.48, 95 % CI 0.13 to 0.83, P = 0.007, I² = 83 %). Among the subgroups, the I² for the weight training program decreased substantially compared with the others.
The effect on physical performance
Gait speed and the TUG test were selected to evaluate the efficacy of the exercise programs on physical performance in older adults with sarcopenia (Fig. 5). Eleven trials included information for gait speed, and six trials included information for the TUG test, which were pooled by the method of inverse variance using a fixed-effect model. The overall effect sizes indicated that the efficacy of exercise was statistically significant for gait speed (SMD 0.44, 95 % CI 0.26 to 0.61, P < 0.00001, I² = 67 %) and the TUG test (SMD -0.97, 95 % CI -1.22 to -0.72, P < 0.00001, I² = 91 %). The sensitivity analysis indicated that Tsekoura's trial [34] and Liao CD's trial [32] might be possible sources of heterogeneity for gait speed and the TUG test, respectively. The degree of heterogeneity decreased after excluding the relevant trial for gait speed (SMD 0.35, 95 % CI 0.17 to 0.52, P = 0.0001, I² = 40 %) and the TUG test (SMD -0.79, 95 % CI -1.05 to -0.54, P < 0.00001, I² = 33 %).
Study quality
Details of the risks of bias of the included studies are shown in Figure 6A and Figure 6B. Four studies used single-blinded assessments, which may lead to high risks of performance and detection bias. Two studies used nonrandomized designs, which may lead to high risks of selection bias.
DISCUSSION
In this systematic review and meta-analysis, existing evidence from 22 randomized controlled trials and controlled clinical trials demonstrated that exercise of any type (e.g., resistance training, aerobic training, balance training, weight training, and whole-body vibration training) significantly improved muscle strength and physical performance in older adults with sarcopenia. However, muscle mass showed no difference after exercise intervention, in accordance with previous studies suggesting that loss of muscle and bone mass may not be prevented by exercise [39]. In the context of the previous studies, the combination of exercise intervention and nutrition supplementation could achieve the greatest improvement in muscle mass and strength [40]. For a normative result, muscle mass can be adjusted for body size, such as height squared, weight or BMI. In the present study, no differences in ASM, SMM or muscle mass adjusted for height squared were observed between the exercise training and control groups. Low muscle mass and low muscle strength are characteristic features in the definition of sarcopenia, while muscle strength is affected more than muscle mass in individuals with sarcopenia and was formerly considered the most reliable measurement [7].
Based on previous studies, any type of exercise intervention or combination of interventions is an effective method to treat muscle loss and weakness [41]. Currently, resistance training and aerobic training are the most common exercise programs to maintain and improve physical function in older adults [42]; while aerobic exercise aims to improve cardiovascular adaptations with increased peak oxygen consumption, resistance exercise aims to improve neuromuscular adaptations with increased muscle strength. In addition, weight training serves as an alternative to resistance training and aerobic training; it benefits balance performance and muscular coordination. Villareal et al. suggested that a combined exercise program provided greater improvement and prevented more adverse effects than a single exercise training program among elderly individuals [43]. Center-based and home-based exercise training are two program settings that depend on the experimental location; the former represents an informal, flexible program and is recommended for short-term interventions, while the latter represents a formal, controllable program and is recommended for long-term interventions [44].
Other reviews have reported that exercise training is generally effective for the muscle strength and performance of healthy elderly adults regardless of the training program [45][46][47], while large clinical trials of exercise for individuals diagnosed with sarcopenia are still lacking. Compared with previous studies, our study has three strengths. First, the inclusion criteria in this meta-analysis were relatively strict; we included only older individuals with a definite diagnosis of sarcopenia. Second, 1041 participants were enrolled in the present meta-analysis, twice as many as in previous reviews of sarcopenia treated by exercise [48]. Finally, we provided an integrated overview (three aspects with six outcomes) to evaluate the general effectiveness of exercise programs.
The effects of exercise programs in older adults with sarcopenia were explored in this systematic review and meta-analysis. Considerable heterogeneity (I² > 50%) was inevitably detected in most of the included studies due to the complex characteristics of the exercise programs; however, there were insufficient data to conduct subgroup analyses. When some trials were excluded, the degree of heterogeneity decreased markedly. Therefore, some of the results should be interpreted with caution, and more research is needed to confirm the findings. Other important limitations of the included articles were the limited sample sizes and the different diagnostic criteria and detection instruments used to diagnose sarcopenia, which may result in high heterogeneity.
In conclusion, this meta-analysis indicates that exercise programs have the potential to support muscle function in elderly individuals with sarcopenia and can be recommended in daily life. Compared with muscle mass, muscle strength and physical performance can be improved to a greater extent by exercise training. Although most of the studies suggested that regular exercise interventions improve overall performance in sarcopenic participants, more studies focusing on multiple training variables and outcome measurements in larger populations are needed to design the optimal training strategy and guide clinical practice. | 2020-08-06T05:05:33.075Z | 2020-07-01T00:00:00.000 | {
"year": 2020,
"sha1": "6af032c40dcc08f67a5aabbd4cca9576a86c0ebb",
"oa_license": "CCBY",
"oa_url": "http://www.aginganddisease.org/EN/article/downloadArticleFile.do?attachType=PDF&id=147904",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "33dc8145cd00cf5edbf5a8bd9ba44c52a9392c7d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225155047 | pes2o/s2orc | v3-fos-license | Strength characteristics and effectiveness of Scotchkote® 2400 liner used for rehabilitation of utility potable water transport pipelines
The article considers the problem of rehabilitating potable water transport pipelines. It analyzes the results of experiments determining the strength characteristics of the Scotchkote® 2400 protective corrosion-resistant pipe coating, carried out in the laboratory of the Water Supply and Sanitation Department of the Moscow State University of Civil Engineering with an Instron 3345 electromechanical tensile testing machine. The effectiveness of Scotchkote® 2400 liner in restoring and improving the strength and hydraulic characteristics of worn water pipelines is assessed. The author concludes that Scotchkote® 2400 liner is a substantial alternative to cement-sand and other internal coatings, since it surpasses them in many respects owing to its ability to seal (colmatage) large-diameter holes, its high wear resistance, its smooth surface, and its ability to endure high hydraulic pressures. The coating does not block service connections and only slightly reduces the diameter of the restored pipelines. Based on the research conducted, the characteristics of Scotchkote® 2400 liner spraying technology are presented in comparison with the basic methods of trenchless pipeline reconstruction. Mathematical processing of the results of the experimental studies of the physical and mechanical characteristics of Scotchkote® 2400 liner is given. As a result, calculated values of the protective coating thicknesses for partially worn and worn-out pipelines have been obtained and recommended.
Introduction
In recent decades, a promising direction known as trenchless technologies has been developing effectively in the field of construction, repair and rehabilitation of public water supply and sewer systems [1,2].
This trend is a strong alternative to the open-cut method of construction, repair and reconstruction of all types of underground pipelines, since it surpasses it in almost all respects (cost-effectiveness, operational efficiency, environmental friendliness, etc.) [3,4].
Scotchkote® 2400 liner is a new, effective internal protective coating for pipeline rehabilitation. The coating is a fast-curing bicomponent polyurea-based polymer material applied to the inner surface of a pipeline using special centrifugal spraying equipment. The coating can be applied in water pipes transporting drinking or process water. Pipe materials include steel, gray cast iron, ductile iron, PVC and asbestos-cement. Scotchkote® 2400 liner provides for renewing and improving the strength and hydraulic characteristics of obsolete pipelines while maintaining the properties of the transported water; it ensures the required level of pipeline integrity, a reduction of the failure rate, mitigation of the negative impact of repair works on the environment and a reduction of energy costs while pumping water. Table 1 presents the key physical characteristics of the cured Scotchkote® 2400 liner. With the required design values of coating thickness and pipeline bury depth, the Scotchkote® 2400 protective coating meets the requirements of ASTM F 1216-09 for material properties at the end of 50 years of operation [6,7]. Scotchkote® 2400 technology is used to form either an anticorrosion barrier or a thick-layered corrosion-resistant system for the renewal of the inner surface of a water distribution pipeline and the restoration of its strength characteristics (structural integrity), while providing for [7]: -maintaining the quality of transported water by applying a protective coating layer that meets the established sanitary and epidemiological requirements of the Russian Federal Service for Supervision of Consumer Rights Protection and Human Welfare (RF Rospotrebnadzor) on the inner surface of the old pipeline; -maintaining (improving) the hydraulic characteristics of the pipeline by reducing the specific resistance and roughness coefficient of the inner surface of the rehabilitated pipeline while applying the liner.
Scotchkote® 2400 has two application types: structural and barrier (corrosion-preventive). In the case of structural application, the resulting coating has the following characteristics: tensile strength for up to 50 years at the maximum allowable operating pressure of the pipe under repair; and the capacity to withstand dynamic loads and other short-term impacts associated with internal working pressure, soil load, groundwater, and partial vacuum caused by sudden emptying of the pipeline. Table 2 presents the comparative characteristics of Scotchkote® 2400 spraying technology and those of the main trenchless methods of pipeline rehabilitation. To put the advanced Scotchkote® 2400 protective technology into practice in Russia, studies must be carried out on the specific features of its application for the protection of well-worn water transportation pipelines in Russian cities and settlements. When choosing the thickness of the protective coating layer for a section of a pressure water pipeline, a strength calculation of the "pipeline + coating" structure must be made taking into account the degree of wear; in this context, the pipeline section is classified as either "worn-out" or "partially worn". A partially worn pipe is a pipe capable of independently withstanding all loads (internal or external) throughout the entire life of the applied coating. Such a pipe may have displaced joints, cracks, and traces of corrosion; moreover, it must withstand all soil loads and temporary loads during the presumed remaining life of the pipeline. The coating in this case must withstand the hydrostatic pressure caused by leaks, as well as the internal pressure at the locations of bridged holes and pores.
A worn-out pipe is a pipe that cannot independently withstand all loads (internal or external) during the entire service life of the applied coating. The key parameter for evaluating the effective coating thickness is the bending stress around the hole created by the internal pressure affecting the pipe surface.
In the case of a partially worn pipeline, Formula 1 can be used to determine the thickness of the protective coating layer:

t = Dh * sqrt(Pi * N / (5.33 * SL)) (1)

where t is the thickness of the protective coating layer, taking into account the bending strength for a predicted pipeline service life of 20-50 years, mm; SL is the long-term bending strength, MPa; D0 is the pipe diameter (ranging from 100 to 610), mm; Dh is the diameter of a through-hole in the pipe, mm; Pi is the pressure in the pipe (ranging from 0.4 to 1.2), MPa; and N is the design safety factor (for N = 1-2). The calculation results are compared with the coating thickness for partially worn gravity pipelines, and the option with the larger coating thickness is selected. For a worn-out pipeline, Formula 2 is used to determine the thickness of the protective coating layer:

t = (Pi * D * N) / (2 * StL) (2)

where t is the thickness of the protective coating layer, taking into account the tensile strength for a predicted pipeline service life of 20-50 years, mm; StL is the long-term tensile strength, MPa; D is the pipe diameter (ranging from 100 to 610), mm; Pi is the pressure in the pipe (ranging from 0.4 to 1.2), MPa; and N is the design safety factor (for N = 1-2).
The calculated values of Scotchkote® 2400 liner thicknesses for partially worn and worn-out pipes are presented in Tables 3 and 4. Notes: * At pipeline bury depths above the specified range of values, reliable operation of the coating for an estimated period of 50 years cannot be ensured. ** For these values of Scotchkote® 2400 liner thickness the safety factor is 1.0 < Kzap < 2.0; the actual value of the safety factor for the given working conditions of the pipe structure is indicated in parentheses. The following physical and mechanical characteristics of the liner were measured experimentally: the maximum load applied to the liner sample (F, N), the maximum stress developed in the sample (σ, MPa), the maximum extension of the sample (x, mm), and the maximum longitudinal deformation of the sample (ε, mm/mm), as well as the deformation diagram (characteristic curve of σ versus ε). The physical and mechanical characteristics of Scotchkote® 2400 liner used as a coating material were obtained by calculation and analytical methods, taking into account 50 years of operation. They include: maximum tensile strength (σ50, MPa); Young's modulus (modulus of elasticity, E50, MPa); density (ρ, kg/m³); Poisson's ratio (ν); shear modulus (G50, MPa); and modulus of volume elasticity (K50, MPa).
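For illustration, the two design relations as reconstructed above can be evaluated as in the sketch below (this is not part of the original paper; the 5.33 hole-spanning constant follows ASTM F 1216-style calculations, and the pipe geometry, pressure and long-term strength values used here are purely hypothetical placeholders for the data of Tables 1, 3 and 4):

import math

def t_partially_worn(d_hole_mm, p_mpa, n, s_bend_mpa):
    # Formula 1 (hole spanning): t = Dh * sqrt(Pi * N / (5.33 * SL))
    return d_hole_mm * math.sqrt(p_mpa * n / (5.33 * s_bend_mpa))

def t_worn_out(d_mm, p_mpa, n, s_tens_mpa):
    # Formula 2 (thin-wall hoop stress): t = Pi * D * N / (2 * StL)
    return p_mpa * d_mm * n / (2.0 * s_tens_mpa)

# Hypothetical inputs: 300 mm pipe, 25 mm through-hole, Pi = 1.0 MPa, N = 2,
# assumed long-term strengths SL = 20 MPa (bending), StL = 15 MPa (tensile).
print(round(t_partially_worn(25.0, 1.0, 2.0, 20.0), 2), "mm (partially worn)")
print(round(t_worn_out(300.0, 1.0, 2.0, 15.0), 2), "mm (worn-out)")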
Results
The mathematical processing of the results of the studies of the physical and mechanical characteristics of Scotchkote® 2400 liner was carried out by finite-element modeling in the automated Ansys environment. This automated system provides an accurate representation of the object of study on the basis of its geometric characteristics and the physical and mechanical characteristics of its materials [9].
The computer model of the pipe structure based on Scotchkote® 2400 allows exploring the strength capacity of this protective coating over 50 years of use, taking into account partial or complete wear of the original pipeline and the behavior of Scotchkote® 2400 liner under various loads and impacts. Table 5 gives the initial data for modeling the strength capacity of a pipe structure based on Scotchkote® 2400 liner in use for 50 years. The following parameters were determined in order to evaluate the efficiency of Scotchkote® 2400 liner: the maximum equivalent stress σe, MPa, determined by Formula (3):

σe = sqrt(((σ1 - σ2)² + (σ2 - σ3)² + (σ3 - σ1)²) / 2) (3)

where σ1, σ2, σ3 are the principal stresses, MPa; and the reserve (safety) coefficient Kzap, the ratio of the maximum tensile strength σt of the coating material to the maximum equivalent stress arising in it, determined by Formula (4):

Kzap = σt / σe (4)

Conclusions

1. Scotchkote® 2400 liner is a considerable alternative to cement-sand and other internal coatings, as it is superior in many respects owing to its ability to seal large-diameter holes, to withstand high hydraulic pressures, and owing to its high wear resistance and smooth surface. Scotchkote® 2400 coating does not block service laterals and only slightly reduces the diameter of the pipelines under repair.
2. The use of Scotchkote® 2400 technology ensures either an anti-corrosion barrier or a thick-layer corrosion-resistant system that restores the inner surface of the water pipe and its strength characteristics (structural integrity), and provides for: -maintaining the quality of the transported water by applying on the inner surface of an old pipeline a protective coating layer that meets the sanitary-epidemiologic requirements set by RF Rospotrebnadzor; -maintaining (improving) the hydraulic characteristics of the pipeline by reducing the specific resistance and roughness coefficient of the inner surface of the pipelines subject to rehabilitation, which ensures an energy-saving effect while pumping water. | 2020-10-28T18:55:31.628Z | 2020-10-07T00:00:00.000 | {
"year": 2020,
"sha1": "0670838a390287b1f88d06bd24ea4e1bc9a60877",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/918/1/012132",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "0c1856ebe936f2d35b086fa686933e105453d902",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Environmental Science"
]
} |
35872200 | pes2o/s2orc | v3-fos-license | Tracheal instillation of urban PM2.5 suspension promotes acute cardiac polarization changes in rats
The mechanisms by which PM2.5 increases cardiovascular mortality are not fully identified. Autonomic alterations are the current main hypotheses. Our objective was to determine if PM2.5 induces acute cardiac polarization alterations in healthy Wistar rats. PM2.5 samples were collected on polycarbonate filters. Solutions containing 10, 20, and 50 μg PM2.5 were administered by tracheal instillation. P wave duration decreased significantly at 20 μg (0.99 ± 0.06, 0.95 ± 0.06, and 0.96 ± 0.07; P < 0.001) and 50 μg (0.98 ± 0.06, 0.98 ± 0.07, and 0.96 ± 0.08; 60, 90 and 120 min, respectively) compared to blank filter solution (P < 0.001). PR interval duration decreased significantly at 20 μg (0.99 ± 0.06, 0.98 ± 0.07, and 0.97 ± 0.08) and 50 μg (0.99 ± 0.05, 0.97 ± 0.0, and 0.95 ± 0.05; 60, 90, and 120 min, respectively) compared to blank filter and 10 μg (P < 0.001). QRS interval duration decreased at 20 and 50 μg in relation to blank filter solution and 10 μg (P < 0.001). QT interval duration decreased significantly (P < 0.001) with time in animals receiving 20 μg (0.94 ± 0.12, 0.88 ± 0.14, and 0.88 ± 0.11) and 50 μg (1.00 ± 0.13, 0.97 ± 0.11 and 0.98 ± 0.16; 60, 90 and 120 min, respectively) compared to blank filter solution and 10 μg (P < 0.001). PM2.5 reduced cardiac conduction times within a short period, indicating that depolarization occurs more rapidly across ventricular tissue.
Introduction
Numerous studies have associated air pollution, especially its particulate component, with hospital admissions and mortality due to cardiovascular diseases (1,2). When exposed to particulate matter (PM), individuals with cardiac disorders experience acute cardiovascular events such as myocardial infarction (3)(4)(5)(6)(7), as well as release of inflammatory mediators (8)(9)(10)(11). Additionally, increased plasma viscosity and other changes in blood-related parameters, such as fibrinogen levels or red blood cell counts, have been demonstrated after particle inhalation (12,13).
The mechanisms by which PM increases mortality through cardiovascular disease are not completely clear, but probably include autonomic effects or direct myocardial toxicity of PM components. It is plausible that PM evokes pulmonary inflammation, triggering reflexes that may affect cardiovascular function (14). Changes in autonomic control may induce alterations in vascular permeability, edema and systemic inflammation, causing heart failure and sudden cardiac death (15). Epidemiologic studies have also demonstrated a consistent link between sudden cardiac death and particulate air pollution (16), with changes such as increased QRS duration and arrhythmias (17), alteration of heart rate variability (HRV) (18,19) and repolarization abnormalities (20)(21)(22)(23) increasing morbidity and mortality.
Previous epidemiological studies have shown that daily variations in particulate air pollution induced a decrease of HRV (24,25) and an increase of arrhythmias (26). Studies focusing on the onset of myocardial infarction in patients exposed to traffic during their routine activities indicated that the most critical time window is within one hour of the onset of clinical manifestations (27). Controlled exposure of patients with coronary insufficiency during moderate exercise indicates that PM acutely increases the magnitude of the ischemic burden on myocardial tissue (28). In addition, an acute increase in the risk of cardiac tachyarrhythmias was observed in patients with implanted cardioverter defibrillators exposed to air pollution during routine activities (29). Previous studies from our group have demonstrated that PM instillation in rats promotes an acute decrease in HRV after 1 h (30), as well as pulmonary arteriole vasoconstriction, hematological changes and pulmonary inflammation after 24 h (31).
Recent publications have reported that repolarization abnormalities play a role in arrhythmogenesis (32,33). In this context, analysis of the duration of wave intervals in electrocardiograms (ECG) could potentially identify patients at risk for cardiac death and sudden cardiac death. The objective of the present study was to determine whether fine particulate matter is able to induce acute cardiac polarization alterations during the first 2 h of ECG recording in healthy Wistar rats. To assess the effects of environmental PM concentration, we used solutions containing 10, 20, and 50 μg PM2.5 administered by tracheal instillation.
Animals
Adult male Wistar rats (...) were used.

Particle sampling and analysis

PM2.5 (fine mode particles) samples were collected on 10 polycarbonate filters throughout the study using Harvard impactors (Air Diagnostics, USA) operating at 10 L/min for 24 h, and PM2.5 concentration was determined gravimetrically (34). The exposure site was located <100 m from a busy traffic corner in downtown São Paulo, in close proximity to a monitoring station of the State of São Paulo Sanitation Agency. At this intersection, approximately 83,941 cars, 9936 diesel vehicles and 6321 motorcycles are estimated to circulate daily on the main street, and 25,590 cars, 5299 diesel vehicles and 808 motorcycles on the lateral street of the crossing. There are no industries or significant biomass sources in the surrounding area. Filters were weighed before and after collection to determine particle mass. After PM2.5 collection, the filters were kept in an acclimatized environment and re-weighed. Particles were sampled in September 2006.
Trace elements determination
The comparative determination of trace elements present in PM2.5 was made on 10 filters using an Energy Dispersive X-ray Fluorescence Spectrometer (ED-XRF; EDX 700, Shimadzu Corporation Analytical Instruments Division, Japan). The spectrometer used a low-power Rh-target tube, a voltage of 5 to 50 kV, and a current of 1 to 1000 μA. The characteristic X-ray radiation was detected by a Si(Li) detector. The analysis was made in a vacuum for the element range Na to U on the 10-mm filter surface. Non-exposed filters were used as blanks and their contribution was subtracted from the results (30,31).
Filter extracts
After ED-XRF analysis, aqueous suspensions were prepared from the filters. The filters were submerged in distilled water and the particulate material was extracted by agitation in an ultrasound water bath for 2 h.
Particulate matter instillation
Forty rats were submitted to tracheal instillation of 0.5 mL of the following solutions: blank filter (N = 10), a solution obtained by ultrasonication of a blank filter in distilled water; P 10 (N = 10), a solution obtained by ultrasonication of a filter containing PM submerged in distilled water, containing 10 μg PM 2.5 ; P 20 (N = 10), a solution obtained by ultrasonication of a filter containing PM submerged in distilled water containing 20 μg PM 2.5 ; P 50 (N = 10), a solution obtained by ultrasonication of a filter containing PM submerged in distilled water containing 50 μg PM 2.5 .
The PM2.5 amount instilled into each rat corresponded to the dose that would deposit in their lungs during 24 h of ambient exposure at concentrations similar to those of the study site, which had an annual average PM2.5 concentration of approximately 30 μg/m³ (35). Assuming that the ventilation of a resting adult rat is about 200 mL/min (about 0.29 m³ over 24 h), the amount of PM2.5 inhaled would be about 9 μg (0.29 m³ × 30 μg/m³). Thus, our lowest dose represents a typical day in downtown São Paulo, while the higher doses are representative of particularly bad pollution days.
The instillation procedure was performed under anesthesia with 3% sodium pentobarbital (30 mg/kg body weight, ip). The rats were submitted to tracheal intubation using an adapted pediatric laryngoscope, and a 16-gauge polyethylene tube was inserted to serve as an endotracheal tube. The solution (0.5 mL of blank or extract suspension) was instilled during three separate inspirations through the endotracheal tube coupled to a syringe.
Electrocardiographic data acquisition and analysis
An ECG was recorded through stainless steel needles implanted under the skin during anesthesia after tracheal intubation. Electrocardiogram electrodes were implanted subcutaneously in a Lead II configuration (right arm, left leg, and right leg) with one retrocordial derivation. We employed a device primarily developed for use in human ECGs (TEB - Tecnologia de Engenharia Brasileira®), which was adapted for use in rodents. ECG signals were band-pass filtered, amplified, digitized (500 Hz) and stored in a microcomputer (30).
ECGs were recorded for five consecutive minutes in each period of analysis (pre-instillation and 30, 60, 90, and 120 min after instillation) and were analyzed manually by two observers for the P wave (which represents the wave of depolarization that spreads from the sinoatrial node throughout the atria), the PR interval (the period of time from the onset of the P wave to the beginning of the QRS complex), the QRS complex (which represents ventricular depolarization), and the QT interval (which represents the time for both ventricular depolarization and repolarization). The program automatically measures each wave segment selected by the researcher with the mouse cursor on a computer screen. Wave segments were coded for blind analysis, and the code was only revealed when the studies were completed. We analyzed 40 cardiac cycles for each period of analysis.
Additionally, we assessed heart rate (HR) and HRV through the standard deviation of beat-to-beat intervals (SDNN). Changes were computed by inspecting ECGs recorded for five consecutive minutes in each period of analysis. Considering that the HR in our animals was consistently over 200 beats per minute, we analyzed at least 1000 beats. HR, calculated as the reciprocal of the mean beat-to-beat interval, and the SDNN were calculated immediately before (Pre) and 30, 60, 90, and 120 min after instillation. After regaining consciousness, the animals were returned to their cages and taken to the vivarium.
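For illustration (not the authors' acquisition software), HR and SDNN can be computed from a series of beat-to-beat (RR) intervals as sketched below; the simulated intervals are hypothetical:

import numpy as np

def hr_and_sdnn(rr_ms):
    # Heart rate and SDNN from beat-to-beat (RR) intervals in milliseconds.
    rr = np.asarray(rr_ms, dtype=float)
    hr_bpm = 60000.0 / rr.mean()  # HR as the reciprocal of the mean RR interval
    sdnn = rr.std(ddof=1)         # standard deviation of beat-to-beat intervals
    return hr_bpm, sdnn

# Hypothetical 5-min rat recording at about 300 bpm (RR around 200 ms):
rng = np.random.default_rng(0)
rr = rng.normal(200.0, 5.0, size=1500)
print(hr_and_sdnn(rr))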
Statistical analysis
The significance of the wave segment duration and of the HR and HRV results was determined by employing general linear models, using as dependent variables the differences between the values measured at the various times after instillation and the values measured before instillation. The Bonferroni test was employed for post hoc analysis. The level of significance was set at 5%. All statistical calculations were performed with the aid of the SPSS v10.0 package.
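As a simplified illustration of the post hoc step (the paper used general linear models in SPSS; this sketch substitutes pairwise t-tests on hypothetical change-from-baseline values), the Bonferroni correction divides the significance level by the number of comparisons:

from itertools import combinations
import numpy as np
from scipy import stats

def bonferroni_posthoc(groups, alpha=0.05):
    # Pairwise comparisons of change-from-baseline values, Bonferroni-corrected.
    pairs = list(combinations(groups, 2))
    m = len(pairs)  # number of comparisons
    for a, b in pairs:
        t, p = stats.ttest_ind(groups[a], groups[b])
        print(f"{a} vs {b}: p = {p:.4f}, significant: {p < alpha / m}")

# Hypothetical delta-QT values (post minus pre, in ms) per dose group:
rng = np.random.default_rng(1)
groups = {"blank": rng.normal(0, 2, 10), "10ug": rng.normal(-0.5, 2, 10),
          "50ug": rng.normal(-3, 2, 10)}
bonferroni_posthoc(groups)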
Results
None of the animals under study died during the course of the experiment. One animal died during the night after instillation of 10 μg PM2.5; autopsy revealed pulmonary congestion and a neutrophilic alveolar infiltrate.
Table 1 reports the percentage of each element collected on 10 filters measured by ED-XRF. Initially, a qualitative and semi-quantitative analysis of elements from Na to U was performed. After the semi-quantitative analysis, the resultant spectrum was generated according to the detectable elements of interest. In general, the concentrations of sodium, silicon, calcium, and iron are representative of crustal material and soil resuspension. Sulfur, which is characteristic of vehicle emissions, was the most representative element (37.59%).
Table 2 provides the HRV results assessed by SDNN values. While there was no significant difference in SDNN among groups, we did observe a trend toward reduced SDNN values with the largest dose (50 μg), the same dose that provoked a significant SDNN decrease in our previous study (30). Tables 3 and 4 present HR and the absolute measures of P, PR, QRS, and QT wave segment duration for the various groups, respectively. There were no significant differences in these parameters.
Table 5 lists the descriptive statistics of variations, in relation to the pre-instillation time, of each of the ECG wave segment durations measured (P, PR, QRS, and QT) for all experimental groups and periods of analysis. P wave duration decreased significantly with time in animals receiving 20 and 50 μg PM2.5 compared to the blank filter group (P < 0.001). PR interval duration decreased significantly in animals receiving 20 and 50 μg PM2.5 compared to blank filter and 10 μg (P < 0.001). QRS interval duration was reduced among animals that received 20 and 50 μg PM2.5 in relation to blank filter and 10 μg (P < 0.001); however, the groups showed no effect of time on QRS interval duration (P = 0.057). QT interval duration decreased significantly (P < 0.001) with time in animals receiving 20 and 50 μg PM2.5 compared to the blank filter and 10 μg PM2.5 groups. We observed a time-dependent decrease in QT interval duration that was more pronounced in animals receiving 20 μg than 50 μg PM2.5 (P < 0.001).
Discussion
In this study, tracheal instillation of PM2.5, even at low doses, induced ECG changes expressed by reduced cardiac conduction time in young and healthy rats within a short period of time. In a recent study, we demonstrated that PM2.5 mass concentrations collected at the monitoring station and from the roof of the São Paulo Medical School were about 30 μg/m³ in most of our daily measurements (35). Our results indicate a trend toward a reduction in HRV observed with the largest dose (50 μg) of PM2.5, close to the ambient levels of downtown São Paulo. This result was similar to that obtained in a previous study (30), which demonstrated a decrease in HRV 60 min after tracheal instillation of 50 μg PM2.5. Our main objective was to continue these analyses, assessing acute ECG alterations caused by ambient particle concentrations similar to real conditions.
ECG changes occurred 60 min after PM2.5 instillation, reflecting an acute myocardial response. Despite the differences between an experimental study and real-world environmental conditions, our results support the view that air pollution adversely affects the cardiovascular system. Epidemiological studies support the evidence that these alterations occur within a short period of time (24 h) and may increase the risk of arrhythmia development and sudden cardiac death. It is interesting to note that the time course of the ECG alterations in our healthy rats (2 h) was within the same time window as observed in clinical studies of patients with portable defibrillators or survivors of myocardial infarction (36,37). Peters et al. (26) showed that patients with implanted cardioverter defibrillators experienced potentially life-threatening arrhythmias shortly after an 18 μg/m³ increase in air PM2.5 concentration. These findings suggest increased sympathetic tonus or a direct effect of fine particulate matter on cardiac ionic channels; both are possible mechanisms by which PM could lead to arrhythmias or ischemic events (38,39). As a general rule, particles elicited an increase in HR at the higher doses of PM2.5 (20 and 50 μg), with the most significant effect observed in the QT interval. This finding indicates that depolarization and repolarization occur more rapidly across ventricular tissue, a condition that may favor the development of arrhythmia and increase oxygen demand in myocardial tissue. Cardiac death is a consequence of a complex interplay between the autonomic nervous system, altered myocardial substrate and myocardial vulnerability leading to arrhythmogenic or ischemic responses. Within this context, evaluation of electrocardiographic parameters provides the opportunity to assess some of the key components related to cardiac death due to these events. Depolarization and repolarization abnormalities assessed by wave interval duration reflect the state of the myocardial substrate and are associated with an increase in cardiac events in healthy or post-infarction patients (40) after exposure to air pollution.
It is important to characterize the limitations of our study in order to better evaluate its real contribution. Extraction of PM2.5 using distilled water does not preserve all of its components, such as volatile compounds, organic and inorganic insoluble substances and transition metals. In addition, alveolar PM2.5 deposition by tracheal instillation of an aqueous suspension differs from inhalation under real-world conditions. Thus, it is difficult to extrapolate experimental conditions involving animals to real-world conditions involving humans. For example, respiratory depression is a typical and common effect provoked by anesthetics. In real ambient conditions, these effects will vary according to individual health and exercise training. Because of the "non-real world exposure" approach employed in this study, we were limited in the number of characterizations performed (such as some gases or organic elements). Although we cannot ignore that gases and some other ambient factors could have influenced our results, we believe that particle emission is an important factor underlying them. The urban particulate matter of São Paulo has been characterized as typically vehicular in origin. In fact, as in our previous study (30), sulfur was the most representative element in the PM2.5 elemental analysis.
Our results indicate that a very simple approach - ECG measurements in rats - may represent a noninvasive and non-lethal way to evaluate particle toxicity to the cardiovascular system. Such an approach is necessary because of the complexity of these studies, for which human experiments would not be possible. Urban aerosol is a complex mixture of air toxins, the composition of which varies in time and space due to dynamic traffic density, weather conditions, and photochemistry. Thus, a large number of studies, with corresponding chemical analyses, should be performed to better understand which components have higher toxicity, aiming both to describe the mechanisms of injury and to devise strategies of air pollution control in order to reduce risks. The availability of a simple, non-lethal and inexpensive experimental approach that works at near-ambient particle concentrations may be of use in designing further experiments to better understand particle-induced toxicity to the cardiovascular system, providing useful information to improve the air quality of our large urban centers.
Data are reported as mean percent of each element collected on 10 filters measured by energy dispersive X-ray fluorescence spectrometer analysis.
Table 3. Heart rate for all experimental groups immediately before (PRE) and at 30, 60, 90, and 120 min after tracheal instillation of PM2.5.
Data are reported as means ± SD (in milliseconds) for 10 rats in each group.
Table 5. Variations in relation to pre-instillation time (PRE) of each ECG wave segment duration (P, PR, QRS, and QT) for all experimental groups at 30, 60, 90, and 120 min after tracheal instillation of PM2.5.
Data are reported as means ± SD (in milliseconds) for 10 rats in each group. *P < 0.001 compared to blank filter; +P < 0.001 compared to blank filter and 10 μg PM2.5 (Bonferroni test). | 2017-10-31T02:33:33.509Z | 2009-02-01T00:00:00.000 | {
"year": 2009,
"sha1": "af06e0b59b712fe920d35a8c6c2ecffe6544f1f5",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/bjmbr/a/wHJjXxkQX3GCcpgtMCWbNTN/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "af06e0b59b712fe920d35a8c6c2ecffe6544f1f5",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
222217078 | pes2o/s2orc | v3-fos-license | Combining IL-2-based immunotherapy with commensal probiotics produces enhanced antitumor immune response and tumor clearance
Background Interleukin-2 (IL-2) serves as a pioneering immunotherapeutic agent in cancer treatment. However, a considerable proportion of patients cannot benefit from this therapy due to limited clinical responses and dose-limiting toxicities. Mounting evidence indicates that commensal microbiota shapes the outcome of cancer immunotherapies. In this study, we aimed to investigate the enhancing effect of Akkermansia muciniphila (AKK), a beneficial commensal microbe receiving considerable attention, on the antitumor efficacy of IL-2 and to explore the underlying molecular mechanism. Methods Colorectal carcinoma patient-derived tumor tissues were used to evaluate the therapeutic efficacy of the combination treatment. AKK was orally delivered to B16F10 and CT26 tumor-bearing mice along with systemic IL-2 treatment. Flow cytometry was carried out to analyze the tumor immune microenvironment. The molecular mechanism of the enhanced therapeutic efficacy was explored by RNA-seq and then verified in tumor-bearing mice. Results Combined treatment with IL-2 and AKK showed stronger antitumor efficacy in colorectal cancer patient-derived tumor tissues. Meanwhile, the therapeutic outcome of IL-2 was significantly potentiated by oral administration of AKK in subcutaneous melanoma and colorectal tumor-bearing mice, resulting from strengthened antitumor immune surveillance. Mechanistically, the antitumor immune response elicited by AKK was partially mediated by Amuc, derived from the outer membrane protein of AKK, through activation of the toll-like receptor 2 (TLR2) signaling pathway. In addition, oral supplementation with AKK protected gut barrier function and maintained mucosal homeostasis under systemic IL-2 treatment. Conclusion These findings propose that IL-2 combined with AKK is a novel therapeutic strategy with promising applications for cancer treatment in clinical practice.
Flow cytometry analysis of CRC patient-derived tumor tissues
Tumor tissues collected from CRC patients were cut into small pieces, digested, and filtered into single-cell suspensions. For cell viability analysis, IL-2 and AKK were prepared and added to the cultured cells from CRC patients. The cells were observed under a microscope every six hours. After culture for 24 hours, cells were collected and stained with annexin V and propidium iodide for apoptosis detection by flow cytometry. For tumor immune microenvironment analysis, single-cell suspensions of tumor-infiltrating lymphocytes were isolated from CRC patients and treated with IL-2, AKK, or their combination for 24 h.
Flow cytometry analysis of side population cells
Single-cell suspensions (1×10⁶ cells) in culture medium containing 1% FBS were prepared from tumor tissues of subcutaneous tumor-bearing mice. The suspensions were stained with the fluorescent dye Hoechst 33342 at 5 μg/mL in the presence or absence of 50 μM verapamil. Cells were then incubated in darkness at 37 °C for 90 min, washed twice with pre-cooled PBS, resuspended in 300 µL PBS and kept on ice until further analysis.
Tumor-repopulating cell culture
Tumor-repopulating cells were selected from the single-cell suspensions using soft 3D fibrin gels according to a previous study [1]. First, fibrinogen was diluted to 2 mg/mL with T7 buffer (50 mM Tris, pH 7.4, 150 mM NaCl). Fibrinogen/cell mixtures were obtained by blending the 2 mg/mL fibrinogen with an equal volume of cell suspension (2×10³ cells/mL), which produced gels of 90 Pa in elastic stiffness. 250 μL of the mixture was loaded into each well of a 24-well plate preloaded with 5 μL thrombin (0.1 U/μL). The plate was then incubated at 37 °C for 30 min. Finally, 1 mL RPMI 1640 medium containing 10% FBS and antibiotics was added. On the fifth day, tumor spheroids were obtained, and the colony size and number were measured.
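As a rough sanity check on the gel composition described above, the sketch below recomputes the per-well quantities implied by equal-volume mixing and a 250 µL load per well; the variable names are ours, not taken from the authors' materials.

```python
# Sanity-check the fibrin gel arithmetic described above.
# Equal volumes of 2 mg/mL fibrinogen and 2e3 cells/mL are mixed,
# so each component is diluted 2-fold in the final mixture.

FIBRINOGEN_STOCK_MG_PER_ML = 2.0   # fibrinogen diluted in T7 buffer
CELL_STOCK_PER_ML = 2e3            # single-cell suspension
WELL_VOLUME_ML = 0.250             # 250 uL of mixture loaded per well

final_fibrinogen = FIBRINOGEN_STOCK_MG_PER_ML / 2   # 1.0 mg/mL in the gel
final_cells_per_ml = CELL_STOCK_PER_ML / 2          # 1e3 cells/mL in the gel
cells_per_well = final_cells_per_ml * WELL_VOLUME_ML

print(f"Final fibrinogen: {final_fibrinogen:.1f} mg/mL")
print(f"Cells seeded per well: {cells_per_well:.0f}")   # ~250 cells
```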
Antitumor effect and mechanism study of Amuc in tumor-bearing mice
To explore the involvement of the TLR2 pathway in the antitumor effects of Amuc, tumor-bearing mice received i.p. injections of BLP at 10 µg per mouse and CU-CPT22 at 3 mg/kg every 5 days for 3 weeks. Amuc was delivered at 10 µg per mouse by oral administration every three days for 3 weeks. Tumor volume and body weight were recorded every 3 days. The length (L) and width (W) of the tumor were measured every other day with a digital caliper, and tumor volume was calculated as L × W² × 0.5. When the tumor volume reached about 2,000 mm³, mice were sacrificed according to the guidelines for animal care. Tumor samples were collected for further analysis. All mice received humane care and had free access to water and the maintenance diet.
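The volume formula and humane endpoint above translate directly into a few lines of code; this is a minimal illustrative sketch (the function and threshold names are ours, not from the study's analysis scripts).

```python
def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    """Ellipsoid approximation used above: V = L x W^2 x 0.5."""
    return length_mm * width_mm ** 2 * 0.5

# Example: a 20 mm x 14 mm tumor
v = tumor_volume_mm3(20.0, 14.0)      # 1960 mm^3
reached_endpoint = v >= 2000          # sacrifice threshold (~2,000 mm^3)
print(f"{v:.0f} mm^3, endpoint reached: {reached_endpoint}")
```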
Dual-luciferase reporter gene assay for TLR2
Human HEK 293T cells were seeded at a density of 1×10⁵ cells per well in 24-well plates. Cells were transfected with 1 µg pCDNA3.1(+)-hTLR2-Flag plasmid, 0.5 µg pGL4.32-NF-κB-luciferase plasmid, and 0.01 µg pRL-TK plasmid using Lipofectamine 2000 reagent (Invitrogen). After incubation for 24 h, the transfection solutions were replaced with AKK suspension (1×10⁷ CFU/mL) or Amuc solution (10 μg/mL), followed by incubation at 37 °C in a 5% CO2 incubator for 24 h. The receptor ligand Pam3CSK4 (10 ng/mL) and maintenance medium (DMEM) were used as the positive and negative controls, respectively. Subsequently, cells were rinsed twice with PBS (pH 7.4) and lysed with 1× passive lysis buffer (100 μL/well). Firefly and Renilla luciferase activities were measured separately on a GloMax® 20/20 luminometer (Promega) following the manufacturer's instructions for the Dual-Luciferase Assay System (Promega). The TLR2/NF-κB activation ratio (firefly signal normalized to Renilla) was calculated to evaluate the level of TLR2 activation.
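The normalization described above (firefly signal divided by the Renilla transfection control, then expressed as fold over the DMEM negative control) can be sketched as follows; the replicate readings are hypothetical values for illustration, not data from the study.

```python
import statistics

def fold_activation(firefly, renilla, firefly_ctrl, renilla_ctrl):
    """Firefly signal normalized to Renilla (transfection control),
    expressed as fold over the negative (DMEM) control."""
    sample_ratio = statistics.mean(f / r for f, r in zip(firefly, renilla))
    control_ratio = statistics.mean(f / r for f, r in zip(firefly_ctrl, renilla_ctrl))
    return sample_ratio / control_ratio

# Hypothetical triplicate readings (relative light units)
amuc_fold = fold_activation(
    firefly=[5.1e5, 4.8e5, 5.4e5], renilla=[2.0e4, 1.9e4, 2.1e4],
    firefly_ctrl=[6.0e4, 5.5e4, 6.2e4], renilla_ctrl=[2.0e4, 1.8e4, 2.1e4],
)
print(f"TLR2/NF-kB activation, fold over DMEM control: {amuc_fold:.1f}")
```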
Expression and purification of AKK-derived outer membrane protein
The outer membrane protein of AKK (here termed Amuc) was expressed and purified following a previous method with modifications [3]. The expression plasmid was constructed by amplifying the Amuc gene without the coding sequence for its signal peptide and cloning the resulting PCR product into pET28a, yielding pET28a-Amuc.
The resulting plasmid pET28a-Amuc, confirmed by sequence analysis, was transformed into E. coli BL21. The strain was grown in LB broth containing kanamycin (50 µg/mL), followed by IPTG induction at a final concentration of 2 mM with shaking at 220 rpm at 28 °C. After ten hours of induction, cells were pelleted by centrifugation at 9,000 g for 15 min and stored at -80 °C until lysis. Cell pellets were resuspended and lysed using lysozyme and an ultrasonic homogenizer (SCIENTZ, Ningbo, China). After centrifugation, the supernatants were collected and Amuc was purified using BeyoGold™ His-tag Purification Resin (Beyotime, Shanghai, China). The purified protein was quantified by BCA assay and stored at -80 °C for further experiments.
RNA sequencing and data analysis
Total RNA was prepared with Trizol reagent (Invitrogen, USA) and sequenced on an Illumina HiSeq X10 (Illumina, USA). Differentially expressed genes were identified using a 2-fold change cutoff and P-value < 0.05, with a false discovery rate (FDR) < 0.05. All identified sequences were mapped to Gene Ontology terms (GO, http://geneontology.org/) and the Kyoto Encyclopedia of Genes and Genomes (KEGG, https://www.kegg.jp/) to determine their functional and biological properties. The hypergeometric test was employed for GO and KEGG pathway enrichment analysis.
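The cutoffs and the hypergeometric enrichment test described above can be expressed compactly in code; the sketch below assumes a hypothetical table with `log2fc`, `pvalue` and `fdr` columns and uses made-up counts in the example, so it illustrates the statistics rather than reproducing the authors' pipeline.

```python
import pandas as pd
from scipy.stats import hypergeom

def differential_genes(deg_table: pd.DataFrame) -> pd.DataFrame:
    """Apply the cutoffs described above: 2-fold change,
    P < 0.05 and FDR < 0.05."""
    return deg_table[(deg_table["log2fc"].abs() >= 1)
                     & (deg_table["pvalue"] < 0.05)
                     & (deg_table["fdr"] < 0.05)]

def enrichment_p(n_genome: int, n_pathway: int,
                 n_hits: int, n_overlap: int) -> float:
    """Hypergeometric P-value for observing >= n_overlap DE genes
    in a pathway of n_pathway genes, given n_hits DE genes out of
    n_genome genes in total."""
    return hypergeom.sf(n_overlap - 1, n_genome, n_pathway, n_hits)

# Example with made-up counts: 20,000 genes, a 150-gene pathway,
# 800 DE genes, 25 of which fall in the pathway.
print(f"P = {enrichment_p(20000, 150, 800, 25):.2e}")
```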
Preparation and purification of antibodies against Amuc
Polyclonal antibodies against Amuc were prepared by Dia-An Biotech, China. Two milligrams of purified Amuc protein was mixed with FCA (Freund's complete adjuvant) or FIA (Freund's incomplete adjuvant) and injected subcutaneously into two Japanese white rabbits a total of four times. The mixture containing FCA was used only for the first injection, while the FIA mixture was used for the remaining three. The 2nd injection was given on the 28th day after the first, with 2-week intervals between the subsequent injections. On the third day after the last injection, the antiserum titer was tested by ELISA. On the 64th day after the first injection, the rabbit with the higher titer was sacrificed and its blood collected. An affinity column for antibody purification was made by coupling 1 mg of purified Amuc protein to CNBr-activated Sepharose 4B (GE). The antiserum was applied onto the column, and the specific antibodies were eluted with glycine-HCl buffer at pH 2.5.
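The immunization timeline above can be laid out programmatically as a quick consistency check; a small sketch with days counted from the first injection (day 0), using only the intervals stated in the protocol.

```python
# Immunization schedule relative to the first injection (day 0):
# 2nd injection on day 28, then 2-week intervals for the remaining two.
injections = [0, 28, 28 + 14, 28 + 28]        # days 0, 28, 42, 56
titer_test = injections[-1] + 3               # 3rd day after last injection
terminal_bleed = 64                           # day stated in the protocol

print("Injection days:", injections)                  # [0, 28, 42, 56]
print("Antiserum titer tested on day", titer_test)    # day 59
print("Terminal bleed on day", terminal_bleed)
```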
Culture and in vitro activation of BMDC
To prepare bone marrow-derived dendritic cells (BMDCs), the tibias and femurs of normal C57BL/6 (wild-type) mice were removed under sterile conditions. Bone marrow cells were gently flushed out of the bone cavity with the needle of a 1 mL syringe into a sterile culture dish containing RPMI-1640 medium. Cell suspensions were centrifuged at 1,200 rpm for 5 min and resuspended in RPMI-1640 medium supplemented with 10% FBS, 10 ng/mL IL-4 and 20 ng/mL GM-CSF. Cells were then distributed into 24-well plates (NEST Biotechnology, Wuxi, China) at a density of 1×10⁶ cells/mL and cultured for 5 days at 37 °C with 5% CO2. On day 3, fresh culture medium containing IL-4 and GM-CSF was added. On day 5, the nonadherent cells suspended in the medium were collected, centrifuged, and resuspended in fresh culture medium containing IL-4 and GM-CSF. For further activation, 1×10⁶ BMDCs were seeded in a 6-well plate (NEST Biotechnology, Wuxi, China) and stimulated with PBS, Amuc (10 μg/mL), or AKK (1×10⁷ CFU/mL) for 24 h. Expression of the surface markers CD11c, MHC-II, and CD86 on the BMDCs was then measured by flow cytometry.
Histological analysis
Tumor tissues were fixed in 4% paraformaldehyde, sectioned and stained with H&E. The remaining tumor tissues were frozen and stained with Ki67 and TUNEL according to the manufacturer's instructions. Cell nuclei were stained with DAPI. For immunofluorescence staining of CD133, frozen slices of the dissected tumor tissues from different groups were blocked with 5% BSA in PBS for 60 min. Heat-mediated antigen retrieval was performed in 0.01 M citrate buffer (pH 6.0). The slices were then incubated with a 1:500 dilution of anti-CD133 antibody.
[Supplementary figure legends: In the anti-Amuc group, Amuc was pretreated with anti-Amuc antibody before oral delivery to tumor-bearing mice; all data are shown as mean ± SD (n = 6, **P < 0.01). (B) Proportions of Foxp3+CD25+ cells among CD4+ T cells in tumor-draining lymph nodes; data are shown as mean ± SD (n = 6, *P < 0.05, **P < 0.01). Cell viability after 24 hours of treatment with live AKK (1×10⁷ CFU/mL) or pasteurized AKK (an equivalent dose of live AKK inactivated by pasteurization at 70 °C for 30 min) was measured by the CCK-8 method (n = 8).]
Figure S32. Survival of AKK after exposure to oxygen over time. AKK co-cultured with tumor cells was exposed to ambient air and incubated at 37 °C, the same condition as in the experiments on CRC patient-derived ex vivo tumor tissues. At predetermined time points over a period of 24 h, the co-culture suspensions were collected and serially diluted 10-fold with PBS. 10 µL aliquots of each dilution were spotted on mucin-based plates. The survival rate of AKK was determined by CFU counts on the plates. | 2020-10-09T13:05:29.538Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "dabe8572dbbbc3ab22c2aada740df891b33abf7b",
"oa_license": "CCBYNC",
"oa_url": "https://jitc.bmj.com/content/jitc/8/2/e000973.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "be26dba7dedb9fb2c2378e061d78cea493849fc2",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237941546 | pes2o/s2orc | v3-fos-license | Processed pseudogenes: A substrate for evolutionary innovation
Processed pseudogenes may serve as a genetic reservoir for evolutionary innovation. Here, we argue that through the activity of long interspersed element‐1 retrotransposons, processed pseudogenes disperse coding and noncoding sequences rich with regulatory potential throughout the human genome. While these sequences may appear to be non‐functional, a lack of contemporary function does not prohibit future development of biological activity. Here, we discuss the dynamic evolution of certain processed pseudogenes into coding and noncoding genes and regulatory elements, and their implication in wide‐ranging biological and pathological processes. Also see the video abstract here: https://youtu.be/iUY_mteVoPI
INTRODUCTION
The human genome is an evolutionary playground for repurposing ancestral coding sequences, facilitating the birth of novel genes and regulatory elements. The diversification of ancestral genes by accumulated mutations is a well-studied mechanism of genetic evolution that can drive speciation. [1] Adaptive evolutionary innovations can also arise through the emergence of novel genes leading to species-specific phenotypic changes. [2,3] Most novel genes arise through DNA-and RNA-based gene duplication of ancestral sequences. [3,4] RNA-based gene duplication is mediated by retrotransposition. [2] In humans, long interspersed element-1 (LINE-1 or L1) retrotransposons can 'copy and paste' genes into new genomic locations via an mRNA intermediate. [5,6] These retroposed gene copies are often termed processed pseudogenes, [7,8] and are largely presumed to be 'dead on arrival' due to the loss of flanking regulatory elements and introns, or the rapid accumulation of disruptive mutations. [8] However, a current state of non-functionality does not imply that a pseudogene never was nor will be biologically relevant. Evolution is a dynamic continuum that constantly re-arranges the building blocks of life, leaving space for novel combinations to come together and break apart. The birth and death of genes plays a clear role in this process, yet the contribution of pseudogenes to evolution remains a controversial topic. Do these 'defective' gene copies merely drift through evolutionary time, accumulating mutations until they are unrecognisable? Or do some represent an opportunity for evolutionary innovations to take place by providing the raw material to engineer novel genes or regulatory regions under positive selective pressure?
The implication of pseudogenes in biological processes including neurogenesis, [9,10] inflammatory responses [11] and cancer [12] necessitates revisiting the notion that pseudogenes are evolutionary 'junk'. [13] However, the extent of pseudogene activity remains poorly investigated, in part perhaps due to the bias inherent to the term 'pseudogene', [14] which presumes non-functionality. Furthermore, technical shortcomings have impeded unambiguous distinction of pseudogene activity from that of their near-identical parental counterparts. [14] Here, we propose that evolutionary innovations are facilitated by retrotransposons that in rare instances propagate gene copies throughout the genome. Some of these gene copies, known as processed pseudogenes, may act as a genetic repository of raw material for evolution, giving rise to novel genes and regulatory elements that shape the human genome.
RETROTRANSPOSITION DRIVES GENOME EVOLUTION
The notion that novel genes emerge through the repurposing of ancestral genes dates to the early 1930s [15,16] and persisted through the genomics revolution, influenced by Susumu Ohno's essay that proposed gene duplication as the major driving force of evolution. [17] There are several key features of Ohno's model: firstly, gene duplication produces redundant gene copies that accumulate mutations that act as raw material for evolutionary innovation (neofunctionalisation), while ancestral genes continue to fulfil original functions. Secondly, duplicated genes serve as a genetic surplus for ancestral genes that are silenced during dosage compensation. Thirdly, the most probable fate of a duplicated gene is degeneracy. [13] Duplicated genes can arise through segmental duplication or through the activity of retrotransposons. [2] Retrotransposons are a class of transposable elements that contribute to the instability and evolution of the human genome. [18] Once referred to as 'selfish entities' , retrotransposons copy genetic information through an mRNA intermediate by evading host genome defences. [19] Over one third of the human genome is composed of retrotransposons, [20] raising the question as to whether they contribute to the functional evolution of the human genome.
L1 retrotransposition is capable of genome-wide mutagenesis, through cis-preferential reverse transcription of L1 mRNA into single-stranded DNA, followed by second-strand synthesis (double-stranded DNA) and integration into a new genomic location. [21,22] Full-length L1 transcripts span ∼6 kb and consist of a 5′ untranslated region (UTR) with an internal RNA polymerase II promoter, [23] two non-overlapping open reading frames (ORF1 and ORF2) and a polyadenylated (polyA) 3′UTR. [24] The ORFs encode the molecular machinery necessary for mobilisation and integration of an L1 transcript into a new genomic location. [21] Over 500 000 L1 copies are annotated in the human reference genome, [20] although the majority are inactivated by truncations or disruptive mutations, precluding the ability to mobilise. [20,25] Nonetheless, approximately 100 L1s retain intact ORFs and are retrotransposition-competent. [26,27] These active elements are engaged in an evolutionary arms race against host defences to infiltrate the human genome. [28,29] Past and ongoing L1 activity has generated profound changes to the human genome (reviewed in ref. [18]). For example, L1 retrotransposition can directly alter the genomic landscape by generating instability via insertional mutagenesis, producing heritable insertions when mobilised in the committed germline or during early embryonic development. High L1 copy number and sequence similarities can also indirectly spur genomic rearrangements through recombination-associated deletions. In contrast to L1s, higher processed pseudogene density is inversely correlated with recombination rate. [30] The impact of L1 on human genome evolution is not limited to mobilisation of the L1 RNA; L1s can trans-mobilise other polyA RNAs in rare cases, including those produced by the nonautonomous retrotransposons Alu [31] and SVA. [32] L1 retrotransposons can also mobilise processed protein-coding mRNAs [5] (Figure 1). Transcripts in the cellular vicinity of the L1-encoded molecular machinery are 'hijacked', reverse transcribed and re-integrated into the genome, dispersing intronless gene copies sometimes stripped of parental cis-regulatory elements. These stripped-down gene copies are called 'processed pseudogenes' [8] and are considered constituents of 'junk DNA'.
FIGURE 1 L1-mediated generation of processed pseudogenes. L1-encoded proteins, including a nucleic-acid chaperone (ORF1p) and a reverse transcriptase (RT) and endonuclease (EN) (ORF2p), bind transcripts originating from distinct genes. The proteins retrotranspose the transcript to a different genomic location by cleaving genomic DNA at the EN target consensus site 5′-TTTT/A-3′. Through a process termed target-site primed reverse transcription (TPRT), the L1-encoded RT generates a single-stranded DNA sequence from the transcript template, from which second-strand synthesis (double-stranded DNA) occurs following integration at the cleavage site. The resulting processed pseudogene is stripped of its flanking regulatory elements and does not contain introns.
Seven years after Ohno's seminal publication on gene birth through duplication, the term 'pseudogene' was officially coined by Jacq and coworkers to describe transcriptionally silent, tandemly repeated truncated copies of the 5S ribosomal RNA gene in Xenopus laevis. [33] Indeed, segmental duplication or erroneous non-equal cross-over results in widespread formation of duplicated pseudogenes that retain parent gene intron-exon structures and regulatory elements. [7] A rarer class of unitary pseudogenes are generated de novo by accumulation of disabling mutations in a protein-coding gene, rendering it transcriptionally and translationally silent. [34] Following Ohno's assumption that most duplicated genes are destined for degeneracy, Jacq and co-workers concluded that the identified 5S ribosomal RNA 'pseudo' genes were artefacts of evolution. These remarks provided the foundation for a framework that categorises apparently defective sequences with similarity to another gene as pseudogenes. The genomics revolution subsequently saw regions of the genome with pseudogene hallmarks labelled functionless en masse. Although pseudogenes are almost as numerous as protein-coding genes (14 767 pseudogenes, of which 72% are processed, 24% duplicated, 1.6% unitary and 2.4% other, versus 19 957 protein-coding genes; Gencode v38 [35]), comparatively little is known about the contribution of pseudogenes to the evolution of the human genome.
RESURRECTED PSEUDOGENES EMERGE THROUGH ACQUISITION OF NOVEL PROMOTERS AND REGULATORY ELEMENTS
The prevailing view of pseudogenes has long been that of defective gene copies undergoing evolutionary decay. Indeed, most processed pseudogenes are incapable of producing mRNA due to loss of parental promoters and regulatory elements as well as acquisition of disabling mutations. [8] Nonetheless, evidence of pseudogene transcription in tissue-and cancer-specific patterns emerged following the advent of high throughput, short and long-read RNA-sequencing (RNAseq). [36][37][38][39] How do pseudogenes devoid of the regulatory machinery necessary for transcription become expressed? Various studies have identified sources of novel regulatory elements that facilitate the birth of functional pseudogenes (retrogenes). [40][41][42][43] Processed pseudogenes may preferentially integrate into genomic regions with open-chromatin and actively expressed genes [44,45] and are transcribed by 'hitchhiking' on pre-existing regulatory machinery. [45,46] For example, pseudogenes may integrate within an intron or exon and become transcribed from a 'host gene's' core promoter, generating a fusion transcript of pseudogene and host gene exons [45,[47][48][49][50][51] (Figure 2A). Alternatively, bidirectional promoters can facilitate the transcription of nearby pseudogenes [4,42,45] (Figure 2B).
Open chromatin can permit interactions between distal enhancers and regulatory elements associated with neighbouring genes, strengthening pseudogene transcription [45] (Figure 2C). Furthermore, promoters from retrotransposons immediately upstream of pseudogene integration sites can regulate pseudogene expression [39,42,45] (Figure 2D); for example, one pseudogene identified by long-read sequencing is expressed from the promoter of an upstream human endogenous retrovirus-K [39] in an acute myeloid leukaemia cell line. In addition to benefiting from pre-existing regulatory elements, pseudogenes can acquire novel promoters from CpG-rich regions containing proto-promoter sequences with the inherent potential to facilitate transcription [42,52,53] (Figure 2E). Certain retrocopy promoters may have evolved through the repurposing of regulatory elements, [54] and since promoters and enhancers share similar architectures and functional features, [55,56] enhancer or enhancer-like elements can be co-opted as promoters [46] (Figure 2F). Indeed, putative mouse enhancers were found to be orthologous to the promoters of several rat-specific retrocopies, suggesting that regulatory element conversion is a mechanism through which pseudogenes can become transcriptionally active. [46] The recruitment of novel regulatory elements enables pseudogenes to evolve distinct expression patterns and functions compared to their parent genes. Notably, many pseudogenes show highly specific expression patterns, suggestive of coordinated regulation. [36,38,39]
PROCESSED PSEUDOGENES CAN EXPAND THE GENETIC REPERTOIRE THROUGH NOVEL GENE FUSIONS AND EXON GAIN
New genes can form when pseudogenes integrate into unrelated host genes. These insertions result in fusion transcripts containing sequences derived from both the pseudogene and the host gene that may have functions distinct from those of the original host or parent gene. In the rare instances that these novel sequences provide an adaptive evolutionary benefit, they can be selectively preserved. A classic example, and the first discovered 'young' chimeric pseudogene fusion, formed in a common ancestor of two African Drosophila species around 2.5 million years ago. [47] The chimeric gene jgw arose through a series of evolutionary events in which a retroposed copy of Adh inserted into an intron downstream of the 5′ regulatory region and several exons of a segmental gene duplicate, ynd, producing a novel coding fusion transcript. Under the influence of positive selection, the novel testis-expressed gene evolved a functional role in hormone and pheromone metabolism. [57] Other retrogene fusions have been identified, including Sphinx, [48] a Drosophila noncoding RNA-gene fusion implicated in male-male courtship behaviour, as well as young coding fusions that emerged in the primate lineage. [45] Notably, retrotransposition of CypA into the coding sequence of TRIM5α, a restriction factor, produced the functionally important gene fusion TRIM5-CypA, which confers retroviral resistance. [58-60] Strikingly, the chimeric gene arose independently in both New and Old World monkeys through convergent evolution.
Recently, 71 human-specific transcripts containing exapted sequences of 56 retrocopies were identified. [61] Retrocopy insertion events can have profound structural implications by generating novel proteins from alternative splicing; [39] a splice variant of BRCA1 provides one example.
Exon gain is associated with the evolution of pseudogenes into functional genes. [42] While some processed pseudogenes become multi-exonic through intronisation (acquisition of splice sites within a parent-derived exon), the majority recruit novel exons from upstream or downstream flanking sequences. [46] Notably, multi-exonic pseudogenes with novel 5′ exons are highly overrepresented in several vertebrate species, presumably providing distal promoters. [46] Over 80% of ancient retrogenes have accumulated complex gene structures that have resulted in broad expression patterns, [46] in contrast to younger mono-exonic pseudogenes, which tend to be testes-specific. [42] The importance of exon gain is exemplified in the mouse retrocopy Rps23r1, which emerged through retroposition of the ribosomal protein S23 gene. [62] Rps23r1 is transcribed from the complementary strand relative to the parent gene and incorporates additional coding sequence from sites flanking the insertion. The structurally distinct yet functional protein confers heightened resistance to the formation of amyloid plaques associated with Alzheimer's disease, demonstrating the recycling potential of pseudogene sequences to fulfil functions unrelated to their parent genes. Mono- and multi-exonic pseudogenes can also generate transcript isoforms from alternative splice sites and termination start/stop sites, [46,63,64] as exemplified by the retrogene HNRNPF, which encodes one broadly expressed and one testes-expressed isoform in several species. [46] Thus, alternative splicing of mono- and multi-exonic pseudogenes can expand genetic repertoires by producing functionally distinct transcript isoforms that may display organ-specific expression patterns.
EXPRESSED PROCESSED PSEUDOGENES CAN ACT THROUGH NONCODING MECHANISMS
Since early anecdotal evidence of functional pseudogenes, [65] it is now clear that many processed pseudogenes have evolved into bona fide genes with important roles in development and disease (see reviews [66-68]).
FIGURE 3 Pseudogenes contribute to the regulatory landscape. Processed pseudogenes can function through several RNA- and protein-based mechanisms. (A) A retrotransposed transcript is integrated in reverse orientation relative to the parent gene and expressed from a proximal promoter to generate an antisense pseudogene transcript; alternatively, a nearby bi-directional promoter can generate an antisense pseudogene transcript. (B) Pseudogene asRNA can hybridise with parent gene sense RNA, forming an RNA-RNA duplex and inhibiting translation of the parent gene. (C) Pseudogene asRNA can localise to the promoter of the parent gene and recruit factors involved in epigenetic silencing. (D) RNA-RNA duplexes consisting of pseudogene-pseudogene or pseudogene-parent gene transcripts can be processed into siRNAs by Dicer and then incorporated into RISC to target the parent gene transcript for degradation. (E) Pseudogene transcripts whose 3′UTRs are highly homologous to the parent gene's can sequester parent gene-targeting miRNAs, enhancing expression of the parent gene. (F) Processed pseudogenes with intact ORFs can produce proteins with functions redundant to the parent gene; additionally, partially duplicated pseudogenes or pseudogenes bearing premature stop codons may produce a truncated protein that complements the biological function of the parent gene.
These pseudogenes act through both coding and noncoding mechanisms. [14] One of the most widely recognised roles of functional pseudogenes is the regulation of parent gene expression due to high sequence similarity. For example, integration of processed pseudogenes into the genome in a reverse orientation relative to an adjacent promoter produces antisense RNA (asRNA) (Figure 3A). Highly complementary regions of pseudogene asRNA can hybridise with parental sense RNA and form a stable double-stranded RNA (dsRNA) duplex, inhibiting protein synthesis from the parent gene (Figure 3B).
Pseudogene antisense-mediated gene regulation was first identified in neurons from the Lymnaea stagnalis snail. [69] Co-expression of RNA from nNOS and antisense mRNA from the corresponding pseudogene leads to the formation of a stable dsRNA duplex, suppressing nNOS translation. Similarly, in humans, the pseudogene FLT1P1 produces sense and antisense transcripts that reduce the levels of the parent gene, VEGFR1, decreasing human colorectal tumour cell proliferation and xenograft tumour growth. [70] Pseudogene asRNA transcripts can also regulate epigenetic processes (Figure 3C). For example, PTEN is regulated by the presence of asRNA from its pseudogene PTENP1, which localises to the PTEN promoter and induces transcriptional silencing via recruitment of the H3K27 methyl-transferase EZH2 and the DNA methylase DNMT3A. [71] Similarly, OCT4, a master regulator of pluripotency, has six related pseudogenes, of which OCT4-pg5 generates an asRNA capable of binding to the OCT4 locus and inducing epigenetic silencing. [72] RNA-RNA duplexes consisting of pseudogene-pseudogene or pseudogene-parent gene transcripts can be processed into endogenous small interfering RNAs in mouse oocytes and downregulate parent gene expression levels through RNA interference (RNAi) [73,74] (Figure 3D). RNAi is a crucial biological process responsible for the suppression of gene expression by post-transcriptional targeting of mRNA with complementary siRNA molecules that direct targeted cleavage. [75] Pairing of sense and antisense pseudogene transcripts of Hdac1 induces cleavage by Dicer, [76] an enzyme that recognises and cleaves dsRNA, generating siRNAs complementary to the parent gene transcript. siRNAs incorporated into an RNA-induced silencing complex (RISC) localise to complementary regions on parent gene mRNA and induce cleavage by Argonaute 2, the catalytic component of the RISC. [73] Similarly, the pseudogene Ppp4r1 generates an antisense transcript that hybridises to its parent gene's complementary transcript and downregulates its expression by RNAi. [73,74] Processing of internal secondary structures or hairpin loops formed by pairing of homologous regions within a single pseudogene transcript can also elicit an inhibitory effect on parent genes, as for the Au76 pseudogene and its parent gene, Rangap1. [74] Pseudogene-derived siRNAs can also act as tumour suppressors.
For example, a pseudogene of PPM1K, ψPPM1K, produces siRNAs that inhibit cell growth in hepatocellular carcinoma (HCC). [77] Inverted repeats within the pseudogene RNA form a hairpin that is cleaved by Dicer, generating two siRNAs that bind and downregulate both the parent gene and NEK8, a target gene of PPM1K that promotes cellular proliferation and is overexpressed in HCC.
Pseudogene transcripts can also increase expression of their parent genes ( Figure 3E). PTENP1, KRASP1 and BRAFP1 enhance expression of their parent genes by acting as molecular sponges that sequester microRNAs (miRNAs) via shared binding sites. [78,79] The concept of competitive endogenous RNAs (ceRNAs) explains upregulation of PTEN, an intensively studied tumour suppressor gene that is frequently mutated in cancer, when its highly homologous pseudogene, PTENP1, is expressed. [78] In a pseudogene context, the ceRNA hypothesis postulates that shared miRNA response elements (MREs) within the high homology region of the 3′UTR of both pseudogene and parent gene transcripts results in competition for parent gene-targeting miRNAs.
Increasing PTENP1 levels depletes the pool of PTEN-targeting miRNA molecules, relieving PTEN inhibition. PTENP1 acts as a tumour suppressor in cancer by increasing PTEN levels, and consequently, the PTENP1 locus is frequently deleted in cancers. Oncogenic KRAS was similarly upregulated when the MRE-containing 3′UTR of its pseudogene, KRASP1, was overexpressed in vitro. [78] The first causal role of a pseudogene ceRNA in cancer formation was identified in vivo when Braf-rs-1 overexpression in transgenic mice induced an aggressive malignancy resembling human diffuse large B cell lymphoma. [79] Braf-rs-1 and its human ortholog, BRAFP1, elicit oncogenic activity as ceRNAs by elevating BRAF expression and MAPK activation, promoting cell proliferation, differentiation and migration. However, there is increasing scepticism about the impact of pseudogene-miRNA interactions, as most evidence for the ceRNA hypothesis relies on non-physiological levels of pseudogene expression. [80] Indeed, normal physiological changes in usually low pseudogene transcript levels do not sufficiently diminish the pool of parent gene-targeting miRNA to affect the often higher parent gene expression levels. [81] ceRNA levels must approach the target abundance of parent gene-targeting miRNA to have a de-suppressive effect, making pseudogenes unlikely candidates as ceRNAs, as they generally have considerably lower expression levels than their parent genes. Caution must be exercised when generalised theories of gene-regulatory networks are applied, in order to simplify complex genomic interactions, to large heterogeneous bodies of noncoding RNAs that have not yet been functionally characterised.
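The stoichiometric objection can be made concrete with a toy partitioning calculation: if a shared miRNA distributes across binding sites in proportion to their abundance, the fraction sequestered by the pseudogene is simply its share of total miRNA response elements. The numbers below are purely illustrative assumptions, not measurements from the studies cited.

```python
def fraction_sequestered(pseudogene_sites: float, target_sites: float) -> float:
    """Toy equal-affinity model: miRNA partitions in proportion
    to miRNA-response-element (MRE) abundance."""
    return pseudogene_sites / (pseudogene_sites + target_sites)

# Physiological scenario: pseudogene expressed ~50-fold below its parent.
low = fraction_sequestered(pseudogene_sites=10, target_sites=500)
# Overexpression scenario: pseudogene transcripts rival the target pool.
high = fraction_sequestered(pseudogene_sites=500, target_sites=500)

print(f"miRNA diverted at physiological levels: {low:.1%}")   # ~2%
print(f"miRNA diverted under overexpression:    {high:.1%}")  # 50%
```

Under these toy assumptions, only the overexpression scenario diverts enough miRNA to plausibly de-repress the parent gene, which is the scepticism voiced above.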
EXPRESSED PROCESSED PSEUDOGENES CAN IMPACT THE GENOME THROUGH CODING-DEPENDENT MECHANISMS
While the definition of pseudogene is not compatible with protein-coding capacity, some processed pseudogenes have undisrupted ORFs that are translated (Figure 3F). PGK2 was the first reported retrotransposed gene copy capable of producing a biologically significant protein. [65] PGK2 exhibits hallmarks of a processed pseudogene (no introns, genomic polyA tract) yet contains a complete ORF and is translated in the human testes to compensate for the inactivation of its X-linked counterpart, PGK1. Mass-spectrometry-based analyses of the human proteome revealed 140 pseudogenes capable of producing over 200 peptides, [82] and, more recently, over 700 retrogenes were found to produce uniquely matching peptides. [61] Furthermore, estimates based on ribosome profiling, which utilises a percentage of maximum entropy (PME) approach to measure read-distribution uniformity and distinguish coding from noncoding RNAs, suggest that 40% of annotated pseudogenes are translated. [83] Long-read cDNA sequencing of mixed human tissues and cell lines identified 160/318 full-length pseudogene transcripts (50%) that encode ORFs over 100 amino acids in length. [39] Notably, 53 pseudogenes contained ORFs over 90% the length of their parent genes, highlighting the potential of translated pseudogenes to produce intact proteins.
Recently, a systematic analysis of CRISPR-Cas9-induced frameshift mutations in protein-coding genes revealed that protein production was often not completely ablated. [84] The disruption of 136 distinct genes by frameshift mutations resulted in residual protein expression for one third of gene targets (ranging from low to wild-type levels). Two causal mechanisms explain continued protein production: translational re-initiation downstream of a mutated exon, producing N-terminally truncated proteins, and skipping of the mutated exon during splicing. The same phenomena may apply to pseudogenes with disrupted ORFs, revealing why numerous pseudogenes retain the ability to be translated into protein. While the functions of most translated pseudogenes remain to be elucidated, several sporadically characterised pseudogenes encode functional proteins that play important roles in tumorigenesis [12,85,86] and have been re-annotated as novel protein-coding genes. [87] Newly established protein-coding pseudogenes can evolve novel functions through the relocalisation of encoded proteins to novel cellular niches, where they perform compartment-specific functions under the influence of natural selection. [45] This process, known as subcellular adaptation, is reflected in the functional adaptation of a hominoid-specific retrogene, CDC14Bretro. [88] The retrogene emerged as a splice variant of CDC14B and encodes a protein that became expressed in the brain and testes, playing a role in stabilising microtubules. After a period of intense positive selection, CDC14Bretro completely relocalised and began to function within the endoplasmic reticulum. The functional diversification of GLUD2, a retrogene derived from GLUD1, occurred when it underwent subcellular relocalisation in a common ancestor of humans and apes 18-25 million years ago. [89] Under positive selection, two key amino acid substitutions induced biochemical alterations in the encoded protein, causing sublocalisation from both the cytoplasm and mitochondrial compartments to just the mitochondria, where it began to target the neurotransmitter glutamate in the brain for degradation. [90] Thus, numerous processed pseudogenes have contributed to genome evolution, serving as ncRNAs or producing functional proteins with biologically important roles.
PSEUDOGENES CAN EVOLVE PROTO-PROMOTER ACTIVITY AND INFLUENCE PROXIMAL GENES
Processed pseudogenes are a rich source of genetic information that can provide raw material for the evolution of novel regulatory regions.
We argue that retroposed transcripts originating from protein-coding genes contain both high GC content and various transcription factorbinding motifs (TFBMs) that provide a favourable environment to evolve enhancer-like or proto-promoter properties. For example, the presence of TFBMs in young processed pseudogenes could increase their propensity to evolve into enhancer-like elements and impact neighbouring genes. In the absence of selective pressure for their retention, TFBMs degenerate over evolutionary time. [91] Under selection, processed pseudogenes could evolve promoter activity and profoundly impact the coding sequences of nearby genes or host genes. For example, some pseudogene transcripts are embedded with alternative downstream transcriptional start sites inherited from parental transcripts that can evolve promoter activity. [4,41,45,92] These pseudogenes can serve as alternative promoters for adjacent or host genes, contributing coding sequence and generating novel splice isoforms. For example, the imprinted tumour suppressor gene RB1 harbours PPP1R26P1, a 5′-truncated retrocopy of PPP1R26. [93] The retrotransposition event occurred before the split of New and Old World monkeys and resulted in the integration of PPP1R26P1 into intron 2 of RB1, in reverse orientation relative to the host gene.
The region of PPP1R26P1 derived from exon 4 of the parent gene contains a differentially methylated CpG island that evolved promoter activity. Furthermore, the CpG island harbours an exon that is spliced into exon 3 of RB1, generating an imprinted pseudogene-gene transcript. The contribution of pseudogene sequences to protein-coding transcripts is widespread. PacBio long-read cDNA sequencing of mixed human tissues reveals that the retrocopy constitutes the majority of the 5′ exon of RB1 and can contribute 179 codons to the fusion sequence. [39] cDNA sequencing also identifies 93 protein-coding genes that contain coding sequence derived from pseudogenes. [39] Semi-processed pseudogenes present an interesting mechanism through which pseudogenes could impact proximal genes. Retrotransposition of partially spliced transcripts can generate semi-processed pseudogenes that retain one or more parent gene introns. [94,95] Parentally derived enhancer-like or regulatory elements embedded within unspliced introns could provide an avenue for semi-processed pseudogenes to become transcribed or to influence nearby genes. Thus, some processed pseudogenes that integrate within exons of host genes can evolve promoter properties that influence the expression of novel fusion transcripts, or provide alternative regulatory elements embedded within unspliced introns, highlighting the potential of pseudogenes to disperse information-rich sequences throughout the genome and impact surrounding genes.
PROCESSED PSEUDOGENES ARE A GENETIC RESERVOIR FOR EVOLUTIONARY INNOVATION
While most pseudogenes appear to confer no contemporary biological benefit, a current state of non-functionality does not necessarily imply that a pseudogene never was nor never will be biologically relevant. The tens of thousands of processed pseudogenes in the human genome that appear to be 'inert' may be undergoing subtle changes that slowly impact the functional landscape of the genome.
Indeed, pseudogenes are not strongly conserved across non-primate mammals, [96] suggesting that many pseudogenes are non-adaptive and rather exaptive. In other words, a pseudogene may not initially arise as a consequence of adaptive selection but rather exists as a substrate for selection to act upon. This is a well-known evolutionary mechanism for the birth and death of genes that generates substrates for adaptive selection and evolution to occur. [97] This is exemplified in the recent discovery of a potent inhibitor of enveloped virus budding, retroCHMP3. The retrogene arose through independent retrotransposition events in New World monkeys and mice, producing variants that evolved the same antiviral function. [98,99] retroCHMP3 originated from the retroduplication of CHMP3, an ESCRT-III protein involved in cellular membrane remodelling events. The ESCRT pathway is frequently exploited by enveloped viruses to enable budding from the cellular membrane. While inhibition of the ESCRT pathway prevents viral replication, loss of function induces cytotoxicity due to the essential nature of the pathway. Remarkably, retroCHMP3 evolved an exquisitely balanced functional role as a potent antiviral factor, inhibiting the ESCRT pathway during infection while causing minimal cytotoxicity. Interestingly, the evolutionary pathways of retroCHMP3 emergence display species- and lineage-specific diversity, producing a variety of full-length, truncated and degraded copies that present differing levels of antiviral activity and cytotoxicity. Full-length copies subjected to C-terminus-truncating mutations display enhanced inhibitory activity of the ESCRT pathway with minimal cytotoxicity, as seen in squirrel monkeys and mice. Many primates surveyed contained full-length retroCHMP3, concomitant with recent duplication or long-term selection, indicating that the retrocopy is one truncating mutation away from becoming a potent antiviral defence mechanism.
This demonstrates how multiple evolutionary trajectories can lead to the repeated emergence of retrogenes that act as a genetic reservoir for the evolution of common immune defences across multiple species. The existence of functional truncated pseudogenes also highlights the lack of an empirical basis for computationally distinguishing inert pseudogenes from functional genes.
With numerous reported cases of functional pseudogenes, the distinction between a gene and a pseudogene is blurred. What do functional pseudogenes lack that precludes them from being perceived as conventional genes? Furthermore, what constitutes a gene? The precise definition is highly contested and has continued to evolve since the outdated and impractical 'one gene, one protein' convention. [100] As our understanding of the molecular basis of our biology has expanded, so has our view of what constitutes a gene (for review, see ref. [101]). For example, we now know that a large fraction of genes are noncoding and govern vital biological processes through RNA. Additionally, a single gene locus may contain differing transcriptional start and stop sites and undergo alternative splicing, producing various transcripts that give rise to proteins with profoundly different structures and functions. Furthermore, many genes overlap on the same or opposite strands or reside within introns and produce fusion transcripts that increase the repertoire of genetic diversity. Nonetheless, a gene-centric view of molecular biology may be detrimental, as it oversimplifies complex biological processes and blurs potential lines of enquiry. This generalisation is mirrored in pseudogenes, which are understudied and underappreciated perhaps not only due to the technical challenge of distinguishing pseudogenes from parent genes, but also due to the bias implied by the term 'pseudogene', which immediately assumes non-functionality.
As a result, the growing body of pseudogenes with demonstrated functions is cast aside from conventional genes, and such pseudogenes continue to be thought of as biologically inconsequential artefacts. Further investigation of coding and noncoding pseudogenes will likely greatly expand the known pseudogene functional repertoire, highlight their contribution to the evolution of the human genome and alleviate the bias carried by the term 'pseudogene'.
CONCLUSION
Here, we have argued that processed pseudogenes contribute functional elements to genomes through diverse mechanisms, including as gene-regulatory elements, novel protein-coding genes and noncoding RNAs. Thus, whilst retrotransposition has generally been regarded as deleterious to the host genome, [18] its role in mobilising mRNAs in the germline may contribute to evolutionary adaptation. The generation and dispersal of stripped-down retrocopies throughout the genome provides a rich source of genetic information and opportunities for evolutionary innovation. Far from being evolutionarily inert regions of the genome, processed pseudogenes may fuel genetic innovation that enables organisms to adapt to their environment. | 2021-09-28T06:23:09.415Z | 2021-09-27T00:00:00.000 | {
"year": 2021,
"sha1": "81978f198966b29abea17e6701c01d75e27ce552",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/bies.202100186",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "ec45e71f60faf2bebb3eddc83ad083fcb8c69157",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
124013097 | pes2o/s2orc | v3-fos-license | generalized curvature
For any semi-Riemannian manifold (M, g) we define a generalized curvature tensor E as a linear combination of Kulkarni-Nomizu products formed by the metric tensor, the Ricci tensor and its square. That tensor is closely related to quasi-Einstein spaces, Roter spaces and some Roter type spaces.
Introduction
Let (M, g) be a semi-Riemannian manifold. We denote by g, R, S, κ and C the metric tensor, the Riemann-Christoffel curvature tensor, the Ricci tensor, the scalar curvature and the Weyl conformal curvature tensor of (M, g), respectively. Further, let A ∧ B be the Kulkarni-Nomizu product of symmetric (0, 2)-tensors A and B. Now we can define the (0, 2)-tensors S^2 and S^3, the (0, 4)-tensors R · S, C · S and Q(A, B), and the (0, 6)-tensors R · R, R · C, C · R, C · C and Q(A, T), where T is a generalized curvature tensor. For precise definitions of the symbols used, we refer to Section 2 of this paper, as well as to [32, Section 1], [35, Section 1], [36, Chapter 6] and [43, Sections 1 and 2].
A semi-Riemannian manifold (M, g), dim M = n ≥ 2, is said to be an Einstein manifold [2], or an Einstein space, if at every point of M its Ricci tensor S is proportional to g, i.e.,

S = \frac{\kappa}{n}\, g \qquad (1.1)

on M, assuming that the scalar curvature κ is constant when n = 2. According to [2, p. 432] this condition is called the Einstein metric condition.
Let (M, g) be a semi-Riemannian manifold of dimension dim M = n ≥ 3. We set

E = g \wedge S^2 + \frac{n-2}{2}\, S \wedge S - \kappa\, g \wedge S + \frac{\kappa^2 - \mathrm{tr}_g(S^2)}{2(n-1)}\, g \wedge g. \qquad (1.2)

It is easy to check that the tensor E is a generalized curvature tensor. Further, we define the subsets U_R and U_S of M by U_R = \{ x \in M \mid R - \frac{\kappa}{(n-1)n}\, G \neq 0 \text{ at } x \} and U_S = \{ x \in M \mid S - \frac{\kappa}{n}\, g \neq 0 \text{ at } x \}, respectively, where G = \frac{1}{2}\, g \wedge g. If n ≥ 4 then we define the set U_C ⊂ M as the set of all points of (M, g) at which C ≠ 0. We note that if n ≥ 4 then (see, e.g., [23])

U_S \cup U_C = U_R. \qquad (1.3)

An extension of the class of Einstein manifolds is formed by the quasi-Einstein, 2-quasi-Einstein and partially Einstein manifolds.
A semi-Riemannian manifold (M, g), dim M = n ≥ 3, is said to be a quasi-Einstein manifold, or a quasi-Einstein space, if

\mathrm{rank}(S - \alpha\, g) = 1 \qquad (1.4)

on U_S ⊂ M, where α is some function on U_S. It is known that every non-Einstein warped product manifold \bar{M} \times_F \tilde{N} with a 1-dimensional base manifold (\bar{M}, \bar{g}) and a 2-dimensional manifold (\tilde{N}, \tilde{g}), or an (n − 1)-dimensional Einstein manifold (\tilde{N}, \tilde{g}), \dim \bar{M} \times_F \tilde{N} = n ≥ 4, and a warping function F, is a quasi-Einstein manifold (see, e.g., [7,32]). A Riemannian manifold (M, g), dim M = n ≥ 3, whose Ricci tensor has an eigenvalue of multiplicity n − 1 is a non-Einstein quasi-Einstein manifold (cf. [22, Introduction]). We mention that quasi-Einstein manifolds arose during the study of exact solutions of the Einstein field equations and the investigation of quasi-umbilical hypersurfaces of conformally flat spaces (see, e.g., [26,32] and references therein). Quasi-Einstein hypersurfaces in semi-Riemannian spaces of constant curvature were studied, among others, in [28,38,41,57] (see also [26] and references therein). Quasi-Einstein manifolds satisfying some pseudosymmetry type curvature conditions were investigated recently in [1,7,23,30,40].
Let (M, g), dim M = n ≥ 3, be a semi-Riemannian manifold. We note that (1.4) holds at a point x ∈ U_S ⊂ M if and only if (S − α g) ∧ (S − α g) = 0 at x, i.e.,

S \wedge S - 2\alpha\, g \wedge S + \alpha^2\, g \wedge g = 0. \qquad (1.5)

From (1.5), by a suitable contraction, we get immediately

S^2 = (\kappa - (n-2)\alpha)\, S + \alpha\,((n-1)\alpha - \kappa)\, g. \qquad (1.6)

Using (1.1) we can easily check that the following equation is satisfied on any Einstein manifold (M, g):

E = 0 \qquad (1.7)

on M, where the tensor E is defined by (1.2).
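The check of (1.7) is a one-line computation; the sketch below is our verification, not part of the original text. It substitutes (1.1) into (1.2) and collects the coefficients of g ∧ g.

```latex
% With S = (kappa/n) g we have S^2 = (kappa^2/n^2) g and tr_g(S^2) = kappa^2/n, so
\begin{aligned}
E &= \Bigl(\tfrac{1}{n^2} + \tfrac{n-2}{2n^2} - \tfrac{1}{n} + \tfrac{1}{2n}\Bigr)\kappa^2\, g \wedge g \\
  &= \Bigl(\tfrac{2 + (n-2)}{2n^2} - \tfrac{1}{n} + \tfrac{1}{2n}\Bigr)\kappa^2\, g \wedge g
   = \Bigl(\tfrac{1}{2n} - \tfrac{1}{n} + \tfrac{1}{2n}\Bigr)\kappa^2\, g \wedge g = 0 .
\end{aligned}
```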
The semi-Riemannian manifold (M, g), dim M = n ≥ 3, will be called a partially Einstein manifold, or a partially Einstein space (cf. [5, Foreword], [75, p. 20]), if at every point x ∈ U_S ⊂ M its Ricci operator \mathcal{S} satisfies

\mathcal{S}^2 = \lambda\, \mathcal{S} + \mu\, \mathrm{Id}_x, \qquad (1.8)

or equivalently, S^2 = λ S + µ g at x, where λ, µ ∈ ℝ and Id_x is the identity transformation of T_x M. Evidently, (1.6) is a special case of (1.8). Thus every quasi-Einstein manifold is a partially Einstein manifold. The converse statement is not true. Contracting (1.8) we get tr_g(S^2) = λκ + nµ. This together with (1.8) yields (cf. [24, Section 5])

S^2 = \lambda\, S + \frac{\mathrm{tr}_g(S^2) - \lambda\kappa}{n}\, g. \qquad (1.9)

In particular, a Riemannian manifold (M, g), dim M = n ≥ 3, is a partially Einstein space if at every point x ∈ U_S ⊂ M its Ricci operator \mathcal{S} has exactly two distinct eigenvalues κ_1 and κ_2 with multiplicities p and n − p, respectively, where 1 ≤ p ≤ n − 1. Evidently, if p = 1 or p = n − 1, then (M, g) is a quasi-Einstein manifold.
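For the two-eigenvalue case just described, the coefficients in (1.8) can be written down explicitly; the following minimal-polynomial computation is a worked step we add for clarity (standard linear algebra, not quoted from the source).

```latex
% If the Ricci operator has exactly the eigenvalues kappa_1 (multiplicity p)
% and kappa_2 (multiplicity n-p), then (S - kappa_1 Id)(S - kappa_2 Id) = 0, so
\mathcal{S}^2 = (\kappa_1 + \kappa_2)\,\mathcal{S} - \kappa_1 \kappa_2\, \mathrm{Id},
\qquad \text{i.e.} \qquad
\lambda = \kappa_1 + \kappa_2, \qquad \mu = -\,\kappa_1 \kappa_2 .
```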
In Section 3 we present definitions of some classes of semi-Riemannian manifolds determined by curvature conditions of pseudosymmetry type. Investigations of semi-Riemannian manifolds satisfying some particular curvature conditions of pseudosymmetry type lead to Roter spaces (see Proposition 4.1). Roter spaces form an important subclass of the class of non-conformally flat and non-quasi-Einstein partially Einstein manifolds of dimension ≥ 4. A semi-Riemannian manifold (M, g), dim M = n ≥ 4, satisfying on U_S ∩ U_C ⊂ M the equation

R = \frac{\phi}{2}\, S \wedge S + \mu\, g \wedge S + \eta\, G, \qquad (1.10)

where φ, µ and η are some functions on this set, is called a Roter type manifold, or a Roter manifold, or a Roter space (see, e.g., [6, Section 15.5], [21,32,33,36]). Equation (1.10) is called a Roter equation (see, e.g., [27, Section 1]). In Section 4 we present results on such manifolds; for instance, every Roter space (M, g), dim M = n ≥ 4, satisfies (1.11). Let (M, g), dim M = n ≥ 4, be a non-partially-Einstein and non-conformally flat semi-Riemannian manifold. If its Riemann-Christoffel curvature tensor R is at every point of U_S ∩ U_C ⊂ M a linear combination of the Kulkarni-Nomizu products formed by the tensors S^0 = g and S^1 = S, …, S^{p−1}, S^p, where p is some natural number ≥ 2, then (M, g) is called a generalized Roter type manifold, or a generalized Roter manifold, or a generalized Roter type space, or a generalized Roter space. For instance, when p = 2, the tensor R is expressed on U_S ∩ U_C as a linear combination of the products S ∧ S, S ∧ S^2, S^2 ∧ S^2, g ∧ S, g ∧ S^2 and g ∧ g, with coefficient functions φ, φ_1, φ_2, µ, µ_1 and η (equation (1.12)). Because (M, g) is a non-partially-Einstein manifold, at least one of the functions µ_1, φ_1 and φ_2 is non-zero. Equation (1.12) is called a Roter type equation (see, e.g., [27, Section 1]). We refer to [27,31,32,33,40,67,68,69,70,71] for results on manifolds (hypersurfaces) satisfying (1.12).
Preliminaries.
Throughout this paper, all manifolds are assumed to be connected paracompact manifolds of class C^∞. Let (M, g), dim M = n ≥ 3, be a semi-Riemannian manifold, and let ∇ be its Levi-Civita connection and Ξ(M) the Lie algebra of vector fields on M. We define on M the endomorphisms X ∧_A Y and R(X, Y) of Ξ(M) by

(X \wedge_A Y)Z = A(Y, Z)X - A(X, Z)Y, \qquad R(X, Y)Z = \nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z - \nabla_{[X,Y]} Z,

respectively, where X, Y, Z ∈ Ξ(M) and A is a symmetric (0, 2)-tensor on M. The Ricci tensor S, the Ricci operator \mathcal{S} and the scalar curvature κ of (M, g) are defined by

S(X, Y) = \mathrm{tr}\{ Z \mapsto R(Z, X)Y \}, \qquad g(\mathcal{S}X, Y) = S(X, Y), \qquad \kappa = \mathrm{tr}_g\, S,

respectively. The endomorphism C(X, Y) is defined so that the (0, 4)-tensor G, the Riemann-Christoffel curvature tensor R and the Weyl conformal curvature tensor C of (M, g) satisfy

G(X_1, X_2, X_3, X_4) = g((X_1 \wedge_g X_2)X_3, X_4), \qquad R(X_1, X_2, X_3, X_4) = g(R(X_1, X_2)X_3, X_4),

C = R - \frac{1}{n-2}\, g \wedge S + \frac{\kappa}{(n-2)(n-1)}\, G,

where X_1, X_2, … ∈ Ξ(M). For a symmetric (0, 2)-tensor A we denote by \mathcal{A} the endomorphism related to A by g(\mathcal{A}X, Y) = A(X, Y). The (0, 2)-tensors A^p, p = 2, 3, …, are defined by A^p(X, Y) = A^{p−1}(\mathcal{A}X, Y), assuming that A^1 = A. In this way, for A = S we obtain the tensors S^p, p = 2, 3, …, assuming that S^1 = S. Let \mathcal{B} be a tensor field sending any X, Y ∈ Ξ(M) to a skew-symmetric endomorphism \mathcal{B}(X, Y), and let B be the (0, 4)-tensor associated with \mathcal{B} by

B(X_1, X_2, X_3, X_4) = g(\mathcal{B}(X_1, X_2)X_3, X_4). \qquad (2.1)

The tensor B is said to be a generalized curvature tensor if the following two conditions are fulfilled: B(X_1, X_2, X_3, X_4) = B(X_3, X_4, X_1, X_2) and B(X_1, X_2, X_3, X_4) + B(X_2, X_3, X_1, X_4) + B(X_3, X_1, X_2, X_4) = 0. For \mathcal{B} as above, let B be again defined by (2.1). We extend the endomorphism \mathcal{B}(X, Y) to a derivation \mathcal{B}(X, Y)· of the algebra of tensor fields on M, assuming that it commutes with contractions and \mathcal{B}(X, Y) · f = 0 for any smooth function f on M. Now for a (0, k)-tensor field T, k ≥ 1, we can define the (0, k + 2)-tensor B · T by

(B \cdot T)(X_1, \dots, X_k; X, Y) = -T(\mathcal{B}(X, Y)X_1, X_2, \dots, X_k) - \cdots - T(X_1, \dots, X_{k-1}, \mathcal{B}(X, Y)X_k).

If A is a symmetric (0, 2)-tensor then we define the (0, k + 2)-tensor Q(A, T) by

Q(A, T)(X_1, \dots, X_k; X, Y) = -T((X \wedge_A Y)X_1, X_2, \dots, X_k) - \cdots - T(X_1, \dots, X_{k-1}, (X \wedge_A Y)X_k).

In this manner we obtain the (0, 6)-tensors B · B and Q(A, B).
Substituting in the above formulas \mathcal{B} = R or \mathcal{B} = C, and T = R, C or S, we obtain the tensors R · R, R · C, C · R, C · C, R · S and C · S. For a symmetric (0, 2)-tensor A and a (0, k)-tensor T, k ≥ 2, we define their Kulkarni-Nomizu product A ∧ T (see, e.g., [23, Section 2]). It is obvious that the following tensors are generalized curvature tensors: R, C and A ∧ B, where A and B = T are symmetric (0, 2)-tensors. We also have standard identities for these products (see, e.g., [23]). By an application of (2.4)(a) we obtain certain identities on M; further, by making use of (2.2), (2.3) and (2.5), we immediately get additional relations. From (2.4)(a) it follows immediately that Q(g, g ∧ g) = 0. Thus we have (2.8), where the tensor E is defined by (1.2).
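For the reader's convenience we record the local-component form of the Kulkarni-Nomizu product of two symmetric (0, 2)-tensors A and B; the sign convention below is the one commonly used in this literature, chosen so that G = (1/2) g ∧ g has components g_{hk}g_{ij} − g_{hj}g_{ik}, and should be checked against [23] before use.

```latex
(A \wedge B)_{hijk} = A_{hk} B_{ij} + A_{ij} B_{hk} - A_{hj} B_{ik} - A_{ik} B_{hj}.
```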
From (2.8) we easily get further consequences (see also [23, Lemma 2.2(iii)] and references therein).
Let A be a symmetric (0, 2)-tensor and T a (0, k)-tensor, k = 2, 3, . . .. The tensor Q(A, T ) is called the Tachibana tensor of A and T , or the Tachibana tensor for short (see, e.g., [34]). Using the tensors g, R and S we can define the following (0, 6)-Tachibana tensors: Q(S, R), Q(g, R), Q(g, g ∧ S) and Q(S, g ∧ S). We can check, by making use of (2.4)(a) and (2.5), that other (0, 6)-Tachibana tensors constructed from g, R and S may be expressed by the four Tachibana tensors mentioned above or vanish identically on M.
Let A be a symmetric (0, 2)-tensor on a semi-Riemannian manifold (M, g), dim M = n ≥ 3. We denote by U_A the set of points of M at which A \neq \frac{\mathrm{tr}_g(A)}{n}\, g.
This, by suitable contractions, yields the required relations, respectively. From (2.20), by symmetrization in l, j, we obtain the symmetrized identity; from (2.14), this completes the proof of (ii).
A semi-Riemannian manifold (M, g), dim M = n ≥ 4, is said to be Weyl-pseudosymmetric if the tensors R · C and Q(g, C) are linearly dependent at every point of M [23,26]. This is equivalent on U_C to

R \cdot C = L_1\, Q(g, C), \qquad (3.8)

where L_1 is some function on U_C. We can easily check that on every Einstein manifold (M, g), dim M ≥ 4, (3.8) turns into R · R = L_1 Q(g, R). For a presentation of results on the problem of the equivalence of pseudosymmetry, Ricci-pseudosymmetry and Weyl-pseudosymmetry we refer to [26, Section 4].
Warped product manifolds \bar{M} \times_F \tilde{N} of dimension ≥ 4, satisfying on U_C ⊂ \bar{M} \times_F \tilde{N} the condition

R \cdot R - Q(S, R) = L\, Q(g, C), \qquad (3.10)

where L is some function on U_C, were studied among others in [10]. In that paper, necessary and sufficient conditions for \bar{M} \times_F \tilde{N} to satisfy (3.10) are given. Moreover, it was proved there that any 4-dimensional warped product manifold \bar{M} \times_F \tilde{N} with a 1-dimensional base (\bar{M}, \bar{g}) satisfies (3.10) [10, Theorem 4.1].
If we set Λ = 0 in (4.2) then we obtain the line element of the Reissner-Nordström spacetime, see, e.g., [58, Section 9.2] and references therein. The Reissner-Nordström spacetime seems to be the oldest example of a Roter warped product space.
(iii) In [39] a particular class of Roter warped product spaces was determined such that every manifold of that class admits a non-trivial geodesic mapping onto some Roter warped product space. Moreover, both geodesically related manifolds are pseudosymmetric of constant type.
(iii) An algebraic classification of Roter type 4-dimensional spacetimes is given in [8].
(iv) Some comments on pseudosymmetric manifolds (also called Deszcz symmetric spaces), as well as Roter spaces, are given in [9, Section 1] (see also [8, Remark 2 (iii)], [39, Remark 2.1 (iii)]): "From a geometric point of view, the Deszcz symmetric spaces may well be considered to be the simplest Riemannian manifolds next to the real space forms." and "From an algebraic point of view, Roter spaces may well be considered to be the simplest Riemannian manifolds next to the real space forms." For further comments we refer to [77].
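For the reader's convenience we recall the shape in which the Roter equation is usually stated in this literature; we assume here that (1.10), which is referenced throughout but displayed in an earlier section, has this standard form:

R = (φ/2) S ∧ S + μ g ∧ S + (η/2) g ∧ g on U_S ∩ U_C,

where φ, μ and η are some functions on U_S ∩ U_C. A semi-Riemannian manifold satisfying such an equation is called a Roter space, or a Roter type manifold.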
We finish this section with the following recent result on Roter spaces.
Warped product manifolds with 2-dimensional base manifold
where τ is some function on U_S ∩ U_C. Let M̄ ×_F Ñ be the warped product manifold of a 2-dimensional semi-Riemannian manifold (M̄, ḡ) and an (n − 2)-dimensional semi-Riemannian manifold (Ñ, g̃), n ≥ 4, with a warping function F, and let (Ñ, g̃) be a space of constant curvature when n ≥ 5. Then (5.1) holds on U_S ∩ U_C ⊂ M̄ ×_F Ñ.
It is well-known that the Cartesian product S^1(r_1) × S^{n−1}(r_2) of spheres S^1(r_1) and S^{n−1}(r_2), n ≥ 4, and more generally the warped product manifold S^1(r_1) ×_F S^{n−1}(r_2) of spheres S^1(r_1) and S^{n−1}(r_2), n ≥ 4, with a warping function F, is a conformally flat manifold. (ii) As it was stated in [56, Example 3.2], the Cartesian product S^p(r_1) × S^{n−p}(r_2) of spheres S^p(r_1) and S^{n−p}(r_2) such that 2 ≤ p ≤ n − 2 and (n − p − 1) r_1^2 ≠ (p − 1) r_2^2 is a non-conformally flat and non-Einstein manifold satisfying the Roter equation (1.10) on U_S ∩ U_C = S^p(r_1) × S^{n−p}(r_2). (iii) [42, Example 4.1] The warped product manifold S^p(r_1) ×_F S^{n−p}(r_2), 2 ≤ p ≤ n − 2, with some special warping function F, satisfies the Roter equation (1.10) on U_S ∩ U_C ⊂ S^p(r_1) ×_F S^{n−p}(r_2). Thus some warped product manifolds S^2(r_1) ×_F S^{n−2}(r_2) are Roter spaces. (iv) Properties of pseudosymmetry type of warped products with a 2-dimensional base manifold, a warping function F and an (n − 2)-dimensional fibre, n ≥ 4, assumed to be of constant curvature when n ≥ 5, were determined in [32, Sections 6 and 7]. Evidently, the warped product manifolds S^2(r_1) ×_F S^{n−2}(r_2), n ≥ 4, are such manifolds. Let g, R, S, κ and C denote the metric tensor, the Riemann-Christoffel curvature tensor, the Ricci tensor, the scalar curvature and the Weyl conformal curvature tensor of S^2(r_1) ×_F S^{n−2}(r_2), respectively. From [32, Theorem 7.1] it follows that on the set V of all points of U_S ∩ U_C ⊂ S^2(r_1) ×_F S^{n−2}(r_2) at which the tensor S^2 is not a linear combination of the tensors S and g, the Weyl tensor C is expressed by a formula involving some function λ on V. This, by (2.2), turns into the form (1.12); thus (1.12) is satisfied on V. Moreover, (1.10) holds at all points of (U_S ∩ U_C) \ V at which (1.4) is not satisfied. From Lemma 2.2 it follows that (5.2) holds at all points of U_S ∩ U_C ⊂ S^2(r_1) ×_F S^{n−2}(r_2), n ≥ 4, at which (1.4) is not satisfied. Finally, in view of Theorem 2.4, we can state that (5.1) holds on U_S ∩ U_C.
Hypersurfaces in conformally flat spaces
Let M, dim M = n ≥ 4, be a hypersurface isometrically immersed in a semi-Riemannian conformally flat manifold N, dim N = n + 1. Let g_{ad}, H_{ad}, G_{abcd} = g_{ad} g_{bc} − g_{ac} g_{bd} and C_{abcd} be the local components of the metric tensor g, the second fundamental tensor H, the (0, 4)-tensor G and the Weyl conformal curvature tensor C of M, respectively. As it was stated in [46, eq. (20)] (see also [51]), the tensor C can be expressed by (7.1), where ε = ±1, tr(H) = g^{ad} H_{ad}, H^2_{ad} = g^{bc} H_{ab} H_{cd} and μ is some function on M. From (7.1), by contraction, we easily get

μ = (ε/((n − 2)(n − 1))) ((tr(H))^2 − tr(H^2)),   (7.2)

where tr(H^2) = g^{ad} H^2_{ad}. Now (7.1) and (7.2) yield (7.3). If H = (tr(H)/n) g at a point x ∈ M, i.e., M is umbilical at x, then from (7.3) it follows immediately that the tensor C vanishes at x. If at a non-umbilical point x ∈ M we have rank(H − α g) = 1 for some α ∈ R, i.e., M is quasi-umbilical at x, then in view of Proposition 2.1 (i) the tensor C vanishes at x. Conversely, if at a non-umbilical point x ∈ M the tensor C vanishes, then in view of Proposition 2.1 (ii) we have rank(H − α g) = 1 for some α ∈ R. Thus we can present [46, Theorem 4.1] in the following form. Remark 7.2. Let M, dim M = n ≥ 4, be a hypersurface isometrically immersed in a semi-Riemannian conformally flat manifold N, dim N = n + 1.
(ii) The result presented above, i.e., that if (7.4) is satisfied at every point of U_C ⊂ M then (3.9) holds on this set, was already obtained in [50, Proposition 3.1]. We mention that Proposition 3.1 of [50] was proved without application of [63, Theorem 3.1 (i)].
(iv) Recently, curvature properties of pseudosymmetry type of hypersurfaces isometrically immersed in semi-Riemannian conformally flat manifolds were investigated in [53] and [64].
Let N^{n+1}_s(c), n ≥ 4, be a semi-Riemannian space of constant curvature with signature (s, n + 1 − s), where c = κ/(n(n + 1)) and κ is its scalar curvature. Let M, dim M = n ≥ 4, be a connected hypersurface isometrically immersed in N^{n+1}_s(c). We denote by U_H ⊂ M the set of all points at which the tensor H^2 is not a linear combination of the metric tensor g and the second fundamental tensor H of M. We have U_H ⊂ U_S ∩ U_C ⊂ M (see, e.g., [29, 34, 35] or [55, p. 137]). Further, we assume that the following conditions are satisfied on U_H ⊂ M:

H^3 = tr(H) H^2 + ψ H + ρ g and C · C = Q(g, T),   (7.9)

where T is a generalized curvature tensor and ψ and ρ are some functions on U_H. Then, in view of [35, Theorem 4.5], (7.10) holds on U_H, where λ_1 is some function on this set. Using (1.2), (7.9) and (7.10), we immediately get on U_H an identity (7.11) expressing C · C through the Tachibana tensor Q(g, E), where the tensor E is defined by (1.2) and λ is some function on this set. In addition, if we assume that (3.9) holds on U_H, then (3.9) and (7.11) give

((κ + 2εψ)/(n − 1) − κ/(n + 1) − L_C) C = ((n − 3)/((n − 2)^2 (n − 1))) E + (λ_2/2) g ∧ g

on U_H, where λ_2 is some function on this set. We note that the last equation, by a contraction with the metric tensor g, yields λ_2 = 0. Thus

((κ + 2εψ)/(n − 1) − κ/(n + 1) − L_C) C = ((n − 3)/((n − 2)^2 (n − 1))) E

on U_H. | 2018-12-20T21:22:39.665Z | 2006-12-01T00:00:00.000 | {
"year": 2006,
"sha1": "f48b6c7c78e88b76bf06ab7677b3759a7387081f",
"oa_license": "CCBY",
"oa_url": "https://csmj.mosuljournals.com/article_164054_46d6bf9bd2c8d4597b059d5c795cc6d0.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8e8c0862dc53777951d1f73f0326f244f88d7795",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
250698709 | pes2o/s2orc | v3-fos-license | A Novel Neural Network Training Method for Autonomous Driving Using Semi-Pseudo-Labels and 3D Data Augmentations
Training neural networks to perform 3D object detection for autonomous driving requires a large amount of diverse annotated data. However, obtaining training data of sufficient quality and quantity is expensive and sometimes impossible due to human and sensor constraints. Therefore, a novel solution is needed to extend current training methods, overcome this limitation, and enable accurate 3D object detection. Our solution to the above-mentioned problem combines semi-pseudo-labeling and novel 3D augmentations. To demonstrate the applicability of the proposed method, we have designed a convolutional neural network for 3D object detection that can significantly extend the detection range beyond the training data distribution.
Introduction
Object detection is a crucial part of autonomous driving software, since increasingly complex layers are built on top of the perception system, which itself relies on fast and accurate obstacle detections. Object detection is typically performed by convolutional neural networks, which are trained by means of supervised learning. Supervised learning is a method in which a model is fed input data and its main objective is to learn a function that maps the input data to the corresponding output. Since convolutional neural networks, the best-performing models in the visual domain (except in the large-data regime, where vision transformers [7] excel), are heavily overparameterized, a large amount of annotated data is needed to learn the mapping function. Therefore, a substantial amount of manual effort is required to annotate data of sufficient quality and quantity, which is expensive and error-prone. In addition, obtaining precise ground truth data is sometimes impossible due to human or sensor constraints. For example, the detection range of LiDARs limits the annotation of distant objects, whose presence must nevertheless be known by an autonomous driving system because positions change rapidly on a highway. Radars could overcome this kind of limitation, but they have a narrower field of view, and only a subset of the relevant object categories can be detected with them. A human limitation is, for example, the inability to accurately estimate the distance of objects in 3D from 2D images without any 3D cue, e.g., point clouds collected by a LiDAR or radar detections. Consequently, a novel solution is needed to extend current training methods, overcome this limitation, and enable accurate 3D object detection.
Several approaches have been developed to facilitate neural network training. One of the most popular solutions is transfer learning, where a neural network is trained on a particular dataset, such as ImageNet [5], and then fine-tuned on another dataset (e.g., a model trained to recognize cars can be trained to classify trucks using transfer learning). Self-supervised learning, which utilizes unlabeled data to train a model on proxy tasks and then fine-tunes it on a downstream task in a supervised manner, has resulted in breakthroughs in language modeling. Pseudo-labeling is a simple solution that uses the model's own predictions as true labels during training. However, none of these solutions helps the model to produce predictions that are not part of the training distribution.
The main motivation of this work is to develop a training method which massively overcomes the limitations of the training dataset and so extends the prediction capabilities of a neural network. To summarize, this paper makes the following three main contributions:
- We introduced semi-pseudo-labeling (SPL) as a method where pseudo-labels are generated by a neural network trained on a simpler task and utilized during the training of another network performing a more complex task.
- We extended several conventional 2D data augmentation methods to work in 3D.
- We described a training method that allows neural networks to predict certain characteristics outside the training distribution using semi-pseudo-labeling and 3D data augmentations.
Related Work
The concept of pseudo-labeling was introduced by Lee in [9] as a simple and efficient self-supervised method for deep neural networks. The main idea of pseudo-labeling is to consider the predictions of a trained model as ground truth. Unlabeled data, which is typically easy to obtain, can be annotated using the trained model's predictions. Then, the same model is retrained on the labeled and pseudo-labeled data simultaneously. Our proposed method is based on this concept, but there is a fundamental difference between the solutions. Pseudo-labeling generates labels for the same task using the same model, while our semi-pseudo-labeling method utilizes pseudo-labels generated by a different model for a more complex task. In [4], Chen generated pseudo-labels for object detection on a dynamic vision sensor using a convolutional neural network trained on an active pixel sensor. The main difference compared to our solution is that the pseudo-labels in that paper are used for the same task, namely two-dimensional bounding box detection of cars, on different sensor modalities. The solution described in [19] also uses pseudo-labels for training object detection neural networks. However, the invention described in the patent uses regular pseudo-labeling to train a neural network to perform the same task, namely 2D object detection, as opposed to our solution, where the tasks are not the same. In addition, that solution requires region proposal networks, which implies a two-stage network architecture that might not fulfill the real-time criterion, whereas our 3D object detection network uses a single-stage architecture that utilizes semi-pseudo-labeling during training. Watson et al. used pseudo-labeling in [17] for generating and augmenting their data labeling method. Their proposed solution created pseudo-labels for unlabeled data, while our method enables the use of annotated data created for a simpler task as pseudo-labels and does not exclusively rely on unlabeled data. Transfer learning [1] is the process whereby a model is trained to perform a specific task and its knowledge is utilized to solve another (related) problem. Transfer learning involves pretraining a model (typically on a large-scale dataset) and then customizing it to a given task by either reusing the trained weights or fine-tuning the model by adding an additional classifier on top of frozen weights. Transfer learning can effectively transfer knowledge between related problems, which makes our proposed method, in a sense, an extended version of transfer learning. However, we utilize semi-pseudo-labels to perform a more complex task (e.g., 3D object detection with 2D pseudo-labels), which might not be solvable using regular transfer learning. In addition, our solution enables simultaneous learning of different tasks, as opposed to transfer learning.
Data augmentation [18] is a standard technique for enhancing training data and preventing overfitting using geometric transformations, color space augmentations, random erasing, mixing images, etc. Most data augmentation techniques operate in image space (2D) [11], but recent work has started to extend their domain to 3D [18], [11]. To the best of our knowledge, none of these solutions introduced zoom augmentation in 3D using a virtual camera, as our work proposes. The closest solution that tries to solve the limited perception range is described in [14]. The method in that paper proposes to break the entire image down into multiple image patches, each containing at least one entire car and exhibiting limited depth variation. During inference, a pyramid-like tiling of images is generated, which increases the running time. In addition, the perception range of the approach described in that paper did not exceed 50 meters.
A Novel Training Method with Semi-Pseudo-Labeling and 3D Augmentations
Two methods have been developed for overcoming training dataset limitations, namely semi-pseudo-labeling (SPL) and 3D augmentations (zoom and shift).
SPL is first introduced as a general, abstract description. Then, the concept and its combination with 3D augmentations are detailed using a concrete example.
Semi-Pseudo-Labeling
The main objective of supervised learning is to define a mapping from the input space to the output by learning the values of the parameters that minimize the error of the approximation function [6], formally

θ* = arg min_θ L(Y, M(X; θ)),

where L is an arbitrary loss function, Y is the predictable target, X is the input, and M is the model parameterized by θ. For training a model using supervised learning, a training set is required:

D = {(x_i, y_i) | i = 1, ..., N}.

The pseudo-labeling method introduces another dataset in which labels for an unlabeled dataset are generated by the trained model M(X; θ):

D_P = {(x_i^U, ŷ_i^U) | i = 1, ..., N_U}, where ŷ_i^U = M(x_i^U; θ)

is the pseudo-label generated by the trained model.
The final model is trained on the union of the annotated and pseudo-labeled datasets. The main objective of semi-pseudo-labeling is to utilize pseudo-labels generated by a model trained on a simpler task for training another model performing a more complex task. Both the simple and the complex task have annotated training sets for their specific tasks:

D_S = {(x_i^SL, y_i^SL) | i = 1, ..., N_S},   D_C = {(x_i^CL, y_i^CL) | i = 1, ..., N_C}.

The main differentiator between the regular pseudo-labeling and the semi-pseudo-labeling method is that the simple model M_S does not generate pseudo-labels on unlabeled data (although that is a viable option and might be beneficial in some cases). Rather, pseudo-labels are generated using the input data of the complex model M_C. In this way, the label space of the complex model can be extended, as can be seen in (7):

D_C^SPS = {(x_i^CL, y_i^CL ∪ y_i^SPS) | y_i^SPS = M_S(x_i^CL; θ_S)},   (7)

where x_i^CL ∈ X_CL is the input sample for which annotations for the complex task are available, y_i^CL ∈ Y_CL is the ground truth label for the complex task, and y_i^SPS ∈ M_S(X_CL; θ_S) is the semi-pseudo-label generated by the simple model on the complex task's input. The final model M_C(X_CL; θ_C) is trained on the semi-pseudo-labeled dataset D_C^SPS.
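As a minimal illustration, the label-space extension in (7) could be implemented along the following lines. This is a hypothetical sketch: Label, simple_model.detect, and the detection fields are illustrative names, not the paper's actual code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Label:
    box_2d: tuple                    # (x, y, w, h) in image space
    category: int
    box_3d: Optional[dict] = None    # {"center": P, "dims": D, "quat": O}; None for SPLs
    is_semi_pseudo: bool = False     # True -> no 3D ground truth available

def build_spl_dataset(samples, simple_model, score_thr=0.5):
    """Extend the complex task's 3D-annotated samples with 2D semi-pseudo-labels."""
    dataset = []
    for image, gt_labels in samples:              # gt_labels: list of Label with 3D boxes
        spl = [Label(box_2d=det.box, category=det.cls, is_semi_pseudo=True)
               for det in simple_model.detect(image) if det.score >= score_thr]
        # Duplicates w.r.t. the projected 3D boxes are removed later (see Section 3.3).
        dataset.append((image, gt_labels + spl))
    return dataset
```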
3D Augmentations
Issues with Vanilla Zoom Augmentation
For training a network predicting 3D attributes of dynamic objects, accurate 3D position, size, and orientation data is required in model space. The principal problem to solve for overcoming the dataset limitations is handling image-visible, non-annotated objects (red squares in Fig. 1) within the required operational distance range (enclosed by red dashed lines in Fig. 1). In Fig. 1, the gray dashed area is the non-annotated region and red squares are non-annotated objects, while the green area is the annotated region and blue squares are annotated objects. Red dashed horizontal lines represent the required operational domain, in which the developed algorithm has to detect all the objects, while blue dashed horizontal lines are the distance limits of the annotated data. The three columns represent the three options during zoom augmentation: the first case (left) is the non-augmented, original version; in the second case (center) the input image is downscaled, mimicking farther objects in image space, so the corresponding ground truth in model space should be adjusted consistently; in the third case (right) the input image is upscaled, bringing the objects closer to the camera. The figure highlights the inconsistency of applying various zoom levels, as annotated regions in the transformed cases (second, third) overlap with the original non-annotated region. Figure 1 thus presents the inconsistencies of applying vanilla zoom augmentation. The green area represents the region where all image-visible objects are annotated, while the gray dashed one is where our annotation is imperfect and contains false negatives. When applying the vanilla zoom augmentation technique to extend the operational domain of the developed algorithm, discrepancies may arise: when the zoom-augmented dataset contains original and downscaled images (cases #1 and #2 in Fig. 1), the ground truth frames contradict each other. In case #2 it is required to detect objects beyond the original ground truth limit (upper blue dashed line), while in case #1 they cannot be utilized in the loss function, since there is no available information on even the existence of the red object. To overcome these limitations and make zoom augmentation viable in our case, additional information is required to fill in the missing data, i.e., non-annotated objects at least in image space. The missing data could be filled in with human supervision, but this is infeasible since it does not scale. Pseudo-labeling is a promising solution. However, in our case the whole 3D information cannot be recovered, but 2D information alone is sufficient to overcome the above-mentioned issues. Therefore, 3D zoom augmentation becomes a viable solution for widening the limits of the dataset and extending the operational domain of the developed detection algorithm. A pretrained, state-of-the-art 2D bounding box network can be used to detect all image-visible objects.
Improving over Existing Augmentations
Most 2D data augmentations are easy to generalize to three dimensions. However, zooming is not trivial, since changes in the image scale modify the position and egocentric orientation of the annotations in 3D space too. A 3D zoom augmentation using a virtual camera has been developed to resolve this issue. The method consists of two main steps. The first is to either zoom in or zoom out of the image; in this way, it can be emulated that an object moves either closer to or farther from the camera. The second step is to modify the camera matrix to follow the 2D transformations and to keep the 3D annotations intact. This can be performed by linear transformations and a virtual camera that adjusts its principal point and focal length considering the original camera matrix and the 2D scaling transformation. Changing the camera intrinsic parameters mimics the change of the egocentric orientation of the given object, but its apparent orientation, which is the regressed parameter during training, remains the same.
The 3D zoom augmentation can be implemented as follows. As a first step, a scaling factor between an empirically chosen lower and upper bound is randomly drawn. If the lower and upper bounds are smaller than one, a zoom-out operation is performed. If the lower and upper bounds are greater than one, a zoom-in operation is performed. If the lower bound is less than one and the upper bound is greater than one, either a zoom-in or a zoom-out is performed. The 2D part of the zoom works as in the traditional case, where one zooms into or out of the image using the above-mentioned scaling factor (in the case of zooming out, the image is padded with zeros to retain the original image size). Then, the camera matrix corresponding to the image can be adjusted by scaling the focal length components with the randomly drawn scaling factor. If the 2D image is also shifted besides the zoom operation, the camera matrix can be adjusted by shifting the principal point components. Therefore, the augmentations for a corresponding image and its 3D labels are performed in a consistent manner, as the sketch below illustrates.
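A minimal sketch of this consistent image/intrinsics update is shown below. It assumes a pinhole camera matrix K and performs the scaling about the principal point, so that, as described above, only the focal length components are scaled and the principal point follows the optional 2D shift; the function and parameter names are ours, not the authors'.

```python
import numpy as np
import cv2

def zoom_3d(image, K, s_min=0.5, s_max=2.0, shift=(0.0, 0.0)):
    """3D zoom augmentation: rescale the image about the principal point and
    update the intrinsics so that the 3D annotations remain valid."""
    s = np.random.uniform(s_min, s_max)       # s < 1: zoom out, s > 1: zoom in
    h, w = image.shape[:2]
    cx, cy = K[0, 2], K[1, 2]
    tx, ty = shift
    # Affine map: scale by s about (cx, cy), then translate by (tx, ty).
    M = np.float32([[s, 0, (1 - s) * cx + tx],
                    [0, s, (1 - s) * cy + ty]])
    out = cv2.warpAffine(image, M, (w, h))    # zero-padded when zooming out
    K_new = K.copy()
    K_new[0, 0] *= s                          # fx follows the zoom factor
    K_new[1, 1] *= s                          # fy follows the zoom factor
    K_new[0, 2] += tx                         # principal point follows the shift
    K_new[1, 2] += ty
    return out, K_new
```

With this update, any 3D point projected through K_new into the augmented image lands exactly where the affine warp moved its original projection, so the 3D labels need no modification.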
Applying a random shift to the image enforces the decoupling of image position and object distance. Due to this augmentation, the detection system can avoid overfitting to specific camera intrinsics.
An Example of Training with Semi-Pseudo-Labeling and 3D Augmentations
The semi-pseudo-labeling method combined with 3D augmentations was used to train a 3D object detection neural network to perform predictions that are outside the training data distribution. Figure 2 describes the steps of applying the SPL method. The requirement was to extend the detection range of an autonomous driving system to 200 meters, while the distance range of the annotated data did not exceed 120 meters. In addition, some detectable classes were missing from the training data. The FCOS [16] 2D bounding box detector was chosen as the simple model M_S(X_SL; θ_S), where the input space X_SL contains HD-resolution stereo image pairs and the label space C_S consists of (x, y, w, h, o, c_1, ..., c_n) tuples, where x is the x coordinate of the bounding box center in image space, y is the y coordinate of the bounding box center in image space, w is the width of the bounding box in image space, h is the height of the bounding box in image space, o is the objectness score, and c_i is the probability that the object belongs to the i-th category.
The model M_S performed 2D object detection on the 3D-annotated dataset, whose input space in our case is the same as X_SL, i.e., HD images. The resulting 2D detections revealed distant objects that are not annotated with 3D bounding boxes; these were added as semi-pseudo-labels. Finally, the 3D object detector was trained on the combination of the 3D-annotated data and the semi-pseudo-labeled 2D bounding boxes. The label space of the 3D detector consists of (x, y, w, h, o, c_1, ..., c_n, P, D, O) tuples, where x is the x coordinate of the bounding box center in image space, y is the y coordinate of the bounding box center in image space, w is the width of the bounding box in image space, h is the height of the bounding box in image space, o is the objectness score, c_i is the probability that the object belongs to the i-th category, P is a three-dimensional vector of the center point of the 3D bounding box in model space, D is a three-dimensional vector containing the dimensions (width, height, length) of the 3D bounding box, and O is a four-dimensional vector of the orientation of the 3D bounding box represented as a quaternion.
A deduplication algorithm is required to avoid double annotations that are included both in the semi-pseudo-labeled dataset and in the 3D-annotated ground truth. This post-processing step can be executed by examining the intersection over union (IoU) of the semi-pseudo-labeled annotations and the 2D projections of the 3D bounding boxes, as sketched below. If the ratio exceeds a threshold, the pseudo-labeled annotation should be filtered out.
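A minimal sketch of the deduplication step follows; boxes are assumed here to be in (x1, y1, x2, y2) corner format for brevity, and all names are illustrative.

```python
def iou_2d(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def deduplicate(spl_boxes, projected_3d_boxes, iou_thr=0.5):
    """Drop semi-pseudo-labels that overlap a projected 3D ground-truth box."""
    return [b for b in spl_boxes
            if all(iou_2d(b, g) < iou_thr for g in projected_3d_boxes)]
```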
Baseline Neural Network and Training
We have developed a simple single-stage object detector based on the YOLOv3 [12] convolutional neural network architecture, which utilized our semi-pseudo-labeling method and 3D data augmentations during its training. The simple architecture was a design choice made for two reasons. First, the model has to be lightweight in order to be able to run in real time in a computationally constrained environment (i.e., in a self-driving car). Second, a simple model facilitates the benchmarking of the effects of the proposed methods. (Fig. 2: SPL applied in 3D object detection using 2D detection as the simple task.) As the first step of the training, the input image is fed to an Inception-ResNet [15] backbone. Then, the resulting embedding is passed to a Feature Pyramid Network [10]. The head is adapted from the YOLOv3 paper and is extended with channels that are responsible for predicting the 3D characteristics mentioned above.
The neural network has been trained using multitask learning [2]; 2D (using the previously generated semi-pseudo-labels) and 3D detection are learned in parallel. Instead of directly learning the 3D center point of the cuboid, the network was designed to predict the 2D projection of the center of the 3D cuboid. The center point of the 3D bounding box can later be reconstructed from the depth and its 2D projection. Finally, the dimension prediction part of the network uses priors (i.e., precomputed category averages), and only the differences from these statistics are predicted instead of directly regressing the dimensions; see the sketch after this paragraph. This approach was inspired by the YOLOv3 architecture, which uses a similar solution for the bounding box width and height formulation. 3D zoom augmentation was performed during the training, where the lower and upper bounds of the scaling factor hyperparameters were set to 0.5 and 2.0, respectively.
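The two decoding steps described above can be sketched as follows, assuming a pinhole camera with intrinsics K and per-category dimension priors. The exponential residual formulation mirrors the YOLOv3-style width/height decoding and is our assumption, since the paper does not spell out the exact parameterization.

```python
import numpy as np

def reconstruct_center_3d(uv, depth, K):
    """Lift the predicted 2D projection (u, v) of the cuboid center back to 3D."""
    u, v = uv
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray with unit z-component
    return depth * ray                               # camera-space 3D center

def decode_dimensions(pred_residual, category, priors):
    """Decode (w, h, l) from residuals w.r.t. precomputed category averages."""
    return priors[category] * np.exp(pred_residual)  # YOLOv3-style formulation
```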
Loss Functions
The label space of semi-pseudo-labels is more restricted than the 3D label space, since SPLs (i.e., 2D detections) do not contain 3D characteristics. The ground truth was therefore extended with a boolean flag that indicates whether the annotated object is a semi-pseudo-label or not. This value was used in the loss function to mask out the 3D loss terms in the case of semi-pseudo-labels, so as not to penalize the weights corresponding to 3D properties during backpropagation when no ground truth is known. Due to this solution, the single-stage architecture, and the label space representation described in Section 3.3, we were able to simultaneously train the neural network to detect objects in 2D and 3D space.
As mentioned in Section 3.3, the training of the neural network has been framed as a multitask-learning problem. The loss function consists of two parts, the 2D and the 3D loss terms. The loss function for the 2D properties is adapted from the YOLO paper [12]. For the 3D term, the loss has been lifted to 3D instead of calculating separate losses for individual terms (e.g., the 2D projection of the cuboid center point, or the orientation). The 3D loss is calculated by reconstructing the bounding cuboid in 3D and then calculating the L2 loss between the predicted and ground truth corner points of the cuboid; a sketch is given below. In addition, the method described in [13] has been utilized to disentangle the loss terms. As mentioned before, a masking solution has been utilized to avoid penalizing the network when predicting 3D properties for semi-pseudo-labels that do not have 3D annotations. The final loss is the sum of the 2D and 3D losses.
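A sketch of the masked 3D corner loss under the stated assumptions is given below (PyTorch-style). Here quaternion_to_matrix is taken from pytorch3d.transforms, but any equivalent quaternion-to-rotation helper works, and the dictionary layout of pred/gt is illustrative.

```python
import torch
from pytorch3d.transforms import quaternion_to_matrix

def corners_from_box(center, dims, quat):
    """Build the 8 corners (B, 8, 3) of each cuboid from center, dims and quaternion."""
    signs = torch.tensor([[x, y, z] for x in (-0.5, 0.5)
                                    for y in (-0.5, 0.5)
                                    for z in (-0.5, 0.5)],
                         device=center.device)           # (8, 3) unit-box template
    corners = signs.unsqueeze(0) * dims.unsqueeze(1)     # (B, 8, 3), scaled
    R = quaternion_to_matrix(quat)                       # (B, 3, 3)
    return corners @ R.transpose(1, 2) + center.unsqueeze(1)

def loss_3d(pred, gt, is_spl):
    """L2 corner loss, masked out for semi-pseudo-labels without 3D ground truth."""
    pc = corners_from_box(pred["center"], pred["dims"], pred["quat"])
    gc = corners_from_box(gt["center"], gt["dims"], gt["quat"])
    per_obj = ((pc - gc) ** 2).sum(dim=(1, 2))           # (B,)
    mask = (~is_spl).float()                             # 0 for SPL entries
    return (per_obj * mask).sum() / mask.sum().clamp(min=1.0)
```

The final loss would then be the 2D YOLO-style loss plus this masked 3D term, so SPL entries contribute only to the 2D part.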
Experiments
We have conducted experiments with the neural network described in Section 3.3 on a publicly available dataset as well as on internal data. The main goal of the experiments was not to compete with state-of-the-art solutions but rather to validate the viability of the proposed semi-pseudo-labeling and 3D augmentation methods. Therefore, the baseline is a model trained using neither semi-pseudo-labeling nor 3D augmentations.
Argoverse
Argoverse [3] is a collection of two datasets designed to facilitate autonomous vehicle machine learning tasks. The collected dataset consists of 360-degree camera images and long-range LiDAR point clouds recorded in an urban environment. Since the perception range of the LiDAR used for ground truth generation is 200 meters, the Argoverse dataset is suitable for validating our methods for enabling long-range camera-only detections. However, the LiDAR point cloud itself was not used as an input for the model; only camera frames and the corresponding 3D annotations were shown to the neural network. In order to enable semi-pseudo-labeling, the two-dimensional projections of the 3D annotations were calculated, and an FCOS [16] model was run on the Argoverse images to obtain 2D detections of unannotated objects. Finally, the deduplication algorithm described in Section 3.3 was executed to avoid objects being contained multiple times in the dataset.
The performance of the models has been measured both in image and model space. Image-space detections indicate the projected 2D bounding boxes of 3D objects, while model space is represented as Bird's-Eye-View (BEV) and is used for measuring detection quality in 3D space. Table 1 shows the difference between the performance of the baseline model and a model trained using our novel training method for the category 'Car'. A solid improvement in both 2D and BEV metrics can be observed. Since the category 'Car' is highly over-represented in the training data, with a small number of image-visible but unannotated objects (these objects are located in the far region), the performance improvement is not as visible as in the case of other, less frequent object categories. The reason for the low values of the BEV metrics is that the ground truth-prediction assignment happens using the Hungarian algorithm [8] based on the intersection over union metric (the IoU threshold is set to 0.5), as sketched after this paragraph. Since the longitudinal error of the predictions increases with the detection distance, the bounding box association in BEV space might fail even though the image-space detection and association were successful. Figure 4 depicts some example detections on the Argoverse dataset where distant objects are successfully detected. The image-space and BEV metrics are shown in Table 1 for the category 'Large vehicle'. The effect of semi-pseudo-labeling and 3D augmentations can be observed even more clearly than in the case of the 'Car' category. The performance of the baseline model in BEV space is barely measurable due to the very strict ground truth-prediction assignment rules and heavy class imbalance. This also explains the difference between the baseline and the proposed method on the BEV precision metric. The baseline provides only a few detections in the far range, with high precision. Our proposed method is able to detect in the far range too (cf. the difference between the 2D and BEV recall of the baseline and the proposed method), but due to the strict association rules, the BEV precision is low. Overall, the model trained with our method has significantly better performance in BEV space as well as in image space. Table 2 shows a similar effect in the case of the 'Pedestrian' category. The BEV metrics are omitted, since the top-view IoU-based bounding box assignment violates the association rules due to the small object size.
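The BEV assignment described above can be sketched with SciPy's Hungarian solver; bev_iou is a placeholder for a rotated-box IoU function in bird's-eye view, which the paper does not specify.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_bev(preds, gts, bev_iou, iou_thr=0.5):
    """Assign predictions to ground truths by maximizing total BEV IoU,
    then keep only pairs above the IoU threshold."""
    cost = np.zeros((len(preds), len(gts)))
    for i, p in enumerate(preds):
        for j, g in enumerate(gts):
            cost[i, j] = -bev_iou(p, g)          # Hungarian minimizes cost
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if -cost[i, j] >= iou_thr]
```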
In-house Highway Dataset
Since the operational domain of Argoverse dataset is urban environment and the validation of our method in highway environment is also a requirement, we have performed an in-house data collection method and created 3D bounding box annotations using semi-automated methods. The sensor setup used for the recordings consisted of four cameras and a LiDAR with a 120-meters perception range both in front and back directions. Figure 5 shows the projected cuboids of a semi-automated annotation sample. It can be observed that distant objects (rarely objects in near/middle-distance region too) are not annotated due to the lack of LiDAR reflections. As a consequence of the limited perception range of the LiDAR, a manual annotation step was needed for creating a validation set. In this way, distant objects (up to 200 meters) and objects without sufficient LiDAR reflections can also be labeled and a consistent validation set can be created. The collected dataset consists of Car, Van, Truck, and Motorcycle categories. The model was trained on the semi-automatically annotated data and validated on the manually annotated validation set. Figure 3 shows benchmark results (namely precision and recall metrics) of the neural network trained with our method in a class-agnostic manner. The heatmaps visualize the top-view world space around the ego car where the world space is split into 4 meters by 10 meters cells. The blank cell in the left heatmap indicates the ego car position and can be seen as the origin of the heatmap. The total values on the figure are the average over the heatmap. A prediction is associated with a ground truth if the distance between them is less than 10 meters. The forward detection range is 200 meters while the backward range is 100 meters. It can be observed that the model is able to detect up to 200 meters in forward direction even though the training data did not contain any annotated objects over 120 meters. The low recall value in a near range (-10m, 10m) can be explained by the fact that the model was trained only with front and back camera frames, and objects in this detection area might not be covered by the field-of-view of the camera sensors. The high precision in (180m, 200m) can be attributed to the fact that the model produces only a few detections in the very far range with high confidence (i.e. the model does not produce a large number of false-positive detections in exchange for the larger number of false-negative detections).
Three-dimensional zoom augmentation without semi-pseudo-labeling could not have performed similarly, due to the issues described in Section 3.2. However, a limitation can be observed, since the detection ability drops significantly beyond 150 meters, as the heatmap of the recall metric in Fig. 3 shows.
Conclusion
In this paper, we have introduced a novel method for facilitating the training of neural networks used in the autonomous driving domain. The 3D augmentations have the advantageous effect that it becomes possible to accurately detect objects that are not part of the training distribution (i.e., to detect distant objects without ground truth labels). It is true that semi-pseudo-labeling alone can be enough for the detections; however, the 3D properties, especially depth estimation, would be suboptimal, since neural networks cannot extrapolate properly outside the training distribution. Since our main interest was to validate the viability of the proposed method, we used a simple model for the experiments. A future research direction could be to integrate semi-pseudo-labeling and 3D zoom augmentation into state-of-the-art models and conduct experiments in order to examine the effects of our method.
"year": 2022,
"sha1": "9f41a044076b229e198e8db4ee10b7192b09f79b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9f41a044076b229e198e8db4ee10b7192b09f79b",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
237409423 | pes2o/s2orc | v3-fos-license | Distributions of dental freshmen and practicing dentists and their correlations in different regions of Taiwan in 2020
Background/purpose Taiwan is facing the problems of a surplus, an uneven distribution, and an urban-rural gap of dental students and practicing dentists. The main purpose of this study was to evaluate the distributions of dental freshmen and practicing dentists in different regions of Taiwan in 2020. Materials and methods We collected the numbers of dental freshmen and practicing dentists in different regions of Taiwan in 2020 to evaluate their regional distributions in Taiwan and their relationship by regression analysis. Results The numbers of dental freshmen and of practicing dentists per 100,000 people in municipalities were higher than those in non-municipalities (P < 0.01 for practicing dentists only). These corresponding numbers in cities with dental schools were significantly higher than those in cities and counties without dental schools, respectively (all P-values < 0.05). In addition, the coefficients of correlation between the dentist index and the dental student index were R² = 0.7521 (P < 0.05) for municipalities (n = 6), R² = 0.6332 (P < 0.001) for non-municipalities (n = 15), R² = 0.9334 (P < 0.05) for cities with dental schools (n = 4), R² = 0.4925 (P < 0.01) for cities and counties without dental schools (n = 17), and R² = 0.5025 (P < 0.001) nationwide (n = 21). Conclusion Dental freshmen and practicing dentists were still more concentrated in municipalities than in non-municipalities and in cities with dental schools than in cities and counties without dental schools in Taiwan in 2020.
Introduction
There has always been a gap in educational resources between urban and rural areas in Taiwan. In the schools training medical personnel, the vast majority of students come from the cities. Subsequently, after graduating from these schools, the students also aim to practice in urban areas, leading to a chronic shortage of medical personnel in rural areas and creating a vicious circle for the urban-rural gap. According to statistics from National Taiwan University, in the past 20 years, 83%-88% of their students have come from municipalities, and those from Taipei City account for 30%-38%, which is the highest in Taiwan. The population is indeed concentrated in the cities, but in fact the population of Taipei City accounts for only 11% of the total population of Taiwan. Moreover, the rate of students from Taipei City enrolled in National Taiwan University is three times the rate of students from the whole Taiwan area, which shows that the problem of the urban-rural gap in educational resources is very serious. 1 Students who grow up in the cities almost always choose to stay in the cities for their life and career, because the life experience and working environment of the cities are more familiar to them. Moreover, they usually cannot adapt to life and work in rural or remote areas. Therefore, the urban-rural gap in various resources will become more and more serious and fall into a vicious circle.
Due to the regional difference in resource distribution, in addition to the urban-rural gap, Taiwan's population and medical personnel (including dentists) are increasingly concentrated in the municipalities of the western region, especially the northern cities such as Taipei City. Our previous studies found that practicing dentists in the training institutions of the postgraduate year training program for dentists (PGYD) (so-called institutional dentists) were unevenly distributed, and the degree of unevenness was more serious than that of the overall practicing dentists. 2,3 By regression analysis, cities or counties with more dentists would have more institutional dentists, and this situation was more obvious in municipalities than in non-municipalities. 2 After the PGYD trainees have completed their training, they are likely to continue to practice in locations near their training institutions; therefore, the problem of the uneven distribution of dentists becomes more serious. 4 Furthermore, we want to explore whether there is a similar phenomenon between dental students and practicing dentists, that is, whether such problems occur at an even earlier stage. In fact, Taiwan's dental schools are all concentrated in the metropolitan areas of the municipalities, and most of their dental students also come from these metropolitan areas. 5 Therefore, after they graduate from the dental schools and become dentists, their practice locations are likely to be concentrated in the metropolitan areas of the municipalities.
However, there was still no detailed analysis of the relationship between the distributions of dental students and practicing dentists in each city or county and in different regions of Taiwan in 2020. Therefore, in this study, we examined the distributions of dental freshmen and practicing dentists in each city or county and in different regions of Taiwan in 2020 and evaluated the relationship between dental freshmen and practicing dentists in Taiwan in 2020.
Materials and methods
This study used secondary data analysis to collect information about the population and the numbers of practicing dentists and dental freshmen enrolled in dental schools in Taiwan in 2020. This information was publicly accessible and could be collected from the related websites.
We obtained the population data, including the total population in cities and counties for May 2020, from the website of the Ministry of the Interior. In addition, the information on overall practicing dentists in cities and counties of Taiwan for May 2020 was available from the Newsletter of the Taiwan Dental Association. 6 Based on our previous study, 5 we also obtained the information on dental freshmen enrolled in the northern, central, and southern dental schools in Taiwan in 2020 from the website of the Joint Board of College Recruitment Commission. This information included the dental schools and examination areas of enrolled dental freshmen. According to the locations of the examination rooms in the examination areas, we could identify the cities or counties where enrolled dental freshmen came from.
All dental schools in Taiwan were divided into three groups according to their locations: northern, central, and southern dental schools. The whole area of Taiwan was also divided into two groups: municipalities and non-municipalities, or cities with dental schools and cities and counties without dental schools. In addition, the whole area of Taiwan was further divided into five different regions: northern, central, southern, and eastern regions, and offshore islands. The northern region included Taipei City and the other northern cities and counties. For statistical analysis, the coefficient of variation (CV) was determined for comparisons of variability. The Mann-Whitney U test was used for comparisons between two subgroups, and the Kruskal-Wallis test was used for comparisons among three or more subgroups. Furthermore, we defined the dentist index and the dental student index as the ratios of practicing dentists per 100,000 people and of enrolled dental freshmen per 100,000 people, respectively, to their corresponding values in the whole area of Taiwan, as sketched below. Then, coefficients of correlation were used to compare the dentist index and the dental student index.
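As an illustration, the index construction and the regression between the two indices can be reproduced along the following lines. This is a sketch with made-up per-region rates; only the nationwide denominators (64.25 dentists and 1.64 freshmen per 100,000 people) are taken from the Results section.

```python
import numpy as np
from scipy import stats

def make_index(rate_per_100k, nationwide_rate):
    """Index = 100 * (regional rate) / (nationwide rate)."""
    return 100.0 * np.asarray(rate_per_100k) / nationwide_rate

# Hypothetical rates for four example cities/counties.
dentist_idx = make_index([128.4, 60.5, 34.8, 21.9], 64.25)
student_idx = make_index([7.5, 1.3, 0.9, 0.2], 1.64)

res = stats.linregress(student_idx, dentist_idx)
print(f"R^2 = {res.rvalue ** 2:.4f}, slope = {res.slope:.4f}, p = {res.pvalue:.4g}")
```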
Results
Distributions of enrolled dental freshmen per 100,000 people in northern, central, and southern dental schools and in 22 cities and counties in Taiwan in 2020
There were 386 enrolled dental freshmen accepted by the admission system of the Joint Board of College Recruitment Commission in 2020. We calculated enrolled dental freshmen per 100,000 people based on the population of May 2020 for further comparisons. The distributions of enrolled dental freshmen per 100,000 people in 22 cities and counties of Taiwan in 2020 are shown in Table 1. We found that there were 1.64 dental freshmen per 100,000 people nationwide in Taiwan in 2020. Of the 1.64 dental freshmen per 100,000 people, 0.65 was enrolled by the northern dental schools, 0.54 by the central dental schools, and 0.44 by the southern dental schools (Table 1). Chiayi City was the city with the largest corresponding number (7.48), and Nantou County was the county with the smallest non-zero corresponding number (0.20) among the 20 cities and counties with non-zero dental freshmen in Taiwan. Chiayi County and Taitung County had no enrolled dental freshmen. However, Lienchiang County had a very sparse population, so its number of dental freshmen per 100,000 people was very high (23.01) and thus was not included in the subsequent statistics for comparisons. There were 14 of the 22 cities and counties with 1.34 or fewer corresponding numbers, below the nationwide number of 1.64 dental freshmen per 100,000 people (Table 1).
For the northern dental schools, Chiayi City was the city with the largest number (3.37) of dental freshmen enrolled by the northern dental schools per 100,000 people, and Hsinchu County was the county with the smallest non-zero corresponding number (0.18) among 15 cities and counties with non-zero dental freshmen in Taiwan. However, this corresponding number of Lienchiang County was very high (7.67). There were 16 of the 22 cities and counties with 0.61 or fewer corresponding numbers that were below the number of 0.65 dental freshmen enrolled by the northern dental schools per 100,000 people nationwide (Table 1).
For the central dental schools, Chiayi City was the city with the largest number (1.87) of dental freshmen enrolled by the central dental schools per 100,000 people, and New Taipei City was the city with the smallest non-zero corresponding number (0.17) among 16 cities and counties with non-zero dental freshmen in Taiwan. However, the corresponding number of Lienchiang County was very high (15.34). There were 15 of the 22 cities and counties with 0.39 or fewer corresponding numbers that were below the number of 0.54 dental freshmen enrolled by the central dental schools per 100,000 people nationwide (Table 1).
For the southern dental schools, Chiayi City was the city with the largest number (2.25) of dental freshmen enrolled by the southern dental schools per 100,000 people, and Yunlin County was the county with the smallest non-zero corresponding number (0.15) among 15 cities and counties with non-zero dental freshmen in Taiwan. There were 13 of the 22 cities and counties with 0.31 or fewer corresponding numbers that were below the number of 0.44 dental freshmen enrolled by the southern dental schools per 100,000 people nationwide ( Table 1).
Distribution of overall practicing dentists per 100,000 people in each city or county in Taiwan in 2020
According to the statistics of the Taiwan Dental Association of May 2020, there were 15,155 practicing dentists in Taiwan in 2020. The distribution of overall practicing dentists per 100,000 people in 22 cities and counties of Taiwan in 2020 is shown in Table 1. We found that there were 64.25 practicing dentists per 100,000 people nationwide in Taiwan in 2020. Taipei City was the city with the largest corresponding number (128.43), and Miaoli County, Nantou County, Yunlin County, Chiayi County, Pingtung County and Taitung County, as well as the offshore islands (Penghu County, Kinmen County and Lienchiang County), were the counties with fewer than 35 practicing dentists per 100,000 people among all cities and counties in Taiwan (Table 1). It should be noted that Chiayi County was the county with the smallest corresponding number (21.94) among all cities and counties on the main island of Taiwan, while Kinmen County was the county with the smallest corresponding number (13.60) among all offshore islands of Taiwan (Table 1). There were 16 of the 22 cities and counties with 60.48 or fewer corresponding numbers, below the nationwide number of 64.25 practicing dentists per 100,000 people (Table 1).
Comparisons of enrolled dental freshmen and of practicing dentists per 100,000 people in different regions of Taiwan in 2020
The comparisons of enrolled dental freshmen per 100,000 people in different regions of Taiwan are exhibited in Table 2. We found that the mean number of enrolled dental freshmen per 100,000 people (1.96) in municipalities was higher than that (1.32) in non-municipalities of Taiwan. Moreover, the mean numbers of dental freshmen enrolled by the northern dental schools (0.78), the central dental schools (0.66), and the southern dental schools (0.52) per 100,000 people in municipalities were higher than the corresponding mean numbers of dental freshmen enrolled by the northern dental schools (0.57), the central dental schools (0.37), and the southern dental schools (0.38), respectively, in non-municipalities of Taiwan. In addition, the mean number of practicing dentists per 100,000 people (74.48) in municipalities was significantly higher than that (29.08) in non-municipalities of Taiwan (P < 0.01, Table 2).
We also discovered that the mean number of enrolled dental freshmen per 100,000 people (2.58) in cities with dental schools was significantly higher than that (1.25, P < 0.05) in cities and counties without dental schools. Moreover, the mean numbers of dental freshmen enrolled by the northern dental schools (1.03) and the southern dental schools (0.69) per 100,000 people in cities with dental schools were significantly higher than the corresponding mean numbers of dental freshmen enrolled by the northern dental schools (0.54, P < 0.05) and the southern dental schools (0.36, P < 0.05) in cities and counties without dental schools, respectively. Although the mean number of dental freshmen enrolled by the central dental schools per 100,000 people (0.87) in cities with dental schools was also higher than the corresponding mean number (0.36) in cities and counties without dental schools, the difference was not significant (Table 2). In addition, the mean number of practicing dentists per 100,000 people (81.63) in cities with dental schools was significantly higher than that (34.81) in cities and counties without dental schools of Taiwan (P < 0.05, Table 2).
Furthermore, the largest mean number of enrolled dental freshmen per 100,000 people was 2.33 in the southern region of Taiwan. Moreover, the largest mean numbers of dental freshmen enrolled by the northern, central, and southern dental schools per 100,000 people were 0.95 in the offshore island region, 0.65 in the southern region, and 0.76 in the southern region of Taiwan, respectively (Table 2). However, the largest mean number of practicing dentists per 100,000 people was 69.93 in the northern region of Taiwan (Table 2). Therefore, our results indicate that the numbers of enrolled dental freshmen per 100,000 people were higher in municipalities than in non-municipalities and in cities with dental schools than in cities and counties without dental schools. However, the numbers of enrolled dental freshmen per 100,000 people differed only slightly among the northern, central, southern, eastern, and offshore island regions of Taiwan, and the corresponding numbers were still low in the eastern region of Taiwan. On the other hand, the numbers of practicing dentists per 100,000 people were also higher in municipalities than in non-municipalities and in cities with dental schools than in cities and counties without dental schools. However, the number of practicing dentists per 100,000 people was highest in the northern region of Taiwan (69.93), and the corresponding numbers were still low in the eastern (38.95) and offshore island (23.94) regions of Taiwan. Considering the population factor and the regional differences, practicing dentists were obviously more concentrated in the northern region than enrolled dental freshmen, especially in the northern municipalities.
Coefficients of variation (CV) of the number of enrolled dental freshmen per 100,000 people in different regions of Taiwan in 2020
The coefficients of variation (CV) of the number of enrolled dental freshmen per 100,000 people were 1.39 for the northern dental schools, 1.18 for the central dental schools, 1.24 for the southern dental schools, and 1.19 overall (Table 1). These nationwide CV values were similar across the three dental school groups. However, the corresponding CV values for municipalities were 0.91, 0.72, and 0.53 for the northern, central, and southern dental schools, respectively, and 0.65 overall (Table 2). The corresponding CV values were greatest for non-municipalities: 1.66, 1.48, and 1.57 for the northern, central, and southern dental schools, respectively, and 1.49 overall. Furthermore, similar CV values were found for cities with dental schools as well as for cities and counties without dental schools (Table 2). This indicates that the numbers of enrolled dental freshmen per 100,000 people are all more dispersed in non-municipalities than in municipalities and in cities and counties without dental schools than in cities with dental schools (Table 2). In terms of practicing dentists, similar results were found (Table 2).
Coefficients of correlation between the dentist index and the dental student index in 21 cities and counties of Taiwan in 2020
The number of enrolled dental freshmen per 100,000 people in Lienchiang County was very extreme and thus was not included in some of the subsequent statistical analyses. Lienchiang County was still included in the analyses of the numbers of practicing dentists per 100,000 people and the numbers of enrolled dental freshmen per 100,000 people, as well as in the analyses of the dentist index and the dental student index. However, it was excluded from the regression analysis between the dentist index and the dental student index. Each value of practicing dentists per 100,000 people or of enrolled dental freshmen per 100,000 people was expressed as an index relative to the corresponding nationwide value, which was set to 100. The coefficient of correlation between the dentist index and the dental student index was R² = 0.5025 (R = 0.71, P < 0.001) with a slope of 1.8924 nationwide (n = 21, Fig. 1). Moreover, the coefficients of correlation between the dentist index and the dental student index were R² = 0.7521 (R = 0.87, P < 0.05) with a slope of 1.5964 for municipalities (n = 6) and R² = 0.6332 (R = 0.80, P < 0.001) with a slope of 3.1635 for non-municipalities (n = 15), as well as R² = 0.9334 (R = 0.97, P < 0.05) with a slope of 1.2736 for cities with dental schools (n = 4) and R² = 0.4925 (R = 0.70, P < 0.01) with a slope of 2.6278 for cities and counties without dental schools (n = 17) (Fig. 1).
Discussion
The system of the Joint College Entrance Examination began in Taiwan in 1954. The Department of Dentistry of National Taiwan University enrolled its dental students through this joint examination process for the first time in 1955, creating a new era in Taiwan's dental education. 5 The domestic dental schools of general universities have the opportunity to enroll their dental students once a year, and the joint examination process had been held 66 times by 2020. In 1955, there was only one dental school of a general university, which enrolled 9 dental students. Up to now, there are 7 dental schools of general universities with more than 2000 dental students from year 1 to year 6. Moreover, each year about 380 or more dental students are enrolled through the joint college entrance system. Furthermore, Taiwan's total population increased from 7.87 million in 1951 to 23.60 million in 2019, while the total number of dentists increased from 538 in 1951 to 15,127 in 2019. 7 In the same period, Taiwan's total population grew 3-fold, but the total number of dentists grew by as much as 28-fold. The number of people served by each dentist changed from 14,627 to 1,560, indicating that the structure of Taiwan's dentist manpower has undergone a tremendous change. In the early days, the major dentist occupation problem was the extreme lack of dentists in Taiwan. However, the situation has changed to today's three major dentist occupation problems: a surplus of dentists, an uneven distribution of dentists, and a concentration of dentists in the metropolitan areas. 2,3,8-11 Therefore, in this study we started with the analyses of the distributions of dental freshmen and practicing dentists in each city or county in Taiwan in 2020. In Taiwan, in addition to the military university channel, the formal way to become a dentist is for students to enter domestic dental schools through the college entrance system after graduating from senior high schools. After graduating from dental school, the dental graduates participate in and pass the dentist national examination to obtain a dentist license and become a qualified dentist. However, a large number of foreign dental graduates return to Taiwan to participate in the dentist national examination, obtain a dentist license, and engage in dental practice. These additional dentists have indeed caused a change in the quality and quantity of dentists. 11
Figure 1 Correlation of dentist index and dental student index according to municipalities and non-municipalities as well as cities with dental schools and cities and counties without dental schools in Taiwan.
The postgraduate year training program for dentists (PGYD) was implemented in Taiwan in 2010, and thus the increase in the number of practicing dentists in the period from 2010 to 2019 showed an improvement over the period from 2001 to 2010. However, the growth of the total population was, on the contrary, gradually slowing down. The dual changes of the practicing dentists and the total population thus led to an increase in the number of practicing dentists per 100,000 people from the period of 2001-2010 to the period of 2010-2019; the numbers of practicing dentists per 100,000 people were 39.92 in 2001, 50.32 in 2010, and 64.09 in 2019. 7 In fact, the practicing dentists have increased with a clear difference in absolute numbers, resulting in a surplus of dentists.
However, the regional differences due to the presence or absence of dental schools have not improved. We found that the dentist index and the dental student index were highly positively correlated among all cities and counties of Taiwan, indicating that cities or counties with more practicing dentists often have more dental freshmen. Practicing dentists and dental freshmen were both obviously concentrated in municipalities and in cities with dental schools, resulting in obvious regional differences. However, the two groups did show somewhat different geographical distributions. Although both were concentrated in the western region of Taiwan, practicing dentists were more concentrated in the northern region than dental freshmen, especially in the northern municipalities. In contrast, dental freshmen were more concentrated in Taipei City than practicing dentists. Although the government has already implemented a policy to ensure that students from remote or offshore areas can be enrolled in dental schools, there is still a lack of practicing dentists in the remote, eastern, and offshore island regions. 5,12 Therefore, the problems of an uneven distribution of dentists and a concentration of dentists in the metropolitan areas have not improved.
Our previous study found a spreading-out phenomenon in the distribution of dentists in Taiwan. Because resources concentrate only up to a certain extent, and because of competition and market-driven forces, practicing dentists tend to spread out from concentrated areas to areas with fewer dentists and less competition. Accordingly, using the Gini coefficient as an indicator, we found that the uneven geographical distribution of dentists in Taiwan did not become worse. 13 However, the actual situation is that practicing dentists (or dental students who become dentists after graduation) may move only between municipalities and major cities, such as from Taipei City to New Taipei City or Taoyuan City. Therefore, the urban-rural gap in dentists remains serious, and the regional imbalance of dentists still exists. Nevertheless, the above inference needs to be supported by long-term observations and further empirical studies.
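To make the Gini-coefficient indicator mentioned above concrete, here is a minimal Python sketch of one common unweighted formulation; the rates are hypothetical, and the cited study may have used a population-weighted variant.

```python
import numpy as np

def gini(rates):
    """Gini coefficient: 0 = perfectly even distribution, ~1 = extreme concentration."""
    x = np.sort(np.asarray(rates, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Identity based on cumulative shares of the sorted values
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

# Hypothetical dentists-per-100,000 rates for a handful of cities and counties
print(f"Gini = {gini([65, 60, 58, 30, 22, 18, 10]):.3f}")
```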
The quality and quantity of required oral health care vary with the changes in the population of a city or a country, but any regional gap and imbalance are likely to worsen further, depending on differences in the numbers of new-entry dentists. 14–17 The long-term accumulation of dental students, who enter the dental service market as dentists after graduation, together with the slow growth of the population, has caused the supply of dentists to exceed demand, resulting in a surplus of dentists. In addition, practicing dentists, driven by market forces, often choose to practice in metropolitan areas with more resources and opportunities, which further leads to an uneven distribution of dentists and a concentration of dentists in the metropolitan areas. In addition to market factors, the personal factors of dentists or dental students also affect their choices of practice locations: they tend to choose the locations where they grew up, or the locations of their dental schools or training hospitals, as their final practice locations. 4,18 Beyond these regional factors, the personal factors of dentists or dental students also include their background factors, e.g., dental students with certain special backgrounds being enrolled in the dental schools.
In Taiwan, because of the good income and high quality of life of dentists, enrollment in dental schools has become more and more competitive, which is not conducive to the enrollment of disadvantaged students or students from remote areas. 5 In the 25 years since the implementation of national health insurance in 1995, our dental students have not only increased in number but have also undergone qualitative changes. Current dental students mostly come from metropolitan areas and from families with high social and economic status, and even from families with dentists as family members. They choose dental schools mostly because of financial incentives or parents' expectations. However, these students often occupy the quota and reduce the chances of disadvantaged students to enter dental schools. Moreover, after they become dentists, they usually practice in the metropolitan areas and offer high-charge dental services, which may lead to another vicious circle in the future. The high cost of enrolling in dental school has also shaped the personal factors of dentists and dental students. For example, the difficulty of gaining enrollment in domestic dental schools, or of going abroad to study in foreign dental schools, greatly increases the total cost of becoming a dentist. Therefore, after these dental students become dentists, they usually have to consider how to quickly earn back the invested cost, and so they practice in the metropolitan areas and offer high-charge dental services as their main practice items.
Dental schools have become a popular choice for senior high school graduates. The advantage of this change is that dental schools can select more excellent dental students, which in turn helps improve the overall quality of dentists in Taiwan and promotes the advancement of basic dental research and clinical oral medicine. However, the disadvantages may be a lack of dentist manpower in remote areas, insufficient dental services for disadvantaged groups, and a low willingness of dentists to invest in unpopular dental subjects such as oral pathology and oral health care for patients with special needs. To achieve effective oral health care nationwide, the problems of supply and demand, as well as the regional and urban-rural imbalances of dentists, must be resolved through control of the total number of dental students, an even regional composition of dental students, and a reasonable allocation of dental education resources, so as to obtain a regional balance between the numbers of dental schools and their dental students.
How dentists' personal factors affect their choices of practice locations is worthy of further study. If the locations where dentists or dental students grew up, the locations of their dental schools, and the locations of their training hospitals are in metropolitan areas, these factors may lead them to also practice in metropolitan areas. However, the locations where dentists or dental students grew up and the locations of their dental schools are inherent. Therefore, we suggest that, through the internship system and the PGYD system, dental students and new-entry dentists be given opportunities to learn and train in remote dental institutions, to increase their experience of performing dental services in remote areas and to improve their personal factors. 3,7,13,19 In addition, through a reasonable screening mechanism, we hope that dental schools may select not only excellent dental students, but also dental students who are willing to engage in dental care in remote areas and dental care for disadvantaged groups, as well as dental students who are willing to invest their careers in unpopular dental subjects, such as oral histology or oral pathology. | 2021-09-05T05:16:56.605Z | 2021-06-26T00:00:00.000 | {
"year": 2021,
"sha1": "7c5317a5805190c5ff6e7388fd06e28520e1f555",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jds.2021.06.001",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7c5317a5805190c5ff6e7388fd06e28520e1f555",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247071122 | pes2o/s2orc | v3-fos-license | Pro-inflammatory markers in patients with obstructive sleep apnea and the effect of Continuous Positive Airway Pressure therapy
Objectives To evaluate the association of obstructive sleep apnea (OSA) with high-sensitivity C-reactive protein (CRP) and fibrinogen levels and to assess the effect of short-term therapy using continuous positive airway pressure (CPAP). Material and Methods A prospective, open-label, controlled trial was conducted among clinically referred patients at risk for OSA undergoing diagnostic polysomnography (PSG). After PSG, the patients were divided into 3 groups: OSA treatment group (TG) (n=21), untreated OSA group (UOG) (n=19), and non-OSA healthy control group (HCG) (n=24). CRP and fibrinogen levels were measured at baseline and one month after treatment. Repeated-measures (RM) ANOVA and ANCOVA were used to compare changes in CRP and fibrinogen levels among the three groups by analyzing between-subject and within-subject effects as functions of time and adjusting for significant covariates. Results At baseline, OSA subjects had significantly higher CRP [t(52.37)=-2.46, p=0.02] and fibrinogen levels [t(57)=-2.00, p=0.05] than HCG subjects. No significant differences in CRP levels [F(2,58)=2.29, p=0.11] or fibrinogen levels [F(2,58)=1.28, p=0.29] emerged between TG and HCG subjects after adjusting for the pretest levels. Conclusion CPAP therapy for one month does not affect CRP and fibrinogen levels among moderate-to-severe OSA patients. However, OSA is associated with elevated levels of these inflammatory biomarkers.
INTRODUCTION
Obstructive sleep apnea (OSA) is a common sleep-related breathing disorder that is characterized by repeated complete or partial collapse of the upper airway during sleep [1][2][3] . OSA results in momentary intermittent hypoxemia and hypercapnia, sleep fragmentation, and poor sleep quality 4 . OSA has clear negative impacts on the quality of life of affected individuals. This condition is also associated with adverse safety and health consequences, including cardiovascular and cerebrovascular diseases, type 2 diabetes, cognitive impairments, depression, and ocular conditions 5,6 . Moreover, OSA is a risk factor for cardiovascular disease, probably due to increases in systemic inflammation and oxidative stress and their injurious effects on the vascular endothelium 7 . This status of increased systemic inflammation may result in initiation and acceleration of underlying atherosclerosis with consequent increases in morbidity and mortality 8,9 . Indeed, the recent literature supports a conceptual framework wherein OSA should be considered a low-grade chronic systemic inflammatory disorder 10 .
Previous studies proposed that OSA modulates the expression and secretion of inflammatory markers from fat and other tissues 10,11 . In fact, independent of obesity, elevated levels of proinflammatory factors, including C-reactive protein (CRP), fibrinogen, tumor necrosis factor-α and interleukin-6, have been reported among patients with OSA 10 . Hence, ongoing inflammatory responses have been suggested to play important roles in the association between OSA and chronic inflammation-induced pathologies, such as atherosclerosis 12 . The activation of inflammatory responses through adaptive pathways in OSA may be an important molecular mechanism underlying the development of cardiovascular disease and a variety of other metabolic diseases 13,14 .
Continuous positive airway pressure (CPAP) remains the most effective therapy to date for improving the polysomnography-derived parameters indicating OSA severity, i.e., the apnea hypopnea index (AHI), oxygen desaturation index 3% (ODI3) and nadir oxyhemoglobin saturation (SaO 2 ) 4,15,16 . Thus, exploring the potential relationships between systemic inflammatory markers in OSA and the effect of CPAP-based treatment is important. There are conflicting data in the literature regarding the efficacy of CPAP in reducing the levels of proinflammatory biomarkers and, consequently, providing an overall protective effect against the development of atherosclerosis and cardiovascular diseases [17][18][19][20][21] . This study was conducted to explore the effect of short-term CPAP therapy on the levels of two well-established proinflammatory biomarkers, namely, CRP and fibrinogen, which are regarded as biological indicators of ongoing systemic inflammation.
Design and setting
This was a prospective controlled trial that was conducted from April 2018 to May 2019. The trial took place at the Sleep Medicine and Research Center (SMRC) at King Abdul-Aziz University Hospital (KAUH), Jeddah, Saudi Arabia.
Population
All patients with a clinical suspicion of OSA who were referred to the SMRC for diagnostic polysomnography (PSG) were included in the study. The exclusion criteria included a past diagnosis of OSA, treatment for OSA, presence of conditions that could affect the levels of inflammatory markers, including history of respiratory and cardiovascular diseases with chronic hypoxia, neuromuscular disorders, infectious diseases, rheumatological diseases, immunological diseases, tumors, peripheral vascular disease, liver or kidney diseases, coagulopathy, psychogenic disorders, and diabetes mellitus; history of trauma or surgery in the past 3 months; recent use (past 3 months) of corticosteroids, antibiotics, immune suppressors, or hormones; and smoking history in the past 6 months.
Initial assessment
All patients referred to the SMRC for a sleep study underwent data collection and physical examination. Demographic data, including age, sex, body mass index (BMI), neck circumference, and blood pressure, were collected. The Epworth sleepiness scale (ESS) was used to assess daytime sleepiness 22 . All eligible subjects were invited to participate and signed an informed consent form that was approved by the local institutional review board (IRB) (reference number 178-18).
Overnight PSG
All participants underwent overnight PSG using a standard montage, as previously described 23 . The sleep stages and respiratory events were scored according to the guidelines of the American Academy of Sleep Medicine (AASM) 24 . The severity of OSA was determined according to the AASM recommendations using the AHI: 5-14, mild; 15-29, moderate; and ≥30, severe 25 .
Group allocation
According to the PSG results, participants were categorized into 3 groups: The first group included those with moderate-to-severe OSA according to the AASM definition 25 , i.e., those with an AHI of 15/hour of total sleep time (hrTST) or greater, who accepted initiation of treatment with CPAP (treatment group [TG]). The second group included patients with moderate-to-severe OSA who refused CPAP treatment (untreated OSA group [UOG]). The third group included participants without OSA, i.e., those with an AHI less than 5 (healthy control group [HCG]) (Figure 1).
Initial proinflammatory marker measurements
Fasting blood samples were collected from all study participants to measure the serum and plasma levels of proinflammatory markers in the morning after the diagnostic PSG. Venous blood samples were collected into 5-ml tubes with and without added anticoagulation factors, and the serum and plasma were separated and stored at -80°C immediately.
Commercially available assays were used to quantify high-sensitivity CRP (Siemens; Germany) and fibrinogen (ACL TOP 550 CTS, Instrumentation Laboratory, Italy) levels according to the manufacturer's instructions. The CRP and fibrinogen assays exhibited analytical sensitivities of less than 3.4 mg/l and 50 mg/dl, respectively. Serum CRP concentrations were measured using a fully automated BNII nephelometer (Siemens; Germany). The cutoff value for a normal CRP level was 3.4 mg/l. Fibrinogen was measured in citrated plasma by an ACL TOP 550 instrument (Instrumentation Laboratory, Italy). The measurement range was between 180 and 350 mg/dl. Plasma heparin concentrations below 2 U/ml did not affect the test. Concentrations of CRP > 3.4 mg/l and fibrinogen > 350 mg/dl were considered elevated.
CPAP THERAPY
All participants with OSA who agreed to initiate treatment underwent a CPAP titration PSG study to determine the optimal CPAP pressure according to the AASM guidelines 19 , and this pressure was then implemented at home.
Follow-up and endpoints
The three groups were followed for one month following the diagnostic PSG. Follow-up included weekly visits to determine whether there were any necessary changes to the medications or events that could affect the levels of the inflammatory markers and to ensure adherence to CPAP treatment in the TG using data from memory cards installed within the CPAP devices 26 . Adherence to CPAP was considered acceptable when CPAP was used for at least 4 hours per night on at least 70% of nights. At the end of the one-month follow-up, fasting morning blood samples were drawn to analyze the levels of the proinflammatory biomarkers, which were processed as described above. Participants who did not meet the adherence criteria were excluded.
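As an illustration of the adherence rule just stated, the Python sketch below flags a participant as adherent when the device log shows at least 4 hours of use on at least 70% of nights; the usage log is hypothetical.

```python
def is_adherent(nightly_hours, min_hours=4.0, min_fraction=0.70):
    """Return True if use was >= min_hours on >= min_fraction of nights."""
    compliant_nights = sum(1 for h in nightly_hours if h >= min_hours)
    return compliant_nights / len(nightly_hours) >= min_fraction

# Hypothetical one-month log from a CPAP memory card (hours per night)
log = [5.2, 6.1, 0.0, 4.5, 7.0, 4.2, 3.1] * 4 + [5.0, 4.8]
print(is_adherent(log))  # True only if enough nights reach 4 hours of use
```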
Statistical methods
All of the data analyses in this study were performed using the Statistical Package for Social Sciences version 26.0 for Windows (SPSS Inc., Chicago, IL, USA). Means, standard deviations, frequencies, and percentages were used to describe the data. Chi-square tests, maximum likelihood ratio chi-square tests, one-way ANOVA, and Kruskal-Wallis tests were used to show differences in the participants' characteristics. CRP and fibrinogen levels were not normally distributed and were therefore log-transformed for further analysis. One-way ANOVA and one-way ANCOVA were used to assess differences in the log-transformed CRP and log-transformed fibrinogen values. Principal component analysis was used to estimate a single common component (age, BMI, neck circumference, AHI, and sex) to use as a covariate to adjust the one-way ANCOVA. Binary logistic regression was used to assess the association between OSA, inflammatory markers, and other factors. The model was explored with OSA condition (yes/no) as the dependent variable and variables such as sex, age, BMI, neck circumference, systolic blood pressure (BP), diastolic BP, ESS score, STOP-BANG classification, log-transformed CRP, and log-transformed fibrinogen as independent variables. A two-tailed p-value < 0.05 was considered statistically significant.
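The following Python sketch illustrates the kind of analysis described above: log-transforming a skewed biomarker and comparing follow-up levels across groups while adjusting for baseline with a one-way ANCOVA. Column names and data are hypothetical, and the original analysis was run in SPSS.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical dataset: group (TG/UOG/HCG), baseline and 1-month CRP (mg/l)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": np.repeat(["TG", "UOG", "HCG"], 20),
    "crp_baseline": rng.lognormal(1.0, 0.5, 60),
    "crp_followup": rng.lognormal(1.0, 0.5, 60),
})

# Log-transform the skewed biomarker, then fit follow-up ~ group + baseline
df["log_crp_pre"] = np.log10(df["crp_baseline"])
df["log_crp_post"] = np.log10(df["crp_followup"])
model = smf.ols("log_crp_post ~ C(group) + log_crp_pre", data=df).fit()
print(anova_lm(model, typ=2))  # F-test for the group effect, baseline-adjusted
```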
Baseline differences in fibrinogen and CRP levels among OSA participants
At baseline, participants with OSA had significantly higher baseline serum log-transformed CRP levels (0.80±0.35 mg/dl) than HCG subjects (0.61±0.24 mg/dl). A common component score that could be used as a covariate in the model was generated using principal component analysis for age, BMI, neck circumference, AHI, and sex. This common component score explained 34.21% of the variability in age, BMI, neck circumference, AHI, and sex. The homogeneity-of-regression condition was determined to be satisfactory, with a non-significant interaction between groups (F(1,39)=2.61; p=0.12) with log-transformed CRP as the dependent variable. Similarly, there was no significant interaction between groups (F(1,39)=1.76; p=0.19) with log-transformed fibrinogen as the dependent variable.
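A minimal sketch of the common-component covariate described above, assuming standardized inputs and scikit-learn's PCA; the data matrix is a hypothetical stand-in for the study variables.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical rows = participants; columns = age, BMI, neck circ., AHI, sex
X = np.array([
    [45, 31.0, 42.0, 28.0, 1],
    [52, 28.5, 40.0, 35.0, 0],
    [38, 26.0, 38.5, 12.0, 1],
    [60, 33.2, 43.5, 48.0, 1],
    [29, 24.1, 36.0,  4.0, 0],
])

Xs = StandardScaler().fit_transform(X)     # put the variables on one scale
pca = PCA(n_components=1).fit(Xs)
covariate = pca.transform(Xs).ravel()      # one common-component score each
print(pca.explained_variance_ratio_[0])    # fraction of variability explained
```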
Association of inflammatory markers and other variables with OSA at baseline
The binary logistic regression model for OSA was statistically significant when compared to the intercept-only model: χ²(11, N=59) = 43.44, p<0.01. The prediction model with 10 factors accounted for 74.1% (Nagelkerke R²) of the variance in the classification of OSA. The model correctly classified 89.3% of the cases (sensitivity = 94.4% and specificity = 80.0%), with increasing age being associated with an increasing likelihood of OSA (adjusted odds ratio (AOR) = 1.26, 95% confidence interval (CI): 1.07-1.48; p<0.01).
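The following sketch shows the shape of such a logistic model using statsmodels; the predictors and data are hypothetical placeholders, not the study dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data standing in for the study variables
rng = np.random.default_rng(1)
n = 59
df = pd.DataFrame({
    "osa": rng.integers(0, 2, n),
    "age": rng.normal(48, 12, n),
    "bmi": rng.normal(30, 5, n),
    "log_crp": rng.normal(0.7, 0.3, n),
    "log_fibrinogen": rng.normal(2.5, 0.1, n),
})

fit = smf.logit("osa ~ age + bmi + log_crp + log_fibrinogen", data=df).fit()
print(np.exp(fit.params))      # adjusted odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals for the ORs
```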
CRP levels: within-group pre-post difference
At the 1-month follow-up, the log-transformed CRP levels in the HCG group were significantly higher than at baseline (0.68±0.26 mg/dl vs. 0.61±0.24 mg/dl; t(19)=-3.07, p=0.01). In addition, CPAP treatment was not associated with any statistically significant differences in CRP levels (Figure 2).
Effect of 1 month of CPAP therapy on CRP and fibrinogen levels
ANCOVA revealed that there were no significant differences in the log-transformed fibrinogen levels between the treated and untreated groups after 1 month of CPAP therapy after adjusting for baseline levels (F(2,58)=1.28, p=0.29), and similar findings emerged for the log-transformed CRP levels (F(2,58)=2.29, p=0.11).
DISCUSSION
Increased levels of CRP and fibrinogen are detectable among OSA patients, supporting the conceptual framework that OSA is a chronic low-grade systemic inflammatory condition. However, adherence to CPAP treatment for one month did not lead to significant changes in these inflammatory markers. Nevertheless, while a significant increase in the fibrinogen level after 4 weeks of follow-up was observed in both the HCG and untreated OSA group compared to baseline, no such increases were detectable in the CPAP-treated subjects, suggesting that rather than reversing the expression of inflammatory biomarkers, CPAP treatment would be expected to abrogate the temporal progression of inflammation.
The current study supports previously published findings indicating that OSA is a chronic inflammatory condition 10,14 . Nadeem et al. (2013) 10 conducted a meta-analysis that included 51 studies and concluded that the levels of inflammatory markers are higher in patients with OSA than in control subjects. Furthermore, Li et al. (2017) 14 , in another meta-analysis, reported the link between serum CRP levels and OSA and the interactive effects of obesity and the severity of OSA on CRP concentrations. More recently, however, a Korean study involving 1,835 subjects showed that CRP levels were elevated in patients with moderate and severe OSA, independent of other confounders, including obesity 27 . It is worth mentioning that although the severity of OSA is usually classified by the AHI according to the AASM 25 , other parameters, such as oxygen saturation, may also determine OSA severity 28 . Accordingly, CRP levels were reported to correlate significantly not only with the AHI but also with oxygen saturation during sleep 29 . In addition, Kim et al. (2016) 27 reported that CRP levels were negatively associated with the SaO 2 nadir. Nevertheless, despite the presence of elevated levels of inflammatory markers, including CRP and fibrinogen, among OSA patients, the association between the levels of inflammatory markers and OSA severity has proven elusive and inconsistent [30][31][32][33] . The continued investigation of these biomarkers in the context of OSA is explained by the fact that these markers are strongly associated with cardiovascular risk and endothelial dysfunction, as well as with metabolic syndrome and stroke [34][35][36] .
In the present study, short-term treatment consisting of one month of adherence to CPAP therapy did not lead to obvious reductions in the levels of these inflammatory markers. Several inflammatory markers that are believed to contribute to the pathogenesis of endothelial dysfunction have been previously studied among OSA patients before and after CPAP therapy 18,37 . In line with our findings, a controlled trial tested the effect of CPAP therapy on fibrinogen levels and erythrocyte sedimentation rate (ESR) in patients diagnosed with OSA and cardiac arrhythmias and found no significant differences after 3 and 6 months of treatment compared to the corresponding values in patients who received only pharmacological treatment 38 . The authors concluded that fibrinogen and ESR may not be reliable markers of the efficacy of CPAP therapy 38 . Similarly, a cohort study measured the post-CPAP therapy changes in both fibrinogen and CRP levels in patients with OSA, including those with ischemic heart disease (IHD), and found no differences in the two markers after 3 months of CPAP therapy 39 . However, when analyzing the effect of CPAP treatment on OSA patients without the clinical manifestations of IHD, a statistical trend towards a decrease in mean CRP levels was observed (p=0.05). These findings may suggest that a more beneficial effect of CPAP therapy should be expected among OSA patients without clinically apparent IHD 39 . We should point out that evidence for the reversibility of vascular inflammation after long-term exposure to intermittent hypoxia mimicking sleep apnea was not apparent in a mouse model and was likely explained by epigenetic changes in macrophage pathways underlying sustained inflammatory processes 40,41 .
Conversely, a meta-analysis that included nearly 1,200 OSA patients from 14 cohort studies found standardized mean differences of 0.68 and 0.74 units in CRP levels after 3 and 6 months of CPAP therapy, respectively, compared to the pre-CPAP therapy measurements 19 . The authors concluded that CRP was a reliable indicator of the efficacy of CPAP therapy and that the use of CRP concentrations could assist in the prediction of cardiovascular risk in OSA patients 19 . Another research team investigated the effect of 6 months of nasal CPAP therapy on CRP levels in patients with overlap syndrome, i.e., the coexistence of OSA with chronic obstructive pulmonary disease (COPD). The findings showed a nearly 50% decrease in the mean CRP levels from baseline, and the decline in CRP levels was linearly correlated with the number of hours of CPAP therapy utilized per night 42 . These findings concur with those of a similar trial that tested the effect of CPAP therapy on systemic inflammatory markers in patients with overlap syndrome and in those with OSA alone 43 . Nural et al. (2013) 43 reported significant CPAP-induced decreases in CRP levels in both the overlap syndrome and OSA groups (p=0.04 and p=0.02, respectively). Sex-specific effects of CPAP therapy on CRP levels have also been postulated, whereby a better effect on CRP level may be present among males receiving 3 months of CPAP treatment, as compared to the unaltered CRP levels among females after 3 months of the same treatment. Of note, female patients required 6 months of CPAP therapy to manifest declines in CRP concentrations 44 . The discrepancy between the results of these studies and our current findings could be related to the shorter duration of CPAP in our study or could be ascribed to differences among the participant cohorts, since we excluded patients with factors that could have exacerbated their systemic inflammatory status. Nevertheless, these observations suggest that a favorable effect of CPAP may occur in terms of fibrinogen levels, as well as CRP concentrations, provided that the treatment is adhered to for a much longer period of time.
More importantly, we surmise that the beneficial effects of CPAP therapy may be more apparent when illustrated by CRP levels in specific patient subgroups (i.e., men with relatively severe OSA and without clinically established cardiovascular conditions).
As in other studies, the purpose of the present study was to determine the therapeutic effect of CPAP therapy on systemic inflammation, with the ultimate goal of reducing the global cardiovascular risk among OSA patients. However, the only favorable finding supporting this assumption lies in the indirect effect of CPAP therapy, whereby treatment was accompanied by the absence of an increase in fibrinogen and CRP levels in the treated group. Notwithstanding, we cannot rule out that such changes may also be due to a cyclic or seasonal variation in inflammation biomarkers that is prevented by CPAP therapy, since notable seasonal variations were reported in a large population-based study of inflammatory biomarkers 45 .
There are several aspects and limitations of this study that should be highlighted. First, the one-month duration of CPAP therapy may not have been sufficient to induce a significant change in the plasma markers of inflammation. Second, the small sample size of the study groups may have hampered the detection of significant differences due to type 2 error. Third, due to the absence of randomization, notable selection bias may have occurred during group allocation, in which older patients and those with more severe OSA were more likely to be included in the CPAP treatment group. Fourth, patients with CPAP use of ≥4 hours per night on ≥70% of nights were considered adherent and were included in the study; however, adherence was treated only as a binary criterion rather than as a graded level. Therefore, the relationship between OSA and short-term CPAP could not be reported after adjusting for the level of adherence. Future studies would be better placed to record the level of CPAP adherence, the severity of OSA within groups, and related variables. All of these factors may have resulted in a reduction in the relative effect of CPAP therapy.
CONCLUSION
CPAP therapy for one month does not affect CRP and fibrinogen levels among moderate-to-severe OSA patients. However, one month of adherence to CPAP therapy may have a favorable impact on CRP and fibrinogen levels in moderate-to-severe OSA patients by preventing temporal increases in such markers. Furthermore, our study confirms that OSA is associated with elevated levels of such inflammatory biomarkers. | 2022-02-24T16:23:12.351Z | 2020-11-12T00:00:00.000 | {
"year": 2022,
"sha1": "965ccd416a9653acafe983b5979e564e04598a18",
"oa_license": "CCBY",
"oa_url": "http://sleepscience.org.br/export-pdf/3142/v15nspea03.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b21f451b9a0961e6c7f736ca3cf29b3202625468",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221865267 | pes2o/s2orc | v3-fos-license | Structure‐Dependent Strain Effects
Abstract Density functional theory calculations of atomic and molecular adsorption on (111) and (100) metal surfaces reveal marked surface and structure dependent effects of strain. Adsorption in three‐fold hollow sites is found to be destabilized by compressive strain whereas the reversed trend is commonly valid for adsorption in four‐fold sites. The effects, which are qualitatively explained using a simple two‐orbital model, provide insights on how to modify chemical properties by strain design.
Elisabeth M. Dietze* [a] and Henrik Grönbeck* [a]
Heterogeneous catalysts [1,2] are commonly realized as nanoparticles dispersed on oxide supports. The nanoparticles expose a variety of different surface sites, such as terrace, edge, and corner sites, with distinct adsorption and reaction properties. [3][4][5] The reactivity is, moreover, affected by the fact that the surface atoms of nanoparticles are strained. One reason for the strain is the finite size and the arrangement into shapes that minimize the surface energy. [6,7] Nanoparticles may additionally be strained by the lattice mismatch to the support. [8] This type of external strain offers a possibility to modify and, ultimately, tailor reaction properties. [9][10][11] The dependence of adsorption energies on transition metal and type of site has over the past decades been rationalized by different types of scaling relations. [12,13] The d-band model is one important example, which relates the adsorption energy to the position of the d-band center with respect to the Fermi energy. [12] The model, which has commonly been applied to (111) terraces, predicts that the adsorption energy increases (decreases) when the d-band center is moved closer to (further away from) the Fermi energy. An alternative scaling relation that instead employs structural properties and enables comparisons between different adsorption sites is the generalized coordination number. [13,14] Robust scaling relations are important to rationalize trends in adsorption properties and to obtain reactivity properties using Brønsted-Evans-Polanyi relations. [15] The effect of strain on adsorption properties has in the past been rationalized within the d-band model, [11,16] focusing on adsorption on (111) surfaces. To maintain the d-band filling, the d-band center ε_d shifts closer to the Fermi energy for tensile strain, whereas it shifts away from the Fermi energy for compressive strain. From the (111) data, it was concluded that tensile (compressive) strain increases (decreases) the adsorption energy. These trends have, for example, been confirmed experimentally by studies of CO adsorption on strained Pt surfaces. [17] Effects of strain have also been incorporated in an extension of the generalized coordination number, which has been applied to adsorption on strained Pt and Au sites. [18] Scaling relations have mainly been developed in connection to atomic and molecular adsorption on close-packed surfaces, such as the (111) surface for fcc metals. However, nanoparticles expose a variety of surfaces including (100) facets, and the kinetic coupling between the different facets could be important for catalytic properties. [19] Here we study the effect of strain on different surfaces and obtain previously overlooked phenomena. We find that the response of the adsorption energy to strain depends on adsorbate, metal, and type of site.
Density functional theory calculations are used to investigate the effect of symmetric strain in (111) and (100) surfaces of Pd, Pt and Rh for atomic adsorbates (H, B, C, N, O and F) and molecular fragments (OH, CH and CO) at the three-fold hollow fcc site on the (111) surface and the four-fold hollow site on the (100) surface. The calculations are done with the PBE [20] functional as implemented in VASP 5.4.4. [21][22][23] The surfaces are described by four-layer slabs with either 2×2 or 3×3 surface cells. Adsorption is studied with coverages of 1/4 or 1/9, respectively. The metals and adsorbates are chosen based on their importance in a range of industrial processes, such as fuel synthesis and emission control. H, C, O, OH, CO and CH are, for example, surface species during methane oxidation and CO2 hydrogenation. [24][25][26] Figure 1 shows the relative adsorption energies with respect to the unstrained surfaces for the three-fold hollow fcc site on (111) and the four-fold hollow site on (100) of Rh, Pd and Pt for both compressive and tensile strain. The effect of strain for the (111) surfaces is in agreement with previous studies [11,27], and the trends are similar across all investigated atoms and molecular fragments: compressive (tensile) strain results in lower (higher) adsorption energy. For atomic adsorbates, the change in adsorption energy with strain is largest for Pt, followed by the other metals. The situation is less clear for the molecular adsorbates, where no general trend is observed between the metals.
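As an illustration of how such strained slabs can be set up in practice, the sketch below builds a 2×2 four-layer Pt(111) slab with ASE and applies a symmetric in-plane strain by rescaling the lateral cell vectors; the EMT calculator is only a cheap stand-in for the PBE/VASP setup used here, and the adsorption height is an arbitrary starting guess.

```python
import numpy as np
from ase.build import fcc111, add_adsorbate
from ase.calculators.emt import EMT

def strained_slab(strain_percent):
    """Four-layer 2x2 Pt(111) slab with symmetric in-plane strain."""
    slab = fcc111("Pt", size=(2, 2, 4), vacuum=10.0)
    cell = np.array(slab.get_cell())
    cell[:2] *= 1.0 + strain_percent / 100.0  # scale both in-plane vectors
    slab.set_cell(cell, scale_atoms=True)     # keep fractional coordinates
    return slab

for strain in (-2.0, 0.0, 2.0):  # compressive, unstrained, tensile
    slab = strained_slab(strain)
    add_adsorbate(slab, "O", height=1.2, position="fcc")  # three-fold fcc site
    slab.calc = EMT()  # toy calculator standing in for PBE/VASP
    print(f"strain {strain:+.1f}%: E = {slab.get_potential_energy():.3f} eV")
```

In a production workflow, the same strained cells would be handed to the plane-wave code, and a relaxation of the adsorbate and the top layers would precede the energy comparison.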
Oxygen adsorption on the three (100) surfaces shows trends similar to those for hydrogen adsorption. To avoid too large surface distortions, adsorption on Pt(100) was in this case calculated with a 1/9 coverage. Considering nitrogen adsorption, Rh(100) shows a similarly weak dependence to that for hydrogen and oxygen, with a slight minimum. The strain dependence for nitrogen on Pd(100) is similar to that on Rh(100). However, the adsorption site is in this case unstable with respect to distortions when the compressive strain is larger than 1%, even for the lower coverage of 1/9. The effect of strain is pronounced for N adsorption on Pt(100), where the energy changes just as much as for the Pt(111) surface, however with reversed sign. In the case of fluorine, the (100) surfaces show the same sign of the strain dependence as the (111) surfaces, however with a weaker change in adsorption energy. Carbon shows the largest changes in adsorption energy as a function of strain. Adsorption on Rh(100) has the same dependence as on the Rh(111) surface. The dependence on Pd(100) and Pt(100) is weak, with slight minima for tensile (Pd) and compressive (Pt) strain. The results for boron are similar to those for carbon and are given in the SI.
We study two molecular fragments that are related to carbon, namely CO and CH. Both species show a minimum in the strain dependence for Rh(100). The situation is different for Pd(100) and Pt(100) where the adsorbates are stabilized (destabilized) for compressive (tensile) strain. OH adsorption shows weaker but similar trends for the (100) surfaces as for the (111) surfaces.
Our DFT calculations reveal that the effect of strain for the different adsorbates on the (100) surfaces has a larger variation than for the (111) surfaces. In particular, the response to strain on the (100) surface can either strengthen or weaken the adsorbate bond. The similar trend for all (111) surfaces is connected to the smaller site area with respect to the size of the adsorbate. For cases on the (100) surface where the adsorbates are large (OH and F), the same trend as for the (111) surface is observed. It should be noted that the hollow site on (100) is not necessarily the most stable site for the investigated adsorbates. Among the investigated atomic adsorbates, B, C, O, and N prefer the hollow positions on Rh(100) and Pd(100). On Pt(100), the hollow position is preferred for B and C. The molecular species CH and OH are preferably adsorbed in the hollow site on all three investigated (100) surfaces. The stable sites within the used computational method for all considered adsorbates are reported in the SI, Tables S1 and S2.
The obtained strain dependences are affected by the considered coverage. Figure S3 compares the dependence for CH coverages of 0.25 and 1 on Pt(100) and shows that the trend could be reversed in the high-coverage limit. As a high coverage reduces the possibility of local distortions, this underlines the importance of structural relaxations as one part of the mechanism for the observed effects.
As already mentioned, trends in the adsorption of atoms and molecules on transition metal surfaces are commonly described with the d-band model. [12] It is based on the assumptions that the hybridization of the metal s-band with the adsorbate is similar for all transition metals and that differences can be described using the hybridization with the d-band only. The hybridization depends mainly on the relative position of the d-band with respect to the Fermi energy, the d-band center ε_d. The d-band center captures both the relative position of the adsorbate and metal states and the strength of the coupling matrix element. [28] Applying strain to metal surfaces affects the overlap of the metal orbitals, leading to a change in the d-band width and thereby a shift of the d-band. An upward shift of the d-band center leads in this model to a strengthening of the metal-adsorbate bond [29] (Figure 2a). In this analysis, we used all components of the d-band. It has been suggested [30] that only d-components taking part in the metal-adsorbate bond should be used in the d-band center analysis. Figure S6 reports the strain dependence of each component of the d-band. All components show similar trends, which means that the reversed trend does not depend on the component used.
To elucidate the observed trends and put them in relation to changes in the coupling matrix element, we consider a simple analytical two-state model. [31] Figure 2b shows the energy as a function of the distance between two s-functions, as for the hydrogen-ion molecule. The energy is determined by the two parameters p and q, which describe the exponential decay of the orbitals (φ = e^(−qr)). Fixing the q-parameter to 1, representing one specific adsorbate state, the p-parameter is varied, representing ε_d. For a fixed distance, enlarging p corresponds to reducing the extent of the radial function, which reduces the overlap with the adsorbate state. Conversely, reducing p at a fixed distance increases the overlap between the two functions. When the distance between the orbitals is changed, different p-parameters result in different energy curves, as shown in Figure 2b. Figure 2c shows the effect on the binding energy of varying the overlap between the two orbitals for three distances. The p-parameter is varied in this case, which corresponds to different eigenvalues and consequently different d-band centers. The energy is given with respect to the eigenvalue with p = 1 (−13.6 eV). Depending on the distance between the orbitals, the relative energies change with different slopes. A distance of 1.32 Å corresponds to the optimal distance between the orbitals, whereas d = 1.19 Å lies on the repulsive part of the potential energy curve and d = 1.72 Å on the attractive part. d = 1.32 Å shows a dependence with both negative and positive slopes and a minimum. A similar functional dependence is observed for d = 1.19 Å, although the dependence is weaker. For d = 1.72 Å, the slope is negative over the entire range. This simple two-orbital picture demonstrates that the functional dependence on the eigenvalue of one of the orbitals (the d-band center) is sensitive to the distance between the orbitals. Different functional dependences arise for different distances, which elucidates the DFT results for the (100) and (111) surfaces.
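To make the two-orbital picture concrete, the toy sketch below (an illustrative stand-in, not the authors' exact model) couples two exponentially decaying s-type orbitals through an overlap-proportional matrix element and tracks the bonding eigenvalue of the generalized 2x2 eigenproblem as p is varied at fixed distance; the overlap expression and the coupling strength v0 are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.linalg import eigh

def bonding_energy(p, q=1.0, d=1.32, v0=-8.0):
    """Bonding eigenvalue of a toy two-orbital model.

    Each orbital decays as e^(-p*r) with hydrogen-like on-site energy
    -13.6*p**2 eV; overlap and coupling shrink as p grows, mimicking a
    d-state whose extent (and hence its coupling) changes with epsilon_d.
    """
    eps_d, eps_a = -13.6 * p**2, -13.6 * q**2
    s12 = np.exp(-(p + q) * d / 2.0)   # assumed overlap of two exponentials
    H = np.array([[eps_a, v0 * s12], [v0 * s12, eps_d]])
    S = np.array([[1.0, s12], [s12, 1.0]])
    return eigh(H, S)[0][0]            # generalized problem H c = E S c

for d in (1.19, 1.32, 1.72):           # the three distances from Figure 2c
    curve = [bonding_energy(p, d=d) - bonding_energy(1.0, d=d)
             for p in (0.8, 1.0, 1.2)]
    print(d, [f"{e:+.2f}" for e in curve])
```

The point of the sketch is qualitative: at different fixed distances, the bonding eigenvalue responds to the same change in p with different slopes, in the spirit of Figure 2c.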
Clearly, the applied surface strain changes the overlap between the metal atoms, which varies non-linearly with strain. [32] For adsorption on metal surfaces, many atoms contribute to the adsorption energy. The adsorbate relaxes to a position that minimizes the total energy; however, the distances in each two-body interaction are not individually optimized. This has, in particular, consequences for sites where the adsorbate can interact with subsurface atoms. For an adsorbate in the hollow position of the (100) surface, two opposing effects contribute to the effect of strain on the bonding. Firstly, with increased tensile strain, the distance between the atoms in the surface increases, which leads to a decreased bond strength. Secondly, counteracting this effect is the simultaneous reduction of the distance between the adsorbate and the subsurface metal atom, an interaction that is strengthened upon tensile strain. Thus, depending on the size of the adsorbate, the interaction with the subsurface layer may influence the bonding, leading to different functional dependences on strain. Note that the effect of strain for the (100) surfaces follows that for the (111) surfaces when the adsorbates are electronically large (F and OH). These are cases where the effect of the subsurface atom in the four-fold hollow site on (100) is negligible.
The presented results provide new insights into the understanding of strain effects and site engineering of low-index surfaces, demonstrating that strain has a clear but complex site dependence. We find that different sites may have opposite functional dependences on strain. Given that the active phase in heterogeneous catalysts generally consists of metal nanoparticles with a range of different kinetically coupled sites, these effects should be considered. The observed trends offer new possibilities for catalytic site engineering using strain. Moreover, the results have implications for the development of general scaling relations that take strain into account, as the strain response could be structure dependent. | 2020-09-24T13:06:24.079Z | 2020-09-23T00:00:00.000 | {
"year": 2020,
"sha1": "f3be741b87f92a6b52c170df975b302050139779",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1002/cphc.202000694",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "5e96da718468d868d1f564b68a4f1a8ad40905ba",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
218569028 | pes2o/s2orc | v3-fos-license | Adaptive Decentralized Output Feedback Tracking Control for Large-Scale Interconnected Systems with Time-Varying Output Constraints
This paper investigates a novel adaptive output feedback decentralized control scheme for nonstrict feedback large-scale interconnected systems with time-varying constraints. A decentralized linear state observer is designed to estimate the unmeasurable states of the subsystems. Time-varying barrier Lyapunov functions are designed to ensure that the outputs do not violate the constraints. A variable separation approach is applied to deal with the nonstrict feedback problem. Moreover, dynamic surface control and minimal parameter learning technologies are adopted to reduce the computation burden, and only two parameters per subsystem need to be updated online. The proof of stability is obtained by the Lyapunov method. Finally, simulation results are given to show the effectiveness of the proposed control scheme.
Introduction
In both practical engineering applications and theoretical research, high-precision trajectory tracking control of nonlinear systems is a very valuable research topic. For example, a welding robot needs to carry out welding along a given trajectory, and the accuracy of the trajectory tracking control is an important indicator of the performance of the welding robot. Likewise, a drilling guide system needs to control the well trajectory continuously to ensure drilling quality. However, it is very difficult to design control schemes for nonlinear systems due to various uncertainties [1][2][3]. Because the fuzzy logic system has been proved to have the universal approximation property, it has become one of the effective measures to handle uncertainty [4]. In [5], an adaptive fuzzy quantized tracking control scheme was proposed for the stochastic nonlinear uncertain strict feedback system. In [6], a direct model reference adaptive fuzzy control was discussed for a class of networked SISO nonlinear uncertain systems. In [7], for the nonlinear system with parametric uncertainties and unknown modelling errors, a robust adaptive control scheme was presented. In [8], an adaptive dynamic surface asymptotic tracking control was proposed for the uncertain nonlinear system. In [9], for the nontriangular stochastic uncertain nonlinear system with unmeasured states, an adaptive robust control was studied. In [10], for the pure-feedback nonlinear system with time-varying delay and unknown dead zone, an adaptive fuzzy tracking control was investigated. However, none of the above literature has solved the output constraint problem of the uncertain nonlinear system.
Output constraints are a very important engineering problem. If the system outputs exceed the given range, control performance will degrade, equipment may be damaged, and the safety of operators and the environment may even be endangered. Therefore, in recent years, the output constraint problem has become a hot research issue, and a large number of valuable research results have emerged. When the relevant states approach the boundary, the barrier Lyapunov function tends to infinity, so as long as the function value is bounded, the corresponding states can be kept within the given constraints.
Because of the above property, the barrier Lyapunov function has become an effective measure to handle constraints. In [11], by using a barrier Lyapunov function, an indirect adaptive fuzzy control was designed for the output-constrained nonlinear system. In [12], an adaptive neural network control was proposed for the nonlinear system with full-state constraints. In [13], for a class of nonlinear pure-feedback full-state constraint systems, an adaptive control based on barrier Lyapunov functions was studied. In [14], an adaptive fuzzy backstepping output constraint tracking control was proposed for the uncertain nonlinear system in strict feedback form. In [15], by adopting barrier Lyapunov functions, an adaptive neural network control was presented for nonlinear state-constrained systems with input delay. In [16,17], adaptive full-state constraint control methods were discussed for stochastic nonlinear systems based on barrier Lyapunov functions. All of the above studies considered static constraints, whereas time-varying constraints are more suitable for practical engineering. In [18], an adaptive fuzzy control was proposed for the nontriangular system with time-varying error constraint. In [19], for the uncertain nonlinear system with time-varying prescribed performance, a fuzzy adaptive control based on the observer was presented. In [20], a fuzzy state observer-based adaptive control for the strict feedback system with time-varying constraint was studied. However, for the time-varying output constraints of large-scale interconnected systems, there are no relevant research results.
In practical engineering, many electromechanical systems consist of numerous subsystems, such as multirobot cooperative control systems, offshore platforms, drilling systems, and aerospace systems. Because a large-scale interconnected system exhibits coupling effects between subsystems, the design of its controller is very difficult. Therefore, research on control schemes for large-scale systems has obvious practical value and theoretical significance. In [21,22], decentralized H∞ performance control methods were studied for switched and nonswitched large-scale systems, respectively. In [23,24], based on observer and backstepping technology, adaptive decentralized control methods were presented for uncertain large-scale systems with unmeasured states. In [25], for the switched uncertain large-scale system with dead zones, an adaptive output decentralized tracking control scheme was proposed. However, the output constraints of large-scale systems are not considered in the above literature. Although the control problem of large-scale systems with output constraints has been studied in [26], the system there is in strict feedback form, and there are no corresponding research results on control schemes for nonstrict feedback large-scale systems with time-varying output constraints.
Based on the above observations, this paper studies an adaptive output feedback decentralized control for the nonstrict feedback large-scale system with unmeasurable states and time-varying output constraints. Compared with existing works, this paper makes two contributions: (1) For the first time, the output feedback control problem of nonstrict feedback large-scale systems with time-varying constraints is studied. The proposed control scheme is quite different from existing results. It can not only solve the output feedback tracking control problem for a class of uncertain large-scale systems in nonstrict feedback form but also ensure that all the outputs do not violate the time-varying constraints. Moreover, the time-varying constraints relax the initial conditions of the system. (2) The control method proposed in this paper does not need n-order differentiable and bounded conditions on the input signals or the monotonically increasing condition on the unknown functions, which are common in the existing literature [27,28]. Because dynamic surface control is used, the "complexity explosion" problem is avoided. Moreover, each subsystem has only two adaptive parameters, and the number of parameters does not increase with the system's order.
Problem Description and Preliminaries
The nonstrict feedback large-scale system considered in this paper consists of M interconnected subsystems. The kth (k = 1, 2, ..., M) subsystem takes the form
ẋ_{k,l} = x_{k,l+1} + f_{k,l}(x_k) + h_{k,l}(y), l = 1, ..., n_k − 1,
ẋ_{k,n_k} = u_k + f_{k,n_k}(x_k) + h_{k,n_k}(y),
y_k = x_{k,1}, (1)
where x_k = [x_{k,1}, x_{k,2}, ..., x_{k,n_k}]^T ∈ R^{n_k} is the state vector, and only x_{k,1} can be measured. y = [y_1, ..., y_M] ∈ R^M is the output vector of the large-scale system. f_{k,l}(x_k) and h_{k,l}(y) (1 ≤ k ≤ M, 1 ≤ l ≤ n_k) are unknown smooth functions, and h_{k,l}(y) represents the coupling effect between the M subsystems of the large-scale system. u_k ∈ R is the actual control input of the kth subsystem. In practical engineering, the mathematical models of the two-stage chemical reactor, air traffic control, spring-connected two-stage inverted pendulum, and other large-scale systems can be expressed as (1) [22,29–31].
Assumption 3 (see [32,33]). The reference signal y_{k,d}(t) and its first derivative are bounded. Control objective: design an adaptive output feedback decentralized control scheme such that every subsystem output y_k(t) tracks its desired trajectory y_{k,d}(t), the tracking errors are kept as small as possible, and all the signals of the closed-loop system are bounded.
Fuzzy Logic System and Observer Design
A fuzzy logic system can be written as y(x) = θ^T ξ(x), where ξ(x) is the fuzzy basis function vector and θ ∈ R^N is the adjustable weight parameter vector.
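For concreteness, the sketch below builds a scalar fuzzy logic system of this form with Gaussian membership functions and normalized basis functions; the rule centers and widths are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def fuzzy_basis(x, centers, width=1.0):
    """Normalized Gaussian fuzzy basis vector xi(x)."""
    mu = np.exp(-((x - centers) ** 2) / (2.0 * width**2))  # membership grades
    return mu / mu.sum()

centers = np.linspace(-2.0, 2.0, 5)   # five fuzzy rules over the input range
theta = np.zeros_like(centers)        # adjustable weights, updated online

def fls_output(x):
    """Fuzzy logic system y(x) = theta^T xi(x)."""
    return theta @ fuzzy_basis(x, centers)

print(fls_output(0.3))  # 0.0 until the adaptive law updates theta
```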
Lemma 1 (see [34,35]). If f(x) is a continuous function defined on the compact set Ω, then for any given small constant ε > 0, there exists a fuzzy logic system θ^T ξ(x) such that sup_{x∈Ω} |f(x) − θ^T ξ(x)| ≤ ε. In order to estimate the unmeasured states, we design a linear state observer for the kth subsystem as in [23], where ℓ_{k,1}, ..., ℓ_{k,n_k} are the observer design parameters. Defining the observer error vector, the error dynamics follow from (1) and (3). We can choose appropriate parameters ℓ_{k,1}, ..., ℓ_{k,n_k} to ensure that A_k is a Hurwitz matrix; that is, for any given positive definite matrix Q_k, there exists a positive definite matrix P_k satisfying the associated Lyapunov equation. Choosing the Lyapunov function candidate V_0 as in [23], we obtain the time derivative of V_0. According to Young's inequalities, we derive a bound in which f_{k,j}(x_k) is an unknown nonlinear function that can be approximated by a fuzzy logic system. Combining (8) and (9) and using Assumption 1, we obtain an estimate in which λ > 0 is a design parameter with λ = max_{1≤k≤M} ‖P_k‖². Substituting equations (8)–(11) into (7) yields an inequality in which Ω_k and U_k are the compact sets of θ_k and x_k, respectively. Then, the minimal approximation error can be written as ε_k(x_k), where |ε_k(x_k)| ≤ ε*_k and ε*_k is an unknown positive constant. From (13) and (14), we obtain a bound involving λ_min(Q_k), the minimal eigenvalue of the matrix Q_k.
Adaptive Control Law Design
Define the tracking error z_{k,1} = y_k − y_{k,d}, the virtual errors z_{k,l}, the virtual control laws α_{k,l−1}, and the first-order filters, where l = 2, ..., n_k. Here, υ_{k,l−1} is the time constant of the filter; that is, by letting α_{k,l−1} pass through a filter with time constant υ_{k,l−1}, we obtain ω_{k,l−1}.
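The dynamic-surface filtering step can be illustrated with a few lines of Python: the standard first-order filter υ·ω̇ + ω = α is integrated with forward Euler. The time constant and input signal below are arbitrary illustrative choices.

```python
import numpy as np

def first_order_filter(alpha, upsilon=0.5, dt=0.01):
    """Pass a virtual control signal alpha(t) through upsilon*dw/dt + w = alpha."""
    omega = np.empty_like(alpha)
    omega[0] = alpha[0]  # common initialization: w(0) = alpha(0)
    for i in range(1, len(alpha)):
        omega[i] = omega[i - 1] + dt * (alpha[i - 1] - omega[i - 1]) / upsilon
    return omega

t = np.arange(0.0, 5.0, 0.01)
alpha = np.sin(0.5 * t)            # stand-in virtual control law
omega = first_order_filter(alpha)  # filtered signal used by the next design step
print(f"max |omega - alpha| = {np.abs(omega - alpha).max():.3f}")
```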
Stability Analysis
Define the Lyapunov function of the closed-loop system as V = V_{n_k}, compute its time derivative, and let the design parameters satisfy the stated conditions. Define C_k = min{2π_{k,1}/λ_max(P_k), 2(c_{k,i} − 1), 2σ_{k,1}, 2σ_{k,2}}, where i = 2, ..., n_k.
Define C = min_{k=1,...,M} C_k, so that the derivative bound can be rearranged into an exponential convergence estimate. Then, all the signals of the closed-loop system, such as x_{k,i}(t), x̂_{k,i}(t), z_{k,i}(t), a_{k,i}(t), and u_k(t), are semiglobally uniformly ultimately bounded (SGUUB). Moreover, the observer error satisfies a corresponding ultimate bound.
Comparisons with Some Previous Results
Comparisons with previous results will be given in this section.
However, the control method in this paper is designed for nonstrict feedback large-scale systems (73), which are more complex than (71) and (72). It is well known that an interconnected large-scale system comprises subsystems with obvious interconnections, which increase the difficulty of controller design and stability proof for the large-scale system. We are unable to use the control methods in [11–20,34] to control the large-scale system due to the coupling effect between subsystems. Moreover, when the controller of the nonstrict feedback large-scale system is designed using the control methods in [22,25,26], the virtual control signal and adaptive law of each subsystem are functions of the full-state variables. Consequently, the algebraic loop problem arises, which makes the controller design of a nonstrict feedback large-scale system very difficult. Therefore, the controller design method for the nonstrict feedback large-scale system (73) considered in this paper is quite different from the controller design methods in [11-20, 22, 25, 26, 34].
(2) [21][22][23][24][25][29][30][31] proposed adaptive control methods for the large-scale system, but output constraints were not considered. Though [32,33] presented control schemes for constrained systems, the systems considered in [32,33] are not large-scale systems, and all states must be measurable. This strict limitation makes these control methods difficult to realize in practical applications. Therefore, the control methods in [21-25, 32, 33] cannot be used to control a large-scale system with unmeasurable states and output constraints, which is the case discussed in this paper. To the best of our knowledge, by far, no results have been reported on the adaptive control of the nonstrict feedback large-scale nonlinear system with output constraints and unmeasured states. (3) The proposed adaptive control scheme does not need n-order differentiable and bounded conditions on the input signals or a monotonically increasing condition on the unknown functions; these strict assumptions are common in the existing references [27,28]. Moreover, this control scheme has only 2M adaptive parameters, and the number of parameters does not increase with the system's order n_k. Therefore, this control scheme not only conforms to engineering practice but also has a simple algorithm and requires a small number of calculations.
Simulations
Consider the following nonstrict feedback large-scale system [23]. The given tracking signals are y_{1,d} = sin(0.5t) and y_{2,d} = 0.5 sin(t), and the output constraints are given accordingly. Choose the parameters as ℓ_{1,1} = ℓ_{1,2} = ℓ_{2,1} = ℓ_{2,2} = 10, c_{1,1} = 40, c_{1,2} = 20, c_{2,1} = 20, c_{2,2} = 20, λ_1 = λ_2 = 1, τ_1 = τ_2 = 1, γ_{1,1} = γ_{1,2} = 1, γ_{2,1} = γ_{2,2} = 2, σ_{1,1} = σ_{1,2} = 0.05, σ_{2,1} = σ_{2,2} = 0.1, and υ_{1,1} = υ_{1,2} = υ_{2,1} = υ_{2,2} = 0.5. To show the superiority and validity of the proposed method, we compare it with a scheme designed for large-scale systems without considering output constraints [23]. The design parameters and initial conditions of the two methods are the same. In the simulation, system 1 is controlled by the method in this paper, whereas system 2 is controlled by the method without output constraints [23]. Figures 1-5 show the simulation results. From Figures 1-4, we can see that the outputs (y_1, y_2) and the tracking errors (z_{1,1}, z_{2,1}) of system 1 are both kept within the constraints, whereas the output and the tracking error of system 2 violate the constraints. Figure 5 shows the control input signals u_k of the two systems. From the simulation results, it can be seen that the proposed adaptive control approach not only guarantees the boundedness of all signals and non-violation of the output constraints but also achieves better control performance than the control method that ignores output constraints [23].
Conclusion
This paper proposes an adaptive fuzzy dynamic surface decentralized output feedback control scheme for a class of large-scale interconnected uncertain nonstrict feedback systems with time-varying output constraints. By using a decentralized linear state observer, the unmeasurable states can be estimated. Based on fuzzy logic systems, the uncertain nonlinear functions and the interconnection effects between the subsystems can be compensated. The problems of nonstrict feedback and time-varying constraints are solved by the variable separation technique and a time-varying barrier Lyapunov function, respectively. The proposed scheme not only achieves good tracking performance but also keeps the output trajectories within the given ranges. Finally, the stability of the large-scale interconnected system is proved using the Lyapunov direct method.
Data Availability
All the data included in this study are available upon request from the corresponding author.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper. | 2020-04-23T09:14:34.349Z | 2020-04-21T00:00:00.000 | {
"year": 2020,
"sha1": "55163fa63549be58dad334cac9fbdf0edb8c696e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2020/6760521",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "55d6f2f984ed9ca8667b72ce97c26d27d68fd743",
"s2fieldsofstudy": [
"Mathematics",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
248377119 | pes2o/s2orc | v3-fos-license | Impact of change granularity in refactoring detection
Detecting refactorings in commit history is essential to improve the comprehension of code changes in code reviews and to provide valuable information for empirical studies on software evolution. Several techniques have been proposed to detect refactorings accurately at the granularity level of a single commit. However, refactorings may be performed over multiple commits because of code complexity or other real development problems, which is why attempting to detect refactorings at single-commit granularity is insufficient. We observe that some refactorings can be detected only at coarser granularity, that is, changes spread across multiple commits. Herein, this type of refactoring is referred to as coarse-grained refactoring (CGR). We compared the refactorings detected on different granularities of commits from 19 open-source repositories. The results show that CGRs are common, and their frequency increases as the granularity becomes coarser. In addition, we found that Move-related refactorings tended to be the most frequent CGRs. We also analyzed the causes of CGR and suggested that CGRs will be valuable in refactoring research.
INTRODUCTION
Mining refactorings in commit history is essential to help programmers comprehend code changes in code reviews [16], and it can provide valuable information for empirical studies on software evolution [7,17]. For example, Chávez et al. [8] and Fernandes et al. [11] detected and analyzed refactorings to investigate the performance of refactoring in improving internal quality attributes.
Refactoring detectors [10,15,18,21,23,24] detect refactorings by comparing two source code snapshots. Although traditional approaches aim to detect refactorings over releases [18,24], recent detectors such as RefDiff [19,21] and RefactoringMiner [22,23] use a commit as a change unit to detect refactorings, which means that two snapshots before and after a single commit are compared. These methods have achieved high accuracy in detecting refactoring in commits.
However, refactorings that are performed over multiple commits may not be detected. The sample history shown in Figure 1 consists of two commits extracted from the mbassador repository [1], where commit 2ae0e5f is the parent of commit 9ce3ceb. The intention of the developer, as expressed by these two commits, is to decompose the source file Mbassador.java, which contains multiple top-level classes, into multiple source files to ensure that each file contains only one top-level class. In the first commit, the developer copied the implementation of class FilteredAsynchronousSubscription in Mbassador.java to a new file FilteredAsynchronousSubscription.java, and then she/he removed that class from the source file Mbassador.java in the second commit. Overall, she/he moved a class from Mbassador.java to a new source file. A detection based on either of the single commits shown in Figure 1 cannot reveal this kind of refactoring because each commit contains only part of the code changes for detecting Move Class refactoring. However, this refactoring can be detected if we consider a coarse-grained commit generated by merging the changes from the two commits.
The existence of refactorings detected only at the granularity of coarse-grained commits suggests that detectors based on single commits may have missed some refactorings. We conducted an empirical study on 19 open-source Git-based Java repositories to investigate the impact of change granularity in refactoring detection. To change the granularity of commits, we squashed multiple fine-grained commits into one to form a coarse-grained commit. The number of fine-grained commits squashed into one coarse-grained commit is referred to as the coarse granularity. Refactoring detection is conducted on both fine-grained and coarse-grained commits using the state-of-the-art tool RefactoringMiner [22,23]. If a refactoring type is detected in the coarse-grained commit but not in the fine-grained commits that formed the coarse one, this refactoring is defined as a coarse-grained refactoring (CGR).
Our results indicate that CGRs are common, and their frequency increases as the granularity becomes coarser. The type of refactoring that is most likely to be coarse-grained varies in each repository; however, in general, the Move-related refactoring type tends to be CGR.
In summary, our study makes the following contributions:
• We propose the definition of CGR.
• We evaluate features of CGRs to understand their effect on refactoring detection.
• We analyze the reasons for the occurrence of CGRs.
The remainder of this paper is organized as follows. The next section explains our study design. Then, we present a preliminary evaluation of 19 open-source projects and the answers to the three research questions in Section 3. Finally, in Section 4, we conclude and state our plans for future work.
STUDY DESIGN
The overview of our study procedure is shown in Figure 2. Our procedure can be divided into two phases: repository transformation, and detection and comparison. In the repository transformation phase, squash units that contain multiple fine-grained commits and can be squashed into coarse-grained ones are extracted from the commit history. In the detection and comparison phase, refactoring detection is conducted on both fine-grained and coarse-grained commits, and their results are compared.

Figure 2: Overview of the study procedure
Repository Transformation
In this phase, firstly, the Git-based commit history, as a set of fine-grained commits (a subset of the universal set of commits), is extracted from the given repository. By searching the commit history, we can extract straight commit sequences. Each sequence consists of fine-grained commits and excludes merge commits, which have more than one parent, and branch sources, which have multiple children. Merge commits are excluded to avoid duplicate detection of refactorings in the later phase, and branch sources are excluded for simplicity when extracting squash units. A squash unit is a set of multiple adjacent fine-grained commits that are squashed into a single coarse-grained commit. Here, if a commit is the parent or child of another commit, these two commits are considered adjacent. Adjacent commits are shown as circles next to each other in Figure 2. Different strategies, labelled by a granularity level g and an offset o, are used to extract squash units from straight commit sequences. The granularity level g (≥ 1) specifies the size of the squash units, and straight commit sequences are divided into multiple squash units of this size. Because each unit is squashed into one coarse-grained commit, this level expresses the coarse granularity of the coarse-grained commits to be generated. The granularity level g = 1 exactly reproduces the original fine-grained commits. The offset o (0 ≤ o ≤ g − 1) is the number of commits to be skipped from the beginning of the given straight commit sequence when extracting the squash units, which adjusts which commits will be merged. For example, the commit c1 in Figure 2 is squashed together with c0 when the strategy with g = 2 and o = 0 is used, whereas it is squashed together with c2 when the strategy with g = 2 and o = 1 is used. For each squash unit U, the operation sq(U) squashes all the commits in U into a single coarse-grained commit.
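A rough sketch of the squash-unit extraction could look as follows; the list representation of a straight commit sequence is hypothetical, and we assume that trailing incomplete units are discarded, which the text does not state explicitly.

```python
def extract_squash_units(sequence, g, o):
    """Split a straight commit sequence (oldest to newest) into squash
    units of size g, skipping the first o commits (strategy with
    granularity g and offset o)."""
    assert g >= 1 and 0 <= o <= g - 1
    commits = sequence[o:]
    # Keep only complete units of exactly g adjacent commits.
    return [commits[i:i + g] for i in range(0, len(commits) - g + 1, g)]

# Mirrors the Figure 2 example: c1 joins c0 at offset 0, but joins c2 at offset 1.
seq = ["c0", "c1", "c2", "c3", "c4"]
print(extract_squash_units(seq, g=2, o=0))  # [['c0', 'c1'], ['c2', 'c3']]
print(extract_squash_units(seq, g=2, o=1))  # [['c1', 'c2'], ['c3', 'c4']]
```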
Detection and Comparison
Refactoring detection is conducted on each commit in all extracted squash units and on the coarse-grained commits, and the results are compared for each pair. From a commit c, a set of refactorings ref(c) is detected, drawn from the universal set of refactorings. The detection result for one commit contains: 1) the refactoring type, 2) a description of how this refactoring is conducted, and 3) the location where this refactoring is applied in the source code. Because the location and description of a refactoring may change owing to squashing, we conservatively compared only the types of the detected refactorings. Refactorings detected with invalid locations were excluded. For a squash unit U and its coarse-grained commit sq(U), we judged a refactoring r ∈ ref(sq(U)) as coarse-grained if and only if no refactoring of its type r.type was found among the refactorings detected from each fine-grained commit in U; this defines the set of CGRs of U (Equation (1)). A squash unit is regarded as an effective squash when at least one CGR is detected from it. When the coarse granularity is set to g, the set of squash units for the repository is the union, over all offsets o, of the squash units extracted from the commit history according to the strategy with granularity g and offset o.
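Assuming each detection result is reduced to the set of detected refactoring types (the type-only comparison described above), the CGR judgment for one squash unit can be sketched as follows; the data representation is hypothetical.

```python
def coarse_grained_refactorings(fine_refs, coarse_refs):
    """fine_refs: one set of refactoring types per fine-grained commit in
    the squash unit; coarse_refs: the set of refactoring types detected in
    the squashed (coarse-grained) commit. A type is coarse-grained iff it
    appears in the coarse commit but in none of the fine-grained commits."""
    return coarse_refs - set().union(*fine_refs)

def is_effective_squash(fine_refs, coarse_refs):
    """A squash unit is an effective squash if it yields at least one CGR."""
    return bool(coarse_grained_refactorings(fine_refs, coarse_refs))

# Example mirroring Figure 1: neither single commit reveals the refactoring,
# but the squashed commit is detected as a Move Class.
fine = [set(), set()]
coarse = {"Move Class"}
print(coarse_grained_refactorings(fine, coarse))  # {'Move Class'}
```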
PRELIMINARY EVALUATION
3.1 Research Questions
Our objective in this study is to investigate features of CGRs. We answer the following research questions (RQs) to better achieve this goal.
• RQ 1 : How frequently do CGRs appear because of granularity change?
• RQ 2 : Which types of refactorings tend to be coarse-grained?
• RQ 3 : What are the reasons for the occurrence of CGRs?
A quantitative analysis is provided for RQ 1 and RQ 2 . We manually examine the experiment results to present a qualitative explanation for RQ 3 .
Experimental Setup
We used the Git repository rewriting tool git-stein [14] to change the commit granularity, and the latest version of RefactoringMiner (ver. 2.2) to detect refactorings, in 19 open-source Git-based Java repositories.
3.2.1 Data Collection. The repositories that we selected are from a dataset collected by Silva et al. [20], containing 185 GitHub-hosted Java projects. These projects contain refactorings, some of which have been identified by RefactoringMiner, studied, and confirmed by researchers. To limit computation time, we chose 19 repositories from the dataset with no more than 7,000 commits each. Specifically, the number of commits ranges from 342 (mbassador) to 6,955 (redisson [6]).
RQ 1 : How frequently do CGRs appear because of granularity change?
3.3.1 Study Design. The techniques introduced in Section 2 are applied to the selected repositories to extract squash units, change the granularity of commits, and compare the refactoring detection results to find CGRs. The frequency of CGRs in the commit history is expressed as the ratio of the number of effective squashes (squash units that generate at least one CGR) to the total number of squash units (Equation (4)). We calculate this frequency for our dataset with the coarse granularity set to 2, 3, and 4, respectively.
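A minimal sketch of this frequency metric, reusing the same hypothetical set-of-types representation as above:

```python
def cgr_frequency(units):
    """Frequency of CGRs: the share of squash units that are effective
    squashes. units: a list of (fine_refs, coarse_refs) pairs, one per
    squash unit, where fine_refs is a list of sets of refactoring types
    and coarse_refs is a set of refactoring types."""
    def has_cgr(fine_refs, coarse_refs):
        return bool(coarse_refs - set().union(*fine_refs))
    return sum(has_cgr(f, c) for f, c in units) / len(units) if units else 0.0
```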
3.3.2 Results and Discussion. Figure 3 shows box plots of the CGR frequency at different levels of coarse granularity in the 19 repositories. The minimum values of all three box plots are greater than zero, indicating that CGRs were detected in all the repositories at all levels of coarse granularity. We can conclude that the CGR is a common phenomenon in refactoring detection. The highest frequency was observed in the repository goclipse [4], which was 0.071, 0.135, and 0.178 when the coarse granularity was set to 2, 3, and 4, respectively. The box plots show that the frequency increases in all repositories as the coarse granularity increases. The minimum increase in the frequency when the coarse granularity was changed from 2 to 3 was in the repository baasbox [3], which increased by 14.1%, whereas the maximum increase was 331.9% in javapoet [5]. The average increase for all repositories was 129.4%. When the coarse granularity increases from 3 to 4, a minimum increase of 24.4% appears in seyren [2], a maximum increase of 147.6% appears in mbassador, and the average increase is 65.6%. The average frequencies for all the repositories were 2.0%, 4.3%, and 6.9% when the coarse granularities were 2, 3, and 4, respectively. The observed tendency of the frequency to increase with the coarse granularity can be explained as follows. A CGR detected in commits with finer granularity may also exist in those with coarser granularity. In addition, a new CGR may be detected in coarser-grained commits because more code changes are transferred into these commits through the granularity change. However, we also observed that not all CGRs detected in commits of finer granularity could be detected in a coarser-grained one. Code changes in other commits may mask a previously detected CGR when those commits are squashed into the coarse-grained commit.
CGR is a common phenomenon in all repositories. The average frequencies of CGR for all repositories were 2.0%, 4.3%, and 6.9% when the coarse granularities were 2, 3, and 4, respectively. CGRs are more frequent when coarse granularity increases.
3.4 RQ 2 : Which types of refactorings tend to be coarse-grained?
3.4.1 Study Design. To investigate this RQ, we calculate the appearance ratio of a specific CGR type at all three granularity levels. The ratio expresses the average number of CGRs of that type per effective squash. For a certain refactoring type in a commit history, the ratio is the total number of CGRs of that type divided by the number of effective squashes (Equation (5)). We calculate the ratio of each type of CGR in our dataset.
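Assuming the CGRs of each effective squash are available as a multiset of type names (a representation we introduce only for illustration), the per-type ratio could be computed as:

```python
from collections import Counter

def cgr_type_ratio(effective_units):
    """Average number of CGRs of each type per effective squash.
    effective_units: one Counter of CGR types per effective squash."""
    totals = Counter()
    for cgrs in effective_units:
        totals.update(cgrs)
    n = len(effective_units)
    return {t: count / n for t, count in totals.items()}

example = [Counter({"Move Class": 2}), Counter({"Move Class": 1, "Move Method": 1})]
print(cgr_type_ratio(example))  # {'Move Class': 1.5, 'Move Method': 0.5}
```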
3.4.2 Results and Discussion. The CGR type with the highest ratio for each repository is listed in Table 1. Among the 19 repositories, we found that Change Class Access Modifier occurs at the highest ratio (2.00) in mbassador, and Move Attribute in HikariCP reaches 1.82.
We find that the CGR type with the highest ratio varies with repositories. In our dataset, we also find that Move-related refactoring types, e.g., Move Class and Move Attribute, appear most frequently for eight repositories. By calculating the average ratio over our dataset for all types of refactorings, we observed that the top three highest-ratio refactoring types were Move And Rename Class (0.46%), Move Method (0.34%), and Move And Inline Method (2.9%).
As a result, we can conclude that Move-related refactoring types are most likely to be coarse-grained. A possible explanation for this is that in Move-related refactoring, the Move of the refactored object is not performed directly but in two steps. First, the object is copied to the destination, potentially followed by other changes, e.g., renaming or inlining, or no change at all. Second, the original object is removed. These two steps may be included in separate commits. Another possible reason is that Move-related refactorings can be combined with other refactorings, such as Rename or Inline.
Considering the average ratio over the whole dataset, the top three types are Move And Rename Class, Move Method, Move And Inline Method. We conclude that Move-related refactoring types are most likely to be coarse-grained.
3.5 RQ 3 : What are the reasons for the occurrence of CGRs?
3.5.1 Study Design. The git diff command is used to extract code changes from the fine-grained and coarse-grained commits. After extraction, we manually compare and analyze the changes and refactorings detected.
3.5.2 Results and Discussion. The reasons for the occurrence of CGRs are categorized into two types according to their composition: Generation and Combination.
Generation. This type of CGR is generated from non-refactoring changes. The example shown in Figure 1 belongs to this type; the Move Class refactoring is generated by two non-refactoring changes: 1) copying the class implementation to a new file, and 2) removing the original class. Another example is in the repository javapoet. In the parent commit 6a3595c, the attribute body is defined, and the method call methodWriter.write() is removed. In the child commit 4ff9adf, the developer adds the method call body.write(). In the coarse-grained commit, the above code changes are detected as Rename Variable with Attribute; the variable methodWriter is renamed to the attribute body.
Combination. In contrast with Generation, this type is the combined result of multiple refactorings detected in finer-grained commits. Figure 4 shows an example of this type. For clarity, only part of the package hierarchy of the repository is shown in the figure.
In the parent commit ce2a9e9, the developer moves the class PropertyMailSender from the package services to the package core.util, which is detected as a Move Class refactoring. In the child commit 989bf50, she/he splits the package core.value into core.util.email and another package, and then moves the class PropertyMailSender to the package core.util.email; these changes are detected as Split Package and Move Class. In terms of the result, she/he applied Merge Package to merge part of the package core.value and the entire package services into a new package core.util.email.

Figure 4: Example of coarse-grained Merge Package.
Generation-type CGRs will influence judgments of whether a module has been refactored or not. We note that this type may also occur because of developers' limited awareness of refactoring; developers may not realize that the code changes they perform constitute refactoring operations. Supporting tools that infer developers' manual edits and recognize refactoring activities [12,13] may assist them in development. Because the Combination type may influence type-based refactoring studies, such as investigations of frequently performed refactoring types, researchers may need to reconsider their results by covering coarse-grained types.
We identified two categories of reasons. Generation refers to new refactorings generated from non-refactoring fine-grained changes. Combination is a high-level refactoring composed of detected fine-grained ones.
CONCLUSION AND FUTURE WORK
In this study, we investigated the impact of refactoring detection on different granularities of commits in 19 open-source Git-based Java repositories. We observed that CGRs occur commonly, and their frequency increases as the granularity becomes coarser. Move-related refactoring types tend to be coarse-grained. We analyzed the causes of CGRs and categorized them into two types according to their composition: Generation and Combination. The studied list of CGRs is attached as supplemental material [9]. We suggest that refactoring detectors should cover CGRs. For future work, we plan to extend the current experiment by comparing different refactoring detection tools on a larger dataset. | 2022-04-26T06:48:26.677Z | 2022-04-24T00:00:00.000 | {
"year": 2022,
"sha1": "65b1636b4ddc27ef975ea8f7a89dfd2255bb0a0d",
"oa_license": null,
"oa_url": "https://dl.acm.org/doi/pdf/10.1145/3524610.3528386",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "1e604c47e7c735b0e072d412dc29ad17b3b743e0",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
235212038 | pes2o/s2orc | v3-fos-license | Freeform nanostructuring of hexagonal boron nitride
Hexagonal boron nitride (hBN)-long-known as a thermally stable ceramic-is now available as atomically smooth, single-crystalline flakes, revolutionizing its use in optoelectronics. For nanophotonics, these flakes offer strong nonlinearities, hyperbolic dispersion, and single-photon emission, providing unique properties for optical and quantum-optical applications. For nanoelectronics, their pristine surfaces, chemical stability, and wide bandgap have made them the key substrate, encapsulant, and gate dielectric for two-dimensional electronic devices. However, while exploring these advantages, researchers have been restricted to flat flakes or those patterned with basic slits and holes, severely limiting advanced architectures. If freely varying flake profiles were possible, the hBN structure would present a powerful design parameter to further manipulate the flow of photons, electrons, and excitons in next-generation devices. Here, we demonstrate freeform nanostructuring of hBN by combining thermal scanning-probe lithography and reactive-ion etching to shape flakes with surprising fidelity. We leverage sub-nanometer height control and high spatial resolution to produce previously unattainable flake structures for a broad range of optoelectronic applications. For photonics, we fabricate microelements and show the straightforward transfer and integration of such elements by placing a spherical hBN microlens between two planar mirrors to obtain a stable, high-quality optical microcavity. We then decrease the patterning length scale to introduce Fourier surfaces for electrons, creating sophisticated, high-resolution landscapes in hBN, offering new possibilities for strain and band-structure engineering. These capabilities can advance the discovery and exploitation of emerging phenomena in hyperbolic metamaterials, polaritonics, twistronics, quantum materials, and 2D optoelectronic devices.
With freeform control of the flake profile, previously unattainable structures for control of photons, electrons, and excitons would become possible.
To address this challenge, we have utilized thermal scanning-probe lithography due to its precise surface-structuring capabilities 19,20,34 . We spin-coat a thermally sensitive polymer resist over an hBN flake ( Fig. 1a; Methods). A heated cantilever with a sharp tip is then raster-scanned across the film, removing polymer during its motion to form the desired freeform nanostructure in the resist. The resulting profile is subsequently transferred into the underlying hBN flake with reactive-ion etching.
The control offered by this approach allows the final structure to be designed using simple formulas. A grayscale bitmap controls the tip depth ( ) at each in-plane pixel ( , ) during the scan. Thus, by converting mathematical profiles to high-resolution bitmaps, desired patterns can be easily fabricated. For example, Fig. 1 demonstrates an hBN flake structured with a portion of the Mandelbrot set. We chose this challenging fractal pattern as it requires features with continuously varying depth over a wide range of length scales. The design bitmap ( Fig. 1b; Methods) assigns one of 256 depth levels (8 bit) to each 15×15 nm 2 pixel. Figure 1c presents an optical micrograph of the Mandelbrot pattern transferred into hBN (see also Extended Data Fig. 1). Colors arise in reflection due to thickness-dependent optical interference from the flake. However, because the pattern contains features beyond the resolution of our optical microscope, scanning-electron microscopy (SEM) is required to reveal the intricate self-similar features that persist down to tens of nanometers (Fig. 1d).
Steps in z of ~1 nm (3-4 atomic monolayers) are clearly visible in the SEM image as faint lines between terraces, consistent with the bitmap.
The limiting resolution in x, y for such a pattern is set by several factors. Most importantly, the in-plane resolution decreases as the depth increases due to the conical shape of the probe 35,36. A fresh probe tip has a radius of curvature down to ~3 nm and an estimated half-angle of 15-30°, which, combined with mechanical deformation in the resist, sets the minimum in-plane feature size for a given depth 35,36 (design rules are provided in Methods). Furthermore, the profile is initially written in the polymer film, which is ideally thin to avoid prolonged etching during the subsequent transfer step into hBN. However, if the polymer is too thin, unwanted thermal transport from the tip to the underlying substrate increases, limiting pattern quality. This trade-off sets a lower limit on the polymer film depth and, consequently, the pattern roughness that is accumulated during etching, which affects the minimum feature size in the final hBN structure.
Considering the above factors, we identified conditions for high-resolution patterning of hBN (Methods). We then designed and fabricated a 'freeform resolution target' that contains a controlled range of pattern depths and spatial frequencies. Specifically, as the spatial frequency increases the depth decreases due to the probe shape. The left half of Fig. 2a shows the bitmap (Methods); the right half presents atomic force microscopy (AFM) data of the experimental surface profile in hBN. The side-by-side comparison reveals good qualitative agreement. This is further supported by an SEM image of a full resolution target in hBN (Fig. 2b). To extract quantitative information, we compared the fabricated profile with the design (Extended Data Fig. 2). The fabricated profile follows the target well, even for shallow depth modulations (~2 nm amplitude) and increasing spatial frequencies (~150 nm period) at the edge of the pattern (Fig. 2c).
As this pattern had not yet reached the limit of our process, we extended the freeform resolution target in Fig. 2a at its corners to higher-resolution features at shallow modulation depths (~5 nm). Figure 2d shows an SEM image of the resulting hBN flake. This region spans periodicities from ~95 (bottom-left corner) to ~50 nm (top-right corner). Clear lattices persist over these length scales. Thus, to determine the ultimate resolution, we fabricated a series of structures, each with a fixed spatial frequency on a 2D square lattice (with periodicities from 35 to 25 nm). Figure 2e shows measured topography (AFM) for an hBN lattice with 29 nm periodicity and a depth of ~5 nm, which is close to the theoretical limit imposed by the probe geometry (Methods). This represented our limiting resolution for a high-quality pattern, based on a Fourier analysis of the topography data (Methods). Although smaller periodicities were possible to pattern (20 nm in the polymer; 25 nm in hBN), increasing disorder became apparent.
These results establish that we can create freeform nanostructures in hBN on optical and electronic length scales. We now demonstrate photonic microelements designed with simple mathematical expressions. Figure 3a shows an optical micrograph of an hBN flake with an array of phase plates, each defined by a continuous spiral height profile (Methods).
Such patterns, which are unattainable with standard lithographic techniques, induce a twisted phase front on transmitted light, producing optical orbital angular momentum 37 useful for controlling interactions between photons and electrons. Our structures are designed to impart a spiral 2π phase modulation on deep-ultraviolet photons at ~200 nm, where hBN shows lasing 1. Figure 3b compares our bitmap (center) with the measured topography (AFM, outer region). By fitting the design function to the experimental profile, we obtain an RMS error of 4.0 nm (3.5%) (Methods; Extended Data Fig. 3). Such a low value for a continuously varying height profile confirms the fidelity of our approach. Furthermore, the array of phase plates arranged on the hBN flake verifies the potential for straightforward integration of multiple elements.
To characterize the optical quality of our photonic structures, we designed and fabricated a spherical hBN lens with a 100 μm radius of curvature (optical micrograph, Fig. 3c). The bitmap and the measured topography (AFM) are compared in Fig. 3d. By fitting a spherical function to the experimental profile, we extracted (Methods; Extended Data Fig. 4) a radius of curvature of 95 μm with an RMS error of 4.5 nm (2.8%). An impinging beam of collimated light should be focused by this structure. To observe this effect, we transferred the lens between two planar mirrors to form an optical microcavity 21 (Methods). Figure 3e depicts the cavity, which consists of a bottom distributed Bragg reflector (DBR), the hBN lens (blue), and a top moveable DBR. The red beam represents a stable cavity mode with transverse confinement due to the focusing of the lens. 2D electronics can also benefit from hBN patterns with freeform profiles at shorter length scales. In particular, the propagation and interactions of electrons in nearby active layers can be manipulated. This possibility, known as electronic band-structure engineering, can be implemented through specific modulations of the hBN profile. While dielectric superlattices have recently been explored for this 14,16,23,24 , the possible lattice structures were constrained to basic patterns (for example, arrays of holes) approachable by standard lithography. We lift these constraints by patterning hBN at nanoscale resolutions using mathematically defined freeform profiles. As a specific class of such structures, we introduce electronic Fourier surfaces 22 , which superimpose a set of sinusoidal profiles to precisely control the spatial frequencies. Figure 4a shows a bitmap of a hexagonally symmetric electronic Fourier surface, defined by summing three sinusoids with 50 nm period, but rotated in plane by 0, 60, and 120° (Methods). The inset shows the fast Fourier transform (FFT) of the bitmap, revealing the hexagonal lattice symmetry. This pattern is then written in the polymer resist; Fig. 4b plots the topography measured during this process. From the real-space profile and FFT (inset), we see that the wavy hexagonal lattice is accurately reproduced. After reactive-ion etching, the same profile is replicated in hBN (Fig. 4c).
More importantly, such structures can be extended to more sophisticated profiles, which can cover large areas (10×10 µm², Extended Data Fig. 7) and will modulate the electric field felt in a nearby active layer (Extended Data Fig. 8). Beyond electronic Fourier surfaces, our approach is amenable to any bitmap design within the limitations discussed above.
The high-fidelity structuring presented here exploits the simple combination of thermal scanning-probe lithography and reactive-ion etching to accurately replicate freely varying mathematical landscapes in hBN. In addition to integrated photonic microelements that modify photon flow, such control can modulate mechanical, electrostatic, and electromagnetic environments for 2D materials. For example, researchers are currently exploiting moiré periodicities induced by the rotation angle between two stacked monolayers of graphene (twisted bilayers) 11. Freely patterned hBN should provide a more flexible and integrated approach to engineer strain, electronic band-structure, and cavity quantum electrodynamics. Thus, combining freeform hBN flakes with other 2D materials could provide a platform to access, discover, and exploit exotic states of matter in quantum materials.

For all structural design parameters, see Extended Data Table 1. The Mandelbrot set used in Fig. 1 was calculated by iterating the nonlinear equation z_{n+1} = z_n^2 + c from z_0 = 0 for each complex point c, stopping at the first iteration that causes the magnitude of z_{n+1} to be greater than or equal to 2. The number of iterations before escape sets the color scale,
where deep blue (white) is a larger (smaller) number. The black region inside the boundary represents solutions that have a complex magnitude less than 2 after 500 iterations. When the number of iterations is taken to infinity, the values inside the boundary correspond to the Mandelbrot set.
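A minimal escape-time sketch consistent with this description (iterating z ← z² + c with up to 500 iterations and an escape threshold of |z| ≥ 2) might look as follows; the coordinate window, grid size, and the linear mapping to 256 gray levels are illustrative choices, not the published design values.

```python
import numpy as np

def mandelbrot_depth(re_range, im_range, shape, max_iter=500):
    """Escape-time map: iterate z <- z**2 + c from z = 0 and record the
    first iteration at which |z| >= 2; points that never escape (the set
    itself) keep the value max_iter."""
    re = np.linspace(*re_range, shape[1])
    im = np.linspace(*im_range, shape[0])
    c = re[None, :] + 1j * im[:, None]
    z = np.zeros_like(c)
    counts = np.full(c.shape, max_iter)
    for n in range(max_iter):
        mask = np.abs(z) < 2          # only iterate points that have not escaped
        z[mask] = z[mask] ** 2 + c[mask]
        newly_escaped = mask & (np.abs(z) >= 2)
        counts[newly_escaped] = n
    return counts

# Map the iteration counts onto 256 depth levels (8-bit grayscale bitmap).
counts = mandelbrot_depth((-2.0, 0.6), (-1.2, 1.2), shape=(512, 512))
bitmap = np.uint8(255 * counts / counts.max())
```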
The bitmap in Fig. 1b represents a portion of the Mandelbrot set (see red box in Extended Data Fig. 1a). It was calculated using the same procedure as above, except that the center of the image is located at −0.5135 − 0.5765i in the complex plane, with the y-axis given by Im{c}. The freeform resolution target in Fig. 2a was calculated with an expression involving the amplitude at the origin, the slope that describes the linearly decreasing amplitude away from the origin, the spatial frequency ω = 2π/Λ at the origin with Λ = 12.5 μm, and a vertical offset. The lateral size of the pattern was chosen to be 15.03×8.49 μm², mapped onto a 10×10 nm² pixel grid.
The spiral phase plates in Fig. 3a,b were calculated using a profile whose height decreases linearly as a function of the polar angle, with a constant slope and a vertical offset. The lateral size of an individual phase plate was chosen to be 5×5 μm², mapped onto a 10×10 nm² pixel grid.
The spherical lens in Fig. 3c,d was calculated from a spherical profile with a radius of curvature chosen to be 100 μm and a vertical offset. The lateral size of the pattern was chosen to be 20.02×20.02 μm², mapped onto a 20×20 nm² pixel grid.
The high-resolution pattern in Fig. 2e, the electronic Fourier surfaces in Fig. 4, and the large-area pattern in Extended Data Fig. 7 were calculated as a sum of sinusoidal components, where each component i is specified by an amplitude, a spatial frequency, and an in-plane rotation angle, plus a vertical offset. For the high-resolution pattern in Fig. 2e, the lateral size was 580×580 nm², mapped onto a 2.9×2.9 nm² pixel grid. For the electronic Fourier surfaces in Fig. 4, the lateral size was 1×1 μm², mapped onto a 5×5 nm² pixel grid. For the large-area electronic Fourier surface in Extended Data Fig. 7, the lateral size was 10×10 μm², mapped onto a 5×5 nm² pixel grid.
The photonic grating couplers in Extended Data Fig. 6 were calculated using a sinusoidal profile with a given amplitude, spatial frequency, and vertical offset. The lateral size was 14×14 μm², mapped onto a 10×10 nm² pixel grid. For the parameters used in all the formulas in this section, see Extended Data Table 1.
Bitmap generation. The mathematical expressions were converted into bitmaps: the overall dimensions of the structure were chosen, and the pattern was mapped onto a pixel grid (see above). The normalized depth of the pattern in the z-direction was discretized into 256 levels, corresponding to 8-bit precision. The physical depth of the patterns was assigned during patterning with the thermal scanning probe: the total pattern depth was taken as an input, and the thermal scanning-probe software mapped the total depth onto the 8-bit depth levels.
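Assuming the summed-sinusoid profile takes the common form z(x, y) = Σ_i A_i sin(2π f_i (x cos θ_i + y sin θ_i)) + z0 (the exact phase convention is not stated here), the following sketch generates the hexagonal Fourier surface of Fig. 4 and discretizes it into the 8-bit bitmap described above; the amplitudes are placeholders.

```python
import numpy as np

def fourier_surface(x, y, components, z0=0.0):
    """Sum of sinusoidal components; each component is (A, f, theta):
    amplitude, spatial frequency, and in-plane rotation angle."""
    z = np.full_like(x, z0)
    for A, f, theta in components:
        z += A * np.sin(2 * np.pi * f * (x * np.cos(theta) + y * np.sin(theta)))
    return z

# Hexagonal lattice of Fig. 4a: three 50 nm-period sinusoids rotated by
# 0, 60, and 120 degrees, on a 1x1 um^2 area with 5x5 nm^2 pixels (Methods).
px = 5.0                                   # pixel size in nm
coords = np.arange(0.0, 1000.0, px)        # nm
x, y = np.meshgrid(coords, coords)
comps = [(1.0, 1 / 50.0, np.deg2rad(a)) for a in (0.0, 60.0, 120.0)]
z = fourier_surface(x, y, comps)

# Discretize the normalized depth into 256 levels (8-bit bitmap).
bitmap = np.uint8(np.round(255 * (z - z.min()) / (z.max() - z.min())))
```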
Materials.
Large-size bulk hBN crystals used in this work were purchased from 2DSemiconductors Inc., except the flake in Fig. 3a.

The sample was then placed on the stage of the thermal scanning-probe lithography tool (NanoFrazor Explore, Heidelberg Instruments Nano); the flake of interest was centered and rotationally aligned under the optical microscope of the tool. A cantilever was loaded into the cantilever holder, which was then attached to the Nanofrazor scan head. The tip was brought near the sample surface, and an automated approach function was used to find the sample surface and bring the tip into contact. For all freeform patterns other than the high-resolution structures shown in Fig. 2e and Fig. 4, the tip was then moved away from the flake to perform calibration scans. After calibration, the tip was optically aligned over the flake of interest. Next, the thermal scanning probe performed a topography scan of the polymer surface on top of the flake for fine alignment of the pattern and to ensure that the surface was relatively flat and smooth in the local pattern area. The thermal scanning probe was set to an initial temperature of 950 °C, and it then started fabricating the desired pattern, allowing the feedback to adjust the patterning conditions until the fabricated pattern matched the design pattern. The scan proceeded until the entire pattern was written in the polymer resist. Afterwards, the thermal scanning probe was available to create the next pattern on either the same flake or a different flake on the same chip.
For the high-resolution patterns, a fresh cantilever was loaded as in the procedure above; however, no calibration scans were performed. This minimized the contamination that builds up on the tip during patterning, which limits the resolution. The fresh cantilever was positioned directly over the flake of interest, and the patterning was initiated, but in this case the feedback was turned off. This was done to have maximum control over the patterning conditions. The starting temperature was set between 950 and 1000 °C, and a minimal writing force was applied between the tip and substrate to observe conditions where no pattern was generated. The onset of patterning was initiated by repeatedly writing the same structure, where the write force was slowly increased for each scan until the desired pattern was observed in the polymer resist. Further adjustments to the temperature and write force were implemented manually and iteratively, until optimal conditions were found for the high-resolution pattern of interest. Once these conditions were identified, the patterns of interest were consecutively written in the polymer film. This typically resulted in a few tens of patterns before the cantilever had to be changed due to contamination build-up on the tip.
Once thermal scanning-probe lithography was completed, the pattern was transferred to the underlying hBN flake via inductively coupled plasma (ICP) etching 43 (Oxford Instruments, PlasmaPro) using a gas content of 50 sccm SF6. The etching was performed with a chamber pressure of 40 mTorr, a forward power of 75 W, and at a rate of ~2 nm s −1 until the polymer resist was removed. The pattern was transferred to the underlying hBN with approximately 1:1 depth, indicating little to no pattern amplification. After etching, the sample was sonicated for 2 min in acetone, rinsed with IPA, and blown dry with N2 gas.
Design rules for in-plane resolution versus pattern depth. The conical shape of the thermal scanning probe, combined with mechanical deformations in the polymer resist, sets the lower limit on the in-plane periodicity for a given depth. This limit can be estimated as follows.
A fresh probe has a tip diameter at the apex as low as 6 nm and a half-angle of 15-30°. We note that these quantities vary from probe to probe due to fabrication tolerances. Thus, the probe width is a function of the distance from the tip apex, set by the pattern depth. For a periodic structure, the relationship between the minimum periodicity P and the pattern depth d can be written as P(d) = 2w(d) = 2[w_0 + 2d tan(θ_half) + b], where w(d) is the width of the indent, w_0 is the probe width at the apex, θ_half is the opening half-angle of the probe tip, and b represents additional feature broadening beyond the probe shape due to mechanical deformations. A detailed discussion of this topic is available 36. The prefactor of 2 arises from the assumption that a periodic structure will have a period twice as wide as the indent caused by the probe. We note that in practice contamination will increase the size of the probe, which can further increase P.
Furthermore, roughness accumulated during etching can additionally increase P for the final pattern in hBN.
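For illustration, the design rule can be evaluated numerically; the apex width and half-angle below are mid-range values from the text, and the broadening term b is set to zero as a simplifying assumption.

```python
import math

def min_periodicity(depth_nm, w0_nm=6.0, half_angle_deg=22.5, b_nm=0.0):
    """P(d) = 2 * w(d), with w(d) = w0 + 2*d*tan(theta_half) + b:
    twice the indent width of the conical probe at depth d."""
    w = w0_nm + 2.0 * depth_nm * math.tan(math.radians(half_angle_deg)) + b_nm
    return 2.0 * w

# A 5 nm-deep pattern with a fresh tip (6 nm apex, 22.5 deg half-angle):
print(f"{min_periodicity(5.0):.0f} nm")  # ~20 nm, near the observed 25-29 nm limit
```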
Surface-topography characterization. The topography of the patterns in the polymer resist was measured by the thermal scanning probe during the writing process. The final patterns in hBN were measured with an AFM (Bruker, Dimension FastScan, NCHV-A cantilever) using tapping mode in ambient conditions. The topography data was processed via a custom MATLAB script that performed row alignment, plane levelling, and function fitting to extract structural parameters, RMS roughness, and error values.
The high-resolution (25-35 nm periodicity) square lattices were measured using an AFM (Veeco Dimension V with Nanosensors PPP-NCHR probes) in non-contact mode. To extract a quantitative measure of the high-resolution lattice quality, Fourier analysis was used on the measured topography data. The 2D FFT of the topography data revealed prominent peaks (along k_x at k_y = 0, and along k_y at k_x = 0) that correspond to the fundamental spatial frequency of the lattice. The ratio of the fundamental peak height to the next-highest peak in the Fourier spectrum was taken as a quantitative metric for the lattice quality. We chose a threshold of 5 for this ratio as our criterion for a high-quality lattice. The lattice with 29 nm periodicity was the shortest period that had a ratio greater than 5 (5.05).
Thus, 29 nm was taken as our limiting spatial resolution.
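A sketch of this Fourier quality metric follows; the neighborhood size used to mask the lattice peaks, and the assumption that the four strongest non-DC peaks are the fundamental lattice peaks, are simplifications introduced for the example.

```python
import numpy as np

def lattice_quality(topo, n_fundamental=4, exclude=2):
    """Ratio between the fundamental lattice peak and the next-highest peak
    in the 2-D FFT magnitude of the topography. For a square lattice the
    fundamental appears as four symmetric peaks (+/-kx and +/-ky); these
    are masked before searching for the next-highest peak."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(topo - topo.mean())))
    h, w = spec.shape
    spec[h // 2 - exclude:h // 2 + exclude + 1,
         w // 2 - exclude:w // 2 + exclude + 1] = 0.0   # drop DC and leakage
    fundamental = spec.max()
    for _ in range(n_fundamental):                       # mask the lattice peaks
        i, j = np.unravel_index(np.argmax(spec), spec.shape)
        spec[max(i - exclude, 0):i + exclude + 1,
             max(j - exclude, 0):j + exclude + 1] = 0.0
    return fundamental / spec.max()

# A ratio above 5 served as the criterion for a high-quality lattice;
# 29 nm periodicity was the shortest period meeting it (ratio 5.05).
```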
Transfer of hBN lens. After etching and topography characterization, the hBN lens was transferred to the DBR substrate using a standard dry polymer transfer method 44 under inert atmosphere in a glove box.
Optical cavity measurements. After the hBN lens was transferred to the first DBR substrate, another moveable DBR was brought close to the first (distance of ~33 μm), forming an optical microcavity with two planar mirrors. The cavity modes were excited with a broadband source (Fianium supercontinuum laser, NKT Photonics) at a range of incident angles, where the transmitted light was collected and the Fourier plane was spectrally dispersed onto a liquid-nitrogen-cooled charge-coupled-device (CCD) camera (Extended Data Fig. 5a). The control cavity spectra for two planar mirrors without the lens was taken by passing the excitation beam through the flat part of the hBN flake (Extended Data Fig. 5b). In this case, unstable, low-Q longitudinal modes are observed. The cavity spectra for the hBN lens was taken when the excitation beam was passed through the lens, resulting in the cavity transmission spectra shown in Fig. 3f and Extended Data Fig. 5c.
The mode frequencies follow the standard expression for such a cavity, where {q, n, m} is a set of integers labelling the longitudinal and transverse mode orders, respectively, L_eff is the effective cavity length, and g_1 = 1 for the flat DBR mirror.

Guided-mode coupling in the hBN flake was characterized by imaging both grating couplers simultaneously. As a light source, we used a broadband halogen lamp. A small, circular aperture located in the real-space image plane before the objective ensured that only the first grating coupler was illuminated and that no light was incident on the second grating coupler. After the aperture, the light was reflected onto the sample using a beamsplitter, where it passed through the objective lens and was focused on the sample. Therefore, in this configuration, the first grating coupler was illuminated with broadband light under all possible incident angles (limited by the NA of the objective).
To observe guided-mode coupling, the outcoupled light at the second grating coupler was collected by the same objective, transmitted through the same beamsplitter and passed through another small circular aperture located in the real-space image plane. This second aperture was used to ensure that light was only collected from the second grating coupler on the hBN flake. The back focal plane of the microscope objective was then imaged onto the entrance slit of an imaging spectrograph (Andor Shamrock 303i) and captured by a digital camera (Andor Zyla PLUS sCMOS).
Dispersed k-space measurements 46 were performed by inserting a grating (150 lines mm⁻¹ blazed at 500 nm) into the imaging path of the spectrometer, such that the outcoupled light was spectrally dispersed along one axis of the camera pixel array. A slit was closed to a width of 100 μm along the k_x axis at k_y ≈ 0. Here, k_x is the wavevector direction that corresponds to the modulated direction of the two grating couplers. Thus, the setup allowed for an angle- and wavelength-resolved measurement of the light coupled out at the second grating coupler with a single image. To eliminate the effects of background and stray light incident on the camera, a reference measurement on the flat portion of the same hBN flake was performed and subtracted. Furthermore, a linear polarizer was placed in the collection path to selectively measure s- or p-polarized light. We note that s-polarized light corresponds to a transverse-electric (TE) waveguide mode, and p-polarized light corresponds to a transverse-magnetic (TM) mode. A schematic of the optical setup is shown in Extended Data Fig. 6b.
Extended Data Fig. 6c shows a dispersed k-space measurement, where we observe two branches corresponding to outcoupled light that originates from two guided modes of the hBN flake. The lower branch corresponds to the TE0 mode, and the upper branch corresponds to the TE1 mode. We note that the broken inversion symmetry around k_x = 0 occurs because the light is coming from a single direction, starting at the first (incoupling) grating coupler and moving towards the second (outcoupling) grating coupler. We confirmed this by reversing the light propagation direction. Here, k_g is the wavevector associated with the period of the grating coupler, with Λ_g equal to 282 nm.
Therefore, each guided mode in the measurements of Extended Data Fig. 6c will appear as a branch that follows the dispersion relation E(k) = ℏck/n_eff but at an in-plane momentum shifted by −k_g into the light cone. Here, ℏ is the reduced Planck constant and c is the speed of light in vacuum. Extended Data Fig. 6c shows the theoretical expectation of the TE0 and | 2021-05-28T01:16:10.112Z | 2021-05-27T00:00:00.000 | {
"year": 2021,
"sha1": "147a43fd707afcb85391599397adc2aa3ab6f777",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "147a43fd707afcb85391599397adc2aa3ab6f777",
"s2fieldsofstudy": [
"Materials Science",
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
261545607 | pes2o/s2orc | v3-fos-license | Hunting Site Behaviour of Sympatric Common Buzzard Buteo buteo and Rough-Legged Buzzard Buteo lagopus on Their Wintering Grounds
Simple Summary We compared the foraging techniques of Common Buzzards and Rough-legged Buzzards on their wintering grounds in east-central Poland. Both buzzard species spent the most time standing on the ground, less perching on trees and even less perching on fence posts. The difference in the hunting behaviors of the two species is associated with the use of small fence posts around pastures as hunting sites, which were conspicuously avoided by the Rough-legged Buzzards. Snow cover was the only weather factor in both buzzard species that affected foraging behavior and possibly intensified interspecific competition. Abstract Birds wintering in the northern Palearctic compensate for substantial energy losses and prepare for a food deficit in winter by adjusting their foraging behavior. Apart from weather conditions, interspecific competition also drives hunting strategies. To describe this phenomenon, we observed the behavior of two sympatrically wintering raptor species: the Common Buzzard and the Rough-legged Buzzard. The study was carried out in east-central Poland during four seasons on a study plot where the densities of both species were high. Interspecific differences were detected in the use of available hunting sites. Rough-legged Buzzards conspicuously avoided using fence posts for scanning the surroundings and spent the most time standing on the ground. Common Buzzards more often used trees for this purpose when the snow cover was thick. Thicker snow cover resulted in fewer attempted attacks on prey in both species and caused Common Buzzards to change their hunting sites less frequently. The study also showed that the more often a bird changed its hunting site, the greater the number of attempted attacks. The outcome is that the ultimate effectiveness of hunting is mediated by the overview of the foraging area from different heights and perspectives, not by the type of hunting site. Snow cover was the most important factor in modifying foraging behavior and possibly intensifying interspecific competition.
Introduction
Winter is a difficult time in the life of many organisms. In addition to the negative impact of low temperatures, the lack or limited availability of food also leads to considerable energy losses in animals and possibly their death [1][2][3][4]. This is why an adequate supply of energy in the form of high-calorie food [5][6][7] or sufficient amounts of other food [8] is so important during this season. Vertebrates wintering in the northern regions of the Northern Hemisphere adjust their foraging behavior accordingly to compensate for great losses of energy and/or to prepare for winter food deficiency. A frequent behavioral adaptation to the hardships of winter life is to store food [9], which is common in various species of mammals [10][11][12]. Birds likewise hoard food in winter, storing their stocks in tree hollows and nest boxes [13][14][15][16]. Other means of surviving the winter involve accumulating energy reserves in the form of fat tissue in summer and early autumn when food is still plentiful [17]. Losses of energy in winter can also be avoided by reducing total diurnal physical activity [18]. During this time, some birds of prey employ the energy-saving sit-and-wait hunting strategy [1,3,[19][20][21][22].
Another factor crucial for the winter survival of birds is the weather. Heavy snowfall in mid-winter immediately reduces populations associated with a mosaic-like farming landscape [23]. For raptors living in such a landscape, thick snow hampers or prevents the detection and pursuit of rodents [1,24]. Common Buzzards Buteo buteo from central Europe react to low temperatures by migrating southwards [25,26]. At the same time, there is an influx of Rough-legged Buzzards Buteo lagopus from the north, for which central Europe is a warmer region than central and southern Scandinavia [27]. In response to deteriorating weather conditions, raptors reduce their energy expenditure, particularly when prey items are in short supply [28], and resort to the sit-and-wait hunting strategy [1,3,[19][20][21]. Sit-and-wait is obviously less energy-demanding than other foraging techniques, but it is also time-consuming [19]. For both European buzzard species, this strategy means using various kinds of hunting sites in farmland, including man-made ones, especially if natural ones are not available [19,20,29]. Common Buzzards can adjust their choice of hunting sites to weather conditions [30]: after heavy snowfall, they congregate along roads and railway lines, where hunting opportunities are better and they can find carrion [19,31].
Interspecific competition between birds whose feeding niches overlap is another factor that can affect winter survival [32,33]. Closely related species of animals often share resources such as space and food to minimize competition and enable the divergence of their ecological niches [34,35]. The two species of wintering buzzards, which employ the same hunting strategy for the same source of food, provide a good example of these processes [36].
We compared the foraging techniques of Common Buzzards and Rough-legged Buzzards on their wintering grounds in east-central Poland. Both species employed an energy-conserving sit-and-wait strategy [1] and fed on small mammals, mostly on the Common Vole Microtus arvalis [37]. The main aims of this study were (1) to investigate which sites were used for hunting and (2) to determine factors that mediated the number of attacks on prey in both buzzard species. We hypothesized that the time spent at each type of hunting site would differ between Rough-legged Buzzards and Common Buzzards. We also expected that species and time spent on different hunting sites, as well as the number of changes in hunting sites and weather conditions, would affect the number of attacks on prey. Knowledge of the differences in hunting techniques and the factors that improve foraging success between these two buzzard species may contribute to a better understanding of how competition is reduced and how niche-differentiation strategies develop in morphologically and ecologically similar species [35,38,39].
Study Plot
The study plot (area 18.9 km²) was situated in the upper valley of the River Liwiec in central Poland in a complex of hay meadows and pastures of diverse humidity, criss-crossed by numerous drainage ditches (Figure 1). Fencing posts and trees provided potential hunting sites for buzzards. Most of these posts were located in meadows, which covered about 94.5% of the study plot [40], with the average area of a fenced meadow being about 1 ha (range 0.2-2.5 ha) [41]. Thus, fencing posts were fairly evenly distributed throughout the plot. Single trees, as well as clumps of trees (willow shrubs), grew along the drainage ditches, so their distribution was also even. This particular area supports the highest densities of wintering buzzards in east-central Poland. The density of Common Buzzards in the study plot was 6.84 ind./km²
Data Collection
Observations were carried out in this study plot during four winter seasons (2007/2008, 2008/2009, 2011/2012, and 2012/2013), from the first days of November to the end of February. Birds were monitored at roughly two-week intervals on days without rain or snow. Around 9 to 14 visits were made in each season, 44 in total. The birds were always counted during the same hours of the day (07:30-13:30).
Behavioral observations were made from vantage points with good visibility using a 20-60 × 100 spotting scope. Individual birds were tracked for a minimum of 10 min and a maximum of 30 min. If a buzzard disappeared before 10 min had passed, its observation ceased, and another bird was chosen. Each visit involved an average of 65 min of observations (range 20-130 min). The analyses treated each 10-min sequence as a separate sample. A total of 1140 min (114 sequences) was spent observing Common Buzzards and 1610 min (161 sequences) observing Rough-legged Buzzards.
The time spent on each hunting site was recorded to within one second using a dictaphone. Additional information was also assigned to each recording: the date and time of the recording and the snow thickness. Mean daily temperatures (°C) and mean wind speeds (km/h) were obtained from the Siedlce weather station (52°25′ N; 22°26′ E), situated about 17 kilometers away. The thickness of the snow cover was measured at 4 randomly selected sites within the study plot during each observation. Because both buzzard species spent most of their time at their hunting sites (Common Buzzard: 96.5% of the time; Rough-legged Buzzard: 91.4%), only this type of activity was analyzed. In accordance with the recommendations of Bohall and Collopy [43], Wuczyński [20], and Bylicka et al. [21], which we slightly modified, three types of hunting sites were considered: trees, fence posts, and the ground. All the sites were classified into these three categories.
Statistical Analyses
A generalized linear mixed model (GLMM) with a logit link function and binomial error variance was applied to compare the times spent on the three types of hunting sites (tree, fence post, and ground) by the two buzzard species (Table 1). The dependent variable was the species of the compared pair (binomial variable: 0, Common Buzzard; 1, Rough-legged Buzzard). A second generalized linear mixed model, with a Poisson error distribution and log link function, was used to analyze the number of attacks by the buzzards. The time spent on the three types of hunting sites, the number of changes of hunting sites, the mean temperature, mean wind speed, and snow cover were treated as numerical predictors. The third model, in which the dependent variable was the number of hunting site changes, analyzed three weather parameters (mean temperature, mean wind speed, and snow cover), together with the interaction between species and snow cover, again using a Poisson error distribution and log link function. The birds were not individually marked, so some may have been recorded more than once. The inclusion of the observation number as a random effect in the models addressed the question of pseudoreplication. The second random factor was the winter season. The differences in snow cover between the hunting sites were tested using Tukey's post-hoc test. Student's t-test was applied to assess the differences in the number of changes of hunting sites between two snow cover categories (present and absent). Prior to the parametric analyses (Student's t-test), all the dependent variables were log(x + 1) transformed to obtain a normal distribution. All the statistical analyses were performed using R software 4.2.2 [44]. A minimal code sketch of the three models follows.
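The sketch below shows how the three GLMMs described above could be specified with lme4 in R. It is a hypothetical illustration, not the authors' script: the data frame `d`, all column names, and the exact random-effect coding are assumptions for demonstration.

```r
# Hypothetical sketch of the three GLMMs described above (lme4 syntax).
# 'd' is assumed to hold one row per 10-min observation sequence.
library(lme4)

# Model 1: species identity (0 = Common, 1 = Rough-legged Buzzard)
# modeled against time spent on each hunting-site type (illustrative;
# the three time predictors are compositional and partly collinear).
m1 <- glmer(species ~ time_tree + time_post + time_ground +
              (1 | obs_id) + (1 | season),
            family = binomial(link = "logit"), data = d)

# Model 2: number of attacks on prey (a count) against species, site use,
# number of site changes, and weather predictors.
m2 <- glmer(attacks ~ species + time_tree + time_post + time_ground +
              n_changes + temp_mean + wind_mean + snow_depth +
              (1 | obs_id) + (1 | season),
            family = poisson(link = "log"), data = d)

# Model 3: number of hunting-site changes against weather,
# with a species x snow-cover interaction.
m3 <- glmer(n_changes ~ temp_mean + wind_mean + snow_depth * species +
              (1 | obs_id) + (1 | season),
            family = poisson(link = "log"), data = d)

summary(m2)  # fixed-effect estimates for the attack-rate model
```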
Results
Both buzzard species spent most of their time standing on the ground, less perching on trees, and the least perching on fence posts (Figure 2). Only the difference in perching time on fence posts was significant between the species (Table 2). Rough-legged Buzzards conspicuously avoided such hunting sites (Figure 2).
The number of attacks on prey was influenced by two predictors: the number of hunting site changes and the snow cover (Table 3). The number of changes in the hunting site positively affected the number of attacks. The thickness of snow cover was the only weather factor that significantly and negatively influenced the number of attacks on prey. There were no differences between the species in either the number of attacks or the time spent at particular hunting sites. However, when the influence of snow cover was analyzed together with species, there was a difference in this weather parameter between the use of hunting sites by Common Buzzards and Rough-legged Buzzards (Table 4). Common Buzzards perched on trees more frequently than on the other two types of hunting sites when the snow cover was significantly thicker (Tukey's post-hoc test, p < 0.001 for both comparisons, Figure 3). There was no difference in the snow cover between the use of the ground or fence posts as hunting sites (Tukey's post-hoc test, p = 0.881). The number of changes in the hunting site depended on the snow cover analyzed as a separate variable and on the interaction of the snow cover and the species (Table 4). Common Buzzards changed their foraging sites significantly less often in the presence of snow cover than when there was no snow (Student's t-test, t = 3.49, p < 0.001, df = 111). No influence of snow cover on the number of changes in hunting sites was found for Rough-legged Buzzards (Student's t-test, t = 1.09, p = 0.277, df = 158).
Discussion
Our study showed that hunting time differed between the two buzzard species only for fence posts. Snow cover, but no other weather condition, influenced the number of attacks on prey and the number of changes in hunting sites. However, the behavior of Common Buzzards appeared to be more dependent on the occurrence of snow cover than the hunting techniques used by Rough-legged Buzzards.
A significant difference in hunting behavior between the two species is the use of small fence posts in pastures as hunting sites, which the Rough-legged Buzzards conspicuously avoided. This may occur because this species repeats its breeding-ground foraging strategies in its wintering areas. Potapov [45] draws attention to the fact that across the Rough-legged Buzzard's breeding area there are practically no signs of human activity. This might explain why Rough-legged Buzzards did not use man-made objects as hunting sites to the extent that Common Buzzards did. Unfortunately, determining the hunting success of the two buzzard species was difficult, so it was not possible to analyze this factor in relation to the type of hunting site.
The preference of both buzzard species for standing on the ground in our study plot may be caused by several factors that result indirectly from its grassland character. Firstly, low vegetation in hay meadows and pastures during winter favors this hunting technique [20,21]. Secondly, hay meadow habitats support a large abundance of Common Voles Microtus arvalis, the main prey in the buzzards' winter diet [37]. Standing on the ground may also reduce the effects of inter- and intra-specific competition, since a bird hunting on the ground immediately swallows captured rodents whole without tearing them into pieces, thus avoiding the risk of being robbed [20]. Foraging on the ground is an energetically profitable means of acquiring food, especially when food is plentiful. In winter, every return flight to a perch is energetically costly [20]. Müller et al. [46] and Gamauf [47] described foraging on the ground as an exceptionally effective method of hunting rodents, particularly suitable for young, inexperienced Common Buzzards and for habitats with the best food supply. Dare [48] notes that foraging on the ground, especially in autumn and winter, is a common method used by Common Buzzards when hunting for beetles, earthworms, and other ground-dwelling invertebrates.
We found that both species performed a similar number of attacks on prey. This is explained by their use of similar foraging techniques. Unlike species that actively seek prey, such as the Kestrel Falco tinnunculus and Hen Harrier Circus cyaneus, both buzzard species ambush their victims from a sit-and-wait position [20,21,31]. Nonetheless, a far more important factor mediating the number of attempted attacks on prey by both species appeared to be individual decisions, manifested by the number of changes of hunting site per time unit. Our study showed that buzzards were able to improve their hunting success by moving from one hunting site to another close by, thus increasing the area they controlled [31]. On the other hand, we found that the time a buzzard spent on hunting sites of different types was not related to the number of attempted attacks. Therefore, it seems that the kind of hunting site is less important than the hunting buzzard's view of its foraging area from different heights and perspectives. Our study also showed that the thicker the snow cover, the fewer attacks were attempted on prey by both buzzard species. A further consequence of persistent thick snow cover is the decrease in numbers of both buzzard species on the wintering grounds and their south-westward migration [23,24,40,47,49]. Smaller numbers of Common Buzzards in natural open habitats, associated with thicker snow cover, could also result from local movements toward roads, where Rough-legged Buzzards are not recorded [25,30,31].
Our analysis also showed that Common Buzzards changed their hunting sites less often if the ground was covered with snow. The energy expense necessary for moving to higher hunting sites, which give a better view of potential victims on the snow, is thus reduced [1]. No such relationship was found in the Rough-legged Buzzard. This species probably tolerates a thicker snow cover at its wintering grounds and will, therefore, not alter its behavior. More frequent use of trees as hunting sites by Common Buzzards when the snow was thicker further confirms that pattern. This type of behavior might be interpreted as an effect of interspecific competition, as deteriorating weather conditions drive arrivals of larger numbers of Rough-legged Buzzards from the regions to the north of our study plot [42,50], whereas, in spring, the return migration of Rough-legged Buzzards is synchronized with the northwards progression of snowmelt [51,52].
Conclusions
We showed that the Rough-legged Buzzard seems to be more conservative in its use of hunting sites and less likely to change its foraging behavior when weather conditions deteriorate. In contrast, the Common Buzzard varies its hunting techniques in response to the appearance of snow cover, attempting fewer attacks on prey and using trees as hunting sites more frequently. Under conditions of reduced food availability, changes in hunting strategies may result from increased interspecific competition. Given the Rough-legged Buzzard's larger body size, the Common Buzzard may have to resort to less effective hunting strategies. However, this is a supposition that requires further behavioral studies on wintering grounds used by both species.
Figure 2.
Figure 2. Mean time spent by Common and Rough-legged Buzzards hunting on three types of sites. Whiskers indicate 95% confidence limits.
Figure 3.
Figure 3. Mean snow cover during the use of hunting sites by Common and Rough-legged Buzzards. Whiskers indicate 95% confidence limits.
Table 1.
Factors used to explain differences in hunting sites between Common Buzzard and Rough-legged Buzzard and the number of attacks by the buzzards.
Table 2.
Results of a binomial generalized linear mixed model comparing the times spent on the three types of hunting sites between Common Buzzard and Rough-legged Buzzard.
Table 3.
Results of a generalized linear mixed model testing the influence of different factors on the number of attacks on prey by the Common Buzzard and Rough-legged Buzzard.
Table 4.
Results of a generalized linear mixed model testing the influence of different factors on the number of hunting site changes. Interactions between factors are marked with ×. | 2023-09-06T15:14:14.383Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "3954f26da5e6212f639661b04f9077f4c210fb61",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/13/17/2801/pdf?version=1693804339",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f1b81efe16c6456066c0cbd03e80cf683b46bf71",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
41281896 | pes2o/s2orc | v3-fos-license | Kojic acid-derived tyrosinase inhibitors: synthesis and bioactivity
Abstract: Tyrosinase is a key enzyme for melanin biosynthesis, catalyzing the oxidation of L-tyrosine to L-dopaquinone. Tyrosinase inhibition is an effective approach to control hyperpigmentation in human skin and enzymatic browning in fruits and vegetables. Kojic acid is a naturally occurring tyrosinase inhibitor which has been used clinically to treat hyperpigmentation of the skin. However, kojic acid, as a hydrophilic small molecule, has insufficient inhibitory activity and stability, with considerable toxicity. To overcome these drawbacks, synthetic derivatives of kojic acid were developed, which exhibited enhanced tyrosinase inhibitory activity and more favorable stability relative to kojic acid. In this context, the synthesis and biological activity of kojic acid derivatives as tyrosinase inhibitors are highlighted.
Introduction
Melanin is a dark pigment produced by about 10% of skin cells in the innermost layer of the epidermis (1). This compound is a heteropolymer of indole derivatives and is produced inside melanosomes through a series of oxidative reactions involving the amino acid tyrosine in the presence of the enzyme tyrosinase (Fig. 1). The type and amount of melanin produced in the melanosomes generates the actual color of the skin (2). Melanogenesis is a physiological process that plays an important role in the prevention of sun-induced skin injury. Although melanin production in human skin is a major defense mechanism against ultraviolet (UV) light, the accumulation of an excess of epidermal pigments can cause various hyperpigmentation disorders, such as melasma, age spots, and sites of actinic damage (3). Tyrosinase (EC 1.14.18.1), also known as polyphenol oxidase (PPO), is a copper-containing bifunctional enzyme with a molecular weight of approximately 60-70 kDa in mammals and is found exclusively in melanocytes (1,4). It catalyzes two distinct reactions of melanin synthesis (Fig. 1): the hydroxylation of L-tyrosine to form 3,4-dihydroxyphenylalanine (L-DOPA) by monophenolase action, and the oxidation of L-DOPA to the corresponding o-dopaquinone by diphenolase activity (5). Dopaquinone is highly reactive and can polymerize spontaneously to form melanin in a series of reaction pathways (6). Tyrosinase can be considered a rate-limiting enzyme in melanin biosynthesis (7). Accordingly, tyrosinase inhibitors significantly reduce pigmentation in melanosomes and prevent excessive melanogenesis. Some tyrosinase inhibitors have useful applications in cosmetics and pharmaceutical products for the prevention of the overproduction of melanin in the epidermis (8). On the other hand, melanogenesis affects the color quality and flavor of foods. The enzymatic action of tyrosinase causes browning in fruits and vegetables. Thus, tyrosinase plays an important role in the loss of nutritional and market value of foods. In the food industry, tyrosinase inhibitors, especially those from natural sources, have great potential for controlling the quality and economics of fruits and vegetables (9). Many efforts have been devoted to the search for effective and safe tyrosinase inhibitors, and a large number of naturally occurring and synthetic tyrosinase inhibitors have been reported (10-12). However, only a few of them are practically applicable, owing to their weak intrinsic activities or safety concerns. Therefore, it is still necessary to search for and develop novel tyrosinase inhibitors with potent activity and fewer side effects (4).
Kojic acid
Kojic acid (5-hydroxy-2-(hydroxymethyl)-4H-pyran-4-one, 1) (Fig. 2) is one of the metabolites produced by various fungal or bacterial strains, such as Aspergillus and Penicillium, and has been used in many countries as a skin-whitening agent because of its tyrosinase inhibitory activity on melanin synthesis. The biological activities of kojic acid are attributed to its γ-pyranone structure, which contains an enolic hydroxyl group. If the enolic hydroxyl group is protected, its tyrosinase inhibitory activity is completely lost. It acts by chelating copper at the active site of the tyrosinase enzyme (13). Melanocytes that are treated with kojic acid become nondendritic and have decreased melanin content (14). Kojic acid was reported to have a high sensitizing potential and to potentially cause irritant contact dermatitis. However, it is useful in patients who cannot tolerate hydroquinone, and it may be combined with a topical corticosteroid to reduce irritation (15). Additionally, it also acts as an antioxidant and scavenges reactive oxygen species that are released excessively from cells or generated in tissue or blood (16). The reaction of kojic acid with metal salts of aluminium, chromium, cobalt, copper, gold, indium, iron, nickel, manganese, palladium, vanadium, and zinc results in the formation of stable metal kojate complexes (17-20). Due to its iron-chelating activity, kojic acid and its derivatives play an important role in the management of iron-overload diseases such as β-thalassemia or anemia (21-25). Moreover, various modifications made on the kojic acid structure have afforded derivatives with diverse biological effects, including antibacterial (26,27), antifungal (28,29), antiviral (30), anti-inflammatory (31), antineoplastic (32-34), pesticidal (35), radioprotective (20), antidiabetic (36), and anticonvulsant (37,38) activities.
Conversion of γ-pyranone to 4-pyridinone: O-1 modification
The replacement of oxygen in the γ-pyranone ring with nitrogen resulted in 4-pyridinone analogs of kojic acid (Fig. 3).
Accordingly, a series of hydroxypyridinone-L-phenylalanine conjugates 5 (Fig. 4) was synthesized from kojic acid by Li et al. and evaluated against mushroom tyrosinase (39). It was found that the compound containing a 1-octyl moiety (R = n-C8H17) had a potent inhibitory effect against mushroom tyrosinase. An MTT assay indicated that this compound was non-toxic to the tested cell lines. For the synthesis of these compounds, kojic acid (1) was first O-benzylated with benzyl chloride and then reacted with an appropriate alkylamine to give compound 3. The alcoholic compound 3 was conjugated with N-protected L-phenylalanine by using EDC and DMAP.
Finally, N-deprotection was carried out by hydrogenation at 30 psi H2 for 5-6 h at room temperature to give the target compounds 5 (39). Saghaie et al. synthesized a series of 3-hydroxy-4-pyridinones 9 from kojic acid in high yield (Fig. 5) and evaluated their inhibitory activity toward tyrosinase using the dopachrome method (40). As illustrated in Fig. 5, amine insertion into O-benzyl kojic acid (2) gave compound 6, which was subsequently oxidized to aldehyde 7. Condensation of aldehyde 7 with aniline derivatives gave Schiff bases 8. Reduction of the C=N bond and O-debenzylation of compounds 8 by Pd/C hydrogenation afforded the final compounds 9. The biological results showed that all synthesized compounds have an inhibitory effect on tyrosinase activity. Among the compounds studied, those containing two free hydroxyl groups were more potent than their analogues with one hydroxyl group. Also, substitution of a methyl group at position N1 of the hydroxypyridinone ring seems to confer greater inhibitory potency (40).
Esterification of 2-(hydroxymethyl) group of kojic acid
The primary alcoholic group of kojic acid can be esterified with different acids. However, the convenient method for the preparation of kojic esters is via chloro-kojic acid (10) and subsequent nucleophilic substitution with a suitable carboxylate salt (Fig. 6). Nitric oxide (NO) is an important inflammatory mediator, synthesized by inducible nitric oxide synthase (iNOS). As shown in Fig. 6, kojic acid is converted to chloro-kojic acid (10), which is conveniently O-methylated to give compound 13 using dimethyl sulfate and potassium carbonate in acetone under reflux conditions. Chlorides 10 and 13 react with the potassium salts of benzoic acids or of cinnamic acids in DMF at 110-120 ºC to give the corresponding ester derivatives 12 (Fig. 7) and 16 (Fig. 8) (42).
The biological results revealed that the 3,4-methylenedioxycinnamic acid ester of kojic acid (12c) exhibited a more potent inhibitory effect on tyrosinase than kojic acid. The structure of compound 12c (Fig. 9) … between kojic acid and the 2-hydroxybenzoic acid moiety. In another study, benzoate ester derivatives of kojic acid, with and without an adamantyl moiety, were synthesized (Fig. 10).
Benzoate derivatives 17, which did not contain an adamantyl moiety, showed potent tyrosinase inhibitory activities. In contrast, compounds 18 showed potent depigmenting activity without tyrosinase inhibitory activity. This was the first study showing depigmenting activity of kojic acid derivatives lacking tyrosinase inhibitory activity (43). Cho and co-workers synthesized cinnamate derivatives of kojic acid by various esterification methods, for use as depigmenting agents. In this report, to obtain the cinnamate ester of kojic acid (compound 12c), the nucleophilic addition of the potassium salt of cinnamic acid to kojyl chloride was carried out (Fig. 11). Interestingly, the side product (20) showed more potent depigmenting activity (IC50 = 23.51 μM) than compound 12c (IC50 > 100 μM), the parent compound of the side product. However, it has no tyrosinase inhibitory activity (44). A novel kojic acid derivative containing trolox (21), namely (±)-5-hydroxy-4-oxo-4H-pyran-2-yl-methyl 6-hydroxy-2,5,7,8-tetramethylchroman-2-carboxylate (22a, Fig. 12), was synthesized (45). Indeed, the two biologically active compounds, kojic acid and trolox, were conjugated via an ester bond, as they were expected to have dual action. The antioxidant activity and the tyrosinase inhibitory activity of kojic acid derivative 22a on melanogenesis were evaluated. Compound 22a exhibited potent tyrosinase inhibitory activity and radical scavenging activity. Limited structure-activity relationship (SAR) investigations indicated that the tyrosinase inhibitory activity may originate from the kojic acid moiety, and the radical scavenging activity may be due to the phenolic hydroxyl group of trolox. Compound 22a also exhibited potent depigmenting activity in a cell-based assay. The limited SAR investigations revealed that the depigmenting activity of 22a may be due to the synergistic activities of its kojic acid and trolox moieties (45). As presented in Fig. 13, kojyl chloride derivatives 10 or 13 reacted with the potassium salts of trolox, 4-hydroxybenzoic acid, or 6-hydroxynaphthoic acid in DMF at 110-120 ºC to give the corresponding ester derivatives 22a-d (45).

Kojic acid derivatives conjugated with amino acids

In a report by Noh et al. (46), kojic acid was coupled with amino acids to obtain kojic acid-amino acid amide conjugates 24 as new, stable tyrosinase inhibitors (Fig. 14). Firstly, the primary alcohol of kojic acid was reacted with 1,1′-carbonyldiimidazole (CDI) and then coupled to the resin-bound amino acids. In this reaction, the kojyl moiety is connected to the amino acid via a carbamate linker. After cleavage of the kojic acid-amino acid amide (24, KA-AA-NH2) from the resin, it was characterized by MALDI-TOF mass spectrometry. The conjugates of different amino acid amides with kojic acid were evaluated for their inhibitory activity on mushroom tyrosinase. The results showed that most of the conjugates had better inhibitory activity than the parent molecule, kojic acid. When amino acids such as phenylalanine, tryptophan, tyrosine, and histidine, which possess aromatic side chains, were conjugated to kojic acid, the tyrosinase inhibitory activity was enhanced dramatically. Noh et al. suggested that the aromatic residues of these amino acids may contribute to the binding of the inhibitor to the hydrophobic pocket of the enzyme. Further studies showed that the kojic acid-phenylalanine amide conjugate
(24a, KA-F-NH2, Fig. 15) showed the strongest inhibitory activity, which was maintained for over 3 months at 50 ºC, and acted as a noncompetitive inhibitor (46). Kim's group synthesized a series of kojic acid-tripeptides by solid-phase parallel synthesis and evaluated them as tyrosinase inhibitors (47). As depicted in Fig. 16, the resin-bound tripeptides reacted with activated kojic acid 23. After cleavage, the kojic acid-tripeptide conjugates 26 were obtained in good yields. Most of the kojic acid-tripeptide conjugates exhibited more potent tyrosinase inhibitory activities than kojic acid. The most potent compound (kojic acid-FWY) was about 100-fold more potent than kojic acid.
Furthermore, it was less toxic than kojic acid, and its storage stability was improved. N-Kojic-amino acids 27 and N-kojic-amino acid-kojiates 28 (Fig. 17) were synthesized to improve the tyrosinase inhibitory activity of kojic acid (48).
The N-kojic-amino acids 27 were synthesized starting from kojic acid and the appropriate amino acid by using DSC (N,N′-disuccinimidyl carbonate) and DMAP (4-dimethylaminopyridine). Subsequent esterification of another molecule of kojic acid with compound 27 gave the target compounds 28. Almost all synthesized compounds were more active than kojic acid. In general, the N-kojic-amino acid-kojiates 28 were found to have higher inhibitory activity than the N-kojic-amino acids 27. Among them, N-kojic-L-phenylalanyl kojiate was the most potent compound; it was 380 times more potent than kojic acid. The inhibition mechanism of these derivatives is considered to be non-competitive, similar to that of kojic acid (48). In another study, two molecules of kojic acid were connected by various linkers containing chemical bonds such as ester, amide, and thioether (Fig. 18) (49). Chlorokojic acid (10) was reacted with sodium azide in DMF and subsequently converted to the kojyl amine HBr salt 29. The coupling of compound 29 with succinyl chloride in the presence of triethylamine in THF afforded di-kojylsuccinic amide 30. The nucleophilic substitution of chlorokojic acid (10) with the potassium salt of kojyl succinic acid gave di-kojylsuccinate 31. The reaction of compound 10 with dithiols in the presence of TEA afforded thioethers 32 (Fig. 18). The synthesized dimers of kojic acid (30-32) were evaluated against the tyrosinase enzyme and melanin formation in melan-a melanocytes.
Among them, the dithioether derivatives (32a-c) showed the highest inhibitory activity. The results showed that the dithioether linker and its flexibility are important for improving anti-melanogenic activity. The propylene thioether compound 32b, with an IC50 value of 1.97 µM, was the most active inhibitor of the tyrosinase enzyme; it was about 25-fold more potent than kojic acid. In the melan-a cell-based assay, the butylene dithioether derivative 32c exhibited superior inhibition of melanin synthesis, being approximately 1000 times more potent than kojic acid (49). Moreover, compound 32b exhibited the most potent inhibitory activity against NO production in LPS-activated macrophages (50). Rho et al. further investigated the structure-activity relationship of kojic acid thioethers by preparing mono-kojyl thioethers 33, sulfoxides 34, and sulfones 35 (Fig. 19) (51). Kojyl intermediates 37 and 40 were derived from kojic acid as depicted in Fig. 20. The enzymatic assay revealed that the tyrosinase inhibitory activity of compound 42 was about 8 times more potent than that of kojic acid. This compound also exhibited significant melanin synthesis inhibitory activity in a cell-based assay. The results obtained for dimeric compound 42 compared with kojic acid indicate that connecting two pyrone rings of kojic acid through a suitable linker can be a useful strategy for finding new potent tyrosinase inhibitors (3).
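As a quick illustration of the potency arithmetic used throughout this section, the reported IC50 of 32b together with its stated 25-fold advantage implies an approximate kojic acid IC50 under the same assay conditions. The value computed below is a back-calculation for illustration, not a figure reported in the cited study.

```r
# Back-calculating the implied kojic acid IC50 from the reported data for 32b.
ic50_32b <- 1.97            # µM, reported IC50 of the propylene dithioether 32b
fold     <- 25              # reported fold-potency of 32b relative to kojic acid
ic50_kojic_implied <- ic50_32b * fold
ic50_kojic_implied          # ~49 µM (an inferred, assay-dependent estimate)
```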
C-2 side chain-modified kojic acid derivatives
Chemically, the 2-(hydroxymethyl) side chain in the kojic acid structure is a good site for oxidation to the related aldehyde. With this key aldehyde intermediate in hand, diverse derivatives can be prepared. As shown in Fig. 20, for oxidation of the primary alcohol in the kojic acid structure, the enolic OH should be protected with a suitable group such as p-methoxybenzyl. Oxidation of the protected kojic acid 36 with MnO2 gave the protected aldehyde 37. Kang et al. used this approach (53). The synthesis of the target compounds 52a,b is outlined in Fig. 23. Evaluation of compounds 52a,b against commercial mushroom tyrosinase revealed that the O-methylated compound 52a showed no inhibitory activity, while the thiosemicarbazone analog 52b, bearing a free enolic group, exhibited high activity against mushroom tyrosinase (IC50 = 11 µM). The latter compound was about 9-fold more potent than the parent compound kojic acid (53).
Miscellaneous derivatives
The topical formulations of kojic acid are used as skin-lightening agents. However, because of its hydrophilic character, kojic acid is hardly absorbed through the lipid membranes of its target cells, the melanocytes (54). In some investigations, it has therefore been attempted to connect kojic acid to a suitable carrier. Kim and co-workers prepared the masked kojic acid derivative 56 by reaction with compound 54 in the presence of TEA in CHCl3/EtOH, followed by hydrolysis in an acidic medium of H2O/MeOH (Fig. 24) (54). Interestingly, the alcoholic OH group of kojic acid did not take part in the reaction with compound 54. The effects of compound 56 on tyrosinase activity and melanin synthesis were evaluated by Kim and co-workers. The masked form of kojic acid 56 displayed higher stability than kojic acid. Also, its permeation through the skin was about 8 times greater than that of kojic acid. Compound 56 showed no tyrosinase inhibition effect compared with kojic acid in vitro; however, it displayed the same inhibitory effect as kojic acid on melanin synthesis in mouse melanoma cells and normal human melanocytes.
It seems that compound 56 is converted to kojic acid in living cells (55). In another study, Manosroi and co-workers investigated the entrapment of kojic acid and its oleate ester. Kojic oleate (57) was prepared from kojic acid and oleic acid in CH2Cl2 by using DCC (N,N′-dicyclohexylcarbodiimide) and DMAP (4-(N,N-dimethylamino)pyridine) (Fig. 25). In this study, the entrapment efficiencies of kojic acid and kojic oleate in the vesicles were investigated by dialysis and column chromatography, respectively. The results indicated that kojic oleate could be intercalated into the bilayer structure of vesicles composed of an amphiphile (Span 60, Tween 61, or DPPC)/cholesterol/dicetyl phosphate at a molar ratio of 9.5:9.5:1.0. In general, they concluded that esterification of kojic acid improved its entrapment in the vesicles (56).
Conclusion
Kojic acid is a small molecule with tyrosinase inhibitory activity, which has been used as a skin-lightening agent. It is the most intensively studied inhibitor of tyrosinase; however, it has unsatisfactory inhibitory activity, insufficient stability, and unwanted side effects. To overcome these disadvantages, researchers have attempted to design new analogs of kojic acid with higher potency, satisfactory stability, and safety. Diverse modifications of this small molecule have been made to find new tyrosinase inhibitors. The main modifications were conversion of the γ-pyranone to the 4-pyridinone, esterification of the 2-(hydroxymethyl) group, C-2 side chain modification, and conjugation of kojic acid with amino acids.
Conflict of interest statement
The authors declare that they have no conflict of interest in this study. | 2017-08-27T08:36:57.830Z | 2015-01-10T00:00:00.000 | {
"year": 2015,
"sha1": "2e7bee4a2aec35bb145ce4a5330f8ad110d64385",
"oa_license": "CCBYNC",
"oa_url": "http://pbr.mazums.ac.ir/files/site1/user_files_88c428/danial-A-10-26-1-921b99d.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2e7bee4a2aec35bb145ce4a5330f8ad110d64385",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
45323147 | pes2o/s2orc | v3-fos-license | The multiple carrier model of nonribosomal peptide biosynthesis at modular multienzymatic templates.
Gramicidin S synthetase 1 and 2 were affinity-labeled at their thiolation centers either by thioesterification with the amino acid substrate or by specific alkylation with the thiol reagent N-ethylmaleimide in combination with a substrate protection technique. The labeled proteins were digested either chemically by cyanogen bromide or by proteases. An efficient multistep high pressure liquid chromatography methodology was developed and used to isolate the active site peptide fragments of all five thiolation centers of gramicidin S synthetase in pure form. The structures of these fragments are investigated by N-terminal sequencing, mass spectrometry, and amino acid analysis. Each of the active site peptide fragments contains the consensus motif LGG(H/D)S(L/I), which is specific for thioester formation in nonribosomal peptide biosynthesis. It was demonstrated that a 4'-phosphopantetheine cofactor is attached to the central serine of the thiolation motif in each amino acid-activating module of the gramicidin S synthetase multienzyme system forming the thioester binding sites for the amino acid substrates and catalyzing the elongation process. Our data are strong support for a "multiple carrier model" of nonribosomal peptide biosynthesis at multifunctional templates, which is discussed in detail.
Microbial organisms produce a variety of structurally diverse, low molecular weight bioactive peptides, depsipeptides, peptidolactones, and lipopeptides (1). These secondary metabolites exhibit valuable properties qualifying them for biotechnological and medical uses, for example as antibiotics, antiviral and antitumor agents, immunomodulators, and biosurfactants. In general, the biosynthesis of these compounds is accomplished nonribosomally by large multienzyme systems. More than 20 years ago, the multienzymatic thiotemplate model was proposed by several groups (2-5). According to this mechanism, amino acid activation occurs in a two-step process including (a) aminoacyl adenylation, similar to the amino acid activation process catalyzed by tRNA ligases, and (b) aminoacyl thioesterification at specific reactive thiol groups of the multienzyme (thiotemplates). In analogy to fatty acid synthase, peripheral cysteines had been proposed as the thiotemplate sites. A central thiol group of an intrinsic 4′-phosphopantetheine carrier was assumed to interact with the thioesterified substrate amino acids, managing a step-by-step elongation of the peptide product in a series of transpeptidation and transthiolation reactions (2-5).
Sequence alignments of peptide synthetases revealed that they are composed of modular homologous building blocks, which comprise 1000-1500 amino acid residues. Each of these modules acts as an independent enzyme, which catalyzes the selection, activation and, in some cases, modification of its specific amino acid substrate (for recent reviews, see Refs. 9 and 10, and references therein). In agreement with the polyenzyme model proposed by Lipmann in 1954 (11), the specific linkage of the modules defines the amino acid sequence of the peptide product. To evaluate structure/function relationships of these multienzymes, our research is focused on the investigation of the thioester binding sites of gramicidin S synthetase 1 and 2. To characterize these essential structural elements in detail, in previous studies we affinity-labeled both enzymes either with a radioactive substrate amino acid (12) or with the thiol inhibitor N-ethylmaleimide (13). The labeled proteins were digested with cyanogen bromide or proteases. The radioactively labeled thiotemplate site peptide fragments were isolated in pure form by multistep reversed phase HPLC. All of them contained the highly conserved thioester binding motif Leu-Gly-Gly-(His/Asp)-Ser-(Leu/Ile). Apparently the serine residue of this motif is involved in the thioester formation of gramicidin S synthetase with its amino acid substrates (12). The core of the thioester binding motif, Asp-Ser-Leu, has been identified as a binding site for a 4′-phosphopantetheine (Pan) cofactor in acyl carrier proteins/domains of fatty acid and polyketide synthases (12), implying that each module of gramicidin S synthetase would be equipped with a separate Pan prosthetic group. Recently we provided evidence by mass spectrometric and amino acid analysis that such a cofactor is indeed attached to the active serine of the thioester binding sites of GS2 for L-valine (13) and GS1 for phenylalanine (14). In this paper we present for the first time a thorough study of the structures of all thiotemplate sites of a peptide-forming multienzyme. Our results demonstrate that each amino acid-activating module of gramicidin S synthetase is equipped with a separate 4′-phosphopantetheine prosthetic group forming the thioester binding site for the amino acid substrate and catalyzing the elongation process. These data are strong support for the hypothesized "multiple carrier model" of nonribosomal peptide biosynthesis at multifunctional protein templates, which is discussed in detail.
Enzyme Purification and Assays-Gramicidin S synthetase 1 and 2 were purified as published by Vater et al. (16). Assays for thioester formation of these enzymes with substrate amino acids (16,17) and biosynthesis of gramicidin S (16, 18) were performed as described previously. The protein concentration was measured using the procedures of Warburg and Christian (19) and Bradford (20).
Specific Labeling of the Thioester Binding Sites of GS1 and 2 with N-[3H]Ethylmaleimide-5-30 mg of the multienzyme in 3 ml of 20 mM phosphate buffer (pH 7.2) containing 1 mM EDTA (buffer P) were specifically protected at their thiotemplate sites by incubation with 2 mM ATP, 10 mM MgCl2, and a saturating substrate amino acid concentration of 15 µM for 10 min at 37°C. Reactive groups at the bulk of the synthetase-substrate thioester complex were saturated by incubation with 2 mM NEM for 30 min at 37°C. The resulting complexes were isolated by gel filtration on Sephadex G-25 at 3°C with buffer P as the eluent and concentrated to a final volume of 3 ml. By incubation with 2 mM dithioerythritol for 60 min at 37°C, the substrate amino acid was removed from the synthetase. The enzymes were again isolated by gel filtration (Sephadex G-25; 3°C; eluent: buffer P) and concentrated by ultrafiltration to 3 ml. For all concentration steps, an Amicon Ultrafilter XM 50 was used. Finally, the respective thioester binding sites of the synthetases were specifically labeled by incubation of the deprotected multienzyme with 23 µM [3H]NEM at 37°C for 30 min. The labeled protein was isolated by gel filtration on Sephadex G-25 at 3°C with buffer P (pH 8.2) as the eluent, concentrated in a Speed Vac concentrator to a final protein concentration of approximately 2 mg/ml, and digested with trypsin.
Affinity Labeling of the L-Leucine Thiotemplate Site of Gramicidin S Synthetase 2-GS2 was specifically labeled at the thioester binding site for L-Leu by acid-stable incorporation of L-[14C]leucine using the procedure of Schlumbohm et al. (12). Five milligrams of highly purified GS2 were incubated with 2 mM ATP, 10 mM MgCl2, and 1 mM EDTA in 20 mM phosphate buffer (pH 7.2) at a saturating L-Leu concentration of 15 µM for 10 min at 37°C. The residual cysteines of the GS2-substrate complex were alkylated by incubation with 5 mM N-ethylmaleimide. The labeled and modified synthetase was isolated by gel filtration on Sephadex G-25 at 3°C. The eluent was adjusted to pH 2 by addition of formic acid, because the GS2-substrate amino acid complexes were stable only in acidic medium. Approximately 0.8 mol of L-[14C]Leu/mol of GS2 was bound.
Separation of Peptides by Reversed Phase HPLC-A preparative C18 EnCaPharm column as well as analytical C18 ODS Hypersil (5 µm) columns from Shandon and Knauer were used for the separation of the peptide fragments of the synthetases.
Peptide mixtures were dissolved in 300-500 µl of 10-20% eluent B, loaded onto the columns, which were equilibrated in 10-20% B, and eluted with linear gradients of acetonitrile.
Characterization of the Radioactively Labeled Thiolation Site Peptide Fragments of Gramicidin S Synthetase-The investigation of the active site peptides by N-terminal sequencing, amino acid analysis, and mass spectrometry (fast atom bombardment and electrospray ionization MS) was performed as described previously by Stein et al. (13).
RESULTS
The thioester binding sites of gramicidin S synthetase 2 for L-Pro, L-Val, and L-Orn, as well as of gramicidin S synthetase 1 for L-Phe, were specifically labeled with N-[3H]ethylmaleimide using a substrate protection technique (13). In the first step, the thiolation center of the peptide synthetase was protected by its cognate amino acid substrate, followed by alkylation of the reactive residues at the bulk of the multienzymes with high concentrations of non-radioactive NEM (2 mM). After removal of the substrate amino acid from the thiolation site by incubation with dithioerythritol, the reactive thiol of this reaction center was specifically labeled with low concentrations of [3H]NEM (23 µM). After each step, the resulting modified synthetase complexes were separated from the substrates by G-25 gel filtration. To obtain the radioactively labeled active site peptide fragments of the thiolation sites of gramicidin S synthetase, the [3H]NES-alkylated enzymes were digested with trypsin. In the case of GS2, this proteolysis should result in a very complex mixture of 391 peptides, 27 single lysine residues, and 14 arginine residues, assuming quantitative fragmentation of the Lys-Xaa or Arg-Xaa peptide bonds. To purify the labeled active site peptide fragments from these mixtures to homogeneity, high resolution separation procedures were required. As summarized in Fig. 1, we developed a multistep reversed phase HPLC methodology using preparative and analytical C18 columns and acetonitrile-containing eluent systems differing in the composition and pH of their aqueous components. If necessary, this procedure was repeated after additional cleavage(s) with other proteases. This technique can be utilized as a very efficient general strategy for the isolation of a target peptide from very complex peptide mixtures containing several hundred contaminants.
As indicated in Fig. 2A, 5-10-mg portions of the tryptic peptide mixtures of the [3H]NES-alkylated GS2 complexes were separated in the first step by reversed phase chromatography on a C18 EnCaPharm column, applying a linear acetonitrile gradient from 10 to 80% eluent B in 350 min of eluent system 1. As a representative example of the developed separation procedure, the purification of the peptide fragment of the L-ornithine thiolation site of GS2 is shown in Fig. 2 (B and C). The radioactively labeled peptide fraction containing this active site peptide fragment, with a retention time of 250-260 min (60-62% eluent B, Fig. 2A), was rechromatographed by reversed phase HPLC on Hypersil ODS with an acetonitrile gradient of eluent system 3. As is apparent from the chromatogram, the radioactivity eluted in one peak with a retention time of 119-123 min (42% eluent B, Fig. 2B). Because the absorbance profile of this chromatography implied that the radioactive peptide fraction contained at least 2-3 contaminating tryptic peptides of GS2, it was subsequently digested with endoproteinase GluC from Staphylococcus aureus V8 and fractionated again by reversed phase HPLC on Hypersil using an acetonitrile gradient of eluent system 3. The fraction in Fig. 2C with the main portion (80%) of the 3H activity was detected at a retention time of 217-227 min (42% B). Two minor peaks eluted at retention times of 172-175 min (37% B) and 185-188 min (38% B), each with 10% of the radioactivity. Liquid phase sequencing of these three radioactive peptide fractions demonstrated that all of them contained an active site peptide fragment of GS2 for L-Orn thiolation. The active site peptide of GS2 for L-Orn thiolation contained no glutamic acid residue. Therefore, fragmentation of the peptide by GluC proteolysis at pH 4.0 in ammonium acetate buffer did not occur (main product). However, unspecific fragmentation of the Asp-Xaa peptide bonds of this peptide led to the minor products. In addition, the contaminating peptides were cut into smaller pieces, which could efficiently be removed by HPLC.
After the initial preparative EnCaPharm chromatography, the tryptic fragment of the L-Val thiolation site of GS2 was purified to homogeneity in only one additional reversed phase HPLC step (Fig. 1). This strategy allowed a much more efficient preparation of the active site peptide than the previously reported four- to five-step purification procedure including two or three fragmentation steps (13). The L-Leu thiotemplate site of GS2 was affinity-labeled with L-[14C]Leu. Here, the purification procedure described by Schlumbohm et al. (12) was optimized (Fig. 1).
Each of the isolated active site peptide fragments of the thiolation sites of gramicidin S synthetase was investigated by N-terminal sequencing using a liquid phase sequencer (Edman degradation). The obtained sequences of the thiotemplate sites of GS1 for L-Phe (D564-K575) and of GS2 for L-Pro (I983-K1008), L-Val (I2029-R2044), L-Orn (V3075-K3090, main radioactive product), and L-Leu (F4120-L4132) are in accordance with the corresponding sequences derived from the grsA and grsB1-4 gene segments (7,8). All peptide fragments contained the highly conserved thioester binding motif LGG(H/D)S(L/I). In each case the radioactivity was eliminated from the peptide during the first Edman degradation step. Instead of the invariant serine residue of the thiolation motif expected from the gene-derived sequence, a dehydroalanine was found at this position. To prove our hypothesis that the radioactive tracer is not directly bound at the reactive serine and that an additional structural element, a 4′-phosphopantetheine carrier, is attached to the reactive serine of the (H/D)S(L/I) core, all isolated thiotemplate site peptide fragments of gramicidin S synthetase were investigated by mass spectrometric techniques.
As an example, the extensive mass spectrometric analysis of the active site hexadecapeptide fragment of GS2 for L-Orn thiolation is demonstrated in Fig. 3. A quasimolecular ion [M + H]+ was determined for this fragment by high field fast atom bombardment mass spectrometry (FAB-MS) at m/z = 2207.9 Da, as shown in Fig. 3A. This mass is appreciably higher than 1742.9 Da, the mass calculated for the free hexadecapeptide derived from the grsB3-derived DNA sequence (7). The structure of the active site peptide fragment was determined by interpretation of the fragmentation data (Fig. 3B) according to rules defined by us in earlier work (21,22). The fragmentation pattern observed is a series of N- and C-terminal sequence ions (b, c, y, and z) determining the structure of the active site peptide substituted with Pan-NES. The signals at m/z = 1822 and 1724 Da can be attributed to the phosphorylated peptide (MP) and to the dehydrated species (MΔ) bearing a dehydroalanine in position 14, which originate from partial or complete elimination of the NES-Pan adduct.
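A minimal sketch of the mass bookkeeping behind these assignments follows. The increments for the phosphopantetheine (Ppant) arm and the N-ethylsuccinimido (NES) adduct are standard textbook approximations, not values taken from this paper, so agreement within 1-2 Da of the low-resolution FAB data is the expected outcome.

```r
# Checking the reported masses of the L-Orn active-site hexadecapeptide (in Da).
peptide <- 1742.9   # free peptide, calculated from the grsB3-derived sequence
ppant   <- 340.1    # approx. mass added by phosphopantetheinylation of Ser
nes     <- 125.1    # approx. mass added by N-ethylmaleimide alkylation of the thiol

peptide + ppant + nes + 1.0   # [M + H]+ of the Pan-NES peptide: ~2209 (obs. 2207.9)
peptide + 80.0                # M_P, phosphopeptide after loss of pantetheine-NES:
                              #   ~1823 (obs. 1822)
peptide - 18.0                # M_Delta, dehydroalanine species after complete
                              #   beta-elimination of the adduct: ~1725 (obs. 1724)
```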
The results of electrospray ionization mass spectrometry measurements of the thiolation site fragment of GS2 for L-ornithine are shown in Fig. 4. In the ESI mass spectrum (Fig. 4A), doubly charged a, b, c and x, y, z ion series of the phosphorylated hexadecapeptide as well as of the dehydrated species were observed. From these data the structures shown in Fig. 4C were derived, which are consistent with the amino acid sequence determined for this active site peptide fragment by gas phase sequencing and FAB-MS. The electrospray data demonstrate that the phosphoryl group is located at the serine residue in position 14 of the hexadecapeptide, corroborating the attachment of the Pan carrier at this position.
The interpretation of an attached Pan prosthetic group on the active site peptide fragments is supported by investigation of their amino acid content after total hydrolysis. A total of 1-1.3 mol of 3-aminopropionic acid (β-alanine), a constituent of 4′-phosphopantetheine, was found per mole of the active site peptides.

[Fig. 3 legend: The spectra were acquired on an Opus data system and calibrated using CsI clusters, yielding molecular masses based on accurate atomic numbers (13). A quasi-molecular ion [M + H]+ was determined at a mass-to-charge ratio (m/z) of 2207.9 Da (m/z = 2208.9 Da; the 13C isotope is the most abundant species in the cluster). B, structure of the [3H]NES-Pan adduct of the active site hexadecapeptide fragment as determined from the fragmentation pattern, which represents a series of N- and C-terminal sequence ions (b, c, y, and z).]
DISCUSSION
In a previous paper (11), we provided evidence from chemical and genetic studies that an active serine is involved in the covalent binding of the substrate amino acids at each reaction center of gramicidin S synthetase 2, instead of a cysteine as proposed in the original version of the thiotemplate mechanism (2-5). This serine is part of a strictly conserved LGG(H/D)S(L/I) motif, which has been detected as an essential structural element of each amino acid-activating domain of the multifunctional peptide synthetases whose gene sequences have been determined so far. However, the chemical features of the reactive intermediates are consistent with the existence of thioester bonds (2-5, 11). Therefore, it seems unlikely that the substrate amino acids of GS2 are directly bound to these serine residues at the reaction centers. This conclusion is supported by the strong similarity of the (H/D)S(L/I) core of the thiotemplate motifs of GS2 to the 4′-phosphopantetheine binding sites of fatty acid and polyketide synthases. To clarify this fundamental question in detail, we isolated the thiotemplate site peptide fragments of gramicidin S synthetase and investigated them by N-terminal sequencing, mass spectrometry, and amino acid analysis.
Each of the five thiotemplate site peptide fragments of gramicidin S synthetase was investigated by FAB- and ESI-MS. As is apparent from Table I, the molecular masses of all labeled thiolation site peptide fragments are appreciably higher than the values calculated from the amino acid sequences of GS1 and GS2 (7,8). Each of the observed mass differences indicates a covalent substitution of the peptide moiety with a 4′-phosphopantetheine cofactor, which is either alkylated with [3H]NES or, in the case of the L-Leu thiolation site fragment of GS2, thioesterified with the amino acid substrate. The nature, covalent linkage, and site of location of the Pan substituent attached to the serine residue of the thiolation motif were proven by interpretation of the fragmentation data obtained from FAB-MS as well as collision-induced dissociation ESI-MS, as demonstrated for the active site peptide of GS2 for the thioesterification of L-ornithine in Figs. 3 and 4. The nature of the substituent is supported by amino acid analysis: one mole of each active site peptide fragment of gramicidin S synthetase contained approximately 1-1.3 mol of β-alanine, which is a constituent of 4′-phosphopantetheine.
Our results, summarized in Table I, give evidence that each of the five amino acid-activating modules of gramicidin S synthetase is equipped with a separate 4′-phosphopantetheine prosthetic group esterified to the active site serine in the (H/D)S(L/I) core of its thiolation motif. The cysteamine thiol groups of the cofactors represent the thioester binding sites for the substrate amino acids, instead of cysteine residues as proposed in the original version of the thiotemplate hypothesis (2-5). Our study of the structure of all thiotemplate sites of gramicidin S synthetase demonstrates for the first time that peptide-forming multienzymes contain multiple Pan cofactors, one at each reaction center of their amino acid-activating modules. Our data are strong support for the multiple carrier model of nonribosomal peptide biosynthesis.
Since the first gene sequences of peptide-forming multienzymes were elucidated within the last 6 years, important progress has been achieved in the analysis of their structure-function relationships (for recent reviews, see Refs. 9 and 10). Peptide synthetases are composed of homologous building blocks comprising 1000-1500 amino acid residues, which are distinguished by a linear array of highly conserved sequence motifs representing the reactive structures of functional domains for substrate binding and catalysis of all intermediate steps in peptide biosynthesis, as demonstrated for gramicidin S synthetase in Fig. 5. Each of these modules functions as an independent enzyme, which catalyzes the selection, activation, and in some cases modification of its amino acid substrate. By affinity labeling of peptide-forming multienzymes as well as by site-directed mutagenesis and specific dissection of the gene structures coding for these enzymes, followed by functional analysis, some of these motifs could be attributed to specific functions. As illustrated in Fig. 5 for gramicidin S synthetase, each of the amino acid-activating modules is organized into specific functional domains for aminoacyl adenylation (500-600 amino acid residues; gray) (24-30), thioester binding (80-100 amino acid residues; black) (12-14, 30), and elongation of the peptide product (E, 300-400 amino acids; white) (31-35). Epimerizing/racemizing modules, as in the case of GS1, contain an elongation domain with significant differences from non-epimerization modules (E-Epi, 300-400 amino acids; light gray) (14, 31-35). For interacting protein components of peptide-forming multienzymes such as GS1 and GS2, a specific domain at the N terminus of the acceptor enzyme is observed, showing significant homologies to the elongation domains (E_N; light gray) (10, 31, 33). At the C-terminal end of each peptide-forming multienzyme system such as GS2, a sequence homologous to thioesterase II (T) is found instead of the elongation domain observed in all other modules (34, 35).
From the present knowledge of structure-function relationships of peptide synthetases, we infer that three specific sites for the Pan carrier within an amino acid-activating module are involved in the biosynthetic process: a charging position for thioester formation with the substrate amino acids as well as a peptidyl-acceptor and a peptidyl-donator site. Most probably the elongation domain contains the structural elements for the interaction between neighboring Pan carriers, including the peptidyl-donator site for the first and the peptidyl-acceptor site for the subsequent cofactor. This transpeptidation reaction resembles the peptidyl-transfer process between aminoacyl-tRNA and peptidyl-tRNA within the ribosomal A and P sites. This model allows a much simpler and more straightforward description of the biosynthetic process than the old version (2-5), because the assumption of a central carrier and the transthiolation reactions required for the transport of the peptide intermediates can be omitted. In addition, the charging of a multienzyme with all intermediates of the growing peptide chain, which was not explicable by the former hypothesis, is easily explained by the multiple Pan carrier concept. As a representative example of the multiple carrier model, the first steps of the biosynthesis of gramicidin S leading to the formation of the tripeptide D-Phe-L-Pro-L-Val are shown in Fig. 6. The amino acid-activating modules catalyze aminoacyl adenylation and thioesterification of their cognate amino acids to their Pan carriers (Fig. 6A) at the charging position within the adenylation domain (C_amino acid). The energy for this process is provided by hydrolysis of the ATP α-β phosphate linkage, resulting in the release of AMP and pyrophosphate (PP_i).
FIG. 5. Schematic presentation of the modular architecture of gramicidin S synthetase as a representative example of peptide-forming multienzymes. Each of the amino acid-activating modules of gramicidin S synthetase is organized into functional domains for amino acid adenylation (gray), thioester binding (black), epimerization (light gray), and elongation of the peptide product (white). For details see the text.
a The molecular masses of the peptides were calculated from the gene-derived sequence (7, 8). b Mass was calculated as the sum of the molecular masses of the peptide moiety, the 4′-phosphopantetheine substituent that is covalently attached to the serine residue, and the radioactively labeled tracers (shaded boxes; NES: N-ethylsuccinimido, and L-leucine, respectively) bound to the reactive thiol group of the Pan cofactors. c Results of the investigation of the active site peptide fragments by electrospray mass spectrometry (ESI-MS).
As illustrated in Fig. 6B, the elongation cycle starts with the interaction of gramicidin S synthetase 1 (GS1) and gramicidin S synthetase 2 (GS2). The Pan carrier of GS1 transports phenylalanine to the elongation site (E_1-Epi), where epimerization of Phe also occurs. The dipeptide D-Phe-L-Pro (F-P) is formed in E_1 in a transpeptidation reaction by nucleophilic attack of the imino group of L-Pro (Pan_Pro carrier in its acceptor site) at the thioester-activated carboxyl C-atom of D-Phe (Pan_Phe carrier in its donator site). Most probably the domain E_N of GS2 is involved in this process. A free thiol group of the Pan_Phe carrier is recovered in this reaction, which is able to bind Phe again and to start a new cycle of elongation. As demonstrated in Fig. 6C, the Pan carrier of the L-Pro module of GS2, thioesterified with the dipeptidyl intermediate (F-P), is translocated to its donator site. By interaction with the L-valyl-Pan_Val carrier in its acceptor site, the tripeptide D-Phe-L-Pro-L-Val is formed in a second transpeptidation reaction. In a similar manner the tetra- and pentapeptide intermediates D-Phe-L-Pro-L-Val-L-Orn and D-Phe-L-Pro-L-Val-L-Orn-L-Leu are assembled, involving the individual Pan carriers of the ornithine- and leucine-activating domains of gramicidin S synthetase 2 (data not shown). Finally, the decapeptide gramicidin S is formed by cyclization of two pentapeptide moieties.
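The assembly logic of the multiple carrier model lends itself to a small simulation. The following Python sketch is purely illustrative: the module and residue names follow gramicidin S, but the class layout and the handling of epimerization are simplifications of ours, not the enzyme's actual chemistry.

```python
# Toy multiple-carrier assembly line for gramicidin S (illustrative only).
class Module:
    def __init__(self, residue, epimerize=False):
        self.residue = residue        # cognate amino acid of this module
        self.epimerize = epimerize    # E-Epi domain present (GS1 Phe module)
        self.carrier = None           # chain bound to this module's Pan thiol

    def charge(self):
        """Aminoacyl adenylation + thioesterification onto the module's
        own 4'-phosphopantetheine carrier (ATP -> AMP + PPi)."""
        aa = self.residue
        if self.epimerize:            # L-Phe is epimerized to D-Phe
            aa = "D-" + aa.removeprefix("L-")
        self.carrier = [aa]

def elongate(donor, acceptor):
    """Transpeptidation: the chain on the donor carrier is transferred to
    the aminoacyl group on the acceptor carrier; the donor thiol is freed
    and could be charged again for a new elongation cycle."""
    acceptor.carrier = donor.carrier + acceptor.carrier
    donor.carrier = None

modules = [Module("L-Phe", epimerize=True),   # GS1
           Module("L-Pro"), Module("L-Val"),  # GS2 modules
           Module("L-Orn"), Module("L-Leu")]

for m in modules:
    m.charge()
for donor, acceptor in zip(modules, modules[1:]):
    elongate(donor, acceptor)

penta = "-".join(modules[-1].carrier)
print(penta)                                   # D-Phe-L-Pro-L-Val-L-Orn-L-Leu
print(f"gramicidin S = cyclo({penta} / {penta})")
```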
For the experimental verification of the multiple carrier model, more detailed investigations of the structure-function relationships of peptide synthetases are necessary. In particular, the three sites we have attributed to the functions of the Pan carrier have to be localized within an amino acid-activating module and characterized. For this purpose, the functional characterization of the highly conserved consensus motifs, especially in the putative elongation domain, will provide the key to understanding the process of peptide bond formation. The existence of specific recognition elements for the growing peptide chain within this part of a module is a possible explanation for the unidirectional assembly of the peptide product, presumably in combination with conformational changes induced during the transpeptidation reactions. Approaches to clarify these essential questions, such as site-directed mutagenesis of putative functional amino acids within the conserved motifs in combination with elongation studies, as well as the determination of the three-dimensional structure of peptide-forming multienzymes, are in progress. | 2018-04-03T06:02:43.858Z | 1996-06-28T00:00:00.000 | {
"year": 1996,
"sha1": "01f7f0646bc4f17f42f66fdf9f7dc42c07ad3ba4",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/271/26/15428.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "b17856f2bef2f6bc7d4d7bdfc24161c06bd2ca2f",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
270439713 | pes2o/s2orc | v3-fos-license | KPC-luciferase-expressing cells elicit an anti-tumor immune response in a mouse model of pancreatic cancer
Mouse models for the study of pancreatic ductal adenocarcinoma (PDAC) are well-established and representative of many key features observed in human PDAC. To monitor tumor growth, cancer cells that are implanted in mice are often transfected with reporter genes, such as firefly luciferase (Luc), enabling in vivo optical imaging over time. Since Luc can induce an immune response, we aimed to evaluate whether the expression of Luc could affect the growth of KPC tumors in mice by inducing immunogenicity. Although both cell lines, KPC and Luc-transduced KPC (KPC-Luc), had the same proliferation rate, KPC-Luc tumors were significantly smaller or absent 13 days after orthotopic cell implantation, compared to KPC tumors. This coincided with the loss of the bioluminescence signal over the tumor region. Immunophenotyping of blood and spleen from KPC-Luc tumor-bearing mice showed a decreased number of macrophages and CD4+ T cells, and an increased accumulation of natural killer (NK) cells in comparison to KPC tumor mice. Higher infiltration of CD8+ T cells was found in KPC-Luc tumors than in their controls. Moreover, the immune response against Luc peptide was stronger in splenocytes from mice implanted with KPC-Luc cells compared to those isolated from mice bearing wild-type KPC tumors, indicating increased immunogenicity elicited by the presence of Luc in the PDAC tumor cells. These results must be considered when evaluating the efficacy of anti-cancer therapies, including immunotherapies, in immunocompetent PDAC or other cancer mouse models that use Luc as a reporter for bioluminescence imaging.
Pancreatic ductal adenocarcinoma (PDAC) is a malignancy with high incidence and mortality, and it is characterized by its aggressive biology and poor prognosis 1. The current treatments for PDAC are limited, highlighting the importance of finding novel strategies. Murine cancer models play a central role in understanding PDAC progression and pathophysiology, as well as in testing novel treatments 2,3. As most PDAC patients show mutations in the KRAS and Tp53 genes 4, the KPC (Kras G12D/+; Trp53 R172H/+; P48-Cre) mouse is the most frequently used genetically engineered model in pancreatic cancer research, because it reproduces the immune microenvironment of human PDAC, which allows the evaluation of preclinical therapeutic agents including immune therapy 5-7. The tumors that arise spontaneously in the KPC mice have histological similarities to human PDAC, tending to be highly stromal with dense desmoplasia. Still, they can take a long time to form, increasing the costs of the experiment 8. It is therefore common practice to employ transplantation models using human or mouse PDAC cells isolated from primary tumors that are implanted into recipient mice via subcutaneous, intravenous, or orthotopic injection. In particular, orthotopic models are attractive because the tumors grow in the native organ, and distant metastasis occurs spontaneously and rapidly throughout the abdomen in a manner consistent with clinical human disease 9. Additionally, orthotopic implantation of PDAC cells recapitulates the tumor microenvironment of human PDAC more accurately than subcutaneous tumor models, is cost-effective, and at the same time reduces the degree of tumor heterogeneity seen in genetically engineered mouse models 7,10.
In vivo live imaging is a valuable non-invasive tool for investigating cancer progression and unveiling the evasive response patterns of tumors. In preclinical research, cancer cells can be labeled with optically detectable markers to identify the location and putative spread of tumors. This methodology commonly employs
Cell proliferation assay
To measure cell proliferation in vitro, KPC and KPC-Luc cells were harvested and resuspended in complete DMEM, and 1 × 10^5 cells were plated in a T-25 flask. Cells were counted manually using a hemocytometer and the assay was repeated three times. To determine cell viability, 1 × 10^5 KPC or KPC-Luc cells were plated in a final volume of 100 µl in a 96-well plate for 48-72 h. After this time, the MTS assay was performed: 20 µl per well of CellTiter 96® AQueous One Solution Reagent (Promega) was added and the plate was incubated at 37 °C for 2 h. The amount of soluble formazan produced by cellular reduction of the tetrazolium compound [3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium, inner salt; MTS] was measured by the absorbance at 490 nm using a plate reader (BioTek).
Animals and orthotopic PDAC mouse model
All animal experimental procedures were performed in compliance with the European (2010/63/EU) and German regulations on Animal Welfare and were approved by the administration of Lower Saxony (LAVES; Nr. 33.19-42502-04-20/3527). All authors complied with the ARRIVE guidelines. Male C57BL/6 mice (12-15 weeks old) were kept under a 12 h dark:light cycle with ad libitum access to food and water. For the orthotopic implantation, mice received 20-30 µl analgesia subcutaneously (s.c.) (Carprofen, Rycarfa 5 mg/kg; diluted 1:10 in 0.9% NaCl) and were anesthetized via inhalation of 2-3% isoflurane. Next, a small incision was made in the midline to access the pancreas, and 5 × 10^4 KPC or KPC-Luc cells were injected into the head of the pancreas in 15 µl of Phosphate Buffered Saline (PBS) using an insulin syringe. Separate sets of sutures were used to close the peritoneum and skin (Ethicon, 4.0, 22 mm). Animals received analgesia for three days post-implantation and were monitored daily for weight loss and signs of distress following the surgery. Mice were sacrificed with an overdose of isoflurane, followed by cervical dislocation, and the pancreatic tumors were excised and measured by a caliper. Blood was extracted by cardiac puncture. The spleen, peritoneal organs, lymph nodes, and lungs were excised and visually inspected for macroscopic metastases. The degree of metastatic spread was assessed by applying a metastasis score (Suppl. Table 1). For each organ, a number from 0 to 3 was given according to the number of metastases macroscopically present. The total metastasis score was calculated by adding the individual organ scores. Each animal was scored individually.
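A minimal sketch of this scoring scheme is given below; the organ list and the per-animal input format are illustrative assumptions, since the exact rubric lives in Suppl. Table 1.

```python
# Sketch of the metastasis scoring described above: each organ gets 0-3
# by the number of macroscopic metastases, and the total is the sum over
# organs. Organ list and input format are assumptions, not Suppl. Table 1.
ORGANS = ["liver", "kidneys", "mesentery", "spleen", "lymph_nodes", "lungs"]

def metastasis_score(per_organ: dict) -> int:
    """Sum of per-organ scores, each clamped to the 0-3 range."""
    total = 0
    for organ in ORGANS:
        score = per_organ.get(organ, 0)
        total += max(0, min(3, score))
    return total

# Example animal: a few liver and mesenteric lesions, nothing elsewhere.
print(metastasis_score({"liver": 2, "mesentery": 1}))  # -> 3
```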
In vitro and in vivo bioluminescence imaging (BLI)
To confirm luciferase expression in the cells, 1 × 10^5 KPC or KPC-Luc cells were plated in a 96-well plate and incubated overnight. The next day, after washing the cells with PBS, D-luciferin solution was added to the wells (0.5-5 µM) and the bioluminescent signal was measured using the optical imaging scanner IVIS Spectrum (Perkin Elmer). For in vivo bioluminescence measurements, mice were injected with 150 mg/kg body weight of D-luciferin
Enzyme-linked immunosorbent assay (ELISA)
A standard sandwich ELISA was performed to determine the IFN-γ levels in the splenocyte culture supernatants (mouse Kit 88-7314-88, Invitrogen) and for the detection of IgG antibodies in the serum. Briefly, a 96-well ELISA plate was coated with the capture antibody (or Luc peptide; 10 µg/ml) in coating buffer and incubated at 4 °C overnight. Then the plate was washed three times with wash buffer (PBS-0.05% Tween). After blocking for 1 h, the plate was washed and incubated with the samples and standards for 2 h at RT. The plate was then incubated with the biotinylated detection antibody for 1 h, followed by streptavidin-peroxidase incubation for 30 min, and finally incubated with 3,3′,5,5′-tetramethylbenzidine (TMB) substrate for approximately 10 min at RT. The reaction was stopped with 2 N H2SO4 and the absorbance was assessed with a plate reader (BioTek) at 450 nm.
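Concentrations are typically read off a standard curve by interpolation; the paper does not state its fitting method, but a four-parameter logistic (4PL) fit is a common choice, sketched below with made-up standard concentrations and optical densities.

```python
# Illustrative 4PL standard-curve fit for converting 450-nm absorbances
# into IFN-gamma concentrations. Standards and ODs below are invented;
# the authors' actual fitting procedure is not stated in the paper.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a = response at zero dose, d = response at infinite dose,
    # c = inflection point (EC50), b = slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([15.6, 31.3, 62.5, 125, 250, 500, 1000])  # pg/ml
std_od   = np.array([0.08, 0.15, 0.28, 0.52, 0.95, 1.60, 2.30])

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.05, 1.0, 200, 3.0], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    """Invert the 4PL curve to interpolate a sample concentration."""
    return c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

print(od_to_conc(0.70, *params))  # pg/ml for a sample OD of 0.70
```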
Immunofluorescence staining
Dissected tumors were fixed in 4% paraformaldehyde (PFA) in PBS overnight, then embedded in paraffin and cut into tissue sections of 4 μm thickness using a microtome. Tumor sections were subjected to immunofluorescence staining for the detection of cytotoxic T cells and macrophages. For this purpose, slides were deparaffinized with xylol and rehydrated with a graded series of EtOH, blocked with fish serum (37527, Invitrogen) for 20 min, and incubated overnight with a rabbit primary monoclonal antibody against CD8b (1:500 dilution, ab228965, Sigma) or a polyclonal antibody against CD68 (1:500 dilution, ab125212, Sigma), followed by 1 h incubation with a goat anti-rabbit secondary antibody conjugated with Alexa Fluor 647 (1:200 dilution, a21244, Invitrogen), and counterstained with DAPI for 30 min for nuclei staining. Images were acquired with a Leica SP8 scanning confocal microscope using 20× magnification, 1024 × 1024 image resolution, and 12-bit depth (approximately 6-12 images per tumor depending on the size). Images were analyzed with Fiji to quantify the number of positive cells. After automatic adjustment of the threshold, the background was eliminated by noise despeckling, a watershed mask was applied for cell segmentation, and finally the number of cells was counted by the function "analyze particles".
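For readers without Fiji, the same counting pipeline can be approximated in Python with scikit-image. The file name and the size/distance parameters below are guesses, not values from the paper.

```python
# Rough Python equivalent of the Fiji pipeline described above
# (auto-threshold -> despeckle -> watershed -> particle count).
# Parameter choices (min_size, min_distance) are assumptions.
import numpy as np
from skimage import io, filters, morphology, measure, segmentation, feature
from scipy import ndimage as ndi

img = io.imread("cd8_channel.tif", as_gray=True)   # hypothetical file name

# 1) automatic threshold (Otsu stands in for Fiji's auto-threshold)
mask = img > filters.threshold_otsu(img)

# 2) despeckle: remove small noise objects
mask = morphology.remove_small_objects(mask, min_size=20)

# 3) watershed on the distance transform to split touching cells
distance = ndi.distance_transform_edt(mask)
peaks = feature.peak_local_max(distance, min_distance=5, labels=mask)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = segmentation.watershed(-distance, markers, mask=mask)

# 4) "analyze particles": count segmented objects
print("positive cells:", len(measure.regionprops(labels)))
```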
Cytotoxicity assay
KPC and KPC-Luc cells were co-cultured with splenocytes from mice bearing KPC-Luc tumors that had developed for 13 days, to measure the cytotoxicity of immune cells against the PDAC cells. For this purpose, tumor cells were seeded (3 × 10^3 cells per well) on the day before co-culture. Splenocytes were cultured for 24 h in the presence of Luc peptide (LMYRFEEEL, 5 µg/ml) and an adjuvant (R848, 2 µg/ml) or in normal medium. After this period, 6 × 10^4 splenocytes per well were added to the KPC/KPC-Luc cells for 48 h. The confluence of the tumor cells was measured over time by a live-cell imaging system (Incucyte SX5, Sartorius). After 48 h of co-culture, a viability assay was performed according to the manufacturer's protocol (CellTiter-Glo, Promega).
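The paper does not spell out how the co-culture readouts are normalized; a common convention, assumed in the sketch below, is to express confluence or luminescence relative to tumor cells cultured without splenocytes.

```python
# Sketch of one plausible normalization of the co-culture readouts
# (confluence or CellTiter-Glo luminescence). This convention is an
# assumption, not the authors' stated formula.
def percent_of_control(signal_cocultured, signal_tumor_alone):
    """Viability/confluence as % of the splenocyte-free control."""
    return 100.0 * signal_cocultured / signal_tumor_alone

def percent_cytotoxicity(signal_cocultured, signal_tumor_alone):
    return 100.0 - percent_of_control(signal_cocultured, signal_tumor_alone)

# Hypothetical luminescence values after 48 h of co-culture:
print(percent_cytotoxicity(42_000, 120_000))   # KPC-Luc + splenocytes -> 65.0
print(percent_cytotoxicity(105_000, 120_000))  # KPC + splenocytes -> 12.5
```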
Statistics
Statistical analysis was performed using GraphPad Prism software. All data are presented as mean ± SD. An unpaired Student's t-test was used for two-group comparisons, and two-way ANOVA followed by Sidak's test was used for multiple comparisons. Differences between groups were considered significant at p < 0.05.
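For reference, the two-group comparison can be reproduced outside GraphPad Prism, e.g. with SciPy; the arrays below are placeholders, not the measured values.

```python
# Unpaired Student's t-test on hypothetical per-mouse tumor volumes.
import numpy as np
from scipy import stats

kpc     = np.array([85.8, 70.2, 95.1, 88.4])   # e.g. tumor volumes, mm^3
kpc_luc = np.array([8.7, 0.0, 15.2, 11.0])     # hypothetical numbers

t, p = stats.ttest_ind(kpc, kpc_luc)
print(f"t = {t:.2f}, p = {p:.4f}, significant: {p < 0.05}")
```

A two-way ANOVA with Sidak's correction would be handled analogously, e.g. via statsmodels with a Sidak-adjusted post-hoc step.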
Luc expression does not alter cell proliferation and viability in KPC cells
Luc expression in the murine KPC-Luc tumor cells was assessed by adding increasing concentrations of D-luciferin to the wells. After 5 min, KPC-Luc cells showed intensification of the bioluminescence signals in a dose-dependent way (Fig. 1A). KPC control cells, which do not express the Luc gene, were used as a negative control and did not show any signs of bioluminescence in response to D-luciferin in vitro.
To investigate whether the expression of Luc affects the proliferation kinetics of KPC cells, we assessed and compared the growth rates of KPC and KPC-Luc cells in vitro. Both cell lines were seeded at the same density (1 × 10^5 cells) in T-25 flasks and the number of cells was manually counted after 3, 5, and 7 days. Our data show that KPC and KPC-Luc cells have comparable proliferation rates in vitro (Fig. 1B). Using the MTS viability assay, we demonstrated that both cell lines had similar percentages of viability at both 48 and 72 h after seeding in a 96-well plate (Fig. 1C).
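From such counts, a rough doubling time can be derived under an exponential-growth assumption; the numbers in the example are hypothetical, not the measured counts.

```python
# Back-of-the-envelope doubling time from manual cell counts, assuming
# exponential growth between seeding and counting.
import math

def doubling_time(n0, n_t, hours):
    """Doubling time (h) under exponential growth: N(t) = N0 * 2^(t/Td)."""
    return hours * math.log(2) / math.log(n_t / n0)

# e.g. 1e5 cells seeded, 8e5 counted on day 3 (72 h) -> Td = 24 h
print(doubling_time(1e5, 8e5, 72))
```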
Luc expression inhibits tumor development by KPC cells in vivo
The impact of Luc expression on KPC tumorigenicity and tumor growth was evaluated in vivo. For this, either 5 × 10^4 KPC or KPC-Luc cells were injected into the head of the pancreas of C57BL/6 male mice and BLI was performed every third day after surgery to monitor tumor growth (Fig. 2A). All mice that were implanted orthotopically with KPC-Luc cells showed bioluminescent signals from the third day post-surgery onwards, which increased until day 9. On day 12 post-cell implantation, the BLI signals dropped drastically (Fig. 2B,C). KPC tumor-bearing mice injected with D-luciferin did not show any bioluminescence, as expected (Fig. 2C). All animals were sacrificed on day 13 after cell implantation and the tumors were extracted. We determined that KPC-Luc tumors were significantly smaller (8.73 ± 12.2 mm^3) than KPC tumors (85.8 ± 29.4 mm^3; Fig. 2D). In contrast to KPC-Luc implanted mice, we observed a higher number of metastases in KPC tumor-bearing mice, at sites of adjacent organs such as the liver and kidneys, as well as at the mesentery. The final metastasis score of KPC-Luc tumor-bearing mice was significantly lower (0.55 ± 0.7) than that assessed in KPC tumor mice (3.97 ± 1.5; Fig. 2D).
Mice with KPC-Luc tumors display altered immunophenotype and their splenocytes show an increased immune reaction against Luc peptide in vitro
Because KPC-Luc implanted mice only had small tumors at dissection, it was not possible to analyze the immune cells within the tumor microenvironment by flow cytometry. Due to the very limited amount of tumor tissue, we instead performed immune profiling in the peripheral blood and the spleen of tumor-bearing mice. At 13 days of tumor growth, we determined a decrease in the number of macrophages (CD45+CD11b+CD64+ cells) and CD4+ T cells (CD45+CD3+ cells) in the blood of KPC-Luc tumor-bearing mice, compared to KPC tumor mice (Fig. 3A). In addition, in the blood and the spleen of KPC-Luc mice, the number of natural killer (NK) cells (CD45+CD3−NK1.1+ cells) increased in comparison to KPC tumor mice (Fig. 3A).
The humoral response against Luc was evaluated by determining the levels of IgG antibodies in the serum of tumor-bearing mice. Our data showed no differences in the quantity of IgG antibodies measured by ELISA between KPC and KPC-Luc tumor mice (Fig. 3B), indicating that a B cell response did not contribute to the immunogenicity against Luc in the tumors.
We hypothesized that the inhibition of tumor development and metastasis demonstrated by KPC-Luc cells might be due to an immune response against luciferase. To confirm this, splenocytes from KPC and KPC-Luc tumor mice were isolated and re-stimulated in vitro with a peptide that represents the immunodominant CD8 T cell epitope of Luc recognized by C57BL/6 mice 22 (Fig. 3C). Using intracellular staining, we observed that the number of activated CD8+ T cells (CD45+CD3+ cells) expressing IFN-γ was higher in splenocytes from KPC-Luc tumor mice than in the KPC tumor group. No differences were found in the number of CD4+IFN-γ+ cells (CD45+CD3+) between the groups (Fig. 3D). Furthermore, the Luc-specific IFN-γ response was significantly higher in the KPC-Luc tumor mice than in the KPC tumor group, as measured in the supernatant of splenocytes by ELISA (Fig. 3E).
KPC and KPC-Luc cells develop tumors of similar sizes at an early stage of tumor progression
We demonstrated that by day 12 after cell implantation, bioluminescence intensity decreased, and on day 13 KPC-Luc tumors were much smaller or even absent when compared to KPC tumors. Furthermore, 13 days following cell implantation, mice displayed an immune response against Luc. We hypothesized that at an earlier time point, when the bioluminescent signal was stable, KPC-Luc tumors were still developing, before an anti-Luc immune response occurred. To examine this hypothesis, we induced KPC and KPC-Luc tumors as before but sacrificed the mice earlier, on day 9 after cell implantation (Fig. 4A). As expected, KPC-Luc injected animals showed bioluminescence from day 3 post-surgery onwards (Fig. 4B). By the 8th day the signal was still measurable in all animals (Fig. 4C). Interestingly, at day 9 KPC and KPC-Luc implanted mice had developed tumors of a similar size, displayed comparable metastatic spread in the liver, and showed similar scores for metastasis (Fig. 4D). These data confirmed that at early time points after implantation, KPC-Luc cells develop solid tumors in the pancreas, but later, due to a spontaneous immune reaction, the tumors start to regress, along with the number of metastases, as we demonstrated for day 13 post-surgery.
Mice with KPC-Luc tumors display altered immunophenotype and no immune reaction against Luc peptide in vitro at an early stage of tumor progression
To compare the immune profiles of KPC and KPC-Luc mice 9 days after cancer cell implantation, blood and spleen of tumor-bearing mice were collected for flow cytometry analysis. As previously observed in the group that was sacrificed 13 days after surgery, we found a decrease in the number of macrophages (CD45+CD11b+CD64+ cells) in the blood and spleen of KPC-Luc tumor mice compared to KPC tumor mice, as determined by flow cytometry. Moreover, we observed a decreased number of CD4+ T cells (CD45+CD3+ cells) in the spleen of KPC-Luc mice compared to the KPC group. In the blood of KPC-Luc tumor-bearing mice, there was a tendency towards a higher number of NK cells, but without reaching significance (Fig. 5A). There were no differences in IgG antibody levels in the serum of the KPC and KPC-Luc cell implanted groups (Fig. 5B).
To confirm the involvement of CD8+ T cells in tumor regression, we analyzed the activation response of T cells to Luc peptide, using splenocytes from tumor-bearing mice (Fig. 5C). Nine days post-cell implantation, neither the number of CD8+IFN-γ+ cells nor the levels of IFN-γ in the supernatants differed between the two groups (Fig. 5D,E), which corresponds to the appearance of similar tumor sizes in the KPC and KPC-Luc cell implanted mice. These data indicate that at this time point of tumor development (9 days) the immune reaction had not yet impacted the growth rate of the tumors, due to low cytotoxic T cell activation. The much stronger immune response observed at a later stage of tumor development suggests a direct impact on the partial or complete tumor regression. In addition, the cytotoxicity of immune cells from KPC-Luc tumor-bearing mice against KPC and KPC-Luc cells was analyzed. In comparison to KPC, KPC-Luc cells showed significantly reduced confluence and viability when co-cultured with splenocytes from KPC-Luc tumor-bearing mice for 48 h (Suppl. Fig. 3).
KPC-Luc tumors show higher infiltration of T cells than KPC tumors
Our next step was to evaluate how Luc expression in cancer cells can influence immune cell infiltration in pancreatic tumors. For this purpose, we analyzed the presence of cytotoxic T cells (CD8+) and macrophages (CD68+) in primary tumor samples by immunofluorescence. We observed a significant increase in the number of CD8+ T cells in the KPC-Luc tumors grown over 13 days, compared to KPC tumors, but not in tumors from mice sacrificed at 9 days. Moreover, immunofluorescence staining showed that in the KPC-Luc group, T cell infiltration was higher at 13 days of tumor development compared to day 9 tumors, indicating increasing immunogenicity in the tumors over time (Fig. 6A and B). This increase in T cell infiltration over time was not observed in the KPC tumor sections (Fig. 6A). On the other hand, the number of CD68+ macrophages in the tumor was not significantly altered in any group at any time point analyzed (Fig. 6C and D), despite showing a decrease in blood and spleen samples. These data confirm that the anti-Luc response was mediated mainly by CD8+ T cells.
KPC-Luc tumors regress and do not regrow after 70 days
Next, we investigated whether KPC-Luc tumors continue to regress over time or regrow in due course. For this purpose, we injected 5 × 10^4 KPC-Luc cells into the head of the pancreas of five C57BL/6 mice and monitored tumor growth by BLI once or twice per week for a period of 70 days (Fig. 7A). All five examined mice showed a high intensity of bioluminescence over the tumor area by the first week and, as expected, the signal dropped over the second week. Following this decrease of bioluminescence in all mice, only one animal showed tumor regrowth by BLI at 20 days after the surgery. On day 70, the bioluminescence signal was still present in this mouse (Fig. 7B,C), which developed a rather large primary KPC-Luc tumor of 892.2 mm^3. Immunofluorescence staining of this pancreatic tumor (Fig. 7D) revealed a substantial infiltration of CD8+ T cells (average of 204.1 ± 90 positive cells per 0.25 mm^2) and CD68+ cells (317.5 ± 169.1 per 0.25 mm^2). Moreover, in the four mice without bioluminescence signals, no tumor was found in the pancreas and no visible metastases were present upon autopsy.
Discussion
Imageable reporters such as Luc have been widely applied in in vivo cancer studies and are essential for accurately tracking cancer development and progression. These models can demonstrate in real time the antitumor and antimetastatic response to novel therapeutic agents against malignancies by BLI over time and are thus valuable tools in preclinical research 13,23,24. In this study, we demonstrate for the first time in PDAC that the expression of luciferase in murine KPC-Luc cells, a widely used reporter for PDAC growth in mouse models, induces a potent immune response between day 9 and 13 after orthotopic tumor cell implantation in immunocompetent mice, which results in permanent tumor regression for up to 70 days.
Although the in vitro proliferation of KPC and KPC-Luc cells was comparable, we show that 13 days after cell implantation KPC-Luc tumors present significantly smaller volumes than KPC tumors and fewer metastases in the liver. The effect of luciferase expression in tumor cells on cancer development has been controversial, as in most studies Luc did not impact tumor development. Likewise, KPC-Luc cells were used previously for tumor implantation in the investigation of immunotherapy for PDAC 21,25,26. In contrast to our results, these studies reported an increased tumor volume by BLI from 7 to 28 days in untreated female C57BL/6 mice or male albino C57BL/6 mice following injection of 2 × 10^5 to 1 × 10^6 KPC-Luc cells into the tail of the pancreas. We hypothesize that the influence of luciferase expression in inducing an immune reaction and inhibiting tumor progression might be partially model-dependent. In our PDAC mouse model, the orthotopic implantation was performed in the head of the pancreas and in male mice, whereas the other studies used different sexes, different pancreas locations, and/or higher numbers of cells for implantation. Considering this, it is possible that our contrasting results could be due to differences in these methodological approaches, highlighting the importance of selecting an appropriate tumor mouse model for the assessment of tumor progression by BLI imaging. Moreover, in line with our observations, some studies using cancer cells of different tumor entities reported that Luc expression altered tumor progression in vivo. For example, mice with bioluminescent GL261 glioma tumors showed longer survival in comparison to mice bearing non-bioluminescent control tumors 27. Lewis lung carcinoma (LL/2) cells transduced with dTomato and luciferase showed decreased tumorigenicity compared to non-transduced cells 18. Furthermore, in our study, the mouse that had a KPC-Luc tumor for 70 days did not show any signs of distress in this period, whereas in studies using orthotopic KPC cell implantation mice do not commonly survive for more than 30 days 21,28, which indicates that KPC-Luc tumors are not as aggressive as KPC tumors, as mice present prolonged survival when implanted with bioluminescent cells.
Our study provides evidence of the role of CD8+ T cells in exerting an anti-tumor response against luciferase that culminated in tumor regression. Firstly, we showed that Luc expression in KPC-Luc cells induces a potent immune response, indicated by a higher number of infiltrated cytotoxic T cells in the tumors, and of NK cells in the blood and spleen of KPC-Luc tumor-bearing mice in comparison to KPC tumor-bearing mice. Such an immunophenotype of KPC-Luc tumor samples corresponds to the anti-tumoral profile of immune cells observed by others 29,30. Since KPC tumors are poorly immunogenic and are known for evading immunosurveillance 31, the immune responses we found in KPC-Luc-bearing mice between day 9 and 13 are surprising and most likely explain the inhibition of tumor growth we observed on day 13 after cell implantation. It is well known that cytotoxic CD8+ T cells are the most powerful effectors in the anticancer immune response 32. Along with NK cells, they promote tumor regression via the release of the cytolytic content of their granules, such as perforin and granzymes, and the cytokine IFN-γ 33. Our data correlate with similar observations in other tumor mouse models, such as glioblastoma, where the injection of CT2A-Luc cells into the brain drastically increased the number of T cells locally compared with wild-type controls 34, and murine lung carcinoma, as seen by increased tumor-infiltrating lymphocytes (TILs) and decreased tumor-induced myeloid-derived suppressor cells (MDSCs) in Luc-expressing tumors in comparison to non-Luc tumors 18.
Secondly, we observed an increase in the specific CD8+ T cell response against Luc peptide in splenocytes derived from KPC-Luc tumor mice on day 13 after induction, when compared to splenocytes from KPC tumor mice. A similar response was reported by Limberis et al. 22, and in mice bearing 4T1-Luc-expressing tumors, where there was a higher IFN-γ response to the dominant cytotoxic T lymphocyte (CTL) epitope of Luc compared to mice bearing non-Luc tumors 16. Likewise, mice subjected to pcDNA3.1-Fluc vaccination and later implantation of Luc-expressing CT26/Luc cells showed higher levels of IFN-γ in draining lymphoid cells and their secreted supernatant 19, suggesting a specific response to the FLuc protein. In addition, we found higher cytotoxicity of splenocytes from KPC-Luc tumor-bearing mice against KPC-Luc cells in comparison to KPC cells, suggesting a memory response by the immune cells. The expression of foreign proteins such as luciferase in cancer cells could act as a foreign antigen that is targeted by the immune system, thus altering the adaptive immune response. Consequently, the use of such imaging techniques might interfere with the results of immunotherapeutic studies, since the immunogenic anti-tumor responses may be augmented.
Interestingly, regression of KPC-Luc tumors starts 9 days after orthotopic cell injection, as KPC and KPC-Luc implanted mice still displayed similar tumor volumes at this time point. Coincidentally, no specific T cell activation was observed after ex vivo stimulation of splenocytes from either the KPC or the KPC-Luc tumor group with Luc peptide. Similar observations were reported in a breast cancer model, where only a weak Luc-specific production of IFN-γ was induced 9 days after tumor cell implantation in BALB/c mice, but which increased to significant levels by day 23 16. Our data suggest that at an early stage of tumor development, and prior to immune system activation, KPC-Luc cells can develop solid tumors in the pancreas. However, a later response against the presence of a foreign antigen in the tumor cells, in our case luciferase, led to permanent tumor regression, as seen in the mice sacrificed 70 days after cell implantation, which no longer presented tumors. Another study reported a comparable effect of green fluorescent protein (GFP) in 4T1 breast cancer cells, where 11 days
Figure 1. KPC-Luc cells show D-luciferin concentration-dependent bioluminescence and have similar proliferation rates and viability as KPC cells. (A) KPC-Luc cells show increasing dose-dependent bioluminescent signals after adding 0.5-5 μM of D-luciferin. (B) KPC and KPC-Luc cells display similar proliferation rates within 7 days of seeding of 1 × 10^5 cells. (C) KPC and KPC-Luc cells reveal similar viability 48 and 72 h after seeding 1 × 10^5 cells, as assessed by MTS assay. Four to five replicates per group were used. Data are presented as mean ± SD.
Figure 2. KPC-Luc tumors regress 13 days after orthotopic cell implantation in mice. (A) Scheme of the in vivo workflow. After injection of 5 × 10^4 tumor cells into the head of the pancreas, tumor growth was monitored by bioluminescence imaging using an IVIS Spectrum every third day. Mice were sacrificed on day 13 post-cell implantation. (B) In KPC-Luc tumors the bioluminescent signal increased progressively until day 9 post-implantation, then dropped drastically on day 12. Bioluminescence intensity is presented as photons/second. (C) Representative images of KPC non-bioluminescent and KPC-Luc bioluminescent tumors on days 9 and 12 after tumor cell implantation are shown, demonstrating a decrease in the bioluminescence signal of KPC-Luc tumors by day 12. (D) KPC-Luc tumor mice presented significantly smaller primary tumors at day 13 and a lower mean score for metastasis, compared to KPC tumor mice. Data are presented as mean ± SD. Unpaired t-test; *p < 0.0001; n = 9.
Figure 3. KPC-Luc tumor-bearing mice show an anti-tumor immune profile 13 days after tumor implantation. (A) Spleen and blood samples were analyzed by flow cytometry for the presence of dendritic cells (DCs), macrophages, B cells, natural killer (NK) cells, and CD4+ and CD8+ T cells; n = 5. (B) IgG levels in serum were similar between KPC and KPC-Luc tumor mice, as measured by ELISA. (C) Splenocytes were isolated from the spleens of KPC and KPC-Luc tumor-bearing mice and further stimulated with 20 µg/ml Luc peptide (LMYRFEEEL). Levels of IFN-γ were analyzed by flow cytometry and ELISA. In vitro stimulation of the splenocytes led to (D) an increased number of CD8+IFN-γ+ cells from KPC-Luc tumor-bearing mice compared to KPC mice, as evaluated by intracellular staining, and (E) increased levels of IFN-γ in the supernatant of splenocytes from mice that developed KPC-Luc tumors, as measured by ELISA; n = 9. Data are presented as mean ± SD. Unpaired t-test between the groups. *p < 0.05, **p < 0.01, ***p < 0.001. ns: not significant.
Figure 4. KPC and KPC-Luc cells develop tumors with comparable volumes in mice 9 days after cell implantation. (A) After the injection of 5 × 10^4 tumor cells into the head of the pancreas, tumor growth was monitored by BLI using the IVIS Spectrum every third day. Mice were sacrificed on day 9 post-tumor implantation. (B) In KPC-Luc tumors, the bioluminescent signal was present until day 8, without any significant decrease. (C) Representative images illustrating bioluminescent signals over the tumor areas on day 8 after cell implantation. (D) Sizes and metastatic scores of KPC and KPC-Luc tumors assessed 9 days after orthotopic implantation. Note the similar tumor volumes and comparable metastatic scores at this time point. Data are presented as mean ± SD. Unpaired t-test; ns: not significant; n = 9.
Figure 5. KPC-Luc tumor-bearing mice have a higher number of macrophages and T helper cells but do not show an immune response against Luc peptide 9 days after tumor implantation. (A) Spleen and blood samples of tumor-bearing mice were analyzed by flow cytometry for dendritic cells (DCs), macrophages, B cells, NK cells, and CD4+ and CD8+ T cells; n = 4-6. (B) Serum IgG levels were similar between KPC and KPC-Luc tumor mice, as measured by ELISA. (C) Splenocytes were isolated from the spleens of KPC and KPC-Luc tumor-bearing mice. The cells were stimulated with 20 µg/ml of Luc peptide (LMYRFEEEL) in vitro and levels of IFN-γ were analyzed by flow cytometry and ELISA. In vitro stimulation of the splenocytes did not show any differences in (D) the number of IFN-γ+ splenocytes, as evaluated by intracellular staining, or in (E) the levels of IFN-γ in the splenocyte supernatant, as measured by ELISA; n = 4-8. Data are presented as mean ± SD. Unpaired t-test; *p < 0.05, **p < 0.01; ns: not significant.
Figure 6. KPC-Luc tumors have increased immune cell infiltration when compared to KPC tumors. Immunofluorescence staining of immune cell infiltration in the tumor (red). Nuclei are stained by DAPI (blue). White arrows point to positive stainings. (A) Representative images of immunofluorescence staining of KPC and KPC-Luc tumors 9 and 13 days after cell implantation, showing infiltration of CD8+ T cells. (B) Quantitative analysis depicted a significantly increased number of CD8+ T cells infiltrated in the tumors of KPC-Luc cell implanted mice, when compared to KPC cell implanted mice, at 13 days of tumor growth, but not at 9 days. Moreover, in the KPC-Luc group, tumor-infiltrated CD8+ T cells increased between days 9 and 13 after cell implantation; n = 3. (C) Representative images of immunofluorescence staining of KPC and KPC-Luc tumors 9 and 13 days after cell implantation, showing infiltration of CD68+ cells (macrophages). (D) Quantitative analysis showed no significant differences in the number of CD68+ cells infiltrated in the tumors of KPC and KPC-Luc implanted mice at 9 or 13 days of tumor growth; n = 3. Data are presented as mean ± SD. Two-way ANOVA followed by Sidak's multiple comparisons; *p < 0.01. Scale bars in (A) and (C) represent 50 µm.
Figure 7. After regression, KPC-Luc tumors do not regrow. (A) Mice were injected orthotopically with 5 × 10^4 KPC-Luc cells and monitored for tumor growth once or twice per week by IVIS Spectrum. 70 days after cell implantation, mice were sacrificed and analyzed for the presence of tumors. (B) Representative images of bioluminescent tumors are shown 7 and 70 days after tumor cell implantation. The majority of mice lost the bioluminescence signal by day 70. (C) Graph showing bioluminescence signals of 5 individual mice over time. The signal was present until the second week post-implantation before it decreased. In four out of five mice, no bioluminescence signal could be detected from day 3 to 70 after cell implantation. Only one mouse had a constant increase of signal from 20 days after tumor cell injection that persisted until the day of sacrifice. (D) Immunofluorescence staining showed infiltration of CD8+ T cells and CD68+ cells in a representative section of a tumor obtained 70 days after KPC-Luc implantation. White arrows point to positive staining of the immune cells. Scale bars in (D) represent 50 µm. | 2024-06-14T06:17:39.565Z | 2024-06-13T00:00:00.000 | {
"year": 2024,
"sha1": "d4a2c035078edf2cadfab1d80850d5906c185b19",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "655e266952da2918d6ad771f71b2e67cd3cffa4e",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
148995817 | pes2o/s2orc | v3-fos-license | READINESS FOR LEARNER AUTONOMY OF PROSPECTIVE TEACHERS MINORING IN ENGLISH
Learner autonomy is one of the key factors in successful language teaching and learning; therefore, teachers' task is to develop learners' autonomy. However, teachers can foster learners' autonomy only if they are autonomous learners themselves. The article reports the findings of a survey study of the level of readiness for learner autonomy among prospective teachers minoring in English at Nizhyn Mykola Gogol State University in Ukraine. The data were gathered through a questionnaire designed to investigate each learner's perceptions and beliefs in four domains associated with learner autonomy: willingness to take learning responsibilities, self-confidence to learn autonomously, motivation to learn English, and capacity to learn autonomously. The research indicated that the general level of the prospective teachers' learner autonomy is moderate, though mean values fluctuate considerably from item to item. Thus, their motivation to study English and willingness to take learning responsibilities are at a high level, while their capacity and self-confidence to learn autonomously are moderate, with some items having means lying in the range that characterises a low level of learner autonomy. The implications of this study suggest that teacher trainers need to pay more attention to creating conditions in the classroom that will help prospective teachers become highly autonomous learners themselves.
Introduction
The learner-centred approach in education has led to the emergence of the concept of learner autonomy, which has been in the centre of researchers' attention for a few decades. Learner autonomy is especially important in language learning because, as Esch (1997) argues, language has specific features which need to be taken into consideration when we talk about autonomous language learning. Language learning is different from any other learning, say physics or geography, because language is used to describe and talk about our learning experience (p. 166).
Originally, learner autonomy in connection with language learning was defined by Holec (1981) as "the ability to take charge of one's learning" (p. 3). Little (1991) defines learner autonomy as essentially a matter of the learner's psychological relation to the process and content of learning: a capacity for detachment, critical reflection, decision-making and independent action (p. 4). Littlewood (1996) develops this point, describing autonomy as the learner's ability and willingness to make and carry out the choices which govern his or her actions. He also suggests that this ability depends on possessing both knowledge about the alternatives from which choices have to be made and the necessary skills for carrying out choices, while willingness depends on having both the motivation and the confidence to take responsibility for the choices required (p. 428). As Sinclair (2000) observes, there seems to be almost universal acceptance of the development of autonomy as an important, general educational goal (p. 5); that is, teachers should help the learners in recognising their own ways of learning and enhance learners' capacity so that they can be ready for learning all through their lives.
Whether learner autonomy is promoted depends on whether teachers are autonomous learners themselves, because "[t]he extent to and manner in which learner autonomy is promoted in language learning classrooms is influenced by teachers' beliefs about what autonomy actually is, its desirability and feasibility" (Borg & Al-Busaidi, 2012, p. 6). Moreover, Little (1995) states that "learner autonomy and teacher autonomy are interdependent" (p. 179) and that teacher autonomy is "a prerequisite for the development of learner autonomy" (p. 178). He further argues that "language teachers are more likely to succeed in promoting learner autonomy if their own education has encouraged them to be autonomous" (p. 180). So, it is vital that English language teachers should be autonomous language learners, and teacher trainers should make sure that they foster their trainees' autonomy.
Before anything is done in this direction, it is necessary to collect some information about learners' existing attitudes and expectations (Scharle & Szabo, 2000, p. 12), i.e. to explore learners' readiness for autonomous learning. There have been quite a lot of studies in different countries investigating prospective teachers' level of learner autonomy and related issues (e.g., Balçıkanlı, 2010; Hoxha & Tafani, 2015; Tarhan & Erözden, 2008). Ukrainian researchers also pay some attention to the problem of learner autonomy, but they focus mostly on ways of fostering learner autonomy (Hahina, 2014; Haidar, 2015; Solomko, 2012; Zadorozhna, 2015), while the level of readiness for learner autonomy of prospective English language teachers remains out of focus.
That is why the objectives of this study are to find out the level of readiness for learner autonomy of prospective English language teachers and to discuss the implications of the findings for teacher trainers.
Participants
In order to explore the level of readiness for learner autonomy of prospective English language teachers, we conducted a case study among students who master English as their second speciality at the Philological Department of Nizhyn Mykola Gogol State University while majoring in the Ukrainian language and literature. This group was chosen for the research due to the fact that English as a second speciality is offered by a number of Ukrainian teacher-training institutions. Many graduates find employment as teachers of English, so it is really important that they have the same language skills and competences as the prospective teachers who major in English. As these students have fewer classes of English, and so their language learning experience is not as rich as that of those who major in English, it is essential that they should be able to learn autonomously. Besides, for the research we chose students who have studied at the university for more than two years, as they are more experienced in learning English and are more likely to take their future occupation seriously.
Instruments
The study is based on quantitative research. The main research tool was a questionnaire aimed at finding out the level of students' readiness for learner autonomy (Swatevarcharkul, 2008, p. 144). It is reported that the questionnaire has been piloted and its homogeneity and validity demonstrated by means of statistical tools; the questionnaire validity and reliability are 0.80 and 0.84, respectively (ibid., p. 50). Nevertheless, items 3, 8, 16, and 28 in the questionnaire were modified (negative statements were changed into affirmative ones) to avoid ambiguity and misinterpretation.
The whole questionnaire contains 34 items, which are divided into four categories (or domains): items 1-7 refer to students' willingness to take learning responsibilities; items 8-13 to self-confidence to learn autonomously; items 14-23 to motivation to learn English; items 24-34 to capacity to learn autonomously. The respondents, however, were not informed of these categories.
As the aim of the research is to measure people's attitudes and beliefs, a Likert scale is used to collect the attitudinal data through getting people's reactions to statements. Students were asked to rate how much they agreed with each statement (strongly agree, agree, uncertain, disagree, strongly disagree). For scoring purposes, the statements were given weights of 5, 4, 3, 2, 1. The weights are interpreted as follows: 5 means that learner autonomy readiness is very high, 4 high, 3 moderate, 2 low, 1 very low. Correspondingly, the evaluation criteria of the questionnaire are as follows: 1.00-1.50 means that learner autonomy readiness is very low, 1.51-2.50 low, 2.51-3.50 moderate, 3.51-4.50 high, 4.51-5.00 very high.
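A minimal sketch of this scoring scheme, assuming the straightforward response coding described above, could look as follows; the band edges are copied from the text.

```python
# Likert responses weighted 5..1; the item/domain mean is then mapped
# onto the readiness bands given in the text.
WEIGHTS = {"strongly agree": 5, "agree": 4, "uncertain": 3,
           "disagree": 2, "strongly disagree": 1}

def readiness_level(mean_score: float) -> str:
    if mean_score <= 1.50: return "very low"
    if mean_score <= 2.50: return "low"
    if mean_score <= 3.50: return "moderate"
    if mean_score <= 4.50: return "high"
    return "very high"

responses = ["agree", "uncertain", "strongly agree", "agree"]  # one item
mean = sum(WEIGHTS[r] for r in responses) / len(responses)
print(mean, readiness_level(mean))   # 4.0 high
```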
Data collection
The questionnaire was completed anonymously in 2016 by 45 respondents studying in their 3rd, 4th and 5th years (10, 11 and 24 students respectively). The questionnaire was administered in the classroom under the teachers' supervision. The students spent about 15-20 minutes answering the questions.
Results
The survey response data were tabulated using Microsoft Excel 2010. To discover the general level of readiness for learner autonomy, the results of the 34-item questionnaire were analysed to find the mean score and standard deviation (columns 'M' and 'SD' in the tables). Table 1 demonstrates the results of the statistical analysis of the four examined domains, which shows that the mean value for the whole questionnaire is 3.45 (SD = 1.03). According to the criteria mentioned above, this indicates that on the whole the prospective teachers' autonomy readiness level is moderate. However, the data on the categories vary considerably: their willingness to take learning responsibilities and motivation to learn English are at a high level, while their capacity and self-confidence to learn autonomously are at a moderate level. Detailed item-level results are presented in Tables 2 to 5. The first three columns of numbers present percentages of responses reduced to three categories: disagreement (options 'strongly disagree' and 'disagree', column 'Disagree' in the tables), uncertainty (option 'uncertain', column 'Uncertain') and agreement (options 'strongly agree' and 'agree', column 'Agree'). In relation to the first domain of learner autonomy, the mean of the students' answers is 3.51 (SD = 1.05), which shows that students' willingness to take learning responsibilities is at a high level. Table 2 provides a detailed description of the means of students' answers regarding this category. We can see that more than half of the respondents agreed with the statements of most items (except items 1 and 3). The figure that attracts attention in this table is the mean for item 4, which is the highest in this domain: 3.76 (SD = 0.90). It indicates that almost 70% of learners are pleased to take responsibility for their own learning. This assumption is supported by the data on item 2, which demonstrate that two thirds of the learners agree that they need to control their learning. Also, it can be inferred from the data in the table that more than half of the prospective teachers like to have some freedom in their learning, determining what they want to learn both in and outside class (items 6 and 7). On the other hand, figures concerning item 1 (mean = 3.24, SD = 1.18) contradict this optimistic deduction, as they prove that only about 42% of the prospective teachers really feel responsible for their learning. Item 3 has an equally low mean of 3.24 (SD = 1.40), and the figures give evidence that just about a third of the students like to seek additional knowledge outside class without any directions from the teacher, which indicates that learners are not eager to take the initiative in their learning. The conclusion from this category of questions would seem to be that, on the one hand, the respondents view themselves as being quite aware of the necessity to take learning responsibilities, but, on the other hand, they are not eager to do so in practice. The mean of the students' answers to items concerning the second domain, students' self-confidence to learn autonomously, is 2.92 (SD = 0.96) (see Table 1), which indicates that this component of learner autonomy is not so well developed in the learners and is at a moderate level. The detailed results regarding this category, demonstrated in Table 3, show that just item 13, "If I decide to learn anything, I can find time to study although I have something else to do", has a mean above 3.50 (SD = 0.98), which characterises a high level of learner autonomy. Other responses here reveal a certain lack of readiness for autonomy.
Thus, in half of the items the means are below 3. The most striking responses are to items 8 and 9, which have the lowest mean value of 2.38, i.e., according to the criteria mentioned above, the level of readiness for learner autonomy in these respects is low. As the table shows, 60% of respondents are not confident in their learning, so they need the constant support of the teacher (item 8), and almost 58% cannot decide themselves what they should learn or what to do in and out of class (item 9). More than half of the learners chose the option 'uncertain' for item 12, "I think I am an effective autonomous learner, both in and out of class". Also, most students' time-management skills leave much to be desired, as only about one fifth of the prospective teachers are confident that they can manage their time well for learning. These findings indicate that learners rely greatly on teachers in many aspects of learning English. In other words, although the respondents are willing to take responsibility for their learning in the previous category, they are not confident enough that they can do it properly. The next category of the questionnaire concerns motivation to learn English. The mean of the students' answers in this domain is 3.97 (SD = 1.11), which is the highest of all (see Table 1). A detailed description of the means of the students' answers regarding this category is provided in Table 4. As the analysis of the data shows, the mean for each item is above 3.5 and half of the values are above 4, which indicates that the level of the students' motivation to learn English is high. Few learners chose the option 'uncertain' while answering this part of the questionnaire. Here we find the highest scores of the whole questionnaire; e.g., 82.22% of the respondents 'agree' or 'strongly agree' with the statement of item 15, "I like to learn English because it is interesting and important", which is rated 4.31 (SD = 1.24), the highest mean of all. Also, 84.45% of the learners think that studying English can be important for them because it will allow them to meet and converse with more and varied people (item 18). The same number of respondents like to take part in English activities when they have free time (item 17). It can be inferred from these data that the prospective teachers are intrinsically motivated to learn English. Figures concerning item 21, "I like to learn English because I will be able to get a job easily", and item 22, "I pay attention to learning English in order to get a good grade" (mean values 4.00 and 3.67 respectively), suggest that the students also have a high level of extrinsic motivation to study English. We can conclude that teacher trainers, while fostering prospective teachers' learner autonomy, can rely on their high level of motivation, as it is common knowledge that a motivated person is more likely to succeed in any activity. The final component of the questionnaire refers to the students' capacity to learn autonomously. The general mean of the respondents' answers is 3.41 (SD = 0.99) (see Table 1), with most mean values lying in the range of 3.11-3.40 (see Table 5), which characterises their level of autonomy readiness in this respect as moderate. Some items, however, have higher means. This concerns the students' awareness of their learning weak points (item 26) and their attempts to improve on them (item 27), which shows a high level of readiness for learner autonomy in this aspect. Figures concerning items 32 (mean value = 3.64, SD = 1.04) and 25 (mean value = 3.53, SD = 1.02) are also encouraging, as they indicate that most respondents know where they can seek knowledge and can tell whether or not they are making learning progress.
that most respondents know where they can seek knowledge and can tell whether or not they are making learning progress. The students' weakest points in this domain are their ability to set their own learning objectives in class (item 24, mean value 3.18, SD=0.68) and out of class (item 30, mean value 3.22, SD=0.99), and their capability of finding appropriate learning methods and techniques for themselves (item 29, mean value 3.11, SD=0.97) as well as learning materials (item 31, mean value 3.27, SD=1.04). It should also be pointed out that in many cases the respondents are uncertain about their answers in this category. For example, 64.44% of the prospective teachers are not sure whether they have the ability to set their own learning objectives in class (item 24); 46.67% are not certain if they are capable of telling what they have learned (item 28); 44.44% are not sure whether they are capable of finding appropriate learning methods and techniques for themselves (item 29); and 44.44% are not sure if they are capable of being totally responsible for their own learning (item 34). These data suggest that, on the one hand, teacher trainers can rely on the prospective teachers' strong points in this respect and, on the other hand, that trainees lack important skills of autonomous learners, so teacher trainers should create conditions for developing these skills.

Another figure in the tables that is worth mentioning is the standard deviation. A low standard deviation indicates that the data points tend to be close to the mean of the set, while a high standard deviation shows that the data points are spread out over a wider range of values. As is clear from the tables above, most values of the standard deviation lie in the range 0.85-1.10, i.e. the answers are more or less homogeneous. The lowest values of the standard deviation are found in the categories with the lowest mean values: students' capacity and self-confidence to learn autonomously. This seems to be one more proof that these aspects of learner autonomy are not so well developed in most learners.
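As a minimal sketch of the scoring procedure described above, the per-item statistics and readiness levels can be computed as follows. The exact readiness bands are an assumption reconstructed from the reported interpretations (2.38 read as low, 2.92 and 3.45 as moderate, 3.51 and 3.97 as high); they are not quoted verbatim in this section, and all names and the sample data are illustrative.

```python
import statistics

# Assumed readiness bands, inferred from the reported interpretations;
# not stated verbatim in the text.
def readiness_level(mean_score: float) -> str:
    if mean_score < 2.50:
        return "low"
    if mean_score < 3.50:
        return "moderate"
    return "high"

def collapse(response: int) -> str:
    """Collapse the 5-point Likert scale into the three reported columns."""
    if response <= 2:          # 'strongly disagree' (1) and 'disagree' (2)
        return "Disagree"
    if response == 3:          # 'uncertain'
        return "Uncertain"
    return "Agree"             # 'agree' (4) and 'strongly agree' (5)

def summarise(responses: list[int]) -> dict:
    m = statistics.mean(responses)
    sd = statistics.stdev(responses)
    n = len(responses)
    pct = {c: 100 * sum(collapse(r) == c for r in responses) / n
           for c in ("Disagree", "Uncertain", "Agree")}
    return {"M": round(m, 2), "SD": round(sd, 2),
            "level": readiness_level(m),
            **{k: round(v, 2) for k, v in pct.items()}}

# Illustrative item data, not the study's raw responses
print(summarise([4, 5, 4, 3, 4, 2, 5, 4, 3, 4]))
# {'M': 3.8, 'SD': 0.92, 'level': 'high',
#  'Disagree': 10.0, 'Uncertain': 20.0, 'Agree': 70.0}
```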
Discussion
Having analysed the results of the investigation, we can say that, on the one hand, it is encouraging for teacher trainers to know that the trainees are generally willing to take responsibility for their learning and are quite highly motivated to learn English. On the other hand, the study reveals that the students seem to hold many views and beliefs that contradict the move towards greater autonomy. Thus, the majority (about 66%) are not likely to seek additional knowledge outside class on their own initiative (item 3); about 58% of respondents cannot decide themselves what they should learn or what to do in and out of class, and almost 27% are not sure that they are able to do this (item 9); two thirds of the prospective teachers are uncertain whether they are able to set their own learning objectives in class (item 24).
The obtained results tell us something about current practices in language teaching and teacher training. Thus, we can assume that teachers and teacher trainers have been quite successful in building their learners' motivation. This assumption is supported by the data on the students' motivation on the whole and by the figures for item 23, which indicate that almost 58% of them believe that it is the teacher who is responsible for building the learner's motivation. However, judging by the fact that the respondents, being adult learners with quite a long experience of learning English (ten years on average), have a moderate level of learner autonomy, we can infer that fostering learner autonomy has not become a common practice either at schools or at universities, and teacher trainers should pay more attention to this aspect.
Besides, we can see that there is a gap between the students' wishes and their practices. On the one hand, as we have argued above, they are highly motivated to learn English and willing to take learning responsibilities. On the other hand, the prospective English language teachers do not have enough self-confidence and capacity to learn autonomously, which is supported by the facts that 60% of the learners need the constant support of the teacher (item 8 of the questionnaire) and that about half of the students are not sure whether they are effective autonomous learners, both in and out of class (item 12). Therefore, in spite of the fact that the respondents are expected to be experienced language learners, they still need substantial support from teacher trainers.
The findings of the research also contribute to a better understanding of the students' beliefs and needs. The figures indicate that prospective teachers value freedom and opportunities to direct their own learning; therefore, teacher trainers should create an appropriate learning environment. On the other hand, it can be inferred from the obtained data that most of them are not confident in their learning and therefore need the constant support of their trainers. For instance, they need some guidance in setting their learning objectives, finding appropriate learning methods and techniques for themselves and assessing the results. This implies that teacher trainers should develop the students' confidence by providing the necessary support in the form of learning strategies which would help them become more independent in their learning.
The findings of this case study are in line with the results reported by researchers from other countries (Balçıkanlı, 2010; Hoxha & Tafani, 2015; Tarhan & Erözden, 2008): prospective teachers view learner autonomy generally positively, though they have certain difficulties being autonomous learners. That is why we cannot but agree with Benson (2003), who states that teachers should create the atmosphere and conditions in which learners will feel encouraged to develop the autonomy they already have (p. 305).
Conclusions
This study was aimed at investigating the level of prospective English language teachers' readiness for learner autonomy. It covered four dimensions: students' willingness to take learning responsibilities, self-confidence to learn autonomously, motivation to learn English and capacity to learn autonomously. The results show that prospective teachers have some basic level of learner autonomy, so teacher trainers do not have to start from scratch. The respondents' capacity and self-confidence to learn autonomously are moderate, which implies that students are still dependent on their teachers and need some guidance to help them become more confident learners, capable of learning autonomously. However, they are highly motivated to learn English and are willing to take learning responsibilities, and, as the proverb says, where there is a will, there is a way. To provide gradual movement towards greater learner independence, teacher trainers should seek ways to encourage conscious reflection on the learning process. Making the students aware of their | 2018-12-14T18:39:09.406Z | 2017-12-27T00:00:00.000 | {
"year": 2017,
"sha1": "fce05f8f1c4de8ba4762df315506c39070424daa",
"oa_license": "CCBY",
"oa_url": "http://ae.fl.kpi.ua/article/download/107296/114265",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fce05f8f1c4de8ba4762df315506c39070424daa",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
198767283 | pes2o/s2orc | v3-fos-license | Effect of Motivation, Learning Style and Learning Discipline on Academic Achievement in Additional Mathematics
Few researchers have used quantitative research methods based on Structural Equation Modeling (SEM) in educational research to analyze the various relationships among the variables of a model formed from the theories studied. This study was conducted to determine the effect of motivation, learning style and learning discipline on the academic achievement in Additional Mathematics of Form Four students in Kuala Terengganu District. The instrument used in this study is based on the School Learning Inventory model developed by Selmes (1987). The questionnaire items in this instrument were adapted to suit this investigation. A total of 260 research samples were included in the study, consisting of Form Four students from 10 schools in Kuala Terengganu District. Data were analyzed using the IBM-SPSS-AMOS (SEM) program, version 21.0. SEM analysis consists of two main models: the measurement model and the structural model. Prior to the SEM test, several fit tests were performed to ensure that the tested indicators actually represented the measured constructs. Two analyses in this study are prerequisites that were met before the SEM analysis was done, i.e. Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA). The findings indicate that motivation, learning style and learning discipline have a positive and significant effect on students' academic achievement. Furthermore, motivation also has a positive and significant impact on the learning discipline, but learning style has no positive and significant effect on the learning discipline. The mediation analysis shows that the learning discipline mediates between motivation and academic achievement but not between learning style and academic achievement. The findings of this study indicate that educators need to instill enthusiasm in students as well as to know their students' learning styles and to ensure that students have a learning discipline, because these can affect students' academic achievement.
Introduction
Education is a constantly changing field that evolves in line with the development of its environment, and these changes affect education especially in the curriculum aspect. To make Malaysia a developed country by 2020, the field of education has been identified as one of the critical success factors. The Malaysian Government, through the Ministry of Education of Malaysia (MOE), has always designed, planned and improved the education system in Malaysia. The steps taken include the introduction of the Education Development Master Plan (PIPP, 2006-2010) and, most recently, the Malaysia Education Development Plan (2013-2025), implemented in waves and leading towards the transformation of national education. 21st Century learning is a world education transformation based on a more dynamic and creative approach to learning and facilitation (PdPc), with learning content that is relevant and in line with current developments. Teachers must be prepared to accept change and manage change efficiently and effectively, as they are the implementing group responsible for putting the change into practice. Teachers act as planners, carers, counselors, drivers and assessors (Malaysian Quality Standard of Education Wave 2-SCKMg2) to develop the full potential of students so that academic achievement is produced continuously at an optimal level.
During the PdPc process at school, teachers are the main factor that can influence the way students learn. Although some students learn according to their own approach or method, they do not realize that the method they use is a distinctive learning style that differs from those of other students. According to Emeliana et al. (2012), teachers should make full use of every learning style to make learning more interesting. Teachers should also communicate clearly, motivate students and apply flexible learning styles, especially in Additional Mathematics lessons, which are widely taught in schools. Based on goal-setting theory, a person's main achievement goal influences achievement through variation in the quality of self-regulatory processes (Locke, 2005). This self-regulation process is closely related to a student's metacognitive abilities or skills, which points to an indirect relationship between motivation and academic achievement through metacognition. Students need enthusiasm and motivation, as well as an effective way of learning, to overcome their weaknesses in Additional Mathematics. Therefore, this study examines the role played by motivation (internal and external) in students' academic achievement in Additional Mathematics.
Various teaching methods have been used in schools with the aim of improving students' academic achievement in Additional Mathematics, and of ensuring that decline and problems arising in Additional Mathematics learning can be identified. Besides the students' own factors that lead to a decrease in performance in Additional Mathematics, educators also sometimes offer no suggestions or motivation to their students. Some even regard a student's weakness as a habit or an ordinary trait, without trying to give advice or to overcome it. Some weak students are self-aware, but appropriate and effective motivation, encouragement and learning styles from educators remain essential.
It is important for teachers to know and understand a student's learning style because the effectiveness of a given learning style may not be the same for every student. Thus, teachers need to introduce different learning styles to ensure suitability for all the students involved.
Students also need to know which learning styles are appropriate for them, while teachers need to play an important role in helping their students understand the tendencies and ways in which they learn, to improve the effectiveness of learning and so achieve good results. Several studies conducted in the West have found that matching motivation with learning styles can produce good academic achievement. According to Nelson (2003), motivation and learning styles together have a positive impact on student achievement: students who were exposed to learning styles and motivated achieved higher academic achievement than those who were not. During the PdPc process, teachers must diversify their teaching strategies to create positive stimuli for students to learn. In this way, teachers can increase students' interest in and curiosity about their teaching. Students who are motivated by teachers will usually be more interested in working towards the learning goals.
The purpose of this study was to examine the effect of motivation, learning styles and learning discipline on students' academic achievement, as well as the role played by the learning discipline as a mediator of the relationship between motivation and learning styles and the academic achievement of Form Four students.
Research Methodology
The research method used is quantitative, using research instruments based on the School Learning Inventory model developed by Selmes (1987). The questionnaire items were adjusted to suit the learning system in SMA. Data were analyzed using Structural Equation Modeling (SEM) with the IBM-SPSS-AMOS program, version 21.0. SEM is formed from two main models, namely the measurement model and the structural model. Before SEM is performed, prior fit tests must be made to ensure that the tested indicators actually represent the measured constructs. There are two analyses that are prerequisites for SEM: (1) Exploratory Factor Analysis (EFA) and (2) Confirmatory Factor Analysis (CFA). Confirmatory factor analysis (CFA) is a test of the measurement model to ensure that each construct meets requirements such as validity and reliability (Kline, 2016; Awang, 2015; Chua, 2014d; 2013; Hair et al., 2006; Schumacker & Lomax, 2004). Assessment of the measurement model is essential to ensure that every latent construct in this study is compatible with the data studied before SEM can be continued (Kline, 2016; Awang, 2015; Schumacker & Lomax, 2004).
The CFA method can assess the extent to which the observed factors contribute significantly to the latent constructs used. This assessment is done by examining the value of the regression path from the factor to the observed variable (the factor loading) rather than the relationships between factors (Byrne, 2001). Through the use of CFA, any item that does not conform to the measurement model is removed from the model; such non-conformity is indicated by a low factor loading value. Researchers need to apply the CFA process to all constructs in the model, either separately or collectively (pooled CFA models) (Alias & Hartini, 2017).
The compatibility of the hypothesized models tested is verified using fitness indexes, namely the Root Mean Square Error of Approximation (RMSEA < 0.08), Goodness of Fit Index (GFI > 0.90), Comparative Fit Index (CFI > 0.90) and Chi-Square/Degrees of Freedom (chisq/df < 5.0). According to Hair et al. (2006), if the chisq/df value is less than 2.00 but significant, it should be noted whether the sample is large or not; a sample size above 200 can cause χ2 to be significant. Therefore, Hair and his colleagues propose two other indexes, namely the CFI and RMSEA, to ensure that the CFA analysis establishes a unidimensional research model. If the CFI value exceeds 0.90 and the RMSEA is less than 0.08, unidimensionality is said to exist for each construct.
The hypothesized model is considered to fit the research data when the chisq/df value is less than 3.0 (Marsh and Hocevar, 1985). The hypothesized model is also considered compatible when the GFI value is greater than 0.90 (Joreskog and Sorbom, 1993). The RMSEA value is very good if it is smaller than 0.08 (Hair et al., 2006; Browne & Cudeck, 1993), and still acceptable if less than 0.1 (Byrne, 1998; 2013). Bentler (1990) also recommends accepting CFIs over 0.90, although a CFI value between 0.80 and 0.89 is still within the accepted margin. To verify the developed model, the bootstrapping value is determined. According to Bollen & Stine (1992), the developed model is considered valid when the bootstrap p-value exceeds 0.05, meaning there is no difference between the data collected from the sample and the proposed model; the proposed model is then valid based on the data collected from the research sample.
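As a rough illustration of the fit-criteria screening described above, the sketch below encodes the thresholds quoted in this section (chisq/df < 3.0, GFI > 0.90, CFI > 0.90, RMSEA < 0.08); the function and dictionary names are illustrative and not part of AMOS, and the index values in the example are made up.

```python
# Minimal sketch: screen a fitted model's indexes against the thresholds
# cited above (Marsh & Hocevar, 1985; Joreskog & Sorbom, 1993;
# Hair et al., 2006; Bentler, 1990). All names and values are illustrative.
THRESHOLDS = {
    "chisq_df": lambda v: v < 3.0,   # stricter of the two cut-offs cited
    "GFI":      lambda v: v > 0.90,
    "CFI":      lambda v: v > 0.90,
    "RMSEA":    lambda v: v < 0.08,
}

def check_fit(indices: dict) -> dict:
    """Return pass/fail for each fitness index present in `indices`."""
    return {name: rule(indices[name])
            for name, rule in THRESHOLDS.items() if name in indices}

# Example with made-up index values for a hypothetical measurement model
fit = {"chisq_df": 2.41, "GFI": 0.93, "CFI": 0.95, "RMSEA": 0.06}
print(check_fit(fit))  # {'chisq_df': True, 'GFI': True, 'CFI': True, 'RMSEA': True}
```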
Research Findings

CFA Analysis for the Motivation Construct Measurement Model
The analysis of the fitness indexes in Table 1 shows that the Motivation construct measurement model has achieved the required fitness index levels. This means that construct validity for this construct has been achieved (Awang, 2011; 2012; 2014; 2015; Awang et al., 2015a; Kashif et al., 2016).
CFA Analysis for the Learning Style Construct Measurement Model
The analysis of the fitness indexes in Table 2 shows that the Learning Style construct measurement model has achieved the required fitness index levels. This means that construct validity for this construct has been achieved (Awang, 2011; 2012; 2014; 2015; Awang et al., 2015a; Kashif et al., 2016).
CFA Analysis for the Learning Discipline Measurement Model
The analysis of the fitness indexes in Table 3 shows that the Learning Discipline construct measurement model has achieved the required fitness index levels. This means that construct validity for this construct has been achieved (Awang, 2011; 2012; 2014; 2015; Awang et al., 2015a; Kashif et al., 2016).
Confirmatory Factor Analysis of All Measurement Models (Pooled CFA)
The pooled confirmatory factor analysis (pooled CFA) is required to evaluate the correlation values between constructs in the discriminant validity procedure. If the correlation value between two constructs exceeds 0.85, the constructs are said to be redundant (Awang, 2015; Hoque et al., 2017; Awang et al., 2015a; Kashif et al., 2016). For overly complex models involving second-order constructs, pooled confirmatory factor analysis is difficult. A second-order construct is a construct that has dimensions or sub-constructs, where each dimension or sub-construct has a certain number of items. The researcher will find it difficult to combine all the second-order constructs in one model to conduct the pooled confirmatory factor analysis.
To solve this problem, all second-order constructs need to be reduced to first-order constructs by taking the mean score of each sub-construct or dimension (Awang, 2014; 2015; Hoque et al., 2017). The findings of the pooled CFA procedure are shown in Figure 4. As always, the value on a single-headed arrow is the factor loading of each item, while the value on a double-headed arrow is the correlation between constructs. Through the pooled CFA method, a single set of fitness indexes represents all the constructs involved. The findings in Table 4 show that the three categories of model fitness indexes have been achieved for all constructs in the model. Another validity requirement for all constructs in the model is discriminant validity. Discriminant validity is necessary to prove that the constructs in the model do not have such strong relationships with each other that they cause multicollinearity problems (Awang, 2014; Hoque et al., 2017; Awang et al., 2015a; Kashif et al., 2016). This verification requires the researcher to develop a discriminant validity index summary table. Table 6 shows the discriminant validity index summary for all constructs in the model. According to Awang (2014; 2015), Hoque et al. (2017), Awang et al. (2015a) and Kashif et al. (2016), discriminant validity is achieved if every diagonal value, the square root of the average variance extracted (AVE) of the construct, is greater than the other values in its row and column. The findings in Table 5 show that discriminant validity has been achieved for all constructs in the model.
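The two procedures just described, collapsing each second-order construct into first-order indicators by averaging its sub-construct scores and then checking the Fornell-Larcker-style discriminant validity condition (square root of AVE on the diagonal exceeding the inter-construct correlations), can be sketched as follows. Except for the Motivation-Learning Style correlation of 0.54 quoted later in the text, all data, AVE values and construct names are illustrative.

```python
import math

# --- Step 1: reduce a second-order construct to first-order indicators ---
# Each dimension's items are averaged into one parcel score (illustrative data).
motivation_dims = {
    "intrinsic": [4, 5, 4, 4],   # item scores for one respondent
    "extrinsic": [3, 4, 4],
}
parcels = {dim: round(sum(items) / len(items), 2)
           for dim, items in motivation_dims.items()}
print(parcels)  # {'intrinsic': 4.25, 'extrinsic': 3.67}

# --- Step 2: discriminant validity (Fornell-Larcker style) ---
# Diagonal = sqrt(AVE) of each construct; off-diagonal = correlations.
ave = {"Motivation": 0.62, "LearningStyle": 0.58, "LearningDiscipline": 0.66}
corr = {("Motivation", "LearningStyle"): 0.54,        # value reported in the text
        ("Motivation", "LearningDiscipline"): 0.47,   # illustrative
        ("LearningStyle", "LearningDiscipline"): 0.40} # illustrative

def discriminant_ok(ave, corr):
    """sqrt(AVE) of every construct must exceed its correlations with others."""
    for (a, b), r in corr.items():
        if math.sqrt(ave[a]) <= abs(r) or math.sqrt(ave[b]) <= abs(r):
            return False
    return True

print(discriminant_ok(ave, corr))  # True for these illustrative values
```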
Analysis of the Effects among the Motivation, Learning Style and Learning Discipline Constructs
Analysis using SEM yields both standardized regression values between the constructs and unstandardized regression values, and both have their own utility. Figure 6 shows the standardized regression weights, whereas Figure 7 shows the unstandardized regression values resulting from the SEM procedure. The correlation between the two exogenous constructs in the model, shown by a double-headed arrow, is as follows: the correlation between Motivation and Learning Style is 0.54. This shows that the SEM model is valid and has no multicollinearity problem. Figure 6 shows the regression values between the constructs in the model, which are used to build the required regression equations and to test the hypotheses. Furthermore, the researcher tested every hypothesis proposed in this research. Table 6 shows the estimates of the direct effects of each independent construct on the dependent constructs in the model shown in Figure 6, and Table 7 shows the results of the hypothesis tests of these direct effects, based on the SEM findings from Figure 6.

Table 6 shows that Motivation has a significant direct impact on Academic Achievement, with an estimated regression value (β) of 0.368 at the 0.001 significance level (Estimate=0.368, CR=3.498, p<0.001). This means that the Motivation construct has a positive and significant influence on the Academic Achievement construct: if Motivation increases by 1 unit, Academic Achievement will increase by 0.368 units. Table 6 also shows that Motivation has a significant direct impact on the Learning Discipline, with an estimated regression value (β) of 0.933 at the 0.001 significance level (Estimate=0.933, CR=6.426, p<0.001): when Motivation increases by 1 unit, the Learning Discipline will increase by 0.933 units. Likewise, the Learning Discipline has a significant direct impact on Academic Achievement, with an estimated regression value (β) of 0.703 at the 0.001 significance level (Estimate=0.703, CR=12.731, p<0.001): if the Learning Discipline increases by 1 unit, Academic Achievement will increase by 0.703 units. Finally, Table 6 shows that Learning Style has a significant direct effect on Academic Achievement, with an estimated regression value (β) of 0.188 at the 0.020 significance level (Estimate=0.188, CR=2.323, p=0.020): if Learning Style increases by 1 unit, Academic Achievement will increase by 0.188 units.
The findings of this study indicate that the Learning Style construct has a positive and significant influence on the Academic Achievement construct. Table 6 also shows that Learning Style has no significant effect on the Learning Discipline, with an estimated regression value (β) of 0.083 at a significance level of 0.507 (Estimate=0.083, CR=0.664, p=0.507). This means that the Learning Style construct has no positive and significant influence on the Learning Discipline construct. Table 8 shows the hypothesis tests of the mediating influence of the Learning Discipline construct in the relationships between the two independent constructs (Motivation and Learning Style) and the dependent construct Academic Achievement (AA_AM), including hypothesis H7: the Learning Discipline is the mediator of the relationship between Learning Style and Academic Achievement (see Figure 7).
Mediator Analysis for the Learning Discipline Construct
Hypothesis H7 is not supported (see the test in Figure 8). The Learning Discipline as a mediator of the relationship between Motivation and Academic Achievement: Figure 7 and Table 8 illustrate the mediator testing procedure in the model according to Awang (2012; 2014; 2015). In the corresponding model for Learning Style, the Learning Discipline (LD) is the intervening variable, Learning Style (LS) is the independent variable and Academic Achievement (AA_AM) is the dependent variable. The findings indicate that this mediation test is not supported and the type of mediating relationship cannot be determined, because the direct effect of Learning Style (LS) on the Learning Discipline (LD) is not significant. The bootstrapping findings likewise show no mediation, since the indirect effect is not significant, which is consistent with the result of the mediation test procedure.
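A minimal sketch of the indirect-effect logic behind these mediation tests, using the unstandardized estimates quoted above (Motivation to LD = 0.933, LD to AA = 0.703, Motivation to AA direct = 0.368; LS to LD = 0.083, not significant). The classification rules follow the common Baron-and-Kenny-style convention, which is an assumption here rather than a quotation of Awang's procedure, and the function name is illustrative.

```python
def classify_mediation(a, b, direct, a_sig, b_sig, direct_sig):
    """a: X->M path, b: M->Y path, direct: X->Y path; *_sig: significance flags."""
    indirect = a * b
    if not (a_sig and b_sig):
        return indirect, "no mediation (a or b path not significant)"
    if direct_sig:
        return indirect, "partial mediation"
    return indirect, "full mediation"

# Motivation -> Learning Discipline -> Academic Achievement (estimates from Table 6)
print(classify_mediation(0.933, 0.703, 0.368, True, True, True))
# (0.6558..., 'partial mediation') -> mediation occurs, as reported

# Learning Style -> Learning Discipline -> Academic Achievement
print(classify_mediation(0.083, 0.703, 0.188, False, True, True))
# (0.0583..., 'no mediation (a or b path not significant)') -> as reported
```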
Conclusion
Overall, the CFA analysis carried out on the measurement models for the motivation, learning style and learning discipline constructs has shown that the fitness indexes were reached. The pooled confirmatory factor analysis of all measurement models (pooled CFA) shows that the three categories of model fitness indexes have been achieved for all constructs in the model, and discriminant validity for all constructs in the model has also been achieved. The inferential analysis shows that motivation, learning style and learning discipline have a positive and significant influence on academic achievement. Furthermore, motivation also has a positive and significant impact on the learning discipline, but learning style has no positive and significant effect on the learning discipline. The mediation analysis shows that the learning discipline mediates the relationship between motivation and academic achievement but not the relationship between learning style and academic achievement. | 2019-05-26T20:30:32.242Z | 2018-05-09T00:00:00.000 | {
"year": 2018,
"sha1": "3167892a5859ac0b41faa32c43d5cee333b69ff7",
"oa_license": null,
"oa_url": "https://doi.org/10.6007/ijarbss/v8-i4/4059",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5746c4c6693a47a8176cd552e65094761067776a",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
235859786 | pes2o/s2orc | v3-fos-license | Association of Urinary Biomarkers Noninvasively Forecasts The Extent of Renal Injury In The Early Diabetic Nephropathy Patients With Kidney Qi Deficiency Syndrome: A Retrospective Investigation
Background: Diabetic nephropathy (DN) has recently become a common health problem in China as one of the major microvascular complications of diabetes mellitus. The incipient diagnosis and noninvasive detection in clinic through the association of urinary biomarkers are therefore important for preventing the progression of DN. However, the clinical significance of urinary biomarkers remains controversial. This study thereby aimed to further evaluate the clinical significance of the association of urinary biomarkers in noninvasively predicting the extent of renal damage in early type 2 DN patients with kidney qi deficiency syndrome in an integrated traditional and western medical center, and to preliminarily confirm the correlation between urinary tubular biomarkers and the biological bases of DN patients. Methods: Ninety-two patients in an integrated traditional and western medical center of China were categorized into 3 groups: 20 patients with normo-albuminuria, 50 patients with micro-albuminuria, and 22 patients with macro-albuminuria. In addition to urinary albumin (UAlb) and the urinary albumin-to-creatinine ratio (UACR), serum creatinine, estimated glomerular filtration rate and various urinary tubular biomarkers were tested. Besides, the clinical characteristics and kidney asthenia syndrome distribution of all patients were observed. Results: In these 3 groups, 24-h UAlb and UACR showed stepwise and significant increases. Urinary cystatin C (UCysC), urinary N-acetyl-β-D-glucosaminidase (UNAG) and urinary retinol binding protein (URBP) synchronously showed gradual increases consistent with the albuminuria degree in the 3 groups. Moreover, 24-h UAlb and UACR were positively correlated with UCysC, UNAG and URBP. In the 72 DN patients with albuminuria, there was a positive correlation between UNAG and URBP, and UCysC was also positively correlated with UNAG and URBP. Additionally, the TCM syndrome distributional characteristics of all patients were consistent with the clinical manifestations of kidney qi deficiency syndrome. Conclusion: In this investigation, we further demonstrated that the association among UCysC, UNAG, URBP and UAlb may be used as a practical target in noninvasively forecasting the extent of renal injury in early type 2 DN patients with kidney qi deficiency syndrome. More importantly, we found that urinary tubular biomarkers may be one of the biological bases of DN patients with this specific traditional Chinese medicine syndrome.
Background
Nowadays, diabetic nephropathy (DN) has become a common health problem in China as one of the major microvascular complications of diabetes mellitus (DM) [1]. 30-40% of patients with DM develop DN, which is the dominant cause of end-stage renal disease (ESRD) [2]. Therefore, clinically incipient diagnosis and noninvasive detection are important for preventing the progression from DN to ESRD. It has been reported that albuminuria is one of the most typical clinical changes in the early stage of DN, and the level of albuminuria excretion can be used as a way of screening DN patients [3]. However, growing evidence has recently suggested that the detection of albuminuria alone is neither comprehensive nor sensitive enough for DN patients, especially for those with inchoate and latent injuries of the glomeruli and renal tubules [4]. On the other hand, although renal biopsy is thought to be the best method for the diagnosis of DN at the incipient stage [5], it is impossible to perform in all cases because of its invasiveness. Given these shortcomings in clinical practice, more workable biomarkers in urine other than urinary albumin (UAlb) crucially need to be explored for the earlier diagnosis and prediction of the extent of renal injury in DN patients [6].
Generally speaking, glomerular dysfunction is a major cause of DN development [7]. Even so, the impaired absorption of filtered proteins by the renal tubular epithelium might also play important roles in incipient DN [8]. Notably, increasing evidence has demonstrated that, as biomarkers, some tubular injury indicators in urine have clinical implications [9]. These include urinary cystatin C (UCysC) [10,11], urinary N-acetyl-β-D-glucosaminidase (UNAG) [12], urinary kidney injury molecule (UKim)-1 [12][13], urinary liver-type fatty acid-binding protein (UL-FABP) [14], urinary retinol binding protein (URBP), urinary neutrophil gelatinase-associated lipocalin (UNGAL), urinary β2-microglobulin (Uβ2-MG) and so on [15,16]. Thereinto, UNAG and URBP have especially been reported to be associated with the progression of type 2 DM [15]. On the contrary, Macisaac et al. [17] recently considered that there is neither a clinical assay nor an adequate study focused on defining the prognostic value of urinary biomarkers in the progression of DN. Besides, Hsu et al. [18] lately reported that urine biomarkers of tubular injury did not improve on the clinical model predicting chronic kidney disease (CKD) progression. Therefore, so far, the clinical significance of urinary biomarkers in predicting the extent of renal injury in DN is controversial.
In traditional Chinese medicine (TCM), DN is recognized as Xiaoke (a disease with symptomatic polydipsia)-related nephropathy. According to the fundamental principles of TCM theory, the main pathogenesis of DN lies in kidney asthenia [19][20]. More importantly, in clinic, Chinese herbal medicine (CHM) formulas focused on nourishing the kidney, such as Tangshen Formula (TSF) [21] and Liuwei Dihuang Pills (LDP) [22], have played important roles in the treatment of Xiaoke and its related complications in the kidney. Yang et al. [21] reported that TSF combined with conventional therapy appears to be effective in reducing urinary protein and UL-FABP, which is considered to be associated with the severity of DN as a new urinary renal tubular biomarker. Correspondingly, our previous study based on 108 patients with stage III type 2 DN in China, a unicentric observation with a cross-sectional design, indicated that 79 patients with deficiency states of both kidney and spleen showed differently increased levels of UCysC, UNAG and URBP, which are specific markers of renal tubular dysfunction rather than glomerular damage. Of note, the increase in UCysC has been closely related to UAlb in 30 patients with kidney deficiency syndrome [23]. These results strongly suggested that the association of urinary biomarkers could possibly be used as a practical target in noninvasively forecasting the extent of renal injury in type 2 DN patients with kidney asthenia syndrome. Hence, in this investigation, we aimed to further evaluate the clinical significance of the association of urinary biomarkers in noninvasively predicting the extent of renal damage in early type 2 DN patients with kidney qi deficiency syndrome in an integrated traditional and western medical center, and to preliminarily confirm the correlation between urinary tubular biomarkers and the biological bases of DN patients.
Study population
We retrospectively investigated urine samples from 92 patients with type 2 DM who were consecutively enrolled at the Department of Endocrinology and TCM, Nanjing Drum Tower Hospital, from March 2012 to April 2013. The proposal was approved by the Ethics Committee of The Affiliated Hospital of Nanjing University Medical School (Nanjing Drum Tower Hospital). All patients were categorized into 3 groups according to their albuminuria levels based on the urinary albumin-to-creatinine ratio (UACR): 20 patients with normo-albuminuria (< 30 mg/g creatinine), 50 patients with micro-albuminuria (30-300 mg/g creatinine), and 22 patients with macro-albuminuria (> 300 mg/g creatinine) (Fig. 1).
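A small sketch of the grouping rule used above (UACR cut-offs of 30 and 300 mg/g creatinine, as stated in the text); the function and variable names are illustrative.

```python
def albuminuria_group(uacr_mg_per_g: float) -> str:
    """Classify a patient by urinary albumin-to-creatinine ratio (UACR)."""
    if uacr_mg_per_g < 30:
        return "normo-albuminuria"
    if uacr_mg_per_g <= 300:
        return "micro-albuminuria"
    return "macro-albuminuria"

# Illustrative values, one per group
for uacr in (12.0, 85.0, 450.0):
    print(uacr, "->", albuminuria_group(uacr))
```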
Collection of kidney deficiency syndrome information
Signs and symptoms of kidney deficiency syndrome were collected according to TCM diagnostic methods, mainly covering kidney qi deficiency syndrome, kidney yang deficiency syndrome, kidney yin deficiency syndrome and kidney essence insufficiency syndrome [24,25,26]. The clinical manifestations include aching and weakness of loins and knees, dispiritedness and lassitude, frequent micturition, dripping urination, incontinence of urine, light-colored tongue with whitish fur and thin pulse; dizziness and tinnitus, insomnia and amnesia, flushed cheeks in the afternoon, bone-steaming tidal fever, night sweating, dry mouth and throat, emaciation, yellowish and scanty urine, reddish tongue with scanty fur and thin and rapid pulse; cold limbs and body, loose stool, early morning diarrhea, clear and profuse urine, profuse nocturnal urine, bright whitish complexion, light-colored tongue with white fur as well as sinking, deep and weak pulse. All kidney deficiency syndrome information was collected using a unified questionnaire.
Clinical and laboratory measurement
All eligible subjects provided 20 mL second-voided clean-catch urine samples in the early morning after an overnight fast. These samples were immediately frozen and stored at -80 °C to prevent protein degradation before testing. Urine samples were centrifuged at a speed of 3000 r/min for 10 min, and the supernatants were collected for testing within 2 hours. Twenty-four-hour UAlb (24-h UAlb) was tested using an immunoturbidimetric assay (Yong Ye, Shanghai, China). Subsequently, the baseline albuminuria status was determined according to UACR. The levels of URBP, UNAG and UCysC were measured using an immunoturbidimetric method (KangTe, Zhejiang, China), colourimetry (Mei Kang, Zhejiang, China) and a particle-enhanced nephelometric immunoassay (YanJing, Shanghai, China), respectively. All the relevant procedures followed the manufacturers' instructions.
In addition, all patients donated fasting venous blood (2 mL) in the morning, and serum was separated by centrifugation. The levels of estimated glomerular filtration rate (eGFR) were calculated using the
Statistical analysis
All analyses were performed using SPSS software (Version 16.0, SPSS Inc, USA) and GraphPad Prism 5. The data are expressed as mean ± SD for normally distributed values and median (interquartile range) for nonparametric values. Qualitative data are described as frequencies and were analyzed using the Chi-square test. Differences among groups were analyzed by one-way ANOVA followed by Bonferroni's test for normally distributed values, and by the Kruskal-Wallis test for nonparametric values. The SNK (Student-Newman-Keuls) or LSD (least significant difference) method was used for multiple comparisons. To test the correlations between different urinary markers, Pearson's correlation coefficient was employed for normally distributed values and Spearman's correlation coefficient for skewed values. To determine the associations of UACR and 24-h UAlb with the urinary markers, in addition to bivariate correlation analysis we performed linear regression analysis with the urinary markers as independent variables and UACR and 24-h UAlb as dependent variables, in order to investigate the urinary markers related to albuminuria. Clinical parameters related to the elevated urinary markers were analyzed using multivariate logistic regression according to the 'Enter' procedure. Reported p-values are two-sided, and values of less than 0.05 were considered statistically significant.
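A minimal sketch of the normality-driven choice between Pearson's and Spearman's correlation described above, written with SciPy. The normality test used in the study is not specified, so the Shapiro-Wilk test here is an assumption, and the simulated data are purely illustrative.

```python
import numpy as np
from scipy import stats

def correlate(x, y, alpha=0.05):
    """Use Pearson's r when both variables look normal, else Spearman's rho."""
    normal = (stats.shapiro(x).pvalue > alpha) and (stats.shapiro(y).pvalue > alpha)
    if normal:
        r, p = stats.pearsonr(x, y)
        return "Pearson", r, p
    rho, p = stats.spearmanr(x, y)
    return "Spearman", rho, p

rng = np.random.default_rng(0)
uacr = rng.lognormal(mean=3.5, sigma=1.0, size=72)   # skewed, illustrative
unag = 0.4 * uacr + rng.normal(0, 5, size=72)        # correlated marker, illustrative
print(correlate(uacr, unag))                         # -> ('Spearman', ..., ...)
```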
Clinical characteristics in all patients
The study population of type 2 DM consisted of 20 patients with normo-albuminuria, 50 patients with micro-albuminuria and 22 patients with macro-albuminuria. All patients' baseline characteristics are summarized in Table 1. The mean age and DM duration of the patients were 63.12 ± 12.07 years and 12.62 ± 1.17 years respectively, and there were 47 males and 45 females. According to the baseline data, we did not find a significant difference in sex, age, DM duration, systolic blood pressure (SBP), diastolic blood pressure (DBP), fasting blood glucose (FBG), 2-hour postprandial blood glucose (2-h PBG), glycated hemoglobin (HbA1c), triglycerides (TG), blood urea nitrogen (BUN) or uric acid (UA) among the normo-albuminuria, micro-albuminuria and macro-albuminuria groups. Although the levels of body mass index (BMI) and low-density lipoprotein cholesterol (LDL-C) in the normo-albuminuria, micro-albuminuria and macro-albuminuria groups showed a stepwise increase with albuminuria level, there was no statistical significance among the 3 groups. We also found that the level of total cholesterol (TC) in the macro-albuminuria group was significantly higher than that in the normo-albuminuria group (p = 0.017), and the levels of Scr and eGFR in the macro-albuminuria group were significantly increased compared with those in the micro-albuminuria group (p = 0.004, p = 0.022), whereas there were no significant differences between the micro-albuminuria and normo-albuminuria groups. The data are expressed as mean ± SD for parametric variables and median (interquartile range) for nonparametric variables.
Distribution of kidney deficiency syndrome

TCM syndromes could be validated by the corresponding diagnostic standards of kidney asthenia [26]. The kidney deficiency syndrome of all patients was divided into the following types: kidney qi deficiency syndrome, kidney yang deficiency syndrome, kidney yin deficiency syndrome and kidney essence insufficiency syndrome. As shown in Table 2, among these 92 patients with type 2 DM, the syndromes that occurred in at least 30% of patients included: aching and weakness of loins and knees (78.3%), dispiritedness and lassitude (78.3%), frequent micturition (47.8%), dripping urination (33.7%), dizziness and tinnitus (31.5%), light-colored tongue with whitish fur (86.9%) and thin pulse (30.4%). There is no doubt that these TCM syndrome distributional characteristics of all patients were consistent with the clinical manifestations of kidney qi deficiency syndrome.

Differences of Scr, eGFR, 24-h UAlb and UACR

Figure 2 shows the differences of Scr, eGFR, 24-h UAlb and UACR among the normo-albuminuria, micro-albuminuria and macro-albuminuria groups. We found that the levels of Scr and eGFR in the macro-albuminuria group were significantly higher than those in the micro-albuminuria group, but there were no significant differences between the micro-albuminuria group and the normo-albuminuria group. In contrast to Scr and eGFR, the levels of 24-h UAlb and UACR showed a stepwise increase across the normo-albuminuria, micro-albuminuria and macro-albuminuria groups (p = 0.000).
Changes of UCysC, UNAG and URBP

Table 3 and Fig. 3 show the changes in the levels of urinary tubular biomarkers in the normo-albuminuria, micro-albuminuria and macro-albuminuria groups according to the levels of albuminuria. We found that the levels of UNAG, URBP and UCysC synchronously showed a gradual and significant increase consistent with the albuminuria degree in the 3 groups. The data are expressed as mean ± SD for continuous variables. p-values were obtained by ANOVA.
Discussion
Under the fundamental principles of TCM theory, the differentiation and treatment of DM and its complications have long focused on asthenia syndromes, including deficiency of the lung, spleen (stomach) and kidney; among them, kidney asthenia is considered the key pathogenesis of DN, which is a well-known complication of long-standing DM [28,29,30]. However, there has been no further observation on the distribution of kidney deficiency syndrome in early DN patients. In the present study, firstly, we unexpectedly found that the TCM syndrome distributional characteristics of DM patients without or with abnormal albuminuria wholly belong to kidney qi deficiency syndrome, which is different from the previous investigation of the 108 stage III type 2 DN patients with massive proteinuria [23]. Hence, we preliminarily suggest that kidney qi deficiency is the underlying pathogenesis in these 92 patients with type 2 DM.
In clinic, the incipient diagnosis and noninvasive detection of the renal risk of DM are undoubtedly important. In general, glomerular dysfunction is thought to be the main factor in DN at the early stage. The routine and classical evaluation of glomerular filtration dysfunction in DN patients includes the increased levels of UAlb and Scr, the latter being the basis of the calculation of eGFR [28]. Despite this, regrettably, an overt increase in Scr might be found only late, after serious glomerular impairment [5]; moreover, a decline of eGFR in patients with DM is not always accompanied by an increase of UAlb [28].
Therefore, neither Scr nor eGFR is a perfect marker for the early detection of glomerular dysfunction in DN patients. Our results in the present study showed that 24-h UAlb and UACR, in comparison with Scr and eGFR, exhibited a stepwise rise and significant differences among the individuals with normo-albuminuria, micro-albuminuria and macro-albuminuria. In a nutshell, based on these 92 type 2 DM patients with kidney qi deficiency syndrome, we secondly confirmed that there is a real renal lesion characterized by the different levels of UAlb and UACR, the markers of injured glomeruli.
Recent studies have demonstrated that renal tubulointerstitial lesions play a critical role, along with glomerular injury, in the pathogenesis of DN [30]. Several biomarkers indicating proximal tubular impairment have the potential to be clinical markers for predicting the extent of renal damage in DN patients at the early stage. Thereupon, to evaluate the clinical implication of urinary tubular biomarkers in diagnosing renal lesions, we then investigated the changes of UCysC, UNAG and URBP in all type 2 DM patients with kidney qi deficiency syndrome.
It is well known that the clinical biomarkers of renal tubular injury comprise urinary enzymes (enzymuria) and urinary proteins with low molecular weight (LW-proteinuria), where LW-proteinuria originates from the deficient reabsorption of plasma proteins by tubular epithelial cells [29]. CysC, owing to its low molecular weight, is freely filtered by the glomeruli and metabolized by the renal proximal tubule, without the influences of age or muscle mass under healthy conditions. It has been reported that CysC in urine (UCysC), along with albuminuria, could predict tubular impairment in type 2 DN patients [31]. RBP, a low molecular weight protein, is synthesized mainly in hepatocytes, easily filtered by the glomeruli and almost completely reabsorbed in the renal proximal tubule. The level of urinary RBP (URBP) is thus very low in the final urine.
Several researchers have demonstrated that an increased level of URBP is correlated with renal tubular dysfunction in DM patients [16,28]. In addition, it has already been reported that urinary NAG (UNAG), a proximal tubular brush-border lysosomal enzyme, plays a crucial role in diagnosing tubular impairment in DM patients [12]. The data in this observation indicated that the above-mentioned urinary tubular biomarkers, UCysC, UNAG and URBP, were simultaneously increased in the normo-albuminuria, micro-albuminuria and macro-albuminuria groups, accompanied by the rise of 24-h UAlb and UACR, which are the earliest known clinical indexes for the diagnosis of DN. Furthermore, these biomarkers in urine were independently related to 24-h UAlb and UACR. In brief, we also confirmed that UCysC, together with UNAG and URBP, as acknowledged tubular dysfunction biomarkers, have an independent association with the extent of renal injury in early type 2 DM patients with kidney qi deficiency syndrome.
Thirdly, we paid attention to the interrelation of UCysC, UNAG and URBP in the 72 type 2 DN patients with micro-albuminuria and macro-albuminuria. To our surprise, the results unexpectedly displayed that UCysC was positively correlated with UNAG and URBP, and, moreover, there was also a positive relationship between UNAG and URBP in DN patients. Whereupon, we naturally speculated as to which is the more sensitive and specific tubular dysfunction biomarker among UCysC, UNAG and URBP based on the different levels of UAlb. To our knowledge, CysC is synthesized and secreted at a nearly constant rate by virtually all nucleated cells. Given that its molecular weight is 13 kDa, in healthy human subjects CysC is almost freely filtered by the glomeruli and entirely reabsorbed by the renal proximal tubular epithelial cells, like other urinary proteins with low molecular weight. Therefore, it is normally not excreted in urine, and an increased level of UCysC, independent of serum CysC, is particularly useful for estimating renal tubular impairment [31]. Kim et al. [10] reported that UCysC was mainly increased in type 2 DN patients with macro-albuminuria and was not significantly different between those with micro-albuminuria and normo-albuminuria. By contrast, differing from the above results, we found in the present study that UCysC was not only increased significantly in type 2 DN patients with abnormal albuminuria but was also closely related to a rise in UNAG and URBP. Additionally, we detected that UCysC was associated with BMI and FBG, the clinical baseline parameters of DM patients.
Consequently, the clinical significance of assessing the levels of UCysC, UNAG and URBP in type 2 DM patients on the basis of the different levels of albuminuria, especially in individuals with abnormal albuminuria, was further confirmed. More importantly, it should be emphasized that UCysC, UNAG and URBP can all be considered sensitive and specific tubular dysfunction biomarkers.
Lastly, to be frank, there are several limitations in this report. First, apart from the insufficient sample size in a single center, the study could hardly illustrate the causal relationship between the risk factors and the natural course of normo-albuminuric renal insufficiency, since it was conducted with a retrospective cross-sectional design rather than as a longitudinal observation. Second, due to the lack of a healthy control group, we failed to clarify the relationship among the levels of UNAG, URBP and UCysC under healthy conditions. Third, it is unknown whether the synchronized rise of UCysC, UNAG and URBP could assuredly indicate renal tubular impairment in type 2 DN patients at the early stage without histopathologic evidence from the kidney. Fourth, we only found that kidney qi deficiency was the main TCM syndrome in these 92 DM patients, and we could not demonstrate the relationships between urinary biomarkers and kidney yin or yang deficiency syndromes. Despite this, we have reason to believe that the clinical implication of UCysC, UNAG and URBP in these 92 type 2 DM patients was clearly described, and that the associated detection of urinary biomarkers could noninvasively forecast the extent of renal injury in incipient type 2 DN patients with kidney qi deficiency syndrome.
Conclusion
In this investigation, we further demonstrated that the association among UCysC, UNAG, URBP and UAlb may be used as a practical target in noninvasively forecasting the extent of renal injury in early type 2 DN patients with kidney qi deficiency syndrome. More importantly, we found that urinary tubular biomarkers may be one of the biological bases of DN patients with this specific TCM syndrome.
Figure 1
Trial flow diagram.
Figure 2
Changes of Scr, eGFR, 24-h UAlb and UACR in all patients. The differences of Scr, eGFR, 24-h UAlb and UACR among the 3 groups are shown. The levels of Scr and eGFR were significantly different between the micro-albuminuria group and the macro-albuminuria group (p = 0.004, p = 0.022), but there was no significant difference between the micro-albuminuria group and the normo-albuminuria group. In addition, compared with Scr and eGFR, the levels of 24-h UAlb and UACR showed a stepwise increase and significant differences across the 3 groups (p = 0.000). □ = normo-albuminuria group; ■ = micro-albuminuria group; ■ = macro-albuminuria group. Each value is expressed as mean ± SD. a p < 0.01 vs normo-albuminuria group; b p < 0.01 vs micro-albuminuria group.
Figure 3
Levels of urinary tubular biomarkers in all patients. The changes of urinary tubular biomarkers in the 3 groups according to albuminuria are shown. The levels of UNAG, URBP and UCysC synchronously showed a gradual increase along with albuminuria in the 3 groups, with a significant difference between the normo-albuminuria and micro-albuminuria groups (p = 0.000), as well as between the micro-albuminuria and macro-albuminuria groups (p = 0.000). □ = normo-albuminuria group; ■ = micro-albuminuria group; ■ = macro-albuminuria group. Each value is expressed as mean ± SD. a p < 0.01 vs normo-albuminuria group; b p < 0.01 vs micro-albuminuria group.
"year": 2020,
"sha1": "bd18faeceed7fc9f1a9b06d9183497637619e6c9",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-90747/v1.pdf?c=1602868807000",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "5a64f221c34907f1c795b8b1dd1a9db90cc90ec5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244488045 | pes2o/s2orc | v3-fos-license | Kidney tissue engineering using a well-preserved acellular rat kidney scaffold and mesenchymal stem cells
The aim of this study was to establish an effective method for preparing rat decellularized kidney scaffolds capable of supporting the proliferation and differentiation of human adipose tissue-derived mesenchymal stem cells (AD-MSCs) into kidney cells. We compared two detergents, sodium dodecyl sulfate (SDS) and triton X-100, for decellularization. The efficiency of these methods was assessed by Hematoxylin and Eosin (H&E), 4′,6-diamidino-2-phenylindole (DAPI) and immunohistochemistry (IHC) staining. In the next step, AD-MSCs were seeded into the SDS-treated scaffolds and assessed after three weeks of culture. Proliferation and differentiation of AD-MSCs into kidney-specific cell types were then analyzed by H&E and IHC staining. The histological examinations revealed that SDS was more efficient than triton X-100 in removing kidney cells at all time points. Also, in the SDS-treated sections the native extracellular matrix was better preserved than in the triton-treated samples. Laminin was completely preserved during the decellularization procedure using SDS. Cell attachment in the renal scaffold was observed after recellularization. Furthermore, differentiation of AD-MSCs into epithelial and endothelial cells was confirmed by expression of Na-K ATPase and vascular endothelial growth factor receptor 2 (VEGFR-2), respectively, in seeded rat renal scaffolds. Our findings illustrated that SDS was more effective than triton X-100 for decellularization of rat kidney. We present an optimized method for decellularization and recellularization of rat kidneys to create functional renal natural scaffolds. These natural scaffolds supported the growth of AD-MSCs and could also induce differentiation of these cells into epithelial and endothelial cells.
Introduction
The incidence of end-stage renal disease (ESRD) appears to be increasing worldwide. Dialysis and renal transplantation are possible treatments for patients with ESRD. 1 Renal transplantation is considered one of the most effective treatments, as it can restore renal function 2 and greatly improves quality of life. 3 However, renal transplantation still faces a basic challenge: an inadequate supply of viable donor kidneys for the growing demand for transplants. 4 Progress in tissue engineering and regenerative medicine can narrow the gap between the limited supply of organs and the increasing demand. 5 Extracellular matrix (ECM) scaffolds can be prepared by decellularization of native tissues. 5,6 Several decellularization processes, including physical methods like osmotic shock and freeze-thaw, chemical methods such as treatment with SDS or triton X-100, as well as enzymatic regimens, have been used to obtain ECM-derived scaffolds. [7][8][9] Natural renal scaffolds should be free of all native cells but preserve the biological activity, tensile strength, and primary proteins of the ECM (e.g. laminins, collagens, and fibronectins). 7 Moreover, these acellular renal scaffolds should support the three-dimensional architecture of the kidney vasculature and capillary network. 10,11 Thus, when stem cells are seeded on renal ECM scaffolds, which are a complex of proteins and biomolecules, 12 the decellularized scaffolds play important roles in cell adhesion, cell signaling, 13 proliferation, migration, and differentiation of stem cells into kidney-specific cell types. 13 Until now, decellularization methods have been applied to a variety of tissues including blood vessels, 14,15 valves, 16 urinary bladder, 17 liver, 18 intestine, 19 trachea, 20 and kidney. 11,21 Due to differences in tissue mass, function, structure and biomechanical characteristics, each organ or tissue requires specific processing and investigation. 6,22 Several decellularization methods have been proposed for rat kidney, and renal ECM scaffolds have been produced. 23 The goal of the present study was to compare two detergents, the non-ionic triton X-100 and the ionic SDS, for decellularization of rat kidney and to optimize an effective decellularization method for possible kidney engineering. Following this, human AD-MSCs were seeded into the rat kidney scaffolds and their survival and differentiation were assessed after 1, 2 and 3 weeks of culture.
Organ preparation. In this study, rat kidneys were collected from 10 male Wistar rats (weighing 250-300 g, six weeks old) which were obtained from the Animal House of the Medical School, Mashhad University of Medical Sciences. All experiments were approved by the Animal Research Ethical Committee of Mashhad University of Medical Sciences (project code: MUMS 941327). Kidneys were divided into transverse sections of 10.00 × 10.00 × 2.00 mm pieces using a scalpel, and washed with normal saline. 24 All the experiments were performed under sterile conditions. The prepared sections included both cortex and medulla.
Decellularization of rat kidney. Kidney sections were embedded in decellularization solution of either 1.00% (v/v) SDS or 1.00% (v/v) triton X-100 diluted in distilled water at 4.00 ˚C in a shaking incubator (200 rpm). The decellularization solution was changed 8 hr after first tissue harvesting and then every 48 hr until the tissues were transparent. Decellularized kidney sections were analyzed after 2, 5, 10 and 14 days of soaking in decellularization solutions to find the ideal time point for decellularization. Afterward, the sections were washed twice with PBS to remove the detergents completely. 24 Finally, rat renal ECM scaffolds were fixed in 10.00% formalin and softly embedded in paraffin. 25
Acellular kidney scaffold characterization. In the current study, fresh, SDS-decellularized, and triton X-100-decellularized paraffin-embedded sections were analyzed by H&E, DAPI, and immunohistochemistry (IHC) staining.
Histological examinations. Both native kidney tissue and acellular paraffin-embedded tissue sections were cut into 5.00 μm sections and stained with H&E. The slides were then examined under an optical microscope (Olympus, Tokyo, Japan) to verify the elimination of cell nuclei and to assess the ECM architecture. 26 To determine residual DNA and the degree of cell removal in samples, DAPI staining was performed according to the routine protocol, followed by analysis under a fluorescence microscope (Olympus).
Immunohistochemical analysis. To check the integrity and preservation of ECM architecture in rat kidney scaffolds prepared with the better of the two detergents (SDS or triton X-100), the expression of laminin, the most important basement membrane protein, was examined by immunohistochemistry. 23 For immunohistochemical staining, the 5.00 μm thick sections were deparaffinized, rehydrated and incubated with proteinase K. After blocking endogenous peroxidase activity with methanol containing 3.00% H2O2, tissue sections were incubated in 10.00% normal goat serum. Sections were then covered with primary antibody to laminin (dilution 1:50, rabbit polyclonal anti-laminin; Abcam, London, UK) and left overnight in a humid chamber at 4.00 ˚C. The next day, after washing with PBS, sections were incubated with HRP-conjugated secondary antibody (dilution 1:500, goat polyclonal secondary antibody to rabbit IgG; Abcam) for 2 hr at room temperature. Finally, the sections were developed using diaminobenzidine (DAB) as a chromogen. The slides were subsequently counterstained with hematoxylin and observed under a light microscope. 27 The staining intensity was scored blindly by three examiners (0 = no staining, 1 = weak, 2 = moderate, and 3 = strong). 28
Recellularization of kidney scaffolds with hAD-MSCs and histological examinations. The AD-MSCs were derived and characterized as previously described. 29 For sterilization, SDS-treated scaffolds were washed with distilled water and soaked in sterile PBS solution for 1 hr, followed by placing them in 70.00% ethanol for 15 min. Finally, decellularized kidney scaffolds were placed into wells of 24-well plates and soaked in culture medium (1.00 mL) with 10.00% FBS and 1.00% penicillin/streptomycin for 24 hr. 30 Then, cell seeding was conducted by dripping 0.05 mL of cell suspension at a density of 2.00 × 10⁵ cells per scaffold. One hour after seeding, 1.00 mL DMEM was added into each well, and the plates were incubated at 37.00 ˚C and 5.00% CO2 for three weeks. The medium was changed every 48 hr. Unseeded scaffolds were used as controls, and cell-seeded renal scaffolds were analyzed after 7, 14, and 21 days of culture with H&E and IHC staining. The H&E staining was performed to determine cellular attachment, survival and migration within the scaffolds. Also, three slides were selected on each day, and the numbers of cells were counted in randomly chosen parts of the slides (40×). After data collection, statistical analysis was used to determine the significance of the effect of time on the number of cells. 30 Furthermore, differentiation of AD-MSCs towards epithelial and endothelial cells of the kidney was determined by staining with primary antibodies against sodium/potassium adenosine triphosphatase (Na/K-ATPase, dilution 1:50; rabbit polyclonal anti-Na-K ATPase, Abcam) and vascular endothelial growth factor receptor-2 (VEGF-R2, dilution 1:20; rabbit polyclonal anti-VEGF-R2, Abcam). 31 The staining intensity was scored blindly by three examiners (0 = no staining, 1 = weak, 2 = moderate, and 3 = strong). 28
Statistical analysis. The data were expressed as means ± SD. Kruskal-Wallis and Mann-Whitney nonparametric statistical tests were used to compare differences between immunostaining scores. The difference between means of parametric data was statistically analyzed using repeated measures analysis followed by the Bonferroni test. Differences were considered statistically significant when p < 0.05.
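The nonparametric part of this workflow can be reproduced in outline with standard statistical software. Below is a minimal Python/SciPy sketch of one plausible approach (Kruskal-Wallis across groups, then pairwise Mann-Whitney tests, here guarded by a simple Bonferroni adjustment; the original study applied Bonferroni only to the parametric follow-up). The score arrays are hypothetical placeholders for blinded IHC intensity scores, not data from this study.

```python
# A minimal sketch of the nonparametric comparisons described above.
# The score lists are hypothetical placeholders, not study data.
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

groups = {
    "native": [3, 3, 2, 3],  # hypothetical blinded IHC scores (0-3)
    "day5":   [3, 2, 3, 3],
    "day10":  [2, 2, 1, 2],
    "day14":  [1, 2, 1, 1],
}

# Global nonparametric comparison across all groups
h_stat, p_global = kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_global:.4f}")

# Pairwise Mann-Whitney U tests with a Bonferroni-adjusted threshold
pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)
for a, b in pairs:
    u_stat, p = mannwhitneyu(groups[a], groups[b])
    print(f"{a} vs {b}: U = {u_stat:.1f}, p = {p:.4f}, "
          f"significant at Bonferroni alpha: {p < alpha}")
```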
Results
Histological and immunohistochemical analysis of the scaffolds. Gross observation of the decellularized tissues showed that the SDS-treated sections turned white and became translucent more efficiently and faster than triton-treated sections. Based on H&E, comparison of the decellularized and native kidney tissues revealed successful elimination of cell nuclei with better extracellular matrix preservation in SDS-treated samples compared to the triton-treated scaffolds over the same time frame (Fig. 1). The DAPI staining similarly indicated that residual DNA was more completely absent in the SDS-treated scaffolds than in the triton-treated scaffolds when compared to fresh kidney sections (Fig. 2). These results showed that SDS was the better detergent for preparation of rat kidney scaffolds as determined by H&E. Microscopic analyses of H&E and DAPI staining revealed that treatment of sections with SDS produced limited decellularization after 2 days, while SDS treatment for 5 days or more resulted in full decellularization with minimal changes in tissue volume and maintained the architecture of glomeruli, vessels and tubules and the integrity of the extracellular matrix (Fig. 1). Furthermore, no cell nuclei or remaining DNA could be detected in these scaffolds after DAPI staining (Fig. 2). The prevalence of eosinophilic structures in these scaffolds indicated that the tissue was largely composed of collagen. To assess the degree of conservation of the decellularized renal ECM composition, immunohistochemical staining for laminin was performed on renal ECM prepared using the SDS protocol and on native rat kidney tissue. The results of IHC staining revealed that the contiguous laminin network and basement membrane integrity were completely preserved during the decellularization procedure, and laminin was distributed in the tubular and glomerular basement membranes. Also, laminin displayed similar expression patterns in both native and decellularized kidneys; however, we observed a slight decrease in laminin expression from day 5 towards day 14 (Fig. 3). According to these results and the integrity of the ECM, decellularization with 1.00% SDS led to fully decellularized bioscaffolds on day 5. Further, the results indicated that expression of laminin was significantly reduced in decellularized bioscaffolds on days 10 and 14 in comparison with native rat kidney tissue (p < 0.05), (Fig. 4A).
Recellularization of kidney scaffolds with hAD-MSCs.
To assess the suitability of these scaffolds for tissue engineering, AD-MSCs were seeded on SDS-treated scaffolds to check their attachment and differentiation potential. Histological examination indicated the presence of seeded cells, and thus the non-toxicity of the SDS-treated scaffolds, on day 7. Furthermore, AD-MSCs could adhere to and penetrate into the rat renal scaffolds 7 days after culture (Fig. 5). By day 14, H&E staining results showed an increase in proliferation and in the number of adhered cells, and a tendency of the cells to migrate into the SDS-treated scaffolds, compared to day 7 (Fig. 5). In some areas, cells formed a bilayer pattern and produced an epithelium-like structure. However, the number of AD-MSCs observed on or within the scaffold decreased over time, as seen on day 21 in Figure 5. Moreover, histological studies showed that the structure of scaffolds seeded with AD-MSCs was reconstructed compared to the renal ECM scaffolds harvested without cells on day 7. Most importantly, the highest regeneration was observed on day 14 compared to days 7 and 21. The graph of cell counts and density shows that the number of cells increased on days 14 and 21 in comparison with day 7; however, this increase was not significant. Also, no significant difference was observed between the cell densities at day 14 and day 21 (p > 0.05), (Fig. 4B).
Immunohistochemical analysis of recellularized kidney scaffolds. Expression of Na-K ATPase, an ionic pump in the plasma membrane of cells, was evaluated with IHC staining to analyze differentiation of AD-MSCs seeded on SDS-treated scaffolds towards kidney epithelial cells. The results of IHC staining showed that Na-K ATPase was expressed in seeded rat renal scaffolds compared to non-seeded scaffolds at the different cell-seeding time points. As shown in Figure 6, the adherent cells on recellularized kidney ECM scaffolds expressed higher levels of Na-K ATPase on day 14 compared to day 7 after hAD-MSC seeding. The results of IHC staining also showed that Na-K ATPase expression increased in seeded scaffolds on day 21 compared to day 14. Furthermore, its expression levels were higher in seeded scaffolds than in non-seeded scaffolds on day 21.
Fig. 4. A) Laminin immunoreactivity in kidneys decellularized with SDS on days 2, 5, 10 and 14. *p < 0.05: a significant difference was observed in kidney sections decellularized with SDS compared to native kidney; B) Average number of cells per scaffold in rat kidney scaffolds recellularized with hAD-MSCs after 7, 14 and 21 days. There was no significant difference between the cellular densities in SDS-treated scaffolds seeded with AD-MSCs at days 7, 14, and 21; C) Na-K ATPase and D) VEGF-R2 immunoreactivity on days 7, 14, and 21 after recellularization of rat kidney scaffolds with hAD-MSCs. *p < 0.05: a significant difference was observed in seeded rat renal SDS-treated scaffolds with AD-MSCs compared to non-seeded scaffolds on the same days. Data are provided as mean ± standard deviation.
Results of IHC staining indicated that VEGF-R2 was expressed in the rat renal scaffold 7 days after cell seeding compared to non-seeded scaffolds (Figs. 7A-7C), and its expression in cells within the vasculature and glomerular capillaries showed a significant increase on day 14 compared to the acellular scaffold at the same time point and to the rat renal scaffold on day 7 after cell seeding (Figs. 7D-7F).
Despite the decrease in the number of cells on recellularized kidney ECM scaffolds prepared with SDS at 21 days after AD-MSC seeding in comparison to renal cell-seeded scaffolds on day 14, IHC staining illustrated higher VEGF-R2 expression levels on day 21 (Figs. 7G-7I). However, overall expression of Na-K ATPase and VEGF-R2 was lower in seeded rat renal SDS-treated scaffolds compared to native rat kidney tissue. Moreover, the results showed that the expression of Na-K ATPase was significantly enhanced 7, 14, and 21 days after cell seeding in comparison with non-seeded scaffolds on the same days. The expression of VEGF-R2 was significantly enhanced 14 and 21 days after cell seeding in comparison with non-seeded scaffolds on the same days (p < 0.05), (Figs. 4C and 4D).
Discussion
Organ shortage for transplantation has dramatically increased in recent years, prompting investigators to seek novel solutions to overcome this problem. 2 Using scaffolds derived from biological tissues can be a platform technology for regenerative medicine. 32 Accordingly, it is necessary to develop an effective method for preparation of a decellularized kidney scaffold that can support proliferation and differentiation of various stem cells into kidney cells. 12 Even though various methods of kidney decellularization in different species have been reported in several research studies, 3,32,33 there are fewer reports comparing different chemical detergents for producing an ideal decellularized ECM architecture to support cell attachment and recellularization. 22,25 In the present study, transverse sections of mature rat kidneys were used in order to minimize the number of animals required. We aimed to compare different methods for preparation of a natural rat scaffold. To do so, two common detergents with different physical and chemical attributes, one ionic (SDS) and one non-ionic (triton X-100), were used for decellularization. SDS is an ionic detergent with the ability to entirely denature proteins by disrupting protein-protein interactions while retaining the structure and composition of the ECM. 2,24 SDS is more powerful than triton X-100 in removing cellular material from solid tissues such as kidney. 5,25 Moreover, SDS achieves rapid cell disruption and nuclear elimination, whereas triton X-100 achieves these more slowly. 11 On the other hand, triton X-100, a non-ionic detergent, disrupts lipid-protein and lipid-lipid interactions, but not protein-protein interactions, and thereby leads to separation of the cells from each other and release of the cytoplasmic materials as a result of cell membrane lysis. 34,35 There is a debate on the suitability of these two popular detergents for preparation of kidney scaffolds. SDS has been described as an effective detergent for kidney decellularization by Nakayama et al., 25 Sullivan et al. 22 and Ross et al. 23 ; however, some studies have reported triton X-100 as a more suitable detergent. 24,36 In this study, the results of H&E staining indicated that cell nuclei and cellular compartments of rat kidneys were eliminated much more completely with SDS-based solutions than with triton-based solutions. Moreover, maintenance of vascular integrity and native tissue architecture of renal ECM scaffolds was more effective in SDS-treated sections than with triton-based solutions. DAPI staining results also demonstrated more residual DNA in the triton-treated scaffolds than in the SDS-treated scaffolds when compared to intact kidney tissue sections. Inadequate decellularization and cellular remnants are the most important cause of acute immune response and rejection of tissue-engineered scaffolds; thus, validation of complete decellularization is necessary. 37 These results are bolstered by Sullivan et al., who compared ionic and non-ionic detergents for porcine kidneys to identify the best method for kidney decellularization. They illustrated that 0.50% SDS was a more effective detergent for decellularization of porcine kidneys compared to triton X-100. 22 Moreover, Orlando et al. reported that renal ECM scaffolds were successfully produced from human kidneys using SDS treatment. Furthermore, these scaffolds preserved the renal ECM structure (glomeruli, tubules, vessels) and its biochemical properties. 38 Bonandrini et al.
presented efficient production of an acellular ECM scaffold from rat kidney using only SDS instead of an SDS and triton X-100 mixture. These scaffolds maintained the integrity of the ECM structure, glomerular capillaries and tubular membranes. 39 Fischer et al. reported that SDS and sodium deoxycholate (SDC) showed the best cell removal efficacy, while triton X-100 insufficiently decellularized the porcine kidney pieces. 40 In the present work, SDS was found to be an appropriate detergent for producing rat renal ECM scaffolds, in agreement with earlier studies on this topic. In addition to histological examinations, IHC was carried out to confirm the maintenance of ECM components in renal ECM scaffolds prepared using the SDS protocol. Laminin is one of the most important components of the basement membrane and is involved in cell viability, migration, and differentiation. 33,41 In agreement with previous studies, 23,25,39 the IHC results indicated complete decellularization of rat kidney tissue without loss of laminin expression. Based on laminin expression, the best time for recellularization of renal SDS-treated scaffolds was determined to be five days after cell removal. The challenge of recellularization should be addressed as the next step towards creating a viable tissue. For this purpose, AD-MSCs were seeded on the SDS-treated scaffolds, and cell survival and differentiation into kidney-specific cell types were assessed for up to 21 days. AD-MSCs were preferred due to their high plasticity and potential to differentiate towards many different cell types. Our results indicated proper attachment and growth of AD-MSCs on rat renal SDS-treated scaffolds. Attachment and a notable increase in the number of AD-MSCs were noted in histological studies after 7, 14 and 21 days of scaffold recellularization. However, H&E staining showed a decrease in cell density on SDS-treated scaffolds on day 21 after cell seeding compared to day 14. MSCs degrade ECM proteins through secretion of proteases, opening a new path for migration in the extracellular matrix, which is a probable cause of scaffold destruction at the sites of cell attachment. In response to the substrate, migrating cells activate various kinds of enzymes, including proteases and metalloproteinases, or increase their expression. After 21 days of cultivation of AD-MSCs on the rat kidney scaffold, some of the cells that were not connected to the basal membrane of the matrix may have undergone apoptosis. 30,42 Moreover, the reduction of the cell number on day 21 of recellularization could indicate progression of cells towards apoptosis. These results showed that SDS-treated scaffolds present a suitable 3D microenvironment for AD-MSC proliferation and differentiation. 11,39 Ross et al. reported the first study on renal scaffold repopulation in 2009. They successfully recellularized renal scaffolds created by the SDS protocol with murine ES cells and demonstrated cellular growth within the tubular structures and glomerular network. 23 In addition, as shown by Bonandrini et al., mouse embryonic stem (mES) cells were infused into kidney scaffolds and contributed to the glomerular capillaries and vascular network. 39 Our results were also consistent with those of Guan et al., 11 who recellularized acellular renal scaffolds produced with 0.5% SDS with mES cells.
In the present study, immunohistochemistry was also used to assess Na-K ATPase and VEGF-R2 expression in AD-MSCs cultured on renal SDS-treated scaffolds. The Na-K ATPase is an integral membrane protein which generates electrochemical gradients. 43 VEGF-R2, a receptor for VEGF-A, is known as the primary marker of endothelial cell growth and plays a role in the proliferation, maintenance and migration of endothelial cells. 44 Nakayama et al. reported that human embryonic stem cells (hESCs) seeded on rhesus monkey kidney scaffolds were able to produce tubular structures. The recellularized constructs expressed renal markers including aquaporin-2 and PAX2 after 8 or 16 days of culture. 34 As shown by Bonandrini et al., murine embryonic stem cells infused through the renal artery of the scaffold expressed CD31 and Tie-2, markers of the endothelial lineage, after 24 and 72 hr. 39 In this study, IHC staining demonstrated that both VEGF-R2 and Na-K ATPase expression increased in AD-MSCs seeded on renal SDS-treated scaffolds. Higher Na-K ATPase and VEGF-R2 expression was observed on day 21 after culture. However, expression of both markers was higher in native rat kidney tissue than in the seeded rat renal scaffolds after 7, 14, and 21 days from cell seeding. Eventually, the results of this study showed that these natural scaffolds not only supported the proliferation and growth of AD-MSCs but were also suitable for differentiation of adherent cells into kidney cell types, including endothelial cells of vascular structures and glomerular capillaries. Our results demonstrated an effective decellularization technique with SDS capable of removing all cellular components from rat kidneys and creating an intact three-dimensional ECM architecture which preserves tubular structure, glomerular capillaries, and vasculature. These natural SDS-treated scaffolds supported the growth of AD-MSCs and could induce their differentiation into epithelial and endothelial cells. Further experiments are required to assess the suitability of this method for preparation of whole-organ kidney scaffolds and to test them in animal models.
There are a few limitations to this study that need to be addressed in further work. In this project, it would have been preferable to examine the expression of more specific markers of kidney cell differentiation, such as prominin-1 (CD133), CD10, PAX8 and PAX2. Therefore, the immunohistochemical evidence from the seeded rat renal SDS-treated scaffolds needs to be evaluated in further studies. | 2021-11-24T05:18:13.916Z | 2021-09-15T00:00:00.000 | {
"year": 2021,
"sha1": "7e672bc2264ff84f765beab302da686e846449fb",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "7e672bc2264ff84f765beab302da686e846449fb",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219017874 | pes2o/s2orc | v3-fos-license | Ufer Grounding System to Minimize Risk of Lightning Strike using Concrete Mixed with Bentonite and Coconut Fiber
Received: January 14, 2020 Accepted: April 16, 2020 Published: April 30, 2020 The increasing frequency of lightning strikes endangers human safety and life. The grounding system was introduced to mitigate lightning strikes. This research aimed to understand the change in grounding resistance values using concrete mixed with bentonite and coconut fiber. The research was conducted in the Laboratory of Electrical Engineering, University of Lampung, from October 2017 to April 2018, and used the Ufer grounding system. Concrete blocks of (25 × 25 × 30) cm were planted at a depth of 50 cm with copper-coated electrodes 16 mm in diameter. Four concrete blocks were produced with different compositions: T1 = concrete + 30% bentonite; T2 = concrete + 30% bentonite + 1.5% coconut fiber; T3 = concrete + 30% bentonite + 0.75% coconut fiber; T4 = concrete + 1.5% coconut fiber. The results show that the lowest grounding resistance value, 45.896 Ω, was obtained with the concrete of bentonite:cement:sand:gravel = 0.3:0.7:2:4 and the addition of 1.5% coconut fiber; this grounding resistance value is 3.5 times smaller than the grounding resistance of the soil (161.2 Ω). Adding bentonite and coconut fiber can decrease the grounding resistance values.
INTRODUCTION
Nowadays, global climate change is causing extreme weather conditions (Lehmann et al., 2015; Ogunbode et al., 2019; Rahmat & Mutolib, 2016); one of these is an increase in the frequency of lightning strikes. The high magnitudes of current and voltage involved can destroy electrical devices or property and endanger human safety and life. Moreover, lightning can damage buildings and cause fires in the affected area. Technology to avoid lightning strikes and protect buildings is therefore needed. The grounding system was introduced to face lightning strikes (Al-Ammar et al., 2010; Ghania, 2019).
Grounding or earthing is the process of connecting any electrically conductive part to the earth, whose potential is treated as the zero reference. It is a vital part of lightning protection, power, and communication systems. In a lightning protection system, the grounding acts as the interface between the natural transient phenomenon from the cloud (lightning) and the mass of soil. The function of the grounding system is to divert the flow of charge into the soil mass as fast as possible (Gomes et al., 2014; Halim et al., 2019).
Recent research on grounding materials has used conductive concrete because of its good mechanical properties, electrical conductivity, and corrosion resistance. The resistivity of conductive concrete is lower than that of soil (Ma et al., 2014). The contact area between the soil and the electrode can be enlarged by placing conductive concrete around the metal grounding electrodes, thereby reducing the grounding resistance (Sun, 2001). Currently, conductive concrete is mainly used as an auxiliary grounding material in grounding systems by molding it around the vertical grounding body directly at the ground/soil surface (Jin-rong & Yong-ming, 2008; Tian et al., 2012).
Various proportions of bentonite mixed into concrete for Ufer grounding have been investigated. The lowest grounding resistance, with the least fluctuation, was obtained with a 30% bentonite-concrete mix (Lim et al., 2016).
However, research on grounding systems using conductive concrete with added additive materials is limited. This research describes a grounding system using conductive concrete with additive materials such as bentonite and coconut fiber.
Experimental design
The research was conducted in the Laboratory of Electrical Engineering, Faculty of Engineering, University of Lampung, from October 2017 to April 2018. This research used the "Ufer grounding" system. Concrete blocks with a volume of (25 × 25 × 30) cm³ were planted at a depth of 50 cm with copper-coated electrodes 16 mm in diameter and 50 cm in length. Four concrete blocks were produced with different additive compositions: T1 = grounding hole with concrete and 30% bentonite; T2 = grounding hole using concrete (bentonite:cement:sand:gravel = 0.3:0.7:2:4) with the addition of 1.5% coconut fiber waste (300 g); T3 = grounding hole using concrete (bentonite:cement:sand:gravel = 0.3:0.7:2:4) with the addition of 0.75% coconut fiber waste (150 g); T4 = grounding hole using concrete (cement:sand:gravel = 1:2:4) with the addition of 1.5% coconut fiber waste (300 g). Preliminary data were collected using pure concrete without additive materials. The grounding resistance of pure concrete was measured for five months before the grounding resistance of the four compositions was measured, so that the pure concrete resistance could be used as a baseline.
The composition of the pure concrete was cement:sand:gravel = 1:2:4. The grounding resistance values were measured using a Kyoritsu model 4105A earth tester (Hutauruk, 1991); see Figure 1. The grounding resistance measurements in this study used the three-point method for the standard data (Badan Standardisasi Nasional, 2002, 2008). Figure 2 shows how the earth grounding measurements use the five-point method. The percentage change in grounding resistance value was calculated with the following equation (Martin et al., 2019; Hutauruk, 1991): ΔR (%) = ((Rx − Ry) / Rx) × 100%, where Rx = grounding resistance value without additives and Ry = grounding resistance value with additives.
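As a worked check of this equation, the short script below applies it to the mean resistance values reported in the Results section of this paper (soil = 162.2 Ω; T1-T4 as listed there); the script itself is only an illustrative calculation, not part of the original study.

```python
# Illustrative check of the percentage-change equation above, using the
# mean grounding resistance values reported in the Results section.
R_SOIL = 162.2  # Ohm; Rx, grounding resistance of the soil (no additives)

treatments = {  # Ohm; Ry, mean grounding resistance with additives
    "T1 (30% bentonite)":                       54.546,
    "T2 (30% bentonite + 1.5% coconut fiber)":  45.889,
    "T3 (30% bentonite + 0.75% coconut fiber)": 50.192,
    "T4 (1.5% coconut fiber)":                  66.157,
}

for name, ry in treatments.items():
    change = (R_SOIL - ry) / R_SOIL * 100.0  # percentage decrease
    ratio = R_SOIL / ry                      # how many times smaller
    print(f"{name}: {change:.0f}% decrease, {ratio:.1f}x smaller than soil")
# T2 reproduces the 72% change reported in Table 1 and the ~3.5x factor.
```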
Characteristics of the materials
Concrete
Concrete is a homogeneous mixture of cement, water, and aggregate. It is characterized by high compressive strength and low tensile strength. The properties of concrete are a function of its constituent materials, consisting of hydraulic cement (Portland cement), fine aggregate, coarse aggregate, water, and added material (admixture or additive) (Badan Standarisasi Nasional, 2004). The advantage of using concrete is that it has a larger surface area, so it can absorb more water and keep the soil moist for longer.
Bentonite
Bentonite is a type of clay that contains more than 85% montmorillonite together with minerals such as calcite, quartz, feldspar, and dolomite (Kusrini, 2018). Based on its type, bentonite is divided into two kinds, namely Na-bentonite and Ca-bentonite. The ratio of Na⁺ to Ca²⁺ cations contained in it is quite high, and its colloidal suspension has a pH of 8.5 to 9.8 (Lim et al., 2013). Figure 3 shows the bentonite powder, and Figure 4 shows the chemical structure of bentonite.
Coconut fibers
Coconut fiber is a biomass that is easily obtained and utilized because it can hold water and fertilizer nutrients and can neutralize soil acidity. Coconut husk consists of fiber and cork (pith), which connects one fiber to another; it is composed of about 75% fiber and 25% cork. The nutrients contained in coconut fiber, both macro and micro, are needed by plants.
Ufer grounding system
Ufer grounding uses an electrode wrapped in concrete, such as a building foundation in direct contact with the earth, as the grounding. This concept is based on the conductivity of concrete and its large surface area, which allow it to handle very high current loads (Fink & Beaty, 2006; Departemen Pekerjaan Umum, 2010). Figure 6 shows the grounding system using the Ufer grounding method (Fink & Beaty, 2006).
RESULTS AND DISCUSSION
Pure concrete test results
Pure concrete testing provided the preliminary data for this study. It was carried out for five months, from October 2017 to March 2018, with measurements taken on 14 days in the first month and in the fifth month, in the morning and evening, in order to observe the grounding resistance of concrete over a more extended period.
As can be seen from the graph, the resistance value of pure concrete is quite stable compared to the resistance value of the soil. This means that concrete is a good candidate material for Ufer grounding.
Testing of Ground Resistance
The grounding resistance value required by the general electrical installation standard, namely ≤ 5 Ω, is used as a safety criterion for devices connected to a power source against disturbances caused by short circuits or lightning. Several methods exist to reduce the grounding resistance value; one of them is adding additive materials to the grounding system. Figure 8 shows a comparison of the measured earth (grounding) resistance values of the concrete variations composed of bentonite and coconut fiber waste. From the figure, it can be seen that the grounding resistance value of the soil fluctuates, whereas the variations of concrete mixed with bentonite and coconut fiber waste look more stable. The average grounding resistance values of the different concrete compositions were 54.546 Ω for T1, 45.889 Ω for T2, 50.192 Ω for T3, and 66.157 Ω for T4, compared with 162.2 Ω for the soil. Table 1 shows that the largest percentage change in grounding resistance value, 72%, was obtained with the concrete composed of 30% bentonite and 300 g of coconut fiber (T2). The difference in the percentage change in grounding resistance between the concrete composed of 30% bentonite with 150 g of coconut fiber (T3) and with 300 g (T2) is not significant.
Discussion
Ufer grounding is an electrode wrapped in concrete, like a building foundation in direct contact with the earth, used as a grounding system. The concrete is composed of cement, sand, and gravel in a ratio of 1:2:4, where the cement itself is composed of several ingredients, as shown in Table 2 (Tjokrodimulyo, 2007). As Table 2 shows, the main ingredients of cement are lime and silica (clay). Limestone is the primary source of calcium carbonate (CaCO₃). Therefore, when cement is used as one of the concrete-forming materials, it increases Ca²⁺ levels, while CO₃²⁻, if it reacts with H₂O to form H₂CO₃, can eventually break down into CO₂ and water vapor.
The concrete structure containing cement creates pore spaces, especially water-binding pores, which can reduce resistance through the role of water as an electrolyte. Clay, in turn, is the primary source of silica compounds. Silica compounds can absorb large amounts of water because they have a large surface area and large pore volume. Also, when concrete hardens into a solid object, it has absorption characteristics that allow it to absorb water for a long time and retain substances around it, so it can maintain moisture in the soil (Badan Standarisasi Nasional, 2004).
Coconut fiber waste is used to increase the water absorption and compressive strength of the concrete, so the grounding resistance value will be better than without the addition of coconut fiber. The addition of coconut fiber waste must be limited because it affects the compressive strength of the concrete itself. The best addition of coconut fiber waste is 0%-3% of the concrete volume. In this study, the addition of coconut fiber waste was limited to 1.5%, which is about ±316 g. The addition of coconut fiber waste was varied between 0.75% and 1.5% in mixtures with bentonite at 30% of the amount of cement, in order to determine the best mixture of coconut fiber waste. From the measurements, the addition of 30% bentonite (relative to the amount of cement) and 1.5% (300 g) of coconut fiber waste mixed into the concrete gave the best resistance, namely an average resistance value of 45.896 Ω.
Based on the results, using concrete can reduce the grounding resistance value significantly, and adding bentonite effectively decreases the grounding resistance. This is because both concrete and bentonite are good at absorbing water.
The quality of bentonite depends on its chemical content, water absorption rate, swelling ability, resistivity, and density (Lim et al., 2013). Bentonite is a type of material whose primary component is smectite, and its physical properties are determined by the smectite minerals (Ralph & Guven, 1978). It is a montmorillonite, hygroscopic clay, characterized by an octahedral sheet of aluminum atoms sandwiched between two tetrahedral layers of silicon atoms (Özcan & Özcan, 2004). It has a net negative electric charge due to the isomorphic substitution of Al³⁺ with Fe²⁺ and Mg²⁺ in the octahedral sites and of Si⁴⁺ with Al³⁺ in the tetrahedral sites, which is balanced by cations such as Na⁺ and Ca²⁺ located between the layers and around the edges (Önal & Sarikaya, 2007). Natural bentonite has a pH of 8 to 10 when hydrated with water. It is hydrophilic, as it is strongly hydrated by water (Shen, 2001). This explains why bentonite has a great water absorption capability. Water absorption by bentonite occurs through diffusion and capillary suction (Borgesson, 1985). It is also able to retain water, or rather moisture, for a considerable period at atmospheric pressure. Once water is absorbed, bentonite can expand up to several times its original volume. However, the water retention and swelling capacity of bentonite depend on temperature and pressure (Villar & Lloret, 2004).
CONCLUSION
In conclusion, the lowest grounding resistance value, 45.896 Ω, was obtained with concrete + 30% bentonite and the addition of 1.5% (300 g) coconut fiber waste; this grounding resistance value is 3.5 times smaller than the grounding resistance of the soil. Adding bentonite and coconut fiber decreases the grounding resistance values. However, the value is still far from the standard (5 Ω). In the future, further research on grounding systems is needed to face global climate change and urbanization.
ACKNOWLEDGMENT
The researchers appreciate the support provided for this work by the Faculty of Engineering, University of Lampung. This work was also partially supported by the postdoctoral program in environmental science. | 2020-05-30T22:03:42.312Z | 2020-04-30T00:00:00.000 | {
"year": 2020,
"sha1": "ba029f41900aba6762dcbaa576ed0c62f3dbddc4",
"oa_license": "CCBYSA",
"oa_url": "http://ejournal.radenintan.ac.id/index.php/al-biruni/article/download/6281/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "006d6910b138dc468ee01c135f8149bf97cd7abb",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
252109186 | pes2o/s2orc | v3-fos-license | Cultivating epizoic diatoms provides insights into the evolution and ecology of both epibionts and hosts
Our understanding of the importance of microbiomes on large aquatic animals—such as whales, sea turtles and manatees—has advanced considerably in recent years. The latest observations indicate that epibiotic diatom communities constitute diverse, polyphyletic, and compositionally stable assemblages that include both putatively obligate epizoic and generalist species. Here, we outline a successful approach to culture putatively obligate epizoic diatoms without their hosts. That some taxa can be cultured independently from their epizoic habitat raises several questions about the nature of the interaction between these animals and their epibionts. This insight allows us to propose further applications and research avenues in this growing area of study. Analyzing the DNA sequences of these cultured strains, we found that several unique diatom taxa have evolved independently to occupy epibiotic habitats. We created a library of reference sequence data for use in metabarcoding surveys of sea turtle and manatee microbiomes that will further facilitate the use of environmental DNA for studying host specificity in epizoic diatoms and the utility of diatoms as indicators of host ecology and health. We encourage the interdisciplinary community working with marine megafauna to consider including diatom sampling and diatom analysis into their routine practices.
Common health indicators currently used to monitor cetaceans, sirenians and sea turtles include mortality rates, demographics, disease prevalence and frequency of stranding events. Since animal-associated microbiota may affect and be affected by their host, both internal and external microbiome composition at any given time could also reflect mid- and longer-term effects of disturbances or stressors experienced by the animal 1 . New health and fitness indices based on compositional changes in the native microbiomes could be a valuable addition to comprehensive health assessments for aquatic vertebrates 2 . Studies on the external microbiome of large aquatic vertebrates have typically focused on the bacterial and/or viral components. In contrast, epizoic microeukaryotes remain poorly explored despite the observation of diatoms on whales over a century ago 3,4 . Diatoms (Bacillariophyta) are a diverse group of largely photosynthetic microalgae characterized by their uniquely shaped siliceous thecae (frustules) and are commonly found in the plankton and benthos of many different aquatic habitats. Recent studies have expanded the known diversity of epizoic diatoms through increased sampling of hosts to include sea turtles [5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22] , sea snakes 23 and manatees 24,25 .
Competition for limited resources among diatoms has led to niche partitioning and significant habitat specificity in some taxa. The epizoic diatom communities growing on aquatic vertebrates appear to be formed by a combination of opportunistic surface-attached taxa and putatively obligate epizoic (POE) taxa. While the opportunistic taxa are shared across the benthic habitats of the local environment, the POE taxa thus far have only been observed in the epizoic microbiome 7,21,26,27 . This mixture of opportunistic and POE taxa is an intriguing assemblage, as it is potentially influenced by the host's biology (e.g. physiology, anatomy and host-specific prokaryotic microbiome) and behavior (e.g. long-distance migrations, diving, basking, and terrestrial nesting which expose epibionts to extremes in temperature, pressure, irradiance, nutrient concentration and desiccation) as well as the environment (e.g. mean temperature, salinity, nutrient load, local biocenoses). Moreover, the unique and highly specific diatom flora composition can be documented long past the death of the diatom cells by the weathering-resistant inorganic frustules. This has resulted in diatoms being utilized extensively for paleoecological reconstructions and bioindication in freshwater environments; for multiple reviews, see 28 . Similar diatom-based health indices may be developed for the marine animals and their habitats.
However, before this can happen, at least two issues must be addressed: 1) We must expand upon our knowledge of the specific molecular, genomic and ecological nature of the interactions between POE diatoms and their host and environment.
2) We need to simplify the identification of epizoic diatoms, which currently requires specialized equipment (such as electron microscopy) and literature that can be highly fragmented and incomplete, particularly in the case of marine diatoms.
Both of these issues could be addressed by metagenomic and metabarcoding techniques, respectively. Currently, however, the dearth of reference data-both in annotated genomes and transcriptomes as well as vouchered DNA barcodes for diatoms-would limit the effectiveness of either effort. For example, a metabarcoding attempt on sea turtle epiflora 29 failed to recover some of the diatom taxa identified in microscopical surveys, including the dominant POE taxon Labellicula lecohuiana Majewska, De Stefano & Van de Vijver. The authors acknowledged that this failure was likely due to the lack of any relevant reference sequences for the genus Labellicula. Further, the position of Labellicula in the molecular phylogeny of diatoms is unknown. This uncertainty significantly hinders any bioinformatic efforts to find sequence data even closely related to Labellicula among both the metabarcoding reads and the reference databases. Many other POE taxa have uncertain phylogenetic affinities within the raphid diatoms, including Tursiocola Holmes, Nagasawa & Takano, Epiphalaina Holmes, Nagasawa & Takano and the "Tripterion complex". This latter assemblage of diatom genera (Tripterion Holmes, Nagasawa & Takano, Chelonicola Majewska, De Stefano & Van de Vijver, Poulinea Majewska, De Stefano & Van de Vijver and Medlinella Frankovich, Ashworth & M.J.Sullivan) is of particular taxonomic interest as they represent a radiation of exclusively epizoic diatom taxa. Their current taxonomy is not universally accepted 15 , and distinguishing the genera can be difficult without the use of electron microscopy due to a similar overall frustule morphology (heteropolar, stalked and septate or pseudoseptate) and relatively small size (< 20 μm).
To address the aforementioned issues, we have cultured and sequenced DNA data from POE diatom taxa. These were isolated from sea turtles and manatees from the wild, rehabilitation and rescue centers as well as aquaria from the United States of America, The Bahamas, Croatia, Italy and South Africa. While DNA sequence data from vouchered specimens alone would be useful for molecular identification, the ability to maintain these diatoms away from their hosts facilitates the formulation of hypotheses and laboratory experiments to test the molecular nature of the relationship between the diatom and host.
Results
Culture success. We successfully cultured > 600 strains of both POE and opportunistic diatoms from the epizoic habitat. This manuscript focuses on 76 of these sequenced strains (Table 1) and the sequences from the single-cell DNA extractions of the non-photosynthetic Tursiocola spp. (Figs. 1, 2). Sequence data from 21 additional diatoms are included (Figs. S1, S2). While these additional sequenced diatom taxa were isolated from epizoic collections, they are known opportunistic taxa, occur in non-epizoic habitats, or their habitat preferences are unclear.
Target POE taxa. POE taxa were identified based on the available literature and included diatom species that have only ever been observed in association with the epizoic habitat, being found on multiple animal specimens 6,8,10,11,[14][15][16]24,25,30 . Among these were epizoic taxa typically reaching high relative abundances (> 25%), such as Achnanthes elongata Majewska.
Molecular phylogenetic results. The currently recognized POE strains were predominantly located in two clades in the molecular phylogeny-Achnanthes sensu stricto + Craspedostauros (Fig. 1) and the clade containing the Tripterion complex, Tursiocola and Proschkinia (Fig. 2). With regards to Achnanthes, most of the sampled diversity comes from three species of sea turtles (green, Kemp's ridley and loggerhead) and West Indian manatees sampled in the southeastern US. These strains formed a well-supported clade (ML bootstrap support [bs] = 100%, BI posterior probability [pp] = 1.0) sister to the rest of the sequenced Achnanthes spp. The POE Achnanthes clade also sorted by host, with strains collected from manatee (100%/1.0 bs/pp) and sea turtle (100%/0.74 bs/pp) hosts in their own clades. The POE Craspedostauros taxa showed a different pattern to the rest of the POE diatoms. Their clade included both POE and non-POE species, with the POE taxon C. danayanus sister to C. alyoubii and C. paradoxus (96%/0.99 bs/pp) rather than to the POE C. macewanii and C. alatus.
[Captions of Figs. 1 and 2: Support values (ML bootstrap support/BI posterior probability) shown above nodes; "*" = nodes with 100%/1.0 values. Taxon name followed by DNA extraction voucher number or strain ID. Taxa isolated from epizoic habitats followed by a diagrammatic representation of the host from which the strain was isolated, and metadata on the location and setting in which the host was sampled (A = aquarium, R = rehabilitation facility, W = wild). In Fig. 2, black host icon = POE taxon; white host icon = unclear habitat preference.]
The "Tripterion complex + " clade (strains illustrated in Fig. 3a-d) was resolved with strong support (100%/1.0 bs/pp). While we were able to sample taxa from the Chelonicola, Poulinea and Medlinella genera in this complex, we were unable to observe any taxa within Tripterion sensu stricto in our collections. The "Tripterion complex + " clade also contained the POE genus Tursiocola and Proschkinia Karayeva, which has both POE and non-POE species, as well as the non-epizoic genera Stauroneis Ehrenberg, Craticula Grunow, Parlibellus E.J.Cox, Fistulifera Lange-Bertalot and some monoraphid genera such as Schizostauron Grunow and Astartiella Witkowski, Lange-Bertalot & Metzeltin. The molecular data suggested no common origin for the POE clades; Tursiocola and the Tripterion complex are sister to non-POE taxa rather than each other, and the POE Proschkinia (P. vergostriata and P. sulcata) formed a clade sister (100%/1.0 bs/pp) to the rest of the Proschkinia spp.
Only two clades in the Tripterion complex had any geographic variation: the Poulinea clade and Chelonicola caribeana clade. For Poulinea, strains collected in South Africa were not monophyletic, with "Majewska 17C" sister to the rest of the clade, which included strains isolated from the Adriatic, Florida, California and South Africa. It should be noted that the Florida clade represented strains collected from a single location-a rehabilitation facility-while the South African strains were isolated from collections of both wild and captive host animals. The C. caribeana clade, on the other hand, contained strains isolated exclusively from wild host animals in South Africa, Florida and the Bahamas, with the South African strains ("Majewska39A/40A") sister to the rest.
Discussion
Based on our molecular phylogeny, it appears that the epizoic habit has evolved several times and in several different raphid diatom morphotypes: elongate biraphid (Tursiocola and Proschkinia, Fig. 3f,g, respectively) and monoraphid frustules (Achnanthes, Fig. 3e), asymmetric, clavate biraphid frustules (Tripterion complex, Fig. 3a) and thin oval monoraphid frustules (Bennettella, Epipellis 31 ). These independent gains of the epizoic habit could be driven by the host biology and evolution. Among others, the eco-physiological constraints shaping epizoic diatom speciation through adaptive radiation would include the nature and character of the animal substrate. Variations of the dermal layer of sirenians and sea turtles, including the ultrastructure, topology, physiology (e.g. shedding patterns), and biochemistry (e.g. enzymatic activity), would require different attachment and colonization (and re-colonization) strategies, thus encouraging the development of specific adaptations. Such a specific adaptation is evidenced by Melanothamnus maniticola Woodworth, Frankovich & Freshwater, an epizoic red alga on manatees that has unique skin-penetrating rhizoids that anchor the thallus to the deeper epidermis and permit the alga to persist as the host surface skin cells are shed 32 . In marine reptiles, the carapace scutes are often shed periodically, while the skin scales are either shed continuously (sea turtles) or the epidermis is renewed completely in a process called ecdysis (sea snakes 33 ). These patterns differ from those observed in marine mammals, in which skin shedding may be regulated by external factors such as temperature 34 . Similarly, animals with different diving regimes may host diatoms with different physiological and metabolic adaptations, as various stages of photosynthesis will be differently affected by changes in hydrostatic pressure related to the depth, duration, and frequency of dives 35 . Moreover, the diversification dynamics in POE diatoms may be linked to the host animal behavior and lifestyle. The niche heterogeneity, biodiversity, productivity, and nutrient concentrations typical of shallow-water habitats occupied by sirenians and some sea turtles may increase colonization rates by new species and favor benthic diatom immigration to the epizoic community, thus spurring the observed diversity of diatom forms associated with manatees 24,25 or sea turtles using neritic foraging habitats (e.g. loggerheads 21 ). The opposite phenomenon could explain the low epizoic diatom diversity on leatherback sea turtles 5,30 and pelagic sea snakes 23 that spend significant time feeding in the pelagic zone rather than on benthic organisms 36 . This follows the general pattern of low macro-epibiotic diversity on leatherbacks 37 . Epizoic diatom diversity might also be driven by intrinsic biotic factors, such as gregariousness and range of the host species, as both factors may affect new species encounter and colonization rates.
However, in these systems, in which epizoic diatom species richness is driven mainly by speciation rates as opposed to benthic species immigration, the total epizoic diatom diversity may remain low. The higher number of diatom taxa observed on neritic megafauna species as compared to open-water animals seems to support this hypothesis 20 .
Currently, taxon sampling is still scattered, and while strains were isolated from multiple geographic localities, much of the strain diversity in species-level clades comes from a single collection. The Florida Poulinea lepidochelicola clade, for example, represents strains isolated exclusively from the Turtle Hospital rehabilitation facility in Marathon, Florida. Among the South African P. lepidochelicola strains, six strains (Majewska 14C, Majewska 20C, HK630, HK638, HK639 and HK640) came from collections from three turtles at the uShaka Sea World facility in Durban, and likely represent one population. However, a morphological difference does exist between the sequenced Medlinella amphoroidea strains from South Africa and the type population of Florida Bay. The valve areolae of the former appear to be occluded by hymenes (Fig. 3d) as opposed to the volae of the type population 14 . Whether this corresponds to a genetic, and perhaps species-level, differentiation remains to be seen once the Florida Bay population is sequenced.
While we do not yet have enough information to assign any sort of host specificity to certain POE diatom taxa, we have enough DNA sequence data to suggest that some genetic differentiation among POE diatoms is occurring. While we do not know if the genetic distance between the Florida, Mediterranean and South African Poulinea strains is driven by speciation or intraspecific biogeography, they are genetically distinct. Data collected from loggerheads suggest little mixing between sea turtle individuals across ocean basins 38 , with the Mediterranean population being distinct from the northeast Atlantic one, which is in turn distinct from the northwest Atlantic (including the Gulf of Mexico) population. Even within closer geographic boundaries, such as the western Atlantic, there is demonstrated genetic distance between POE strains (C. caribeana of Florida and the Bahamas; Achnanthes elongata of Florida and Georgia) in DNA sequence markers which are generally considered too conserved to show intraspecific variation in diatoms 39,40 .
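As an illustration of the kind of sequence comparison implied here, the sketch below computes an uncorrected p-distance between two aligned marker fragments; the function is a standard textbook measure, and the sequences are short hypothetical stand-ins, not data from this study.

```python
# Minimal sketch: uncorrected p-distance between two aligned sequences.
# The fragments below are hypothetical placeholders, not study data.
def p_distance(seq_a: str, seq_b: str) -> float:
    """Fraction of compared (non-gap) positions at which two aligned
    sequences differ."""
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    differences = sum(a != b for a, b in pairs)
    return differences / len(pairs)

florida_strain = "ATGGCTTACCCGTAGAACGA"  # hypothetical aligned fragment
bahamas_strain = "ATGGCTTATCCGTAGAACGA"  # hypothetical aligned fragment
print(f"p-distance: {p_distance(florida_strain, bahamas_strain):.3f}")
```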
The collection of molecular information from a larger number of POE diatom strains may reveal whether genetic diversity in epizoic diatoms reflects biogeographic, ecological, and behavioral patterns observed in the host animal populations. For example, it was demonstrated that sea turtle phylogeography is shaped by the sea turtle species thermal regime and habitat preference 41 . Provided the close relationship between epizoic diatoms and sea turtles holds up under the scrutiny of increased data sampling, it may be expected that POE diatoms associated with the cold-tolerant leatherbacks, which are able to use the southwestern corridors to migrate across the oceans, will be characterized by lower genetic diversity than diatom taxa growing on tropical species such as green turtles, hawksbills, and olive ridley sea turtles, whose Atlantic and Indo-Pacific populations appear to be genetically distinct 42 . This knowledge may significantly advance our understanding about evolutionary relationships between diatoms and their animal hosts as well as shed more light on the mechanistic processes of divergence and adaptive evolution of diatoms and other marine microbes.
This study lays the groundwork for biodiversity and biogeographical work in marine epibioses by starting the development of a database of DNA sequence data from 16 of the known POE diatom species for sea turtles and manatees. These sequences will be useful not only in identifying more POE taxa, but also in searching for potential refugia of these taxa in non-epizoic habitats. Large areas of the world's marine shallow benthic environment are poorly studied for diatoms, and therefore we cannot exclude the possibility that the POE taxa do exist outside of epizoic habitats. Even in localities that are relatively well-studied for benthic diatoms, variation in the composition and relative abundance of an assemblage due to substrate specificity and seasonality makes the assembly of an exhaustive diatom flora extremely difficult. Environmental DNA surveys, such as metabarcoding, have an advantage over microscope-based surveys with regard to relatively small-sized taxa. Based on the molecular phylogeny of the Tripterion complex, it is easy to see how these taxa might have remained undetected in a bioinformatic summary of OTUs by sequence similarity, as there is significant genetic difference between the Tripterion complex and the only other sequenced representative of the Rhoicospheniaceae, the freshwater taxon Rhoicosphenia abbreviata (C.Agardh) Lange-Bertalot. In fact, there are no morphological characters exclusive to the taxa in the molecular clade containing Tursiocola and the Tripterion complex that would cause a diatomist to expect a close match in sequence identity to the POE taxa. With curated sequence data now available for the most common POE taxa, we may find evidence for their occurrence in non-epizoic habitats through eDNA studies.
One of the stated goals of this study was to generate additional DNA sequence data from POE diatom taxa on sea turtles and sirenians. This goal was greatly aided by our ability to culture many of these POE diatoms away from their hosts, which raises several questions about the ecological requirements and adaptations of epizoic diatoms. The isolated strains of POE diatoms, which can be maintained in artificial conditions and without the animal hosts, provide opportunities to further study the molecular, genomic and physiological nature of the unique relationship between the diatoms and marine megafauna in a laboratory setting. For example, we can examine how different species may be affected by different conditions or possess specific adaptations to an epizoic lifestyle. It is possible that some trade-off in obtaining those adaptations makes the POE taxa less competitive in non-epizoic benthic environments. We know little about the extent to which the microbes associated with the diatom ("phycosphere") might affect the competitive ability of diatoms, and/or whether the phycosphere may itself manufacture some critical compound only in an epizoic community. Since all cultured POE diatoms were maintained as non-axenic cultures, it is yet unclear what role the bacterial strains played in the development and survival of the targeted diatom species and whether the long-term maintenance of axenic POE strains would be feasible. Future studies may also determine the number of evolutionary leaps to the epizoic habitat and the number of host switches, shedding more light on the co-evolution of diatom-animal relationships.
Methods
Cultures and microscopy. Diatoms were collected from the skin of West Indian manatees and the skin and carapace of six species of sea turtles (see Table 1 for details). These collections were made following the protocol outlined by Pinou et al. 43 . Wild sea turtles were either sampled on nesting beaches after oviposition (so as not to disturb the nesting process) or from turtles captured in water via a rodeo method 44 . The seven sea turtles resident at uShaka Sea World in Durban (South Africa) were sampled during feeding. The Adriatic sea turtles were sampled upon arrival at the rescue center after being caught accidentally during trawling (Iracus) or during rehabilitation in an outdoor pool with freely circulating seawater (Lunga). Manatees were sampled during annual health assessments conducted by the USGS Sirenia Project. Individual diatom cells were isolated by micropipette into sterile f/2 culture medium 45 with a salinity matching that of the collection area. Strains isolated from the Bahamas and the US were maintained under natural light in a north-facing window at UT Austin at room temperature (between 20 and 24 °C). South African strains were lit by natural light from a south-facing window and maintained at 20-24 °C at the Unit for Environmental Sciences and Management in Potchefstroom. The strains isolated from the Adriatic were grown at 18-20 °C under 7-10 µmol photons m⁻² s⁻¹ on a 12:12 (light:dark) cycle. In the case of non-photosynthetic taxa (such as some Tursiocola species), individual cells were documented by light micrograph ("photovouchered") and isolated into a whole-genome amplification (WGA) cocktail 25 .
Cultures were harvested into separate pellets for microscopy preparation and DNA sequencing. Pellets for microscopy were cleaned with hydrogen peroxide and nitric acid, rinsed to neutral pH and dried onto 22 × 22 mm and 12 mm coverslips for light microscopy (LM) and scanning electron microscopy (SEM), respectively. Permanent mounts for the LM slides were made with Naphrax® mounting medium (Brunel Microscopes, www.brunelmicroscopessecure.co.uk) and micrographs were taken with a Zeiss Axioskop. Coverslips for SEM were coated with iridium using a Cressington 208 Bench Top Sputter Coater (Cressington Scientific Instruments, Watford, UK) and micrographs taken with a Zeiss SUPRA 40 VP scanning electron microscope (Carl Zeiss Microscopy, Thornwood, NY, USA). Additional micrographs of the strains are available from the authors.

DNA isolation, amplification and sequencing. Pellets for DNA sequencing were extracted using the DNeasy Plant Minikit, with an extra 45 s incubation in a Beadbeater (Biospec Products, Bartlesville, OK, USA) with 1.0 mm glass beads for colony and frustule disruption. The nuclear-encoded ribosomal SSU and chloroplast-encoded rbcL and psbC markers were amplified by PCR using the primers outlined in Theriot et al. 46 in 25 µL reactions with 1-3 µL of template DNA, 0.5 µL of each primer, 0.25 µL of Taq polymerase, 12.5 µL of pre-mixed FailSafe Buffer E (Lucigen Corporation) and 8.25-10.25 µL of sterile water. PCR conditions were identical for rbcL and psbC: 94 °C for 3.5 min, then 35 cycles of (94 °C for 30 s, 48 °C for 60 s, 72 °C for 2 min), and a final extension at 72 °C for 15 min. PCR conditions for SSU were: 94 °C for 3.5 min, then 35 cycles of (94 °C for 30 s, 51 °C for 60 s, 72 °C for 3 min), and a final extension at 72 °C for 15 min. The amplicons were purified using an EXO-SAP protocol: 3 µL of an EXO-SAP solution containing 0.5 µL of shrimp alkaline phosphatase, 0.25 µL of exonuclease I and 2.25 µL of sterile water were added to each PCR product and incubated at 37 °C for 30 min followed by 80 °C for 15 min. Purified products were then sequenced on an ABI 3730 DNA Analyzer using BigDye Terminator v3.1 chemistry.
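The amplification and clean-up parameters above lend themselves to a compact machine-readable summary. The sketch below is purely our illustration (no such script was used in the study, and all variable names are invented); it simply encodes the reported thermocycler programs and EXO-SAP recipe as Python data:

```python
# Thermocycler programs and EXO-SAP recipe as reported above.
# Illustrative only: names and structure are ours, values are the paper's.

PCR_PROGRAMS = {
    # rbcL and psbC share one program; SSU differs in annealing temperature
    # and extension time.
    "rbcL/psbC": {
        "initial_denaturation": ("94 C", "3.5 min"),
        "cycles": 35,
        "per_cycle": [("94 C", "30 s"), ("48 C", "60 s"), ("72 C", "2 min")],
        "final_extension": ("72 C", "15 min"),
    },
    "SSU": {
        "initial_denaturation": ("94 C", "3.5 min"),
        "cycles": 35,
        "per_cycle": [("94 C", "30 s"), ("51 C", "60 s"), ("72 C", "3 min")],
        "final_extension": ("72 C", "15 min"),
    },
}

# EXO-SAP clean-up: 3 uL of the enzyme mix per PCR product,
# 37 C for 30 min (digestion), then 80 C for 15 min (inactivation).
EXOSAP_MIX_UL = {"shrimp_alkaline_phosphatase": 0.5,
                 "exonuclease_I": 0.25,
                 "sterile_water": 2.25}
EXOSAP_STEPS = [("37 C", "30 min"), ("80 C", "15 min")]

if __name__ == "__main__":
    for marker, program in PCR_PROGRAMS.items():
        print(marker, "->", program["cycles"], "cycles of", program["per_cycle"])
```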
Sequence data were added to a dataset of raphid and araphid pennate diatoms, with Asterionellopsis glacialis used as an outgroup (see Table S1 for GenBank accession numbers). SSU data were aligned with the SSU-align program, using the covariance model outlined in Lobban et al. 47 . Data were initially partitioned by gene, by paired and unpaired sites in SSU secondary structure, and by codon position in rbcL and psbC. Model testing and grouping of partitions were performed by PartitionFinder 2 48 using all nucleotide substitution models, linked branches, and rcluster search 49 settings for trees inferred by RAxML 8 50 . The best model was chosen using the corrected Akaike information criterion (AICc). Maximum likelihood and Bayesian inference phylogenies were inferred using IQ-TREE version 1.6.12 for Linux 51 with partitioned models 52 and the multi-threaded MPI hybrid variant of ExaBayes version 1.5 53 , respectively. Nodal support for the maximum likelihood phylogeny was assessed using 1000 bootstrap replicates via IQ-TREE. ExaBayes analyses comprised four independent runs with two coupled chains each, with branch lengths linked. Convergence criteria included an average standard deviation of split frequencies (ASDSF) of at most 5% with a minimum of 10,000,000 generations. Bayesian nodal support was assessed using posterior probabilities, with the first 25% of the trees removed as "burn-in".
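As an aside, the AICc criterion used for model choice above admits a one-line implementation. The sketch below shows the standard formula; the two-model comparison uses invented placeholder likelihoods and parameter counts, not values from this analysis:

```python
def aicc(log_likelihood: float, k: int, n: int) -> float:
    """Corrected Akaike information criterion:
    AICc = -2 ln L + 2k + 2k(k + 1)/(n - k - 1),
    where k is the number of free parameters and n the sample size
    (for alignments, typically the number of sites)."""
    aic = -2.0 * log_likelihood + 2.0 * k
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)

# Hypothetical comparison on a 3,000-site alignment; lower AICc is preferred.
models = {"GTR+G": (-45210.3, 9), "HKY+G": (-45390.8, 5)}
scores = {name: aicc(lnL, k, n=3000) for name, (lnL, k) in models.items()}
print(scores, "-> best model:", min(scores, key=scores.get))
```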
Data availability
DNA sequence data generated for this study are published in the NCBI GenBank online sequence repository under the accession numbers listed in Table S1. Additional micrographs and cleaned voucher material from the sequenced cultures are available from the lead author MPA.
"year": 2022,
"sha1": "798533f28a34eb0ce5e4c597a2945d430c4e75e7",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-022-19064-0.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "48691e28397a34ae4e5ff0a89a6747063338028a",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Cross-sectional study: Knowledge, attitude, and practice regarding lifestyle modification among type 2 diabetes patients with cardiovascular disease at a tertiary hospital in Somalia
Background: Diabetes was the eighth leading cause of death among both sexes and the fifth leading cause of death in women in 2012 (WHO, 2016). The main objective of this study is to identify the knowledge, attitude, and practice regarding lifestyle modification among type 2 DM patients with CVD at the Mogadishu Somali Turkish Training and Research Hospital in Mogadishu, Somalia. Method: This was a hospital-based cross-sectional study of type 2 diabetes mellitus patients with cardiovascular disease who attended the Mogadishu Somali Turkish Training and Research Hospital for medical check-ups and regular medical treatment between September 2020 and August 2021. Results: A total of 384 patients were enrolled in the study. Of the 384 participants, 221 (57.6%) were females, while 163 (42.4%) were males. The majority of the respondents, 261 (68%), fell within the age group of 60 years and above. Most of the participants (29.4%, n = 113) had no formal education. Interestingly, more than half, 228 (59.4%), of participants were employed, while nearly one-third of the respondents (34.1%, n = 131) belonged to the low-income group (<2,000,000 SH). Concerning knowledge of the patients towards LSM of diabetes, the majority of the participants, 68% (n = 261), had poor knowledge on the knowledge questions, while 32% (n = 123) had good knowledge. Regarding the level of attitude, 71.9% (n = 276) of respondents had a negative attitude toward lifestyle modification of diabetes and the remaining 28.1% (n = 108) had a positive attitude. More than half of participants, 61.2% (n = 235), had poor practice, while 38.8% (n = 149) of respondents had good practice regarding lifestyle modification. Finally, a significant relationship was found between knowledge and attitude (P = 0.007*) and between knowledge and practice (P = 0.000**), suggesting that participants with good knowledge tended to have good attitude and practice correspondingly. Conclusion: The results of this study revealed that the majority of type 2 DM patients with CVD had poor knowledge, a negative attitude, and poor practices towards LSM. We therefore recommend that all stakeholders (Ministry of Health, health institutions, health professionals, and national and international NGOs) work to improve the KAP of these patients towards LSM.
Introduction
Diabetes is currently defined as a group of metabolic disorders characterized by hyperglycemia that results from defects in the secretion or action of insulin, or both [1]. Globally, an estimated 422 million adults are living with diabetes mellitus, according to the latest 2016 data from the World Health Organization [2]. Furthermore, the global (age-standardized) prevalence of diabetes has nearly doubled since 1980, rising from 4.7% to 8.5% in the adult population. Comparing this with the 2013 estimate from the International Diabetes Federation, which put the number of people living with diabetes mellitus (DM) at 381 million ("Simple treatment to curb diabetes", 2014), shows that the disease is growing rapidly and is projected to almost double by the year 2030 [3].
Sub-Saharan Africa, like the rest of the world, is experiencing an increasing prevalence of diabetes alongside other non-communicable diseases [4]. In 2010, 12.1 million people were estimated to be living with diabetes in Africa, and this is projected to increase to 23.9 million by 2030 [5]. Just like the World, Type-2 diabetes accounts for over 90% of diabetes cases in Sub-Saharan Africa [6].
Factors believed to be related to the growth of the disease and its risk factors include knowledge, attitude, and practice regarding lifestyle modification in diabetic patients [7].
The whole population should change their lifestyle, since lifestyle modification plays a role in the prevention of type 2 DM and also reduces the morbidity and comorbidities of type 2 diabetes [8,9].
In Somalia, there are no data on the percentage of Somalis who have made lifestyle modifications (LSM), and people consider obesity a symbol of health, prosperity, and wealth [10].
Despite the importance of the topic, there is no accessible study done in Somalia; therefore this study tries to establish the knowledge, attitude, and practice regarding lifestyle modification among type 2 DM patients with cardiovascular disease at a tertiary hospital in Mogadishu, Somalia.
Material and method
This study was conducted in a tertiary teaching hospital in Mogadishu, Somalia between September 2020 and August 2021. It was a hospital-based cross-sectional study.
The study population was all type 2 diabetes mellitus patients with cardiovascular disease attending the Mogadishu Somali Turkish Training and Research Hospital for medical check-ups and regular medical treatment. People who were mentally fit, above the age of 18, consented to participation, and were willing to provide information were included in this study. Written informed consent was obtained. The participants were interviewed individually at places convenient to them, and the questionnaire was made anonymous and kept in a lockable cabinet.
Kish-Leslie formula (Kish 1965) was used to determine the required sample size.
n = Z²pq / e², where n = required sample size; Z = 1.96 (critical value of the standard normal distribution corresponding to error rate α/2 at the significance level α = 0.05 (5%)); p = estimated proportion with the attribute of interest; q = 1 − p; e = allowed margin of error.
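A minimal worked evaluation (our arithmetic; the study does not show the intermediate steps) uses the conventional p = 0.5, which maximizes the required sample, with e = 0.05:

n = Z²pq / e² = (1.96)² × 0.5 × 0.5 / (0.05)² = 0.9604 / 0.0025 ≈ 384.16, rounded to 384.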
Therefore, using the formula above, the required sample size was 384. Before the data were collected from the field, a pretest was made at the selected hospital by identifying 15 respondents from the target population, in order to identify questions that did not make sense to participants and other problems within the questionnaire that could provoke biased answers. Questions that did not provide useful data were discarded and final revisions of the questionnaire were made. All type 2 diabetes mellitus patients who consented to participate in the study were interviewed conveniently until the sample size of 384 respondents was achieved. This study used a structured questionnaire guide to collect information. The research assistants explained the aim of the interview to the participants and also sought consent from the respondents. In preparing the questionnaire, we drew on a knowledge scale for patients with type 2 diabetes and poor literacy: the Spoken Knowledge in Low Literacy Patients with Diabetes (SKILLD) instrument [11].
The questionnaire comprised four sections: the first constituted socio-demographic details of participants; the second consisted of knowledge regarding the benefits of exercise, diet control, weight loss, and stress management; the third part assessed attitude toward lifestyle modifications; and the fourth part evaluated the lifestyle modification practices of patients. The questionnaire consisted of both open- and close-ended items which were filled in by direct face-to-face interviews with all eligible participants.
For the KAP score, each positive knowledge response was given a score of 1 while a negative response was scored 0. Total knowledge scores can range between 0 and 8. Knowledge scores from 0 to 3 were considered poor knowledge and scores of more than 3 were considered good knowledge regarding type 2 diabetes.
Attitude was assessed using a 6-item questionnaire; attitude scores between 0 and 3 were considered a poor attitude and scores of 4 to 6 were considered a good attitude.
Practice was assessed using a 6-item questionnaire; practice scores between 0 and 3 were considered poor practice and scores of 4 to 6 were considered good practice.
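A short sketch of these cut-offs as code may be helpful; the function names below are ours, and the study computed its scores in SPSS rather than Python:

```python
def classify_knowledge(item_scores):
    """Eight knowledge items, each scored 1 (positive) or 0 (negative).
    Totals of 0-3 count as poor knowledge, above 3 as good knowledge."""
    assert len(item_scores) == 8 and all(s in (0, 1) for s in item_scores)
    return "good" if sum(item_scores) > 3 else "poor"

def classify_six_item(item_scores):
    """Attitude and practice each use six items; totals of 0-3 are poor,
    4-6 are good."""
    assert len(item_scores) == 6 and all(s in (0, 1) for s in item_scores)
    return "good" if sum(item_scores) >= 4 else "poor"

print(classify_knowledge([1, 0, 1, 1, 0, 0, 1, 0]))  # 4 points -> good
print(classify_six_item([1, 0, 0, 1, 0, 0]))         # 2 points -> poor
```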
To avoid inconsistency in translating verbally from English to Somali, the structured questionnaire guide, written in English, was translated into the Somali language, and respondents were given the Somali version of the questionnaire.
This study was conducted after obtaining a permission letter written officially by the ethics committee board of Mogadishu Somali Turkish Training and Research Hospital. Confidentiality was assured in such a way that no name of any patient, health care provider, or drug product was disclosed in relation to these findings.
The data were collected, edited, coded, entered into Epi-Data, and then exported to IBM SPSS (Statistical Package for the Social Sciences). Descriptive analysis was performed to determine the knowledge, attitude, and practice towards lifestyle modification among type 2 diabetes mellitus patients. The distribution of the variables was analyzed using frequency tables. The KAP scores were used in a bivariate analysis to establish the determinant factors associated with lifestyle modification, using Pearson's correlation.
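The bivariate step can be illustrated as follows; scipy stands in for SPSS here, and the score vectors are placeholder values, not the study's data:

```python
from scipy.stats import pearsonr

# Placeholder per-patient totals on the three KAP scales.
knowledge = [5, 2, 7, 3, 6, 1, 4, 8, 2, 5]
attitude = [4, 1, 5, 2, 5, 1, 3, 6, 2, 4]
practice = [3, 1, 5, 2, 4, 0, 3, 6, 1, 4]

r_ka, p_ka = pearsonr(knowledge, attitude)
r_kp, p_kp = pearsonr(knowledge, practice)
print(f"Knowledge vs attitude: r = {r_ka:.2f}, P = {p_ka:.3f}")
print(f"Knowledge vs practice: r = {r_kp:.2f}, P = {p_kp:.3f}")
```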
Results
A total of 384 patients were enrolled in the study. As Table 1 shows, of the 384 participants, 221 (57.6%) were females, while 163 (42.4%) were males. The majority of the participants, 261 (68%), fell within the age group of 60 years and above. It also shows that the largest group of participants, 113 (29.4%), had no formal education. Interestingly, more than half, 228 (59.4%), of participants were employed and 156 (40.5%) were unemployed. Nearly one-third of the respondents (34.1%, n = 131) belonged to the low-income group (<2,000,000 SH), followed by respondents earning between 2,000,000 SH and 4,000,000 SH (25%, n = 102). Most of the participants, 241 (62.8%), were married.
As Table 2 shows, we were interested in knowledge of lifestyle modification among diabetic patients with CVD and asked the participants different questions: whether they knew about lifestyle modification, their knowledge of the benefits of exercise, weight loss, diet control, and stress management, and the complications of DM. Concerning knowledge of the patients towards LSM of diabetes, the majority of the participants, 68% (n = 261), had poor knowledge on the knowledge questions, while 32% (n = 123) had good knowledge.
To assess the attitude of participants towards LSM, we used questions on whether LSM is useful or useless, beliefs about whether LSM benefits diabetic patients, and whether diabetes can be controlled with regular exercise or by diet modification. The majority of the respondents, 71.9% (n = 276), had a negative attitude toward lifestyle modification of diabetes and the remaining 28.1% (n = 108) had a positive attitude (Table 2).
Regarding the practice of lifestyle modification, six questions on participants' practices were used to assess the level of practice, including seeking treatment and preventive measures such as screening for DM, exercise, and a planned and controlled diet. More than half of participants, 61.2% (n = 235), had poor practice, while 38.8% (n = 149) of respondents had good practice regarding lifestyle modification.
A significant relationship was found between knowledge and attitude (P = 0.007*) and between knowledge and practice (P = 0.000**), suggesting that participants with good knowledge tended to have good attitude and practice correspondingly (Table 3).
Discussion
The study enrolled 384 diabetic patients attending for medical check-ups and regular medical treatment over one year at the Mogadishu Somali Turkish Training and Research Hospital. This is the first study from Somalia to report on knowledge, attitude, and practice related to LSM among diabetes patients.
The majority of respondents in this study came from the age groups 46-59 years and ≥60 years, with 20.3% and 68% of respondents respectively; this reflects the fact that type 2 diabetes mellitus frequently develops at an older age [12,13].
This study enrolled more females (57.6%) than males (42.4%), reflecting the gender ratio of patients at the Mogadishu Somali Turkish Training and Research Hospital, Mogadishu, Somalia. DM and associated risk factors are more common among women, according to recent data from developing nations including South Africa and Ethiopia [12,13]. In this study, most of the participants (29.4%, n = 113) had no formal education, which is similar to the previous study on KAP regarding lifestyle modification among type 2 diabetes mellitus patients reported by Adem AM and colleagues [13].
Nearly one-third of participants in this study (34.1%, n = 131) belonged to the low-income group (<2,000,000 SH), followed by respondents earning between 2,000,000 SH and 4,000,000 SH. This finding is consistent with a cross-sectional study in Ethiopia on adherence to diabetes self-management practices among type 2 diabetic patients, in which the largest share of the study participants, 139 (43%), had a low monthly income [14].
The Diabetes Prevention Program demonstrated that lifestyle modification, including intensive exercise, is more effective in preventing diabetes than pharmacological therapy, and highlighted the role of trained professionals in motivating people to follow lifestyle interventions. Although lifestyle intervention improves the condition of patients with type 2 diabetes mellitus and prevents those with IGT from developing diabetes, there is not enough evidence to determine whether lifestyle interventions affect mortality in those who already have DM2 [15].
Knowledge is the greatest weapon in the fight against diabetes. Information can help people assess their risk of diabetes, motivate them to seek proper treatment and care, and inspire them to take charge of their condition [16]. Regarding the level of knowledge of lifestyle modification, almost two-thirds (68.0%) of the respondents had poor knowledge about lifestyle modification in type 2 diabetes with CVD; most did not know about diet control, some knew about exercise, and only a handful knew about stress management. Similar to this study, poor knowledge regarding lifestyle modification of diabetes has been reported in several studies from Pakistan, Kenya and Nepal [17][18][19]. In contrast to this finding, Adem and colleagues found in their study that 77.59% of respondents had adequate (good) knowledge of LSM, including the benefits of exercise, weight loss and a healthy diet, among diabetic patients [13].
A study by the WHO in 2016 stated that the application of appropriate knowledge and information, with the mass involvement of people in overcoming and controlling chronic disease, could lead to rapid improvement in life expectancy and quality of life, especially among middle-aged and older people in some countries [20].
In the current study, the majority of respondents, 71.9%, had a negative (poor) attitude regarding lifestyle modification, while 28.1% of the respondents had a positive (good) attitude. This finding contrasts with studies done in South Africa at Mamelodi Hospital, in which the majority of respondents (92.7% and 51.6% respectively) had a positive (good) attitude towards lifestyle modifications [12]. This study also differs from a study done in Ethiopia, in which the majority of respondents, 81.9%, had positive (good) attitudes towards lifestyle modifications [21].
The Somali population does not believe that obesity is an illness or that it may lead to any disease, because obesity is seen as a symbol of health, prosperity, and wealth [10].
Also, a patient's practice is affected not only by compliance but also by limited resources and low income, which limit the affordability of a well-balanced diet and the necessary equipment to exercise [1]. In the present study, the majority of the respondents, 61.2%, had poor practice and 38.8% had good practice regarding lifestyle modification. Consistent with the present study, a study from Kenya revealed that 75.6% of respondents had poor practices concerning lifestyle modifications among diabetic patients [22]. The proportion of participants with poor lifestyle practices contrasts with a study done in India, which reported that 99% of participants had good practices towards lifestyle modification among diabetic patients [23].
Conclusion
The results of this study revealed that the majority of type 2 DM patients with CVD had poor knowledge, a negative attitude, and poor practices towards LSM.
Based on our research findings about knowledge, attitude, and practice regarding lifestyle modification in type 2 diabetic patients with CVD, we recommend: • All stakeholders (Ministry of Health, health institutions, health professionals, and national and international NGOs) should work to improve the KAP of patients towards LSM. • The Ministry of Health and health partners should sensitize community members to the available options for lifestyle modification, especially targeting patients with lifestyle-related conditions, and sensitize them to the advantages of lifestyle modifications and their impact on current and future health. • Health workers/health professionals should provide relevant information about lifestyle modification to all clients visiting their respective health facilities, with special consideration for diabetic and other chronic patients.
• The Ministry of Health should implement lifestyle-modification-friendly spaces, including dedicated runways for morning and evening exercise, and an importations quality-control unit.
• The media and other health organizations should play a role in steadily increasing awareness of LSM for diabetes. • Training, empowering, nutritional intervention programs, and motivating health care providers to deliver adequate health messages.
Ethical approval
Approval for conducting the study was obtained from Mogadishu Somali Turkish Training and Research Hospital.
Sources of funding
We declare that we have no funding source.
Author contributions
MFYM: evaluated and selected the study, did the analysis, wrote the first draft, and critically read through the manuscript.
MOOJ: reviewed and revised the manuscript for important intellectual content.
Both authors discussed the results and commented on the manuscript. Both authors read and approved the final manuscript.
Trial registry number
1. Name of the registry: Not applicable. 2. Unique identifying number or registration ID: Not applicable. 3. Hyperlink to your specific registration (must be publicly accessible and will be checked): Not applicable.
Guarantor
As Corresponding Author, I confirm that the manuscript has been read and approved by all named authors.
Consent
Written informed consent was obtained from the patients for publication of this article and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.
Availability of data and material
All data generated or analyzed during this study are included in this article.
Declaration of competing interest
We declare that we have no conflict/competing interests.
"year": 2022,
"sha1": "413b339f616606899ba68ba25a3de9ec02339d1d",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.amsu.2022.103883",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "413b339f616606899ba68ba25a3de9ec02339d1d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Comment on: “The treatment of sarcoptic mange in wildlife: a systematic review”
This letter comments on the article “The treatment of sarcoptic mange in wildlife: a systematic review”, published in Parasites & Vectors 2019, 12:99, and discusses the limitations of the use of endectocides for scabies control in free-ranging wildlife. The ecological impact of, and drug resistance to, ivermectin are also discussed. In our view, scabies control in free-ranging wildlife should preferably be based on population management measures, and whether to apply individual treatments to free-ranging populations should be considered very carefully and avoided where not absolutely warranted.
Letter to the Editor
Recently, Rowe and colleagues have published an interesting review of the treatment of sarcoptic mange (hereafter "scabies") in wildlife [1]. This review highlighted the impact of this worldwide distributed parasitic disease and pointed out the need for consensus in the implementation of effective treatment of captive and free-ranging wildlife affected by scabies.
Although we fully agree on the need for a convention on scabies control in wildlife, we would like to draw attention to the challenges and risks of implementing protocols based on pharmacological treatments in the wild. In this letter, we set out some reasons for the incongruity of such a drug-based approach.
After a systematic review of 2205 publications, Rowe et al. [1] retained 28 relevant articles reporting pharmacologically effective treatments of scabies in wildlife, notably ivermectin delivered multiple times via subcutaneous injection. Most of these studies share a clinical treatment approach rather than population-based disease management.
Since the availability of ivermectin formulations for animal use in 1975, this drug has become the most effective and safe treatment for a broad range of endo- and ectoparasites including Sarcoptes scabiei [2][3][4]. In fact, ivermectin is included among the drugs of choice to treat scabies not only in humans [5,6] but also in domestic [7] and wild mammals [8]. This macrocyclic lactone is mainly delivered orally (e.g. in the form of drenching), as a pour-on, or subcutaneously [2,7] to physically or chemically restrained individuals. Based on our professional experience and field research using ivermectin and drugs with related treatment regimes, ivermectin treatments may be feasible for target individuals but are highly challenging to unrealistic for whole populations in most instances.
Most of the studies reviewed by Rowe et al. [1] focused on captive wildlife, and only six of the 28 papers were supposed to describe the outcome of ivermectin used to control mange in free-ranging wildlife. Nevertheless, after a careful reading of these articles, we noted that none of them concerned the use of ivermectin to control an ongoing scabies outbreak in wild free-ranging
Parasites & Vectors *Correspondence: luca.rossi@unito.it 1 Dipartimento di Scienze Veterinarie, Universitá di Torino, Grugliasco, Torino, Italy Full list of author information is available at the end of the article populations. For example, the studies of Skerratt et al. [9] and Ruykys et al. [10] were largely conducted on wombats (Vombatus ursinus and Lasiorhinus latifrons, respectively) kept in captivity until their recovery. On the other hand, Kalema-Zikusoka et al. [11] worked with a group of four free-ranging gorillas (Gorilla beringei) habituated to human presence which facilitated their approach and treatment. Similarly, Chhangani et al. [12], treated a troop of Hanuman langurs (Semnopithecus entellus) living close to human habitations and religious places. Expanding urbanization and habitat loss, are leading to increasingly common human-wildlife interfaces [13], and as a result, much of the free-ranging wildlife identified as suffering from sarcoptic mange and becoming candidates for treatment might be habituated to humans to varying degrees.
Rajkovic-Janje et al. [14] focused on the use of ivermectin for endoparasite control in wild boar. Sarcoptic mange was detected only in skin samples collected after necropsy of four piglets. Moreover, sarcoptic mange can occur subclinically in this species [15] and others, and therefore the lack of clinical signs after ivermectin treatment is not always a reliable indicator of recovery. Gakuya et al. [16], however, reported the outcomes of successful ivermectin treatment of endemic scabies in Thomson's gazelles (Eudorcas thomsonii) and cheetahs (Acinonyx jubatus) from the Masai Mara National Park. The peak scabies prevalence there ranged from 7.4% (n = 10 scabietic gazelles) to 28% (n = 2 scabietic cheetahs), affecting a small number of individuals that were captured and ear-tagged for potential re-treatment. Even though that work did represent an outbreak in free-ranging wildlife, the numbers of affected individuals are far from those recorded during scabies outbreaks in European fauna, e.g. the Northern chamois (Rupicapra rupicapra; n = 1696 affected individuals, 16.6% of the chamois population [17,18]) or the Iberian ibex (Capra pyrenaica; c. 7695 scabietic individuals, 80% of the ibex population in Sierra de Cazorla [19], and 3382 scabietic individuals, 23% of the Sierra Nevada ibex population [20]).
Pharmacological treatment of mange in wild animals mostly produces individual healing, but its effects on achieving control or eradication at the population level are mostly inconclusive [21]. Therefore, gathering more information on population and environmental effects and on the consequences of massive antiparasitic treatments has recently been recommended, approaching the management of sarcoptic mange in wildlife populations from a wider ecological perspective [22]. As opposed to the individual approach reviewed by Rowe et al. [1], the success of scabies control in free-ranging wildlife depends on the size of the target population, scabies prevalence, and the feasibility of reaching the required percentage of the population with any specific treatment or measure [23]. Individualized pharmacological therapies, however, are desirable for vulnerable or endangered species where the complete recovery of specific individuals is decisive for species recovery (e.g. see examples for the Iberian lynx, Lynx pardinus [24], and the black bear, Ursus americanus [25]). However, in abundant, non-threatened and widespread populations with a high prevalence of scabies, it is unlikely that any individual approach would reach the necessary proportion of the population to prevent transmission and reinfection. This is even more evident for ivermectin, due to the need for multiple doses to achieve complete recovery and the total elimination of all mites from the host and the environment, although other long-acting drugs could be a better option. Nevertheless, the environmental and public health concerns of massive antiparasitic drug release into the environment would still persist [23].
While of recognized limited efficacy, selective culling of clinically affected individuals is also a common strategy in epizootic outbreak scenarios in free-ranging wildlife [26]. However, this population management measure is not free of disadvantages, such as the culling of individuals recovering from scabies [27] to the detriment of host population viability, as well as possible objections from public opinion for particular species considered national icons, such as koalas (Phascolarctos cinereus) and wombats in Australia.
It is also important to acknowledge the potential non-target environmental effects of mass administration of ivermectin. Avermectins are excreted for four days post-treatment and can be detected in feces for up to 40 days post-defecation [28] and for more than one year on reindeer pastures [29]. In soil, ivermectin shows a degradation half-life of between 7 and 217 days, depending on solar radiation [3]. Once in the environment, this drug has sub-lethal consequences for dung beetles [30] and for other dung-dwelling invertebrates [3]. If the drug is delivered orally in feed, as is common practice in game ungulate populations in Spain, soil contamination, and thus potential effects of ivermectin on other terrestrial fauna and possibly food-chain effects, could be expected not only through fecal contamination but through the drug preparation itself. Moreover, treating game would limit venison consumption, as the ivermectin withdrawal time in edible tissues may vary from 18 to 48 days depending on the administration route [7,23,31]. Finally, another concern about the use of ivermectin for scabies control in wildlife is the drug resistance recently described in human scabies [32,33] and also suspected in companion animals [34].
In line with Rowe et al. [1], little is known about the outcome of mass pharmacological treatments of scabies outbreaks in free-ranging wildlife populations. Previous reports and our own experience after decades of scabies investigation in mountain ungulates reveal that no strategy has ever unambiguously resulted in effective control of scabies in naive or endemically affected herds [23]. Instead, initial epizootics can become enzootic in successive waves as mite and host mutually adapt [35,36]. We should also acknowledge that a range of environmental, host and pathogen factors can influence disease dynamics between enzootic, epizootic and disease-free scenarios [37]. Accordingly, whether to apply individual treatments to free-ranging populations should be considered very carefully and avoided when not absolutely warranted.
Bearing in mind the points provided here, we advocate careful consideration of the potential limitations of pharmacological treatment of free-ranging wildlife before endectocide drugs are used in scabies outbreaks, considering feasibility and efficacy, ecological impact, drug resistance, drug residues in meat (for animal and human consumption) and economics, among others. Balancing the relative merits of traditional ecological population-based management approaches to handle scabies outbreaks, independent of drug-based treatments, may be warranted in many free-ranging wildlife contexts. Similarly, a pragmatic assessment of whether control can be achieved, and intervention is therefore justified, should always be made.
"year": 2020,
"sha1": "264f52a0ef556c79e325aa8d42d331c592415c87",
"oa_license": "CCBY",
"oa_url": "https://parasitesandvectors.biomedcentral.com/track/pdf/10.1186/s13071-020-04347-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "264f52a0ef556c79e325aa8d42d331c592415c87",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Minimal Ramification in Nilpotent Extensions
Let $G$ be a finite nilpotent group and $K$ a number field with torsion relatively prime to the order of $G$. By a sequence of central group extensions with cyclic kernel we obtain an upper bound for the minimum number of prime ideals of $K$ ramified in a Galois extension of $K$ with Galois group isomorphic to $G$. This sharpens and extends results of Geyer and Jarden and of Plans. Also we confirm Boston's conjecture on the minimum number of ramified primes for a family of central extensions by the Schur multiplicator.
Introduction
Given a number field $K$ and a finite group $G$, an important problem is to find a Galois extension $L$ of $K$ such that its Galois group $\mathrm{Gal}(L/K)$ is isomorphic to $G$. Scholz and Reichardt (see Serre [12] for a modern account) proved independently that any $l$-group $G$, $l$ an odd prime, occurs as the Galois group of an extension of the rationals. Shafarevic [13] has shown for any solvable group $G$ and number field $K$ that there exists a Galois extension $L/K$ with $G \cong \mathrm{Gal}(L/K)$. In this paper we ask, for given $K$ and nilpotent $G$, what is the minimum number $\mathrm{minram}_K(G)$ of prime ideals of $K$ ramified in $L$, as $L$ runs over extensions of $K$ with $\mathrm{Gal}(L/K) \cong G$? We rephrase the question for $l$-groups $G$: for a given finite set $S$ of prime ideals of $K$, $K(l,S)$ denotes the maximal $l$-extension of $K$ which is unramified outside $S$. How large must $S$ be so that $G$ is isomorphic to a quotient group of $\mathrm{Gal}(K(l,S)/K)$ for some $S$?
One knows $\mathrm{minram}_{\mathbb{Q}}(G) \le n$ if $G$ is an $l$-group of order $l^n$, $l \neq 2$, cf. [12]. If $G$ is an abelian group, an application of class field theory (Theorem 5.2) shows $\mathrm{minram}_K(G) \le d(G) :=$ the minimum number of generators of $G$. In fact, for the case $K = \mathbb{Q}$, Boston's conjecture [1] stated below implies that $\mathrm{minram}_{\mathbb{Q}}(G) \le d(G)$ for all finite groups $G$.
Suppose $G$ is a nilpotent group and the field $K$ satisfies, for each prime $l$ dividing the order $|G|$ of $G$, the conditions (1) $K$ does not contain a primitive $l$-th root of unity $\zeta_l$, (2) $K$ has no ideal classes of order $l^2$; then Theorem 8.4 states $$\mathrm{minram}_K(G) \le \sum_{i \ge 1} d(G_i/G_{i+1}) + t(K).$$ Here $\{G_i\}$ is the lower central series of $G$, $G_1 = G$, $G_{i+1} = [G_i, G]$, and $t(K)$ is a constant depending only on $K$. This extends Plans' [10] result on $\mathrm{minram}_{\mathbb{Q}}(G)$ to all number fields $K$ satisfying (1), (2) above. Secondly, Geyer and Jarden [3] obtain the bound $\mathrm{minram}_K(G) \le n + t(K)$, where the $l$-group $G$ has order $l^n$ and $\zeta_l \notin K$. We obtain the improved bound by considering central embedding problems with a cyclic kernel, not just kernel of prime order as in [3]. Note that without condition (2), the methods of Section 8 still generalize the results of Geyer and Jarden [3] to nilpotent groups, giving a weaker bound for a nilpotent group $G$ of order $\prod_{l \mid |G|} l^{n_l}$, namely $\mathrm{minram}_K(G) \le \max_{l \mid |G|}\{n_l\} + t(K)$.
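To see why summing over the lower central series can beat the Geyer-Jarden bound, consider the cyclic case (a sanity check of ours, not taken from the paper): for $G = \mathbb{Z}/l^n\mathbb{Z}$ one has $G_2 = [G, G] = \{1\}$, so $$\sum_{i \ge 1} d(G_i/G_{i+1}) = d(G) = 1,$$ and the bound above gives $1 + t(K)$ ramified primes, independent of $n$, whereas the bound $n + t(K)$ of [3] grows with the order of $G$.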
We generalize Geyer and Jarden's definition of an exceptional set T of primes to the prime power setting in Section 4; this provides the technical tool for constructing idele class characters with strictly controlled ramification.
The realization of $l$-groups is carried out in three steps, similarly to [3], [12], [10]: the first step involves solving an embedding problem given a Scholz extension; in the second step we remove ramification in the solution outside the set of exceptional primes; and in the third step we force the solution to be Scholz at the cost of one extra ramifying prime. Finally, in Section 8, for $G$ nilpotent this prime is chosen to be the same for all primes $l$ dividing the order of $G$.
We take another approach to the problem of realization of Galois groups with minimal ramification in Section 9. Take $K = \mathbb{Q}$ or an imaginary quadratic field with $\zeta_l \notin K$. We consider a family of $l$-extensions of $K$ obtained from central extensions by the Schur multiplicator and observe that a result of Fröhlich [2] for $K = \mathbb{Q}$ (extended to imaginary quadratic fields by Watt [14]) on realizing the Schur multiplicator confirms Boston's Conjecture 1.2 of [1] for groups corresponding to this family. This conjecture states that for any nontrivial finite group $G$, there exists an extension of $\mathbb{Q}$ with Galois group $G$ and exactly $\max(1, d(G^{\mathrm{ab}}))$ ramified primes, and moreover no extension of $\mathbb{Q}$ with Galois group $G$ can be ramified at fewer than $\max(1, d(G^{\mathrm{ab}}))$ primes (counting the infinite prime). See Kisilevsky-Sonn [6] for results on minimally ramified realizations of semiabelian groups.
Embedding Problem
Fix an algebraic closure $\bar K$ of a number field $K$ and let $G_K = \mathrm{Gal}(\bar K/K)$ denote the absolute Galois group of $K$. An embedding problem $(G_K, \rho, \alpha)$ for $G_K$ (see e.g. [9]) is a diagram $$1 \to C \to G \xrightarrow{\ \alpha\ } \bar G \to 1, \qquad \rho : G_K \to \bar G, \tag{2.0.1}$$ with an exact sequence of finite groups and an epimorphism $\rho$.
A solution $\varphi$ of the embedding problem is a homomorphism $\varphi : G_K \to G$ such that $\alpha \circ \varphi = \rho$; a solution is proper if $\varphi$ is surjective. If $G, \bar G$ are $l$-groups with the same number of generators, it is easily seen that every solution is proper. When the kernel group $C$ is contained in the center of $G$, the embedding problem (2.0.1) is called a central embedding problem. Every nilpotent group can be realized as a Galois group by solving a sequence of central embedding problems. For every prime $p$ of $K$, fix a prime of $\bar K$ above $p$ and let $D_p$ (resp. $I_p$) denote its decomposition (resp. inertia) subgroup in $G_K$. Let $$(D_p, \rho_p, \alpha_p) \tag{2.0.2}$$ denote the corresponding local embedding problem, where $\bar G_p = \rho(D_p)$, $G_p = \alpha^{-1}(\bar G_p)$, and $\alpha_p, \rho_p$ are restrictions of $\alpha, \rho$.
In this paragraph we assume in (2.0.1) that $G$ is an $l$-group and the kernel $C$ has prime order; let $S_0$ be a finite set of primes of $K$ containing the infinite primes, the prime divisors of $l$, and the prime divisors of a set of ideals representing the ideal classes of $K$. More generally, when $G$ is a nilpotent group in Section 8, the set $S_0$ contains in addition the divisors of the order of $G$. It is known, cf. [3], that a solution to the global embedding problem (2.0.1) exists if and only if for every prime $p$ of $K$ there exists a solution to the local embedding problem (2.0.2). The local embedding problem is solvable if $\rho(I_p) = 1$, since $D_p/I_p \cong \hat{\mathbb{Z}}$ is a free group; the Scholz condition ensures solvability at the ramified primes. Let $\mathrm{Ram}(\rho) = \{p \text{ of } K \mid \rho(I_p) \neq 1\}$.

Definition 2.1 (cf. [3]). Given a number field $K$, an $l$-group $G$ and a positive integer $N$ such that $l^N$ is divisible by the exponent of $G$. Denote by $T$ a set of $l^N$-exceptional primes as defined in Section 4.
An epimorphism $\varphi : G_K \to G$ is called $l^N$-Scholz if (1) every prime $p \in \mathrm{Ram}(\varphi)$ lies outside $S_0$ and satisfies $N(p) \equiv 1 \pmod{l^N}$, (2) $\varphi(D_p) = \varphi(I_p)$ for every $p \in \mathrm{Ram}(\varphi)$, and (3) $\varphi(D_p) = \varphi(I_p)$ for every $p \in T$. The last condition is an example of local data of [3]. We will also say the extension $L/K$ is $l^N$-Scholz, where $L$ is the subfield of $\bar K$ fixed by $\ker(\varphi)$.
The definition of $l^N$-Scholz does not depend on the choice of prime of $\bar K$ above each $p$. Clearly if a homomorphism $\varphi$ is $l^N$-Scholz, then it is $l^k$-Scholz for all integers $k \le N$.
Existence of Solutions
Theorem 3.1 (Existence Theorem). Let $(G_K, \rho, \alpha)$ be a central embedding problem, with $\bar G = \rho(G_K)$ an $l$-group and $C = \ker(\alpha)$ cyclic of order $l^e$. Suppose $\rho$ is $l^N$-Scholz (the exponent of $G$ divides $l^N$) and $\zeta_l \notin K$. Then the embedding problem $$1 \to C \to G \xrightarrow{\ \alpha\ } \bar G \to 1, \qquad \rho : G_K \to \bar G \tag{3.0.3}$$ has a solution.
Proof: If $G$ is a split extension of $\bar G$, we may apply Proposition 5.3, so assume the extension is Frattini, i.e., $C$ is contained in the Frattini subgroup of $G$. We may break (3.0.3) into a sequence of $e$ embedding problems, each with kernel group of order $l$, which we may solve by Proposition 7.3 of [3] at the cost of one ramified prime at each step. We obtain an $l^N$-Scholz solution $\psi_0$ to (3.0.3) such that $|\mathrm{Ram}(\psi_0)| \le |\mathrm{Ram}(\rho)| + e$. In Sections 5-7 we will show that the embedding problem (3.0.3) has an $l^N$-Scholz solution at the cost of only one additional ramified prime (assuming $K$ has no ideal classes of order $l^2$ if $|C| > l$).
Exceptional Set of Primes
The key Lemma 4.2 was originally proved by the first author in a different way in her thesis [8]. The lemma below generalizes results of Gras ([4], Ch. II, Theorem 6.3.2) and Lemma 4.1, p. 361 of [11].
Since $\Delta$ is cyclic, by Herbrand theory the orders of the relevant Tate cohomology groups coincide. This completes the proof.
Let $K_S$ be the group of $S$-units of $K$, where $S$ contains the infinite primes of $K$. By Dirichlet's unit theorem, the $\mathbb{Z}$-rank of $K_S$ is $|S| - 1$.
Proof:
The corollary below will be used in Section 8. We include it here for convenience.
The diagram below contains the fields involved in these isomorphisms.
The first two isomorphisms follow from Lemma 4.2. To show the rightmost isomorphism, note that $M_l(\sqrt[l^N]{K_S})$

Lemma 4.4. For each $l \mid a$, assume that $\zeta_l \notin K$. Let $R_l$ denote the field $L_l(\sqrt[l^N]{K_S})$ and let $\sigma_l \in \mathrm{Gal}(R_l/L_l(\mu_{l^N}))$. Define $R = \prod_{l \mid a} R_l$ (the compositum). Then there exists $\sigma \in \mathrm{Gal}(R/K(\mu_a))$ such that $\sigma|_{R_l} = \sigma_l$ for all $l \mid a$.
Proof: By Lemma 4.3, each $\sigma_l$ extends to an element, say $\tilde\sigma_l$, of $\mathrm{Gal}(R_l M_l/M_l)$. The latter group is a subgroup of the $l$-group $\mathrm{Gal}(R_l M_l/K(\mu_a))$. Now observe that $\mathrm{Gal}(R/K(\mu_a)) \cong \prod_{l \mid a} \mathrm{Gal}(R_l M_l/K(\mu_a))$. Therefore we may define $\sigma \in \mathrm{Gal}(R/K(\mu_a))$ as $\sigma = \prod_{l \mid a} \tilde\sigma_l$.
For an abelian group $A$ and a prime number $l$, let $A_l$ denote the $l$-torsion subgroup of $A$. We have the following split exact sequence (e.g. p. 109 of [7]), where $E$ denotes the group of units of $K$ and the right-hand map sends $a \bmod K^{\times l}$ to the ideal class of $\mathfrak{a}$, where $(a) = \mathfrak{a}^l$. Similarly, let $w_1, \ldots, w_s$ be a $\mathbb{Z}$-basis of $E$ mod torsion. As in [3], choose ideles $\alpha_1, \ldots, \alpha_r \in J$ whose images are an $\mathbb{F}_l$-basis of the $l$-torsion subgroup $(J/K^\times U)_l$ of the ideal class group of $K$. Then for $j = 1, \ldots, r$ we may write $\alpha_j^l = \Delta(x_j)\epsilon_j$ with $x_j \in K^\times$ and $\epsilon_j \in U$. We define a governing field $\Omega_l$ (compare Chapter 5 of [4], or [3] for $N = 1$).
It follows from Lemma 4.2 that the Kummer extension satisfies
Define subfields of $\Omega_l$ as in [3]. Note that this property is independent of the primes above $p_i$ (resp. $q_j$), since $N_i$ (resp. $N_j'$) is a normal extension of $K$. For a prime ideal $p$ of $K$ unramified in a Galois extension $F/K$, $\mathrm{Frob}(p, F/K)$ denotes the conjugacy class in $\mathrm{Gal}(F/K)$ consisting of the Frobenius elements of all prime ideals of $F$ above $p$. Choose such primes accordingly. Further, if $a$ is the product of the primes dividing $|G|$, the latter group is isomorphic to $\mathrm{Gal}(\Omega_l(\mu_{a^N})/K(\mu_{a^N}))$ by Lemma 4.2.
By the Chebotarev density theorem, there exists an $l^N$-exceptional set of primes disjoint from any given set of primes of $K$ of density 0. It follows from Kummer theory that for primes $p_i, q_j \in T_l$ the unit $w_i$ is not an $l$-th power in $U_{p_i}$. We will therefore fix a set $T_l$ of $l^N$-exceptional primes, where $l^N$ is divisible by the exponent of the $l$-group $G$. From now on until Section 8 we will let $T$ denote $T_l$, as the prime $l$ is implicit.
Split Case
We begin with a lemma which generalizes Lemma 4.2 of [3]. If $K = \mathbb{Q}$ and $b$ is an integer greater than one, the lemma follows at once from the fact that there are infinitely many primes $q \equiv 1 \pmod b$, and we may take for $M$ the subfield of $\mathbb{Q}(\mu_q)$ of degree $b$.
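For concreteness (our illustration): with $b = 3$ and $q = 7 \equiv 1 \pmod 3$, the group $\mathrm{Gal}(\mathbb{Q}(\mu_7)/\mathbb{Q}) \cong \mathbb{Z}/6\mathbb{Z}$ has a unique subgroup of order 2, whose fixed field $M$ is a cyclic cubic extension of $\mathbb{Q}$, totally ramified at 7 and unramified everywhere else.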
Lemma 5.1. Given an integer $b > 1$ and a number field $K$, there exist infinitely many prime ideals $q$ of $K$ and cyclic extensions $M = M(q)$ of $K$ of degree $b$ such that $q$ is the unique ramified prime of $M/K$, $q$ is totally ramified, and $q$ does not divide $b$.
Proof:
Let $S$ be a finite set of primes of $K$ containing $S_0$ and the prime divisors of $b$, and let $\Omega = K(\sqrt[b]{K_S})$. By Chebotarev's theorem there exist infinitely many primes $q$ of $K$, $q \notin S$, such that $q$ splits completely in $\Omega/K$. For such $q$, $\Omega$ is contained in the completion $K_q$, and so $K_S \subseteq K_q^{\times b}$. By class field theory, cyclic extensions of $K$ are given by idele class characters. Since $J/K^\times \cong J_S/K_S$, we want to define an epimorphism $\chi : J_S/K_S \to \mathbb{Z}/b\mathbb{Z}$ which is trivial on $U_v$ for $v \in S$. By class field theory, $\chi$ corresponds to a cyclic, degree $b$ extension $M(q)/K$ in which $q$ is totally and tamely ramified and the other primes of $K$ are unramified.
Theorem 5.2. Let $A$ be a finite abelian group with $d$ generators. There exist infinitely many Galois extensions $N/K$ such that $\mathrm{Gal}(N/K) \cong A$ and exactly $d$ primes of $K$ ramify in $N$. Such $N$ is its own genus field relative to $K$.
Proof:
Write $A$ as a direct product of $d$ cyclic groups and apply Lemma 5.1 to each factor. The resulting extensions $M(q_i)$, $1 \le i \le d$, are linearly disjoint over $K$ by ramification considerations. Take $N$ to be the composite of the fields $M(q_i)$. Note that these $q_i$'s are not to be confused with the ones defined in Definition 4.5.
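For example (our illustration): for $A = \mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$, $d(A) = 2$, so Lemma 5.1 yields $M(q_1)/K$ cyclic of degree 4, totally ramified at $q_1$ only, and $M(q_2)/K$ cyclic of degree 2, totally ramified at $q_2$ only. Since $q_1$ is totally ramified in $M(q_1)$, every nontrivial subextension of $M(q_1)$ is ramified at $q_1$, while $M(q_2)$ is unramified at $q_1$; hence $M(q_1) \cap M(q_2) = K$, and $N = M(q_1)M(q_2)$ realizes $A$ over $K$ with exactly the two ramified primes $q_1, q_2$.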
Proposition 5.3. Suppose the central embedding problem $(G_K, \rho, \alpha)$ is split, i.e. $G \cong \bar G \times C$, where the kernel $C$ of $\alpha : G \to \bar G$ is cyclic. There is an $l^N$-Scholz solution $\varphi$ to the embedding problem $(G_K, \rho, \alpha)$ and a prime $q$ not in $S = \mathrm{Ram}(\rho) \cup S_0 \cup T$ such that $\mathrm{Ram}(\varphi) \subseteq \mathrm{Ram}(\rho) \cup \{q\}$.
Proof:
We apply the argument in Lemma 5.1, with $\Omega$ enlarged to $L(\mu_{l^N}, \sqrt[b]{K_S})$, where $L$ is the subfield of $\bar K$ fixed by $\ker(\rho)$, to obtain $q$ and an idele class character $\chi$ of order $b = |C|$; $q$ splits completely in $\Omega/K$. By the reciprocity law, $\chi$ corresponds to an epimorphism $\eta : G_K \to C$. Then $\varphi = (\rho, \eta) : G_K \to \bar G \times C$, $\sigma \mapsto (\rho(\sigma), \eta(\sigma))$, is a proper solution to the embedding problem. It remains to check that $\varphi$ is $l^N$-Scholz, given that $\rho$ is $l^N$-Scholz. If $v = q$, then $q$ splits completely in $K(\mu_{l^N})/K$, hence $N(q) \equiv 1 \pmod{l^N}$. Since $q$ splits completely in $L/K$, we have $\rho(D_q) = 1$. As $\eta(I_q) = C$ for $\eta : G_K \to C$, we have $\eta(D_q) = \eta(I_q)$. Thus $\varphi(D_q) = \varphi(I_q)$. We conclude that $\varphi = (\rho, \eta)$ is an $l^N$-Scholz solution with one additional ramified prime.
Removing Ramification
Lemma 6.1. Let $K$ be a number field not containing $\zeta_l$, $N \ge e \ge 1$. Given a finite set $S$ of primes disjoint from an $l^N$-exceptional set $T$, and characters $\chi_v : U_v \to \mu_{l^e}$ for $v \in S$, at least one of which is onto. Assume $K$ has no ideal classes of order $l^2$ when $e > 1$. There exists an idele class character $\chi : J/K^\times \to \mu_{l^e}$ such that $\chi|_{U_v} = \chi_v$ for all $v \in S$ and $\chi|_{U_v} = 1$ for all $v \notin S \cup T$.
Proof:
It suffices to prove the result when $S = \{v_0\}$ and then take the product of the resulting characters. Let $I = T \cup \{v_0\}$.
Step 1: Defining $f$ on $UK^\times/K^\times$. We define an epimorphism $f : U \to \mu_{l^e}$ of the form $f = \prod_{v \in I} \chi_v$. The character $\chi_{v_0}$ is given and the characters $\chi_v$, $v \in T$, are to be defined suitably. Each character $\chi_v$ is trivial for $v \notin I$. By the definition of an $l^N$-exceptional set of primes, the image of each unit $w_i$ generates $U_{p_i}/U_{p_i}^{l^e}$, $p_i \in T$; hence we can define $\chi_{p_i} : U_{p_i} \to \mu_{l^e}$, $1 \le i \le s$, to satisfy $\chi_{p_i}(w_i)\chi_{v_0}(w_i) = 1$. Similarly, $\epsilon_{j,q_j}$ generates $U_{q_j}/U_{q_j}^{l}$ (hence also modulo $U_{q_j}^{l^e}$), and we can define $\chi_{q_j} : U_{q_j} \to \mu_{l^e}$, $1 \le j \le r$, to satisfy $\chi_{q_j}(\epsilon_{j,q_j})\chi_{v_0}(\epsilon_{j,v_0}) = 1$.
Next we establish the "off-diagonal" vanishing of $\prod_{v \in I} \chi_v$. Recall that $\epsilon_{j,v} \in U_v^l$ for $v \in T$, $v \neq q_j$, for each $j$, and $w_i \in U_v^{l^e}$ for $v \in T$, $v \neq p_i$, for each $i$. It follows that $\prod_{v \in I} \chi_v$ is trivial on the image of $E \oplus (\oplus_{j=1}^r \epsilon_j)$ in $\prod_{v \in I} U_v/U_v^{l^e}$. Letting $\Delta : K^\times \to J$ be the diagonal embedding, we have in particular $f(\Delta(E)) = 1$, so $f$ is defined on $U/\Delta(E)$, which we write as $U/E \cong UK^\times/K^\times$.
Note that if $l$ does not divide the class number of $K$, then $f$ already provides the desired idele class character, since the $l$-part of the ideal class group $J/K^\times U$ will be trivial. Otherwise we must extend $f$ from $K^\times U/K^\times$ to $J/K^\times$.
Step 2: Character of order $l$. Define $f_1 : U \to \mu_l$ by $f_1 = f^{l^{e-1}}$. By the techniques of the proof of Lemma 6.1 of [3], $f_1$ extends to an idele class character $\chi_1$ of order $l$ with $\chi_1|_{U_v} = \chi_v^{l^{e-1}}$ for $v \in I$ and $\chi_1|_{U_v} = 1$ if $v \notin I$. This follows from the trivial fact that an $l^e$-exceptional set $T$ is $l$-exceptional.
Step 3: Extending to a character of order $l^e$. First we prove the following claim about finite abelian $l$-groups.

Claim 6.2. Let $\Gamma$ be a finite abelian $l$-group and let $\gamma \subseteq \Gamma$ be a cyclic subgroup of order $l^e$. If $\Gamma/\gamma^l$ has exponent $l$, then $\gamma$ is a direct summand of $\Gamma$.
Proof: The exponent of $\Gamma$ is $l^e$, since for any element $g \in \Gamma$ we have $g^l \in \gamma^l$ and hence $g^{l^e} = 1$. Therefore $\gamma$ is generated by an element of maximal order, and hence is a direct summand, as desired.
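Two small examples (ours) illustrate both the claim and the necessity of its hypothesis: for $\Gamma = \mathbb{Z}/4 \times \mathbb{Z}/2$ and $\gamma = \mathbb{Z}/4 \times \{0\}$ (so $l = 2$, $e = 2$), $\Gamma/\gamma^2 \cong \mathbb{Z}/2 \times \mathbb{Z}/2$ has exponent 2 and indeed $\gamma$ is a direct summand; whereas for $\Gamma = \mathbb{Z}/4$ and $\gamma = 2\mathbb{Z}/4 \cong \mathbb{Z}/2$ (so $e = 1$), $\Gamma/\gamma^2 = \Gamma$ has exponent $4 > l$, the hypothesis fails, and $\gamma$ is not a direct summand.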
We have the following diagram with exact rows and columns.

Theorem 6.3 (Removing Ramification). Suppose $K$ has no ideal classes of order $l^2$ and does not contain $\zeta_l$. If the Frattini embedding problem $(G_K, \rho, \alpha)$ has a solution $\psi_0$, then it has a solution $\psi$ with $\mathrm{Ram}(\psi) \subseteq \mathrm{Ram}(\rho) \cup T$.
Proof:
The proof is similar to Lemma 6.2 of [3], except that we twist $\psi_0$ by a character of order $l^e$. Let $S = \mathrm{Ram}(\psi_0) \setminus (\mathrm{Ram}(\rho) \cup T)$. For $v \in S$ we define $\chi_v := \psi_0|_{I_v}$, viewed as $\chi_v : U_v \to \mu_{l^e}$ by reciprocity. By Lemma 6.1 there exists an idele class character $\chi$ of order $l^e$ with the stated local properties. We identify $\chi$ with $\eta : G_K \to C$ via reciprocity and set $\psi = \psi_0 \eta^{-1}$. Since the embedding problem $(G_K, \rho, \alpha)$ is Frattini, $\psi$ is surjective.

Remark 6.4. Note that in case $e = 1$ the hypothesis on the order of ideal classes in the theorem above can be dropped.
Finding an m-Scholz solution
We generalize Lemma 7.1 of [3] to prime powers.

Lemma 7.1. Given integers $N \ge e \ge 1$, a Galois $l$-extension $L/K$, and characters $\chi_v : K_v^\times \to \mu_{l^e}$ for all $v$ in a finite set $S \supseteq S_0$. Assume that $K$ does not contain $\zeta_l$. There exists a prime ideal $q$ of $K$ outside $S$ and a character $\chi : J_K/K^\times \to \mu_{l^e}$ such that conditions (1)-(4) hold: (1) $\chi|_{K_v^\times} = \chi_v$ for all $v \in S$; (2) $\chi(U_q) = \mu_{l^e}$; (3) $\chi$ is unramified outside $S \cup \{q\}$; (4) $q$ splits completely in $L(\mu_{l^N})/K$.
Proof:
Since $S_0$ is chosen large enough, we have $J_S/K_S \cong J/K^{\times}$. It therefore suffices to define a character $g : J_S \to \mu_{l^e}$ of the form $g = \big(\prod_{v \in S} \chi_v\big)\chi_q$ for some prime $q$ and some epimorphism $\chi_q : U_q \to \mu_{l^e}$, chosen so that $q$ splits completely in $L(\mu_{l^N})/K$ and $g(K_S) = \{1\}$.
We define a character $h : K_S \to \mu_{l^e}$ as the composition
$$K_S \xrightarrow{\;j\;} \prod_{v \in S} K_v^{\times} \xrightarrow{\;\prod_{v \in S} \chi_v\;} \mu_{l^e},$$
where the left map $j$ is the embedding of $K_S$ in $\prod_{v \in S} K_v^{\times}$ and the right map is $\prod_{v \in S} \chi_v$. Thus for $x \in K_S$, $g(x) = h(x)\chi_q(x)$, so $\chi_q$ must be chosen to make $g(x) = 1$ for all $x \in K_S$.
Case $h(K_S) = \{1\}$. If $q$ satisfies $K_S \subset U_q^{l^e}$, then for any character $\chi_q : U_q \to \mu_{l^e}$ we have $\chi_q(K_S) = \{1\}$. By Chebotarev's theorem, there exists a prime ideal $q \notin S$ of $K$ which splits completely in $\Omega := L(\mu_{l^N}, \sqrt[l^e]{K_S})$. Note that $q$ splitting completely in $K(\mu_{l^N})/K$ implies that the absolute norm satisfies $N^K_{\mathbb{Q}}(q) \equiv 1 \pmod{l^N}$. Then $K_S \subseteq U_q^{l^e}$ by Kummer theory.

Case $h(K_S) \neq \{1\}$. The image $h(K_S)$ is cyclic of order $l^k$, $1 \leq k \leq e$. Thus there exists $x_1 \in K_S$ with $h(x_1)$ of order $l^k$. $K_S/K_S^{l^k}$ may be generated by $\{x_1, x_2, \ldots, x_u\}$, with $h(x_i) = 1$ for $i > 1$. By Burnside's basis theorem, $\{x_1, \ldots, x_u\}$ also generate $K_S/K_S^{l^e}$. We want to pick a prime $q \nmid l$, $q \notin S$, such that:
• $q$ splits completely in $L(\mu_{l^N})/K$.
The field $\Omega_k$ is a normal extension of $K$. By Lemma 4.2, $\operatorname{Gal}(\Omega/L(\mu_{l^N})) \cong (\mathbb{Z}/l^e\mathbb{Z})^u$ and $\operatorname{Gal}(\Omega/\Omega_k)$ is cyclic of order $l^k$. By Chebotarev's theorem we may choose $q \notin S$ such that $\operatorname{Frob}(q, \Omega/K)$ generates $\operatorname{Gal}(\Omega/\Omega_k)$; in particular, $q$ splits completely in $\Omega_k/K$. This guarantees that the above three conditions on $q$ are satisfied.
Having chosen $q$, we define $\chi_q$, a character of order $l^e$. Choose $y \in U_q$ such that $y^{l^{e-k}} = x_1$ in $U_q$. We want $\chi_q(y)$ of order $l^e$; then $\chi_q(x_1)$ has order $l^k$. If $\beta = h(x_1)$ is an element of $\mu_{l^e}$ of order $l^k$, then $\beta = \alpha^{l^{e-k}}$, where $\alpha$ is a generator of $\mu_{l^e}$. Set $\chi_q(y) = \alpha^{-1}$. Then $\chi_q(x_1) = \beta^{-1}$.
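Spelling out the order computation (a routine check, added here for readability): since $y^{l^{e-k}} = x_1$ and $\chi_q(y) = \alpha^{-1}$,
$$\chi_q(x_1) = \chi_q(y)^{l^{e-k}} = \alpha^{-l^{e-k}} = \big(\alpha^{l^{e-k}}\big)^{-1} = \beta^{-1},$$
so $\chi_q(x_1)h(x_1) = \beta^{-1}\beta = 1$, while $\chi_q(y) = \alpha^{-1}$ indeed has order $l^e$.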
So we have chosen $\chi_q$ so that $\chi_q(x_1)h(x_1) = 1$. Thus $g(K_S) = 1$ and we have proved the lemma for prime power order characters.

Proof (of Proposition 7.2): Since $\rho$ is $l^N$-Scholz and $\operatorname{Ram}(\psi) \cup T = \operatorname{Ram}(\rho) \cup T$, after adjusting the lift $\sigma_v$ we may assume $\psi(\sigma_v) \in C$ (see pg. 36 of [3]). Then let $\eta_v$ be the unique homomorphism $D_v \to C$ [defining condition lost in extraction]. We have defined $\eta_v$, $v \in S$; now we apply Lemma 7.1 to get a map $\eta : G_K \to C$ and a prime $q \notin S$ such that $\eta|_{D_v} = \eta_v$ for $v \in S$, $\eta(I_q) = C$, and $\eta$ unramified outside $S \cup \{q\}$.

Step 2. We claim $\varphi$ is unramified outside $\operatorname{Ram}(\rho) \cup T \cup \{q\}$. The result follows.
Step 3. We claim $\varphi$ is $l^N$-Scholz. Since the extension is Frattini, any solution is proper. The check of the three points of Definition 2.1 is similar to pg. 37 of [3], except for the proof that $\varphi(D_q) = \varphi(I_q)$. For that, note that $q$ is chosen to split completely in the fixed field of $\ker(\psi)$, so $\psi(D_q) = \{1\}$. Putting this together with $\eta(I_q) = C$, we conclude that $\varphi(D_q) = \varphi(I_q)$.
Putting together the Existence Theorem, Proposition 5.3, Theorem 6.3, and Proposition 7.2, we have the next result. Proposition 7.3. Suppose $\zeta_l \notin K$ and $K$ has no ideal classes of order $l^2$. Given a central embedding problem $(G_K, \rho, \alpha)$ with $G$ an $l$-group, $C$ cyclic, and $\rho$ $l^N$-Scholz. If the extension is split or of Frattini type, then there exists an $l^N$-Scholz solution $\varphi$ and a prime $q$ of $K$ such that $\operatorname{Ram}(\varphi) \subseteq \operatorname{Ram}(\rho) \cup T \cup \{q\}$.

Define the lower central series $\{G_i\}$ of $G$ by $G_1 = G$ and $G_{i+1} := [G_i, G]$ (the commutator subgroup), $i \geq 1$. If $G$ is nilpotent, the smallest positive integer $c$ such that $G_{c+1} = \{1\}$ is called the nilpotency class of $G$. Our main result below generalizes Proposition 2.5 of [10], which considers only the case $K = \mathbb{Q}$, and improves the result of Theorem 7.4 of [3] when the kernel $C$ of the embedding problem is not of prime order.
Theorem 7.4. Given a number field $K$, a prime $l$, and an $l$-group $G$ of nilpotency class $c$. If $G$ is nonabelian, suppose $\zeta_l \notin K$ and $K$ has no ideal classes of order $l^2$. Then [bound on $\operatorname{minram}_K(G)$ lost in extraction].

Remark 7.5. 1. This bound may be achieved by a tamely ramified extension $L/K$ with $G \cong \operatorname{Gal}(L/K)$. 2. If $G$ is of nilpotency class 2, [display lost in extraction]. 3. If we allow $K$ to have ideal classes of order $l^2$, then the bound has the form of [3]: $\operatorname{minram}_K(G) \leq g + |T|$, where $|G| = l^g$.
Proof:
As in Proposition 2.5 of [10], we use induction on $i$ for a central embedding problem with kernel $G_i/G_{i+1}$. For $i = 1$, by Proposition 5.3 the embedding problem has an $l^N$-Scholz solution with at most $d(G^{\mathrm{ab}}) = d(G)$ ramified primes. For $i \geq 1$, each extension is of Frattini type, and we may break the $i$-th problem up into $d(G_i/G_{i+1})$ cyclic Frattini problems. As shown in Proposition 7.3, each such problem may be solved at the cost of one more ramified prime. And since we can make the solution $l^N$-Scholz at each stage, it is guaranteed that we may solve the next embedding problem.
Ramification bound on nilpotent groups
We use the notation that $a$ is the product of the primes dividing the order of $G$, and the integer $N$ satisfies: $a^N$ is a multiple of the exponent of $G$. The purpose of this section is to extend Theorem 7.4 to groups $G = \prod_l G_l$ that are the direct product of their Sylow $l$-subgroups $G_l$, that is, nilpotent groups. Assume $\zeta_l \notin K$ for all $l$ dividing $|G|$. We will obtain $G$ by a sequence of central embedding extensions with cyclic kernel; each of these extensions is a "product" of central extensions of $l$-groups as in sections 6 and 7. The nilpotent case was initially handled in the first author's thesis [8]. In this section we obtain an improved bound on $\operatorname{minram}_K(G)$ for fields $K$ whose ideal class groups contain no elements of order $l^2$, where $l \mid |G|$.
The first step is to define a set $T$ (as small as possible) of primes of $K$ that contains an $l^N$-exceptional set $T_l$ of primes for each $l$ dividing $|G|$. Let $\Omega_l$ be as in 4.0.4 and let $\bar{\Omega} = \prod_{l \mid a} \Omega_l$. Since $\operatorname{Gal}(\Omega_l(\mu_{a^N})/K(\mu_{a^N}))$ is an $l$-group, we have
$$\operatorname{Gal}(\bar{\Omega}/K(\mu_{a^N})) \cong \prod_{l \mid a} \operatorname{Gal}(\Omega_l(\mu_{a^N})/K(\mu_{a^N})).$$
Using the isomorphism of 8.0.5 we define elements $\sigma_i, \tau_j$ of $\operatorname{Gal}(\bar{\Omega}/K(\mu_{a^N}))$. Here $r = \max_{l \mid a} r_l$ and we set $\tau_j(l) = 1$ if $r_l < j \leq r$. By Chebotarev's theorem, there is in $K$ a set of $s + r$ prime ideals $T = \{p_i, q_j : 1 \leq i \leq s,\ 1 \leq j \leq r\}$, disjoint from any given finite set, such that $\operatorname{Frob}(p_i, \bar{\Omega}/K) = C(\operatorname{Gal}(\bar{\Omega}/K), \sigma_i)$, $1 \leq i \leq s$, and $\operatorname{Frob}(q_j, \bar{\Omega}/K) = C(\operatorname{Gal}(\bar{\Omega}/K), \tau_j)$, $1 \leq j \leq r$.
Here $C(\operatorname{Gal}(\bar{\Omega}/K), \gamma)$ denotes the conjugacy class of $\gamma$ in $\operatorname{Gal}(\bar{\Omega}/K)$. By the properties of the Frobenius, the restriction to $\Omega_l$ of $\sigma_i$ (resp. $\tau_j$) is $\sigma_i(l)$ (resp. $\tau_j(l)$) for each $l$ dividing $a$.
Lemma 8.1. We continue the notation of Corollary 4.3 and Lemma 4.2. For each $l \mid a$, let $L_l$ be an $l^N$-Scholz $l$-extension of $K$ fixed by the kernel of the homomorphism $\rho_l : G_K \to \bar{G}_l$, and let $(G_K, \rho_l, \alpha_l)$ be a Frattini central embedding problem as in (2.0.1). Assume for all $l \mid a$ that $\zeta_l$ is not in $K$ and the exponent of $G_l$ divides $l^N$. When $|\ker(\alpha_l)| > l$, assume additionally that no ideal class of $K$ has order $l^2$. Then for each $l \mid a$ there exists a solution $\varphi_l$ with $\operatorname{Ram}(\varphi_l) \subseteq \operatorname{Ram}(\rho_l) \cup T$.
Proof:
The existence of any solution is Lemma 3.1. Our set of primes $T$ contains $l^N$-exceptional subsets $T_l$; hence we may apply Theorem 6.3 to get a solution $\varphi_l$ such that $\operatorname{Ram}(\varphi_l) \subseteq \operatorname{Ram}(\rho_l) \cup T$ for all primes $l \mid a$.
In the next lemma we apply Corollary 4.4 to find a single prime q that we use to lift local characters indexed by l|a.
Lemma 8.2. Let $S$ be a finite set of primes of $K$ that contains $S_0$. For each prime $l \mid a$, we are given integers $e_l$, $N \geq e_l \geq 1$, a Galois $l$-extension $L_l/K$, and characters $\chi_{v,l} : K_v^{\times} \to \mu_{l^{e_l}}$ for all $v \in S$. Assume that $K$ does not contain $\zeta_l$ for each $l \mid a$. There exists a prime ideal $q$ of $K$ outside $S$ and idele class characters $\chi_l : J_K/K^{\times} \to \mu_{l^{e_l}}$ such that conditions (1)-(4) hold for all $l \mid a$:
• $q$ splits completely in $L_l(\mu_{l^N})/K$;
• $\chi_l|_{K_v^{\times}} = \chi_{v,l}$ for all $v \in S$;
• $\chi_l(U_q) = \mu_{l^{e_l}}$.
Proof: Let $R_l$ denote the field $L_l(\sqrt[l^N]{K_S})$, let $R = \prod_{l \mid a} R_l$ be the compositum, and set $\Gamma_l = \operatorname{Gal}(R_l/K)$ and $\Gamma = \operatorname{Gal}(R/K)$.
In Lemma 7.1, for each $l \mid a$ we have defined a special prime $q_l$ (not to be confused with the $q_i$'s defined in Definition 4.5). Define $\sigma_l \in \Gamma_l$ by $\operatorname{Frob}(q_l, R_l/K) = C(\Gamma_l, \sigma_l)$. Next we show that a single prime $q$ can be chosen. By Lemma 4.4 there exists an element $\sigma \in \Gamma$ whose restriction to $R_l$ equals $\sigma_l$ for all $l \mid a$. By Chebotarev's theorem, there exists a prime $q$ of $K$ outside $S$ such that $\operatorname{Frob}(q, R/K) = C(\Gamma, \sigma)$. By restriction, $\operatorname{Frob}(q, R_l/K) = C(\Gamma_l, \sigma_l)$ for all $l \mid a$, and conditions (1)-(4) of 8.2 are satisfied.
Remark 8.3. The method by which we replaced $\{q_l : l \mid a\}$ by $q$ is similar to that where we replaced $\{T_l : l \mid a\}$ by $T$.

Theorem 8.4. Given a number field $K$ and a finite nilpotent group $G$ of class $c$. If $G$ is nonabelian, suppose $\gcd(|G|, |\mu_K|) = 1$ and assume, for all primes $l$ dividing $|G|$, that the ideal class group of $K$ has no elements of order $l^2$. Then [bound lost in extraction]. Here $s$ is the $\mathbb{Z}$-rank of the units of $K$ and $r = \max_{l \mid |G|}\{\dim \operatorname{Cl}(K)_l\}$.
Proof:
By Corollary 5.2 it remains to prove the result for nonabelian groups $G$. Since $G$ is nilpotent, for each $l$ dividing $|G|$ we may apply Propositions 7.2 and 7.3 and Theorem 7.4 inductively. By Lemma 8.2 there exists a single prime $q$ to which Proposition 7.2 may be applied, and the conclusion follows.
Schur Extensions
In this section we use Fröhlich's result on realizing the Schur multiplicator without additional ramification to verify Boston's conjecture [1] for a certain class of $l$-groups given by central extensions of the form
$$1 \to M(\Gamma) \to G \to \Gamma \to 1.$$
In addition, Theorem 9.7 confirms Boston's conjecture for a particular $G$ of exponent $l$, and includes the determination of the central class field of any finite abelian extension $L/\mathbb{Q}$ of exponent $l$ that is its own genus field.
The group M(Γ) is the Schur multiplicator of a profinite group Γ as defined in [2]. | 2010-07-22T16:25:07.000Z | 2010-07-22T00:00:00.000 | {
"year": 2011,
"sha1": "8b332eef78dd827378d54dc7d31505a1e1e46748",
"oa_license": null,
"oa_url": "http://msp.org/pjm/2011/253-1/pjm-v253-n1-p08-s.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "8b332eef78dd827378d54dc7d31505a1e1e46748",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
41947052 | pes2o/s2orc | v3-fos-license | Atomically thin optical lenses and gratings
Two-dimensional (2D) materials have emerged as promising candidates for miniaturized optoelectronic devices due to their strong inelastic interactions with light. On the other hand, a miniaturized optical system also requires strong elastic light–matter interactions to control the flow of light. Here we report that a single-layer molybdenum disulfide (MoS2) has a giant optical path length (OPL), around one order of magnitude larger than that from a single-layer of graphene. Using such giant OPL to engineer the phase front of optical beams we have demonstrated, to the best of our knowledge, the world’s thinnest optical lens consisting of a few layers of MoS2 less than 6.3 nm thick. By taking advantage of the giant elastic scattering efficiency in ultra-thin high-index 2D materials, we also demonstrated high-efficiency gratings based on a single- or few-layers of MoS2. The capability of manipulating the flow of light in 2D materials opens an exciting avenue towards unprecedented miniaturization of optical components and the integration of advanced optical functionalities. More importantly, the unique and large tunability of the refractive index by electric field in layered MoS2 will enable various applications in electrically tunable atomically thin optical components, such as micro-lenses with electrically tunable focal lengths, electrical tunable phase shifters with ultra-high accuracy, which cannot be realized by conventional bulk solids.
INTRODUCTION
Interactions between light and matter can be divided into two categories: inelastic and elastic 1 . An inelastic interaction involves energy transfer between photons and electrons or phonons. In contrast, elastic interactions do not involve energy transfer and are responsible for controlling the propagation of light. Optical components, such as resonant cavities, waveguides, lenses, gratings, and, more recently, optical meta-materials 2 and photonic crystals 3 , all rely on strong elastic interactions between light and matter to achieve sophisticated control of the flow of light. Strong elastic interactions rely on significant changes of the amplitude and phase of the light accumulated over a long optical path; hence, for very thin materials, such as a two-dimensional (2D) graphene sheet, the interaction is generally very small 4 . Considerable effort has been devoted to this issue, but success has been achieved only in the mid- to far-infrared, where the plasmonic resonance in graphene can enhance the elastic optical response [5][6][7] . It remains a great challenge to manipulate the flow of light using atomically thin 2D materials in the important visible and near-infrared spectral regions, where 2D materials have their most interesting optoelectronic properties [8][9][10][11][12][13][14][15][16][17][18][19] . Rather surprisingly, as we will show later, the strength of the elastic interaction in a thin 2D material increases dramatically with increasing refractive index because of the unique geometry associated with an ultra-thin film. Such favorable scaling makes high-index transition-metal dichalcogenide (TMD) 2D semiconductors 11,[20][21][22] , such as molybdenum disulfide (MoS 2 ), particularly attractive for strong elastic light-matter interactions.
Device fabrication and characterization
For the phase-shifting interferometry (PSI) measurements, single- and few-layer TMD semiconductors and graphene were deposited onto a SiO 2 /Si substrate (275 nm thermal SiO 2 ) by mechanical exfoliation using 3M scotch tape. All Raman and photoluminescence (PL) measurements were conducted with a Horiba Jobin Yvon T64000 micro-Raman/PL system, with a 532 nm Nd:YAG green laser for excitation. All the optical path length (OPL) characterizations were obtained using a Veeco NT9100 phase-shifting interferometer. The atomically thin micro-lenses and gratings were fabricated in a FEI Helios 600 NanoLab focused ion beam (FIB) system (gallium ion source) using pre-calibrated dosage, optimized beam voltage (30 kV) and beam current (9.7 pA). The gratings and micro-lenses were characterized using a green laser with a wavelength of 532 nm.
Numerical simulation
Rigorous coupled-wave analysis (RCWA) was used to calculate the phase delay and grating efficiency. The method numerically solves Maxwell's equations in multiple layers of structured materials by expanding the field in Fourier space. The finite element method was used to calculate the optical scattering cross-section of the nano ribbons.
RESULTS AND DISCUSSION
Refractive optical components rely on the OPL to modify the phase front of an optical beam. The OPL is directly related to the geometrical length of the light path. As a result, it is normally expected that the OPL of a monolayer of a 2D material would be too small to have a significant impact on the phase front because the layer is so thin. However, here we measured a giant OPL of 38 nm from a single-layer MoS 2 , which is more than 50 times larger than its physical thickness of 0.67 nm and around one order of magnitude larger than the measured OPL of a single-layer graphene that was found to be only 4.4 nm (Figure 1).
In our experiments, single- or few-layer MoS 2 flakes were transferred onto a silicon wafer with 275 nm of surface thermal oxide by mechanical exfoliation 4,7 . The flakes were first identified by their optical contrast in an optical microscope. Regions with different colors corresponded to MoS 2 flakes with different thicknesses (Figure 1a). Due to their high refractive index, these atomically thin MoS 2 layers have significant and layer-dependent OPL values, and this enables the layers to be easily identified by phase-shifting interferometry (PSI) (Figure 1b and 1c). PSI is capable of measuring the vertical OPL to an accuracy of around 0.1 nm, by analyzing the digitized interference pattern obtained during a well-controlled phase shift (Supplementary Fig. S1 and S2). The measured OPL value of the MoS 2 flake on a SiO 2 substrate at 535 nm was determined by
$$\mathrm{OPL}_{\mathrm{MoS_2}} = \frac{\lambda}{4\pi}\,(\varphi_{\mathrm{MoS_2}} - \varphi_{\mathrm{SiO_2}}),$$
where $\lambda$ is the wavelength of the light source, and $\varphi_{\mathrm{MoS_2}}$ and $\varphi_{\mathrm{SiO_2}}$ are the PSI-measured phase shifts of the light reflected from the MoS 2 flake and the SiO 2 substrate (Figure 1d). This giant OPL is created by relatively strong multiple reflections at the air-MoS 2 and MoS 2 -SiO 2 interfaces. We consider a simple interface between air and SiO 2 , each occupying a half-infinite space. A layer of 2D material with a real refractive index n is placed in between the two media. The high impedance mismatch at these interfaces leads to large reflection coefficients, which cause the strong multiple reflections of light in the 2D material (Figure 2a). The amplitude of the reflected light is the summation of the multiple reflections off the interfaces of the thin high-index layer, $R_i$, where $i$ indicates the number of round trips in the 2D material. As the index increases, so does the reflectivity of the interfaces, which increases the effective number of transits of the light through the high-index layer and thus the OPL of the reflected light (Figure 2a). We verify this intuition with numerical calculation, as shown by the dashed line in Figure 2c. The magnitude of the OPL difference with and without the 2D material on SiO 2 (Supplementary Fig. S7) increases rapidly with increasing n. The OPL is low for low-index 2D materials, where the small reflection coefficients cause the $R_i$ to be small. This situation is illustrated schematically in Figure 2b. Additionally, in the experiment, we used a silicon substrate with a layer of 275 nm thermal SiO 2 on its surface, which forms a weak Fabry-Perot resonance. As a result of this weak resonant enhancement, the OPL is further enhanced by a factor of around 1.5, as shown by the solid line in Figure 2c. Figure 2c also shows the OPL for a few other materials. The OPL of high-index 2D materials, such as MoS 2 , is remarkably larger than that of SiO 2 , graphene, Au or Si. The wavelength used for these calculations was 535 nm. The refractive indices used for single-layer MoS 2 23 , silicon, and graphene 24 were taken from the literature. In order to approximate our simulation to the actual condition, for 2L MoS 2 the refractive index in our simulation is assumed to be 4.4 + 0.6i (the index for 1L MoS 2 ); for 3L and higher layer numbers, the refractive index is assumed to be 5.2 + 1.1i (the index for bulk MoS 2 ) 23 . In addition, it should be noted that the giant OPL is not a narrow-band effect. The calculated OPL for 1L MoS 2 is above 20 nm at wavelengths ranging from 450 nm to 560 nm (Supplementary Fig. S8). The spectral position for the highest OPL can be adjusted by changing the thickness of the SiO 2 .
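The multiple-reflection picture can be checked numerically with the standard Airy (thin-film) formula at normal incidence. The sketch below is our own illustration, not the paper's code: it uses the monolayer thickness and 1L MoS 2 index quoted above, takes literature-style index values for graphene and SiO 2 as assumptions, and treats the substrate as semi-infinite SiO 2 . It therefore reproduces the strong growth of OPL with refractive index rather than the exact measured 38 nm, which also includes the 275 nm SiO 2 /Si cavity enhancement.

import numpy as np

def reflection_coeff(n_film, d, wavelength, n_in=1.0, n_sub=1.46):
    # Airy formula for a single thin film between air and a semi-infinite
    # substrate at normal incidence:
    #   r = (r12 + r23*e^{2i*beta}) / (1 + r12*r23*e^{2i*beta})
    r12 = (n_in - n_film) / (n_in + n_film)
    r23 = (n_film - n_sub) / (n_film + n_sub)
    phase = np.exp(2j * 2 * np.pi * n_film * d / wavelength)
    return (r12 + r23 * phase) / (1 + r12 * r23 * phase)

wavelength, d = 535e-9, 0.67e-9                    # wavelength and monolayer thickness (m)
r_ref = reflection_coeff(1.46, 0.0, wavelength)    # bare SiO2 reference surface

# Assumed indices at ~535 nm: SiO2, graphene (literature estimate), 1L MoS2 (from the text).
# The sub-nm geometric height offset between the two reference planes is neglected.
for name, n in [("SiO2", 1.46), ("graphene", 2.6 + 1.3j), ("MoS2", 4.4 + 0.6j)]:
    r = reflection_coeff(n, d, wavelength)
    dphi = np.angle(r) - np.angle(r_ref)
    opl_nm = abs(wavelength * dphi / (4 * np.pi)) * 1e9   # PSI phase-to-OPL convention
    print(f"{name:>8}: |OPL| ~ {opl_nm:.1f} nm")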
Even more remarkably, the OPL of single-, bi-, triple-, and quadruple-layer MoS 2 scales almost linearly with the number of layers, offering the exciting opportunity of controlling the OPL using the number of layers of MoS 2 . When the layer thickness increases by 1 nm, the OPL increases by over 50 nm. Such a rapid change of OPL with thickness allows us to control the phase front of an optical beam very effectively using only an atomically thin structure. The theoretical and numerical predictions (Figure 2d) were well supported by the experimental data as shown in Figure 1d.
Next, we demonstrate phase-front engineering by fabricating the world's thinnest lens based on a few atomic layers of MoS 2 (Figure 3). We started with a flake of uniform 9L MoS 2 (6.28 nm in thickness, Supplementary Fig. S9) and then used FIB to mill a predesigned bowl-shaped structure (20 μm in diameter) into the flake (Figure 3a and 3b). The gradual change of MoS 2 thickness, from the center to the edge, led to a continuous and curved OPL profile for an incident beam, and this served as an atomically thin (reflective) concave micro-lens (Figure 3c). Based on the measured OPL profile, the focal length f of this MoS 2 micro-lens was calculated to be -248 μm (Supplementary Fig. S10). In order to realize the precise design for this MoS 2 micro-lens, we used the statistical calibration curve between the OPL values of MoS 2 flakes and their layer numbers (Figure 3d). All the OPL values were measured by PSI and the layer numbers were confirmed by AFM. The OPL of MoS 2 increased almost linearly with increasing layer number when the layer number was less than five. We noticed a nonlinear response of OPL versus layer number when the layer number is more than five; this could be attributed to the fact that single-pass absorption increases in thicker MoS 2 layers, so the amplitude of the light decreases faster in one round trip. When the amplitude of the light becomes very small after several round trips, its contribution to the OPL becomes negligible.
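The focal-length estimate from a measured OPL map can be reproduced in outline by fitting the paraxial reflective-lens profile OPL(r) = OPL(0) - r^2/(2f) to the data. The sketch below is our own construction with synthetic stand-in data (the real input would be the PSI-measured profile); only the 20 μm aperture and the -248 μm focal length quoted above are reused.

import numpy as np

# Synthetic stand-in for the PSI-measured OPL profile of the bowl-shaped
# lens: radius r in um, OPL in um. Real data would come from the interferometer.
r = np.linspace(0.0, 10.0, 50)           # 20 um diameter aperture
f_true = -248.0                           # um, used here only to generate test data
opl = 0.05 - r**2 / (2.0 * f_true)        # paraxial profile OPL(r) = OPL(0) - r^2/(2f)

# Recover f from a linear fit of OPL against r^2: the slope a equals -1/(2f)
a, c = np.polyfit(r**2, opl, 1)
print(f"estimated focal length: {-1.0 / (2.0 * a):.0f} um")   # -> -248 um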
We used far-field scanning optical microscopy (SOM) to characterize the fabricated MoS 2 micro-lens (Supplementary Fig. S11).
The SOM system used a green laser (at 532 nm) that was focused onto the focal plane of an Olympus 10X (NA = 0.25, depth of focus 18 μm) objective lens. The setup offered the best collection efficiency for light emitted from a small volume located around the focal plane. The micro-lens was moved along the z-axis in steps of 10 μm by a piezoelectrically driven stage. The camera recorded a series of intensity distributions (Supplementary Fig. S12) with the MoS 2 micro-lens positioned at different z values. A three-dimensional data set was generated by data processing, and a cross-section profile was obtained along the x- and z-axes to illustrate the average distribution of the light intensity in these directions (Figure 3e). When the MoS 2 micro-lens was placed at a distance 2|f| above the focal plane, the focused incident light would be exactly reimaged, which is equivalent to the light coming from a point source (Supplementary Fig. S12d). Therefore, the camera recorded a well-focused light spot. The focal length f of the MoS 2 micro-lens was measured to be -240 μm (2f = -480 μm), which matched very well with the simulated value (-248 μm) obtained using the measured OPL profile of the micro-lens. For comparison, we also ran the same characterization using a planar substrate without the MoS 2 micro-lens, and obtained the intensity distribution shown in Figure 3f and Supplementary Fig. S13. The lensing effect is clearly demonstrated by comparing the difference between Figure 3e and 3f. In addition, the measured focal length of the MoS 2 micro-lens shows weak polarization dependence (Supplementary Fig. S14), due to the low anisotropic dielectric response of MoS 2 . This makes MoS 2 suitable for ultra-thin optical elements. The efficiency of light scattering is another critical parameter for advanced light manipulation. Devices that employ photonic band gaps 25 , Anderson localization 26 , and light trapping, such as in thin-film solar cells 27 , all rely heavily on strong light scattering. Unfortunately, in typical 2D materials, such as graphene, the scattering efficiency is very small, making it impossible to rely on collective scattering of nanostructured graphene to achieve functionalities such as gratings. Here, we show that single- and few-layer structured MoS 2 films have extraordinarily high scattering efficiency, enabled by the combination of a high index in a thin structure. The scattering efficiency is determined by the strength of the electric field in the material. Normally, the electric field inside a bulk material, particularly a high-index material, is much weaker than that of the incident light because of the impedance mismatch. The boundary condition of Maxwell's equations requires the tangential component of the electric field to be continuous across any interface. Because the layer is thin, this condition indicates that the electric field inside a 2D material is almost as strong as the tangential component of the incident field. As a result, there is a strong polarization $p = \epsilon_0 (n^2 - 1) E_0$, where $E_0$ is the electric field of the s-polarized incident light, $n$ is the index of the material and $\epsilon_0$ is the electric permittivity of free space. The scattered power is proportional to $p^2$ and, therefore, scales roughly as $n^4$. This scaling rule greatly favors high-index materials and is again uniquely available in ultra-thin materials. In contrast, for nanoparticles, the scattered power is proportional to $\left(\frac{n^2-1}{n^2+2}\right)^2$, which does not increase appreciably with the refractive index 28 .
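This $n^4$ scaling can be made concrete with a two-line comparison. The sketch below is our own back-of-envelope check: it evaluates $|n^2 - 1|^2$ for equally thin sheets (the graphene index is an assumed literature-style value). It captures the strong trend with index, though not the exact ribbon ratios quoted below, which require full finite-element solutions.

import numpy as np

# Relative scattered power of equally thin sheets, proportional to |n^2 - 1|^2.
# Index values at ~535 nm; the graphene value is a literature-style estimate.
indices = {"SiO2": 1.46, "graphene": 2.6 + 1.3j, "1L MoS2": 4.4 + 0.6j}
ref = abs(indices["SiO2"] ** 2 - 1) ** 2
for name, n in indices.items():
    print(f"{name:>9}: {abs(n**2 - 1)**2 / ref:7.1f} x SiO2")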
Here we use the finite element method to explicitly calculate the scattering efficiency of 2D ribbons by solving Maxwell's equations. Figure 4a shows the calculated scattering cross-section of an infinitely long ribbon (30 nm wide and 0.67 nm thick) in air for s-polarized light at normal incidence. The scattering cross-section has units of nanometers because the length of the ribbon is considered infinite. The scattering cross-section increases by orders of magnitude when the index increases by just a few times (Figure 4a). For example, the scattering cross-section of a single-layer MoS 2 ribbon is around 670 times, 54 times, and 18 times those of 0.67 nm SiO 2 , a single-layer graphene, and 0.67 nm of gold, respectively. Metal is generally considered one of the strongest scattering materials, and it is important to note that MoS 2 displays even much stronger light scattering than gold. Moreover, the angular response of the scattering cross-section is also isotropic (Supplementary Fig. S15). Such favorable scaling for high-index materials is uniquely available in ultra-thin materials. The giant scattering efficiency in high-index 2D materials makes it possible to achieve sophisticated light manipulation based on collective scattering by patterns of nanostructures. Next, we experimentally demonstrate efficient optical gratings made from only a few layers of atoms. Because of the giant scattering efficiency, the efficiency of MoS 2 gratings is orders of magnitude greater than those made from conventional materials, such as SiO 2 and gold, and from other low-index 2D materials. We used FIB to mill grating patterns on 1L, 2L, 6L, and 8L MoS 2 flakes (Figure 4 and Supplementary Fig. S16, S17, and S18). Grating parameters used in experiments, such as the periodicity and filling ratio, were based on the optimal configuration predicted by simulations (Supplementary Table S1). The gratings were characterized using an s-polarized green laser (at a wavelength of 532 nm). The laser beam had a diameter of around 200 μm, which was large enough to fully cover the grating. First-order and second-order diffraction beams were observed, and the measured diffraction angles agreed with the predictions of the diffraction equation $d(\sin\theta_d + \sin\theta_i) = m\lambda$, where $\theta_d$ and $\theta_i$ are the diffraction angle and incident angle, respectively; $d$ is the period of the grating elements; and $m$ is an integer characterizing the diffraction order. The power of the first-order diffraction beam was measured and the grating efficiency $\eta$ was determined by $\eta = (P_d/P_i) \times (S_b/S_g)$, where $P_d$ and $P_i$ are the measured powers of the diffracted and incident beams, respectively, and $S_b$ and $S_g$ are the measured areas of the incident beam and the MoS 2 grating, respectively. The measured grating efficiency is a function of the incident angle, which agrees well with our simulation (Figure 4g). The maximum grating efficiencies for the 1L, 2L, 6L, and 8L MoS 2 gratings were measured to be 0.3%, 0.8%, 4.4%, and 10.1%, respectively, which also agree well with the simulations (Figure 4h, Supplementary Table S1). For comparison, we also fabricated a grating from a graphene sheet deposited by large-area chemical vapor deposition (Supplementary Fig. S19a and S19b). The intensity of the diffracted beam from the graphene grating was lower than the noise level of our light detection system, and thus had a maximum efficiency no greater than 0.02%.
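Both formulas above are simple to script. The following sketch is ours, with placeholder numbers (a 1 μm period and arbitrary powers/areas) rather than the paper's actual grating parameters:

import numpy as np

def diffraction_angle(period_um, wavelength_um, m=1, theta_i_deg=0.0):
    # Solve d(sin(theta_d) + sin(theta_i)) = m*lambda for theta_d, in degrees.
    s = m * wavelength_um / period_um - np.sin(np.radians(theta_i_deg))
    return np.degrees(np.arcsin(s)) if abs(s) <= 1 else None  # None: evanescent order

def grating_efficiency(P_d, P_i, S_beam, S_grating):
    # eta = (P_d/P_i) * (S_b/S_g); the area ratio corrects for the beam
    # (diameter ~200 um) overfilling the much smaller grating.
    return (P_d / P_i) * (S_beam / S_grating)

print(f"first-order angle: {diffraction_angle(1.0, 0.532):.1f} deg")  # placeholder 1 um period
print(f"efficiency: {grating_efficiency(1e-6, 1e-3, 3.1e4, 2.5e3):.4f}")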
From our simulations, the maximum grating efficiency of mono-layer graphene would be only 0.0078%, which is around 47 times lower than that of a single-layer MoS 2 grating. As another comparison, a SiO 2 grating with 2 nm thickness was also fabricated (Supplementary Fig. S19c and S19d). Again no diffracted beam could be observed from the SiO 2 grating due to the low grating efficiency in accordance with our numerical predictions (Figure 4h, Supplementary Table S1).
The efficiency of the MoS 2 grating can be further improved by using a metallic mirror to replace the Si substrate. Based on simulations of optimized designs, the first-order grating efficiency of an 8L MoS 2 grating can be up to 23.7% (Supplementary Table S2, Supplementary Fig. S20 and S21). In addition, an asymmetrical profile as used in high-efficiency gratings is expected to further improve the efficiency.
CONCLUSIONS
In conclusion, we have shown that high-index 2D materials have extraordinary elastic interactions with light, enabled uniquely by the ultrathin nature of 2D materials. As a result, wavefront shaping 29,30 and efficient light scattering can be accomplished with atomically thin 2D materials, enabling a new class of optical components entirely based on high-index 2D materials. Moreover, compared to conventional diffractive optical components, the spatial resolution of phase-front shaping is much smaller than the wavelength, and is only limited by the nano-fabrication resolution, making it possible to eliminate undesired diffraction orders 30 . 2D materials also offer many unique advantages.
Firstly, the extremely uniform thickness and the perfect surfaces with atomic-scale roughness of layered high-index 2D materials provide excellent means to precisely control the phase front of a wave. Secondly, the unique and large tunability of the refractive index by electric field 31 in layered MoS 2 will enable various applications in electrically tunable atomically thin optical components, such as micro-lenses with electrically tunable focal lengths and electrically tunable phase shifters with ultra-high accuracy, which cannot be realized by conventional bulk solids. Thirdly, we also observed a similar giant OPL in other TMD family YX 2 (Y = Mo, W; X = S, Se, Te) semiconductors, such as WS 2 and WSe 2 (Supplementary Fig. S22). The availability of different functional materials offers rich opportunities for the combination of optical and electronic properties, such as stacked atomically thin heterostructures for 2D optoelectronics. Fourthly, high-quality 2D TMD semiconductors can be deposited directly onto (or transferred to) various substrates with large size by chemical vapor deposition at low cost 32 , potentially enabling low-cost flexible optical components. Lastly, quasi-2D optical components represent a significant advantage in manufacturing compared to conventional 3D optical components, because different functionalities can all be achieved in a 2D platform sharing the same fabrication processes; this will greatly facilitate large-scale manufacturing and integration. In summary, our work here opens an exciting opportunity to use high-index 2D materials to control the flow of light.
AUTHOR CONTRIBUTIONS Y R L and Z F Y designed the project; J Y, R J X, and S Z carried out sample mechanical exfoliation and microscope imaging; J Y carried out the OPL, Raman, and PL measurements, AFM imaging, grating and micro-lens fabrication, grating efficiency measurement, and micro-lens characterization, with partial assistance from Y R L; Z W and Z F Y conducted the simulations and grating/micro-lens designs; F W and C J built the optical characterization setup for gratings and micro-lens. B L-D set up the PSI measurement system and provided technical support for the OPL characterization. J T and Q H Q undertook data processing for the micro-lens images. All authors contributed to the manuscript. | 2017-11-14T18:08:32.137Z | 2014-11-23T00:00:00.000 | {
"year": 2016,
"sha1": "b8eb3dd9d85d9955906c5721da5ead636d6f543d",
"oa_license": "CCBYNCND",
"oa_url": "https://www.nature.com/articles/lsa201646.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9f6c87c394cda616ad9448ab8678b3a8d8f4c464",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics",
"Materials Science"
]
} |
260246529 | pes2o/s2orc | v3-fos-license | Trends in Alcohol-Related Deaths by Sex in the US, 1999-2020
Key Points Question Are there sex-based differences in the contemporary burden and trends of alcohol-related mortality in the US? Findings In this cross-sectional study of 605 948 alcohol-attributed deaths, male individuals had a significantly higher burden of alcohol-involved mortality than did female individuals, with a male to female ratio of 2.88. Temporal trends revealed an increase in alcohol-related deaths among both sexes, with a significantly higher rate of increase observed for female individuals than for male individuals. Meaning Although alcohol-related deaths have historically been more prevalent among men than women, recent temporal trends suggest a narrowing of this gap, with increasing rates of alcohol-related deaths among female individuals compared with male individuals.
Introduction
Recently, the World Health Organization declared that even small amounts of alcohol consumption are detrimental to human health. 1 In the US, alcohol ranks as the fourth leading cause of preventable death, trailing tobacco, poor diet and physical inactivity, and illegal drugs, resulting in more than 140 000 deaths annually. 2,3 Alcohol is implicated in 18.5% of emergency department visits and 20% of prescription opioid deaths. 2,4 Although alcohol consumption is associated with adverse health outcomes, the distribution of harm varies across the US population. 5,6 Sex differences in alcohol-related complications have been observed, with a historically greater burden among men than women. 7 However, recent studies indicate a narrowing sex gap, attributed in part to increased alcohol use, high-risk drinking, and alcohol use disorder (AUD) among women. [8][9][10][11][12] This reversal raises public health concerns because heightened alcohol consumption among women may be associated with elevated complications due to metabolic and physiological differences.
Women tend to have a higher percentage of body fat and a lower percentage of body water compared with men, resulting in higher alcohol blood concentrations and potentially increasing vulnerability to complications. 13,14 Hormonal fluctuations throughout the menstrual cycle can influence alcohol processing, with certain phases heightening sensitivity to alcohol's effects. 15,16 Women also have lower levels of alcohol-metabolizing enzymes, such as alcohol dehydrogenase, leading to slower alcohol metabolism, prolonged exposure to harmful byproducts (such as acetaldehyde), and potentially more severe physiological and organ damage over time. 17,18 Consequently, women with AUD face an elevated risk of developing liver diseases, circulatory disorders, breast cancer, fertility problems, and early menopause. 10,14 Although recent studies note a narrowing sex gap in alcohol-related harm, it remains unclear whether this convergence extends to alcohol-related death rates. The existing literature has limitations, including using secondary designs 10 and outdated data 5,11 or focusing on nonmortality variables, such as alcohol consumption or alcohol-associated liver disease. 19 In addition, some studies have primarily examined short-term changes associated with the COVID-19 pandemic. 20,21 For instance, Angus et al 20 explored "deaths of despair," including alcohol-related mortality, with a specific focus on the association between the COVID-19 pandemic and mortality. Similarly, White et al 21 examined recent data emphasizing the association between COVID-19 and alcohol-related deaths. Given the public health significance of alcohol and the reported changes in female alcohol consumption, there is a need to conduct a comprehensive assessment of sex differences in alcoholassociated deaths using contemporary data. This study aims to use recent national mortality data from the National Center for Health Statistics, assessing sex differences in alcohol-related mortality within the US from 1999 to 2020.
Data Sources
This study was exempted from review and the requirement for informed consent by the Hofstra University institutional review board because the data obtained from the CDC WONDER are deidentified and publicly available.
We defined the study period as being from 1999 to 2020 based on data availability, with 2020 being the most recent year for which data are accessible. Crude or age-adjusted mortality rates (AAMRs) were abstracted by age (15-24, 25-44, 45-64, or ≥65 years), sex (male or female), race and ethnicity (American Indian or Alaska Native, Asian or Pacific Islander, Hispanic, non-Hispanic Black, or non-Hispanic White), cause of death (alcohol poisoning, alcoholic liver disease, mental and behavioral disorders due to use of alcohol, or other), and census region (Northeast, Midwest, South, or West).
Race and ethnicity data of the deceased individuals were collected from their death certificates, adhering to the guidelines set by the Office of Management and Budget. 23 The information documented on the death certificate relied primarily on the input of the funeral director, a report by an informant or, in the absence of an informant, physical observation. Sex information is recorded as "male" or "female" in CDC WONDER and reflects mortality data obtained from death certificates. 25 These certificates are collected by state registries and subsequently shared with the National Vital Statistics System for US residents. 25 Including race and ethnicity in this study was essential for multiple reasons. First, alcohol-related mortality burden exhibits significant variation among different racial and ethnic groups, reflecting disparities in alcohol consumption patterns, health care access, socioeconomic factors, and cultural influences. 6 By using race and ethnicity as stratification variables, we investigated whether distinct patterns of alcohol-related mortality existed across these groups.
Second, examining the association of race and ethnicity with alcohol-related mortality allows us to gain a better understanding of the social determinants of health and identify potential health disparities within the population. This information plays a crucial role in formulating targeted interventions and policies to address these disparities and foster health equity. 6 Finally, incorporating race and ethnicity as variables in our analysis facilitated exploration of potential interactions or modifying effects between these factors and sex. These interactions may offer insights into the complex relationships and underlying mechanisms contributing to alcoholrelated mortality disparities.
Statistical Analysis
We calculated the sex-based mortality rate ratios by dividing the mortality rate among male individuals by the mortality rate among female individuals. The 95% CIs for these estimates were derived using the Taylor series method. This method was selected (eg, instead of Poisson regression) due to its simplicity and computational efficiency, facilitating direct estimation of the sex-based mortality rate ratios. We assessed temporal trends in AAMRs using joinpoint regression, a statistical technique that initially assumes a linear trend in AAMR throughout the study period. We then added a joinpoint to signify an inflection point (ie, change in trend) and used the permutation test to assess the significance of this joinpoint relative to the initial null model. 26 If the joinpoint was significant, we retained it; otherwise, we excluded it from the analysis. We repeated these steps, using the Bonferroni correction for multiple testing, until an optimum number of joinpoints was obtained from 4499 Monte Carlo permutations, the default Monte Carlo sample of permuted data sets. 27,28 We derived 95% CIs using the parametric method. All statistical analysis was conducted using joinpoint regression software. To account for the potential association of the COVID-19 pandemic with alcohol-related mortality rates, we conducted a sensitivity analysis by excluding data from the year 2020. This approach allowed us to examine the trends in alcohol-related deaths before the onset of the pandemic and evaluate the robustness of our findings. We modeled the log-transformed AAMR as a function of the year of death. To account for heteroscedasticity or correlated errors, we confirmed constant variance using the Breusch-Pagan test and fitted an uncorrelated errors model. We set the interval type to "annual" to permit yearly trend estimations. We used default options for the method (grid search), the number of joinpoints (0-4), the model selection method (permutation test), the overall significance level (P < .05), the number of permutations (4499), the average annual percentage change segment ranges (entire range), the annual percentage change (APC), the average annual percentage change, and the tau confidence intervals (parametric method).
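To reproduce the two core quantities, the sketch below is our own illustration, with made-up death counts and person-years rather than the study's data: it computes a male-to-female rate ratio with a Taylor-series (log-scale) 95% CI, and converts a log-linear slope of AAMR on calendar year into an annual percent change (APC).

import numpy as np

def rate_ratio_ci(d1, py1, d2, py2, z=1.96):
    # Rate ratio (group 1 vs group 2) with a Taylor-series 95% CI:
    # var(log RR) is approximated by 1/d1 + 1/d2.
    rr = (d1 / py1) / (d2 / py2)
    se = np.sqrt(1.0 / d1 + 1.0 / d2)
    return rr, rr * np.exp(-z * se), rr * np.exp(z * se)

# Illustrative counts only (not the study's): male vs female deaths, person-years
rr, lo, hi = rate_ratio_ci(45000, 1.6e8, 15500, 1.6e8)
print(f"RR = {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")

# APC from a log-linear fit of AAMR on year, as in joinpoint's within-segment model
years = np.arange(2011, 2020)
aamr = 9.0 * 1.038 ** (years - years[0])            # fake series growing ~3.8%/yr
slope = np.polyfit(years, np.log(aamr), 1)[0]
print(f"APC = {100 * (np.exp(slope) - 1):.1f}%")    # -> ~3.8%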
Results
Between 1999 and 2020, a total of 605 948 individuals died in the US due to alcohol-related causes.
Sex Differences in Alcohol-Related Mortality Trends
Overall, when stratified by race and ethnicity, recent trends in alcohol-related mortality were found to have increased in both male and female individuals. Non-Hispanic White individuals, non-Hispanic Black individuals, and American Indian or Alaska Native individuals showed higher recent trends among female individuals than male individuals. In contrast, Asian or Pacific Islander and Hispanic male individuals had higher trends than female individuals in the most recent time segment (Table 2).
Finally, when stratified by census region and sex, recent trends in alcohol-related mortality increased among both male and female individuals, but with differences in the rates of increase. In the Southern and Western regions, recent trends increased at a higher rate for male individuals than female individuals, while in the Northeastern and Midwestern regions, trends increased at a relatively higher rate for female individuals than male individuals (Table 2).
Sensitivity Analysis
The analysis of alcohol-related mortality rates from 1999 to 2019 revealed distinct patterns. Initially, the rates remained relatively stable from 1999 to 2005 (APC, −0.2%; 95% CI, −1.4% to 0.3%), followed by a gradual increase at an annual rate of 1.7% (95% CI, 0.9%-2.8%) from 2005 to 2011 (Table 3). Subsequently, the rates accelerated significantly, with a more pronounced increase of 3.8% (95% CI, 3.5%-4.4%) per year from 2011 to 2019. On further examination by sex, both male and female individuals experienced increasing trends in alcohol-related mortality rates, but the rate of increase was higher among female individuals than among male individuals. The changing patterns of alcohol consumption among women are an important consideration in understanding these trends. Women are now drinking alcohol at higher amounts and frequencies than in the past, likely due to the normalization of alcohol use for female individuals in society. [8][9][10][11][12] The change in the mortality rate trends perhaps parallels the changing patterns in general alcohol consumption as well as in disordered or harmful patterns of consumption (such as binge drinking)
where the sex gap has also been closing globally. 9,10,31 A study conducted among 2 nationally representative survey samples, comprising a total of 43 093 participants, found that women exhibited a greater increase than men in 12-month alcohol use, high-risk drinking, and Diagnostic and Statistical Manual of Mental Disorders (Fourth Edition) AUD. 9 According to a meta-analysis of studies on birth cohort changes in male to female ratios in indicators of alcohol use as well as in alcoholrelated harms, during the past century, there has been a steady decrease in male to female ratios for problematic alcohol use and alcohol-related harms, from approximately 3 to 1 among those born in the early 1900s to approximately 1 to 1 among those born in the late 1900s. 12 The motivation for drinking is an important factor that may vary between male and female individuals and across age and race and ethnicity subgroups. Coping with stress is one of the main motivations for initiation of alcohol misuse for both male and female individuals. 32 Stress also plays a major role in the development and maintenance of disordered drinking behaviors and, ultimately, addiction. In fact, development of AUDs is thought to be associated with distinctive neuroadaptations, including the upregulation in the brain stress system to counteract the effects of the chronic influx of dopamine release induced by persistent alcohol use. 33,34 It is likely that the
narrowing gap in sex differences for alcohol mortality rates, which also parallels the narrowing gap in the patterns of alcohol use and misuse, 12 may be reflective of an increase in stress levels and stress-related disorders among women in recent decades and, particularly, in recent years.
Age, racial and ethnic, and regional differences were observed in sex-subtyped trends in alcohol-related mortality. Among adults aged 65 years or older, the rate of change in alcohol-related mortality was higher among female individuals than male individuals. This finding perhaps points to the larger burden of accumulating harms of chronic alcohol use among female individuals compared with male individuals rather than suggesting a higher amount of alcohol use by female individuals aged 65 years or older because the narrowing of the male-female gap is most prominent among young adults rather than adults aged 65 years or older. 12 Recent mortality trends have increased at a higher rate among non-Hispanic White, non-Hispanic Black, and American Indian or Alaska Native women than men. Women in the Southern and Western census regions have recorded a higher increase than men in mortality rates in recent years. But, overall, the mortality rates in the Western census region are almost double that of any other census region for both male and female individuals. Despite the consistent pattern of the lowest rates of alcohol consumption, 35 the Southern region showed comparable mortality rates with the Northeast and Midwest regions. These findings highlight the importance of addressing underlying factors as well as the interaction among factors associated with excessive alcohol consumption and alcohol-related harm, which may differ across age, regional, and race and ethnicity subgroups and also involve social, cultural, economic, and even religious factors that may be at play in shaping drinking habits of people at an individual level.
Given the rates of alcohol-related mortality, it is important to acknowledge the limited knowledge of how current pharmacologic treatments for AUD specifically affect women. Although these medications have shown potential in improving health outcomes, their effectiveness in reducing alcohol-related mortality remains uncertain, particularly for women. 36 For example, although naltrexone has demonstrated efficacy in reducing drinking and cravings, women may experience more adverse events, leading to higher rates of treatment discontinuation. 36 Recognizing these gaps in understanding and considering sex and gender differences are crucial to developing interventions that target women's alcohol use and have the potential to mitigate the rates of alcohol-related mortality among women.
Alcohol-related deaths in the US may have been associated with the COVID-19 pandemic, as well as with the observed sex differences. 20,21 However, our sensitivity analysis demonstrated that our findings remained robust even when excluding data from the year 2020, with the latest trends increasing at a higher rate among women compared with men. Although our study sheds light on this trend, further research should be conducted to fully understand the underlying factors associated with the increased alcohol-related mortality among women. In addition, future studies should explore the potential association of the COVID-19 pandemic with alcohol-related deaths in more depth, considering various socioeconomic, psychological, and health care-related factors. Such investigations would help inform the development of targeted interventions and policies to address the growing public health issue of alcohol-related mortality, particularly among women.
Limitations
This study has some limitations. It is primarily descriptive and does not explore the factors associated with alcohol-related mortality trends in both male and female individuals. Future research should incorporate predictive factors to provide a more comprehensive understanding of this public health issue. Another limitation is the restricted examination of age-specific trends, as well as the analysis of period and cohort effects. Due to data constraints, we were unable to delve deeply into these dimensions. A more detailed exploration of age-specific trends would have allowed for a better understanding of how alcohol-related mortality rates vary across different age groups. Moreover, investigating period and cohort effects could have provided valuable insights into the association of historical and generational factors with alcohol-related mortality rates. Future studies should address these limitations and provide a more nuanced understanding of how age, period, and cohort are associated with alcohol-related mortality rates. Finally, there were insufficient death counts for female individuals aged 15 to 24 years, which prevented us from calculating trends for this specific age range. Alternative data sources could be explored to bridge this gap and provide a more comprehensive analysis of alcohol-related mortality among female individuals in this age group.
Conclusions
This cross-sectional study presents a comprehensive analysis of sex differences in alcohol-related mortality in the US from 1999 to 2020. Although male individuals continue to experience a higher burden of alcohol-related deaths, the findings suggest a trend of increasing rates of alcohol-related deaths among female individuals, indicating a narrowing sex gap. These trends may be associated with a combination of sociocultural, economic, biological, and behavioral factors, including the normalization of cultural practices surrounding alcohol consumption. Further research is necessary to identify the psychosocial and environmental factors associated with these trends and guide evidence-based interventions aimed at reducing alcohol-related mortality risks for all individuals, with a particular focus on developing targeted treatments to address alcohol use among female individuals. | 2023-07-29T06:16:16.774Z | 2023-07-01T00:00:00.000 | {
"year": 2023,
"sha1": "e24b91e6f4a5aff2dd1d0d4caadcc5b51643a2bc",
"oa_license": "CCBY",
"oa_url": "https://jamanetwork.com/journals/jamanetworkopen/articlepdf/2807706/karaye_2023_oi_230759_1689861540.8146.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4fdddd3fc6dc1b937587e956eeb43af3a598d346",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
213040029 | pes2o/s2orc | v3-fos-license | Caracterización física y mecánica de compuestos de Guazuma crinita Mart. a base de polipropileno virgen Physical and mechanical characterization of Guazuma crinita Mart. composites based on virgin polypropylene
Wood plastic composite materials based on Guazuma crinita wood particles (White Bolaina) from forestry thinning of 4, 5 and 6 years old and virgin polypropylene (PP) were prepared, using polypropylene maleic anhydride (MAPP) as a coupling agent. Specimens were made by extrusion, thermal compression and laser cutting. White Bolaina particles were sieved with ASTM mesh sizes -40/+60, -60/+80 and -80/+100. The proportions of the polypropylene/Bolaina mixture were 70/30, 80/20 and 90/10. All formulations included 2 % MAPP as a coupling agent. Physical properties such as moisture content, density, absorption and swelling were assessed, as well as the mechanical properties of static bending, tension and impact resistance. Additionally, to support the discussion of the results, an anatomical characterization of the White Bolaina wood fiber (length, diameter, wall thickness, lumen diameter and slenderness coefficient) and a chemical analysis of its components (extractives, holocellulose, lignin and ash) were carried out. The results show a direct relationship between the mixing ratio and the main physical properties, as well as between particle size and tension. Wood age did not represent a significant source of variability.
Introduction
Many lignocellulosic fibers have been proposed as reinforcement in composite materials throughout human history. Of special interest is the use of fast-growing wood fibers, since wood is a natural, low-cost, renewable and highly available resource for industrial purposes, and contributes to the physical and mechanical characteristics of the related final products (Satyanarayana et al., 2009).
In addition, society's demand for products made from the waste of industrial and single-use processes, such as sawdust and plastic, respectively, must be considered as well. All this has allowed sectors of the construction and automobile industries to develop a variety of products, including railings, window frames, door panels, moldings, floors, seat upholstery, etcetera (Clemons, 2002).
In composite materials, the optimum particle size and its adequate proportion are sought according to the final use. Composite materials reinforced with vegetable fibers have been positioning themselves strongly in the market, favored by their low cost, high durability and the use of waste in their elaboration (Wolcott and Englund, 1999; Klyosov, 2007).
There is little research on wood-plastic composites in Peru (Lázaro et al., 2016a; Lázaro et al., 2016b; Gonzáles et al., 2018), despite the efforts made by universities, State organizations and private entities to promote them; only one national company produces and markets wood-plastic composites. Its main market is construction companies, which use them in multi-family projects due to their low installation and maintenance costs, as well as their long service life, a valuable feature for this purpose.
The definition of the raw material to be used is important, since it must come from an industrially promising species with existing plantations. Such is the case of Guazuma crinita Mart. (White Bolaina), a forest species widespread as an alternative for forest plantations in the Peruvian Amazon. The main product of White Bolaina is sawn wood, from which tongue-and-groove boards are manufactured for interiors and exteriors, along with glued boards, moldings, furniture interior covers and other types of carpentry (Guerra et al., 2008).
Based on the above, in the present study the aim was to achieve a characterization of virgin polypropylene-based composite materials, reinforced with White Bolaina particles, through the evaluation of physical and mechanical properties.
Materials and Methods
Wood-plastic composite materials were made using, as reinforcement, thinning wood of 4-, 5- and 6-year-old Guazuma crinita (White Bolaina) from a forest plantation in the province of Puerto Inca, department of Huánuco, Peru.
As the thermoplastic matrix, a polypropylene homopolymer with a melt flow rate of 12.5 g/10 min (2.16 kg, 230 °C) was used. As a coupling agent, polypropylene maleic anhydride (MAPP), with a melting temperature of 167 °C, was used at a concentration of 2 % in all formulations; the efficiency of this coupling agent has already been documented in several studies (Correa et al., 2007).
The wood was debarked, ground and sieved to obtain particles of three ASTM mesh sizes: -40/+60, -60/+80 and -80/+100. The sieved particles were dried in an oven at 103 °C ± 2 °C for 48 h to obtain a moisture content <5 %. The different proposed formulations were prepared (Table 1). The extrusion of the materials was carried out in a single-screw extruder at a temperature between 160-175 °C and 30 rpm; the extruded material was then milled before pressing. The composite boards were made in a hydraulic press, at a speed of 0.9 cm/s and a pressure of 40 bar; curing of the material took 4-5 minutes at a temperature between 177-195 °C. A total of 1 080 specimens were made in an 80 W laser cutting machine; 40 were tested for each treatment, according to the following standards: ASTM D1037-99 for moisture and density (ASTM, 1999). The anatomical characterization of the White Bolaina fibers was carried out in accordance with the procedural standard for studies of wood anatomy of Ibama (1991).
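For orientation, the -pass/+retain sieve designations above translate into particle-size windows via the nominal ASTM E11 sieve openings; the helper below is our own sketch (the opening values are the standard nominal ones, not taken from this paper):

# Nominal ASTM E11 sieve openings in micrometers for the meshes used here
OPENING_UM = {40: 425, 60: 250, 80: 180, 100: 150}

def particle_range(passing, retained):
    # A -passing/+retained fraction passes the first sieve and is held on
    # the second, so particle sizes fall between the two openings.
    return OPENING_UM[retained], OPENING_UM[passing]

for mesh_pair in [(40, 60), (60, 80), (80, 100)]:
    low, high = particle_range(*mesh_pair)
    print(f"-{mesh_pair[0]}/+{mesh_pair[1]}: {low}-{high} um")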
The values of length, width and wall thickness of at least 25 fibrous bundles obtained after the defibration process were taken, with a LEICA ICC50 HD camera coupled to a LEICA DM500 microscope with magnifications of 4X, 10X and 40X.
The chemical characterization of G. crinita fibers was carried out following the applicable standards.
Results and Discussion
The physical and mechanical properties of the PP-White Bolaina composite material were evaluated (Table 2), together with the anatomical (Table 3) and chemical characterization of the G. crinita fiber at the three ages.
Anatomy Description
The analysis was restricted to the fibers because of their possible influence on the physical and mechanical properties of the composite material.
The average fiber length reached its highest value at the age of 5 years (1 554 µm), followed by 4 years (1 399 µm) and 6 years (1 100 µm). According to IAWA (1989), these dimensions correspond to medium-length fibers (900-1 600 µm), so all three were classified as such. The average fiber diameter was 28, 27 and 26 µm for ages 5, 4 and 6 years, respectively, with no significant difference confirmed.
According to Ibama (1991), the fiber diameter is classified as medium. The average wall thickness for the three ages (2 µm) is classified as very thin (IAWA, 1989). The slenderness coefficient (length-to-width ratio) reached its highest value at age 5 years (57), followed by age 4 years (53) and age 6 years (44).
These results are shown in Table 3.
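As a rough check on the reported slenderness coefficients, the ratio of the mean fiber length to the mean fiber diameter can be computed directly from the values quoted above; this is only an illustrative calculation, since the study presumably averages per-fiber ratios.

```python
# Mean fibre dimensions quoted in the text (length, diameter in micrometres).
fibre = {4: (1399, 27), 5: (1554, 28), 6: (1100, 26)}

for age, (length, diameter) in sorted(fibre.items()):
    print(f"age {age} yr: slenderness ~ {length / diameter:.0f}")
# Prints 52, 56 and 42, close to the published 53, 57 and 44; the small
# differences are consistent with the paper averaging per-fibre ratios
# rather than dividing the two means.
```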
Chemical characterization
The contents of extractives, holocellulose, lignin and ashes were very similar for the three ages evaluated. The extractive and ash contents were numerically lower than the values recorded by Oluwadare and Asagbara (2008), who studied the chemical composition of Sterculia setigera Delile, a species of the same family as Guazuma crinita (Table 4). The lignin contents were numerically similar to the values found by Oluwadare and Asagbara (2008). The holocellulose contents were numerically higher than those recorded by Pettersen (1984) for Guazuma tomentosa Kunth, a species of the same genus as Guazuma crinita.

Moisture content and density

The hydroxyl groups of the cell wall polysaccharides give wood particles a strong affinity for water (Caulfield et al., 2005; Bouafif et al., 2009). Likewise, the average holocellulose content (cellulose and hemicelluloses) in the G. crinita fibers at the three ages under study is above 70 %, confirming a strong affinity between the fibers and the surrounding humidity. In plastic-wood composite materials, the lignocellulosic components are responsible for moisture gain, whereas the matrices usually have a hydrophobic character (Klyosov, 2007; Caicedo et al., 2015). Cárdenas (2012) refers to a maximum acceptable moisture content in composite materials of 2 %. The same author recorded moisture content values of 0.27 to 0.31 % for composite materials made by the injection method.
Statistical analysis indicated that the variables age and mixing ratio had a highly significant influence on moisture content (p ≤ 0.0041). The double interaction age * particle size and the triple interaction behaved similarly on the moisture values (p ≤ 0.0040). For density, statistical analysis indicated that the variables age, particle size and mixing ratio had a highly significant influence (p ≤ 0.0015); however, the double and triple interactions did not significantly affect density (p ≥ 0.0405).
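The factorial analysis described here (age × particle size × mixing ratio, with double and triple interactions) corresponds to a three-way ANOVA. A minimal sketch with statsmodels is shown below; the file name and column names are assumptions for illustration, not the study's actual data layout.

```python
# Minimal three-factor ANOVA sketch; 'composite_tests.csv' and the column
# names ('moisture', 'age', 'size', 'ratio') are hypothetical placeholders
# for a table with one row per tested specimen.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("composite_tests.csv")
model = ols("moisture ~ C(age) * C(size) * C(ratio)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # p-values for main effects and interactions
```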
Regarding particle size, the treatments with the smallest particles had the highest density values, given the greater ease of encapsulation of the material (Fabiyi, 2007; Klyosov, 2007; Cárdenas, 2012). Likewise, there is a slight increase in density values when the proportion of particles in the composite increases. Although G. crinita wood has a low density, increasing its proportion favors the density of the composite material.

Absorption and swelling

The average absorption values range from 17.4 to 5.7 % for year 4, from 18.8 to 6.0 % for year 5 and from 11.6 to 5.1 % for year 6. Although the age variable presented a highly significant difference (p ≤ 0.0001), the chemical composition of G. crinita fibers at the three ages is very similar, so it evidently does not explain the absorption results.

A slight decrease in absorption values is noticed when smaller particles are used, which is consistent with Fabiyi (2007) and Fuentes-Talavera et al. (2014). When the interface in the composite material is homogeneous and compact, the fibrous elements are embedded within the matrix and unable to absorb moisture from the outside. Large particles are difficult to embed in the matrix, leaving exposed regions that absorb moisture (Simonsen and Rials, 1996; Caulfield et al., 2005). Likewise, a direct relationship between absorption values and the proportion of particles is observed. Klyosov (2007) points out that most plastics used in composite materials absorb practically no water; therefore, the incorporation of cellulosic particles is responsible for significantly increasing water absorption.
Soattiyanon (2010) reported absorption values between 8 and 9 % for different composite materials after immersion periods longer than 6 months, in materials processed by injection, a process that ensures better fiber coating and therefore greater resistance to absorption. In turn, Lázaro et al. (2016a) reported absorption values of 14 to 15 % in polypropylene-bamboo composites, similar to the results of the present investigation.
Statistical analysis indicated a highly significant influence of all three variables on absorption (p ≤ 0.0001); similarly, the double interactions age * particle size and age * mixing ratio, as well as the triple interaction, had a significant influence on the absorption values (p ≤ 0.0007).
The average swelling values varied from 5.8 to 3.1 % for year 4, from 5.8 to 3.3 % for year 5 and from 5.2 to 3.6 % for year 6. Ages 5 and 6 years showed the greatest increase in swelling during the first two hours of immersion in water. In general, treatments with larger particles reached the highest swelling values, since the fiber is not fully encapsulated, a trend reported in other studies (Okubo et al., 2004; Mattos et al., 2014). A direct relationship between the swelling values and the proportion of particles is observed, as in absorption. This is due to the hydrophilic nature of the wood particles, especially the presence of hydroxyl groups (OH) in cellulose and hemicelluloses, the major components of wood (Caulfield et al., 2005; Bouafif et al., 2009).
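Both quantities discussed in this section follow the usual immersion-test definitions of ASTM D1037: absorption as relative mass gain and swelling as relative thickness gain. A small sketch, with invented example numbers chosen to fall in the reported ranges:

```python
def water_absorption(mass_dry, mass_wet):
    """Water absorption (%) after immersion: mass gain relative to dry mass."""
    return 100.0 * (mass_wet - mass_dry) / mass_dry

def thickness_swelling(thick_dry, thick_wet):
    """Thickness swelling (%) after immersion."""
    return 100.0 * (thick_wet - thick_dry) / thick_dry

# Invented example values in the ranges reported for year 4:
print(water_absorption(10.00, 11.74))   # -> 17.4 %
print(thickness_swelling(6.00, 6.35))   # -> ~5.8 %
```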
Modulus of rupture (MOR)

Cárdenas (2012) and other researchers, such as Idrus et al. (2011) and Ravi et al. (2014), have reported comparable results; the slenderness coefficient of the G. crinita fiber showed almost no differences among its three ages.
Regarding particle size, a moderate increase in the tension MOR values is detected when larger particles are incorporated. Stark and Berger (1997) argued that this property increases with particle size up to 250 µm (60 ASTM mesh), at which point the tension MOR begins to decrease. However, Nourbakhsh et al. (2010) and Bledzki et al. (2005) consider that the tensile strength of PP-wood composite materials increases when particle size decreases, attributing this behavior to improved interfacial adhesion between the wood particles and the matrix.
A significant increase in the tension MOR values is observed when the proportion of wood particles in the composite material is reduced (p = 0.0001). This phenomenon has been pointed out by other researchers (Klyosov, 2007; Ravi et al., 2014), who agree that high concentrations of wood particles reduce the MOR of the composite material. Caulfield et al. (2005) recorded tension MOR values of 44.9 MPa for polypropylene-poplar fiber composites (30 % of total weight). In another investigation, Stark and Rowlands (2003) reported MOR values of 29.4 and 37 MPa for wood flour composites at 40 and 20 % of total weight, respectively. Cárdenas (2012) reported MOR values between 19 and 25 MPa for polypropylene-pine wood composites made by the injection method.
Statistical analysis indicated a highly significant influence of age, particle size and particle proportion (p = 0.0001); likewise, the double interactions age * particle size and age * mixing ratio, as well as the triple interaction, had a significant influence on the tension MOR values (p ≤ 0.0093).
Modulus of elasticity (MOE)

The treatments with the highest proportion of particles obtained the highest values of MOE in bending. Wood fibers generally exhibit good bending behavior, which is why composite materials with more fiber require more effort to deform (Caulfield et al., 2005; Idrus et al., 2011). However, reinforcing the composite material with more particles does not necessarily improve the MOE in bending: Ravi et al. (2014) indicate that empty spaces and low interaction between the reinforcement and the matrix can limit the stiffness of the composite. For the mixing ratio variable, the statistical analysis indicated a highly significant influence (p = 0.0017) on the values of MOE in static bending.

The average values of MOE in tension vary from 1.0 to 0.5 GPa for year 4, from 0.9 to 0.6 GPa for year 5 and from 0.9 to 0.7 GPa for year 6.
A slight increase in the values of MOE in tension is verified when larger Bolaina particles are used, as occurred with the modulus of rupture. The highest MOE in tension values correspond to treatments with larger particles, and they were lower than those reported by Caulfield et al. (2005), Lisperguer et al. (2013) and Cárdenas (2012).
The low interphase adhesion between the G. crinita particles and the polypropylene matrix has possibly generated areas of high heterogeneity inside the composite material, reducing its resistance to deformation (Essabir et al., 2015). Likewise, the anatomical characteristics of the Bolaina fibers at the three ages registered low values, with medium-length fibers and very thin walls, undesirable characteristics for strength tests (García et al., 2003).
For the particle size variable, the statistical analysis indicated a highly significant influence (p = 0.0001); similarly, the double interactions age * particle size and age * mixing ratio had a highly significant influence on the MOE in tension values (p ≤ 0.0005). The values for the 4-year-old samples were slightly higher, although the slenderness coefficient of the G. crinita fiber showed almost no differences among its three ages.
Impact resistance
The treatments with the largest and the smallest particles achieved the highest impact resistance values. This irregular behavior can be explained by the poor reinforcement/matrix interaction in the composite material due to the manufacturing method. The presence of wood as reinforcement in the PP matrix generates areas where stress concentrates, which leads to crack initiation and potential failure of the composite material (Nourbakhsh et al., 2010). Stark and Berger (1997) observed that impact resistance increased with particle size for different composite materials. However, this does not fit the results of the present study, in which the matrix is primarily responsible for absorbing the energy produced by the impact. Durowaye et al. (2014) indicated that increasing the amount of wood particles reduces the energy-absorbing capacity of the matrix, which decreases the impact resistance of the composite material, an influence also reported by Yuan et al. (2008).
Statistical analysis indicated that the variables age and particle size exerted a highly significant influence (p ≤ 0.0002); similarly, the double interactions and the triple interaction affected the impact resistance values (p ≤ 0.0004).
Conclusions
Wood age did not have a significant influence, except on the physical property of absorption and the mechanical property of impact resistance.
The proportion of particles in the PP-White Bolaina composite material showed a direct relationship with the physical properties of moisture content, density, swelling and absorption, as well as with the mechanical property of MOE in static bending, while the relationship was inverse with respect to the values of MOR in tension and static bending, as well as impact resistance.
The particle size in the PP-White Bolaina composite showed a direct relationship with the values of MOR and MOE in tension. | 2020-03-05T10:57:55.120Z | 2020-01-27T00:00:00.000 | {
"year": 2020,
"sha1": "d160916b2b6c582f024f146b53d3c5172ef7cd0c",
"oa_license": "CCBYNC",
"oa_url": "https://cienciasforestales.inifap.gob.mx/index.php/forestales/article/download/621/1847",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4e21ccaacebe78997a663cd36d1447ef7b33eb71",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
220497916 | pes2o/s2orc | v3-fos-license | Numerical Modeling Shows Increased Fracturing Due to Melt-Undercutting Prior to Major Calving at Bowdoin Glacier
Projections of future ice sheet mass loss and thus sea level rise rely on the parametrization of iceberg calving in ice sheet models. The interconnection between submarine melt-induced undercutting and calving is still poorly understood, which makes predicted contributions of tidewater glaciers to sea level rise uncertain. Here, we compare detailed 3-D simulations of fracture initiation obtained with the Helsinki Discrete Element Model (HiDEM) to observations, prior to a major calving event at Bowdoin Glacier, Northwest Greenland. Observations of a plume surfacing at the calving location suggest that local melt-undercutting influenced the size of the major calving event. Therefore, several experiments are conducted with various local and distributed (front-wide) undercut geometries. Although the number of undercut experiments is limited by computational requirements, one of the conjectured undercut geometries reproduces the crevasse leading to the observed major calving event in great detail. Our simulations show that undercutting leads to initiation of wider fractures more than 100 m upstream of the terminus, well-beyond the directly undercut region. When combining a moderate distributed undercut with local amplified undercuts at the two observed plumes, fracture initiation also increases in between the local undercuts. Thus, our results agree with previous studies suggesting the existence of a “calving amplifier” effect by submarine melt, both upglacier and across-glacier. Consequently, the simulations show the potentially large impact of submarine melt-induced undercutting on iceberg size.
INTRODUCTION
Marine-terminating outlet glaciers of the Greenland Ice Sheet thin, accelerate and retreat faster than any other part of the ice sheet (e.g., Pritchard et al., 2009; Hill et al., 2017). The Greenland Ice Sheet lost 1,827 ± 538 Gt of ice due to glacier discharge from 1992 to 2018, accounting for 48% of the total mass loss (IMBIE Team, 2019). Future mass loss predictions, and thereby sea level rise predictions, are strongly affected by the representation of marine-terminating outlet glaciers in numerical ice sheet models (Catania et al., 2019; Goelzer et al., 2020). Up to 40% of the uncertainty of sea level rise projections is caused by the uncertainty in calving parameterizations (Bulthuis et al., 2019). Despite recent major advances in modeling calving, it remains challenging to formulate a robust calving law for ice sheet models that calculates mass loss induced by the range of observed calving styles. Since calving is the mechanical detachment of icebergs from the glacier terminus, the location and timing of calving events are determined by fracture initiation and propagation (Benn et al., 2007). Fractures in the ice are both influenced by and affect the stress state of glaciers (Colgan et al., 2016). This interconnection contributes to the complexity of parameterizing calving in large-scale ice sheet models.
Additionally, several physical processes that are known to affect iceberg calving are poorly constrained by observations (Benn et al., 2007). This is particularly the case for calving associated with submarine melting of the ice front, because melting and calving processes are not independent. Melt-induced undercutting of termini may cause an increase in calving by altering stresses up- or across-glacier, a so-called "calving amplifier" effect (O'Leary and Christoffersen, 2013; Benn et al., 2017; Cowton et al., 2019). However, confirming such an effect is difficult since directly observing both submarine calving and melt rates is challenging. Estimated melt rates rely on modeling or hydrographic data taken at a distance from the glacier terminus (Slater et al., 2016). Using these methods, submarine melt rate estimates range from 0.7 to 10 m d −1 for Greenland (Rignot et al., 2010; Sutherland and Straneo, 2012; Xu et al., 2013; Inall et al., 2014). Where glacier runoff reaches the calving front through channels below sea level, buoyant plumes appear that entrain relatively warm seawater and thereby increase submarine melt locally (Jenkins, 2011). Ambient melt, outside of such plumes, was previously thought to be insignificant compared to plume-driven melt (Cowton et al., 2015; Carroll et al., 2016). However, repeat multibeam surveying has yielded ambient melt rates as high as ∼5 m d −1 at LeConte Glacier, Alaska (Jackson et al., 2020), two orders of magnitude higher than predicted by existing plume-melt parameterizations (Jenkins, 2011; Cowton et al., 2015). Therefore, the relative importance of distributed and localized melt and their effect on calving is still unclear.
Besides measuring melt rates, observing submarine glacier front geometries is also challenging. Rare geometry observations have shown that a plume can cause a locally undercut glacier front, with undercut lengths into the glacier that can be as large as the water depth (Fried et al., 2015;Rignot et al., 2015;How et al., 2019). Fried et al. (2015) found that 80% of the terminus of a West Greenland glacier is undercut and they observed many deeply-undercut outlets even for subsurface plumes with small discharge fluxes. Cowton et al. (2019) showed that the location of such local undercuts determines whether submarine melt can act as a calving amplifier. Their model study shows that localized melting near the lateral margins might trigger increased calving of the entire glacier terminus.
At Bowdoin Glacier, Northwest Greenland, a few kilometer-scale calving events form a large part of the annual mass loss by calving (Figure 1, Jouvet et al., 2017; Minowa et al., 2019). Therefore, understanding the mechanisms of such individual major calving events contributes to understanding Bowdoin Glacier's calving behavior. Unmanned aerial vehicle (UAV) surveys revealed the opening of a crevasse prior to a major calving event in 2015, and Jouvet et al. (2017) found that a crevasse penetrating half the glacier thickness was required to cause the observed opening rates. A terrestrial radar interferometer (TRI) installed on a hill opposite the calving front revealed that crevasse opening prior to a major calving event in 2017 was fastest at low tide (van Dongen et al., 2020). Using the ice flow model Elmer/Ice (Gagliardini et al., 2013), we modeled crevasse opening rates by prescribing the observed crevasse location (van Dongen et al., 2020). We identified the water level inside the crevasse as a key driver of modeled crevasse opening rates and found that undercutting may have contributed to destabilizing the calving front. While the mechanisms leading to crevasse opening have been investigated for Bowdoin Glacier in the aforementioned studies, the crack initiation that preconditions calving remains an open question. In this paper, we use the elastic-brittle Helsinki Discrete Element Model (HiDEM, Åström et al., 2014) to study crevasse initiation prior to the major calving event observed in 2017. In contrast to continuum flow models such as Elmer/Ice, discrete element models are capable of modeling ice fracturing processes explicitly (Faillettaz et al., 2011; Åström et al., 2013; Bassis and Jacobs, 2013; Riikilä et al., 2015). We test whether HiDEM is capable of reproducing the initiation of the crevasse responsible for the major calving event and to what extent submarine undercutting is necessary to explain the observed event. HiDEM has been used previously to study the influence of undercutting using both conceptual (Benn et al., 2017) and real-world glacier simulations (Vallot et al., 2018). However, whereas Vallot et al. (2018) compared modeled calving rates to satellite-derived calving rates, we here use high-resolution field observations, which give us a unique opportunity to validate the model results and to improve our understanding of calving mechanisms.
STUDY SITE
Bowdoin Glacier is a marine-terminating glacier located in Northwest Greenland (77° N, 68° W, Figure 1A). The approximately 3 km wide glacier was up to 250 m thick at the calving front in 2013 (Sugiyama et al., 2015). The terminus position was fairly stable from 1987 to 2008, when the glacier started retreating at an average rate of 0.22 km yr −1 (Sugiyama et al., 2015). Since 2013, the calving front has stabilized, but the glacier has been thinning at a rate of 4 m yr −1 (Tsutaki et al., 2016). Bowdoin Glacier's ice flow is characterized by a stagnant region in the southeast and fast flow in the central region, causing a shear zone at the terminus (outlined in Figure 1B). An almost crevasse-free medial moraine is present ∼1 km away from the southeastern glacier margin, close to the zone of highest shear (Figures 1B,C).
In July 2017, the break-off of a 650 m wide, 80 m long iceberg was observed in detail during a field campaign, at least 5 days after the formation of a large surface crevasse (van Dongen et al., 2020). The fracture leading to the major calving event was the only crevasse that crossed both the shear zone and moraine (Figure 1D). A very similar scale event, at the same location across the shear zone, took place in July 2015, 15 days after fracture initiation (Jouvet et al., 2017). For both observed events, a plume was visible on the sea surface at the calving location (Figure 1C, Jouvet et al., 2017). Therefore, submarine melt-induced, local undercutting may have influenced the stress state and thereby the observed calving events. In 2017, a second plume surfaced through the ice mélange in the northwest (Figure 1C). Jouvet et al. (2018) found that in 2016, the northwestern plume originated from Bowdoin Glacier itself, whereas the southeastern plume was fed by discharge from the nearby land-terminating Mirror Glacier. The plume's origin was recognized because it appeared approximately 24 h after the outburst of an ice-dammed, marginal lake fed by a river transporting meltwater from Mirror Glacier (Figure 1A).
Crevasse Detection
To facilitate the comparison of model results and observations, fractures are extracted from a 0.5 m resolution UAV-derived ortho-image of 5 July 2017 ( Figure 1C). Various edge detection algorithms have been tested, but a simple threshold on intensity of the gray-scale ortho-image was the most successful in producing a crevasse map while limiting extraction of false positives (shadows, debris on the glacier etc.). Fractures are extracted by selecting the pixels with intensity below 30% from the ortho-image.
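A minimal sketch of this threshold extraction is given below; the file name is a placeholder, and 0.30 corresponds to the 30 % gray-scale intensity threshold described above.

```python
import numpy as np
from PIL import Image

# Load the ortho-image as gray-scale intensities in [0, 1].
ortho = np.asarray(Image.open("ortho_20170705.tif").convert("L"), dtype=float) / 255.0

# Pixels darker than 30 % intensity are classified as fractures.
crevasse_mask = ortho < 0.30
print(f"crevasse pixel fraction: {crevasse_mask.mean():.4f}")
```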
Model
We use the Helsinki Discrete Element Model (HiDEM, Åström et al., 2013, 2014) to study crevasse initiation prior to the observed large calving in 2017. HiDEM models ice as a brittle-elastic solid. Dynamics is induced by elasticity, fracture, and sliding. The model neglects viscous deformation, which can be ignored during fracture initiation and calving due to the distinct deformation timescales involved (∼10 2 − 10 6 s for viscous deformation compared to ∼10 −2 s for crack propagation, Benn et al., 2017).
HiDEM represents ice as an assemblage of particles, arranged to form glacier geometries. Neighboring particles are connected by massless breakable beams, that act as rigid joints keeping particles together. Regardless of whether two particles are connected by a beam, they interact via inelastic contacts that dissipate energy through a damping force. The version of HiDEM used in this study (doi: 10.5281/zenodo.1402603) most closely resembles the one described in Åström et al. (2014). A detailed model description is given in the Supplementary Material. Extensive benchmarking and validation of the model are reported in Riikilá et al. (2015) and Åström et al. (2013).
HiDEM computes the displacement of each particle using a discrete version of Newton's equation of motion with dissipation terms (Equation S1). Calculating the trajectory of each particle is computationally expensive, which restricts the duration of HiDEM simulations. During initial simulations, it became clear that the short timescale of our HiDEM runs (5 s) is sufficient for fractures to occur, but insufficient for sliding. Without glacier dynamics by sliding, only very limited fracture initiation occurs. To be able to simulate at least moderate sliding, the friction parameter (C in Equation S1) was rescaled by a factor of 10 −5 . With this rescaled friction coefficient, a few seconds of HiDEM simulation reproduce an amount of glacier sliding and fracturing that would normally require tens of hours. Only basal friction is scaled; all other forces such as gravity and particle interactions remain unscaled.
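Conceptually, the friction rescaling enters a damped equation of motion of the form m dv/dt = F − C v. The sketch below illustrates this with a simple explicit update; the actual HiDEM integration scheme and force terms (Equation S1) are given in the Supplementary Material, so this is illustrative only.

```python
import numpy as np

FRICTION_SCALE = 1e-5  # rescaling applied to the basal friction coefficient only

def step(x, v, mass, force, C, dt):
    """One explicit step of m*dv/dt = F - (scaled C)*v; gravity and particle
    interactions would enter through `force` unscaled."""
    a = (force - FRICTION_SCALE * C * v) / mass
    v = v + a * dt
    return x + v * dt, v
```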
To start a simulation, the glacier is divided into a hexagonal close-packed lattice of spheres of equal size. Initially, 10% of the beams are randomly selected and broken, representing small pre-existing cracks in the ice. Because the initial arrangement of particles is an undeformed lattice, there is no load on the beams to counter the forces induced by gravity and buoyancy. Therefore, the initially imposed lattice must deform slightly to reach a force-equilibrium before fracture initiation can be modeled. We first switch on the dynamics, including sliding but without fracture, and let the glacier deform under its own weight. The settling of particles toward force-equilibrium initially causes some internal oscillations that must be damped out before the model ice can be allowed to break. After 10 s of simulation, the kinetic energy of the glacier (Equation S3), mainly induced by these oscillations, is reduced by more than an order of magnitude (Figure S1). The mean particle displacement in this phase is less than 10 cm in the horizontal direction and less than the particle size in the vertical direction (<2 m). Once the system has reached force-equilibrium, the load on beams can be expected to model forces within the glacier in a realistic manner. This is a suitable initial state for fracture computations and fracturing is allowed to occur in the second phase. Beams can break if the strain on a beam exceeds a fracture threshold, either by tension or bending (Equation S2). Fractures are irreversible; there is no reconnecting of beams. If all beams were to break, the particle motion would represent granular flow.
It should be noted that not all observed crevasses are formed instantaneously; many crevasses formed earlier and were advected to the terminus. Mottram and Benn (2009) measured strain rates across crevasses on a glacier in Iceland and found that almost half of the tested crevasses (19 out of 44) were closing due to compressive stress. Since relict crevasses may no longer be in equilibrium with prevailing stresses, it is expected that HiDEM does not reproduce such fractures. However, the one crevasse that was observed to lead to calving was located in the shear zone, which is normally crevasse free (Figures 1C-E). This crevasse was therefore very likely formed in place, similar to observations in 2015 (Jouvet et al., 2017), and is thus suitable to study by HiDEM simulations.

FIGURE 2 | HiDEM input data on a rotated coordinate system, equal to the coordinate system of the HiDEM simulations. (A) Friction distribution overlayed on the July 5 ortho-image, showing low friction (blue, 3 × 10 9 kg s −1 ) and high friction (red, 1 × 10 11 kg s −1 ). (B) Glacier thickness and (C) topographic depressions, calculated as the difference of the DSM and a smoothed surface obtained from the DSM by a 2-D median filter with a 51 × 51 kernel.
Model Domain
The computational domain extends to 500 m upstream of the calving front ( Figure 1A). We use a highly detailed, 1 m resolution, Digital Surface Model (DSM) of 5 July 2017 obtained by UAV photogrammetry and bed elevation is derived from radar and sonar measurements (van Dongen et al., 2020). Glacier thickness in the model domain is shown in Figure 2B. The glacier front is grounded but close to flotation (van Dongen et al., 2020). Since we use a high resolution DSM, topographic depressions are present in the initial geometry. Figure 2C shows these depressions, calculated as the difference of the DSM and a smoothed surface obtained from the DSM by a 2-D median filter with a 51 × 51 kernel. We find that the topographic depressions do not affect the location and orientation of modeled fractures significantly, compared with a simulation imposing the smoothed surface (section 2.2 of Supplementary Material). Figure 3 shows the HiDEM domain after the relaxation phase, for a particle size of 1.75 m using ∼50 million particles. A sensitivity analysis was done to find the optimal particle size, defined as particle diameter, that is computationally feasible and realistically reproduces fracturing on Bowdoin Glacier (section 2.1 of Supplementary Material). For large particles (≥4 m), hardly any fractures initiate. For small particles (≤2 m), the glacier becomes fragile and fracture initiation dominates the modeled velocity. Therefore, we use 2.5 m particles such that the model reproduces both fracture initiation and observed velocities.
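The topographic-depression map referred to above (and in the Figure 2 caption) can be reproduced with a standard 2-D median filter; a sketch is given below, with the DSM file name as a placeholder.

```python
import numpy as np
from scipy.ndimage import median_filter

dsm = np.load("bowdoin_dsm_1m.npy")      # 1 m resolution surface elevation grid
smoothed = median_filter(dsm, size=51)   # 51 x 51 kernel, as described in the text
depressions = dsm - smoothed             # negative values mark local depressions
```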
Boundary Conditions
Particles at the upstream and lateral boundaries are fixed in the horizontal plane. Unless specified otherwise, the glacier front is assumed to be vertical. A buoyancy force is applied to all ice particles below sea level, not only the particles at the calving front. This is equivalent to applying a water pressure at the calving front, the floating ice base and inside crevasses (as explained in the Supplementary Material). Basal friction is applied as damping force, which is linearly related to particle velocity (Equation S1). We apply a simple friction distribution, based on satellite-derived velocity. A shear line is identified where the highest velocity gradients are observed ( Figure 1B). Two different friction values are applied on either side of the shear line: a low friction of 3 × 10 9 kg s −1 and high friction of 1 × 10 11 kg s −1 (Figure 2A). The values are averaged friction values found by inverting for basal friction based on the satellite-derived velocity using the ice flow model Elmer/Ice (van Dongen et al., 2020). The values of all model parameters are listed in Table S1.
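The two boundary conditions above can be summarized as follows; the shear-line test and the seawater density are placeholders, since the mapped shear line and the exact buoyancy formulation are not reproduced here.

```python
LOW_FRICTION, HIGH_FRICTION = 3e9, 1e11   # kg s^-1, values from the text
RHO_SEAWATER, G = 1028.0, 9.81            # assumed seawater density; g in m s^-2

def basal_friction(x, y, east_of_shear_line):
    """Dual friction field: high friction east of the shear line, low elsewhere."""
    return HIGH_FRICTION if east_of_shear_line(x, y) else LOW_FRICTION

def buoyancy_force(z_centre, particle_volume):
    """Upward buoyancy on a fully submerged particle (z = 0 at sea level)."""
    return RHO_SEAWATER * G * particle_volume if z_centre < 0.0 else 0.0
```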
Experiment Design
We define a control simulation, which has the dual basal friction distribution shown in Figure 2A and a vertical ice cliff. Besides the control simulation, we test the influence of the basal conditions and undercutting.
Usually, no fractures are present in the shear zone (Figures 1B,C), similar to what is observed at Antarctic suture zones (e.g., Hulbe et al., 2010). However, the fractures leading to the major calving events in 2015 and 2017 did cross the shear zone. Therefore, it has been argued that the influence of the shear zone on fracture formation is important for understanding the observed calving event (Jouvet et al., 2017; van Dongen et al., 2020). To assess the influence of the observed high shear, we compare our control simulation to a set-up with the low friction value everywhere.
Since a plume surfaced at the calving location, we expect that local undercutting influenced the major calving event. Therefore, several experiments are conducted with a submarine melt-induced undercut. As there has been no in-situ measurement of the shape of the submarine ice cliff at Bowdoin Glacier, we have to assume ice front geometries based on observations at other glaciers (Fried et al., 2015; Rignot et al., 2015; How et al., 2019). We assume linear undercuts reaching up to sea level, similar to those used by Benn et al. (2017). We vary the undercut length (UC), which is defined as the distance upstream the undercut reaches, and let it depend on the local water depth (D w , Figure 4). Three different types of undercuts are introduced: a distributed undercut along the entire calving front, local undercuts where plumes are observed through the ice mélange (Figure 1C), or a combination of a smaller distributed and a larger local undercut. An overview of the simulations is given in Table 1 and the undercut geometries are outlined in red in Figures 6B-E.

(Table 1 note: heterogeneous basal friction means that friction is as shown in Figure 2A, whereas homogeneous means that low friction (3 × 10 9 kg s −1 ) is applied over the entire domain; distributed undercuts are applied along the entire calving front, whereas local undercuts are applied where plumes are observed through the ice mélange, Figure 1C.)
The maximum applied distributed undercut in our simulations is D w /4. Observed undercut lengths averaged across a terminus, hence comparable to our distributed undercut, vary from D w /10 to D w /3 for Greenlandic glaciers (Fried et al., 2015; Rignot et al., 2015). The largest reported local undercut lengths also vary widely per glacier: from D w /12 for Tunabreen (Svalbard) and D w /2 for Kangilernata Sermia to D w for Kangerlussuup Sermia, Store Gletscher, and Rink Isbrae (Fried et al., 2015; Rignot et al., 2015; How et al., 2019). Because of high computational demands, the number of experiments is severely limited and not the entire range of observed local undercuts has been simulated.
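A linear undercut "reaching up to sea level" can be written as a front recession that is largest at the bed and tapers to zero at the waterline; the exact taper used in the simulations is not specified here, so the linear profile below is an assumption consistent with the shape attributed to Benn et al. (2017).

```python
def front_recession(z, water_depth, uc_fraction):
    """Horizontal recession of the ice front at elevation z (z <= 0 below sea
    level): zero at the waterline, uc_fraction * water_depth at the bed."""
    if z >= 0.0 or water_depth <= 0.0:
        return 0.0
    undercut_length = uc_fraction * water_depth
    return undercut_length * min(-z, water_depth) / water_depth

# With D_w = 200 m and UC = D_w/4, the recession at the bed is 50 m:
print(front_recession(-200.0, 200.0, 0.25))  # -> 50.0
```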
Observed Crevasse Distribution
Detected crevasses are shown in Figure 5A. Besides crevasses, a few dark features are also extracted, such as the medial moraine and some patches with debris close to the moraine, but the criterion is chosen such that shadows are not extracted. Figure 5A shows four regions of different fracturing patterns. Long, transverse crevasses are observed in the fast flowing area (1250 ≤ x ≤ 2500 m). Close to the western glacier margin (x > 2500 m), narrow along-flow crevasses are visible besides wider across-flow crevasses. Except for the crevasse leading to calving (Figures 1C-E), very few crevasses are observed in the shear zone close to the moraine (850 ≤ x ≤ 1250 m). Close to the eastern margin (x ≤ 850 m), crevasse density increases again but the fracturing pattern is more chaotic than in other regions of the glacier terminus. Since only minor sliding is expected in this area ( Figure 1B, Jouvet et al., 2017), these crevasses are presumably produced under a dynamical regime of slow stretching mostly due to viscous deformation.
The same four regions (western marginal, central, shear zone and eastern marginal) will also be referred to in comparisons of model results versus observed fracturing pattern. We have computed the density of black pixels in Figure 5A per region, as a measure of observed crevasse density (both abundance and width), as shown in Table 2. For the calculation of crevasse density in the shear zone, the moraine-covered area (1050 ≤ x ≤ 1100 m) is excluded since the gray-scale threshold falsely detects it as a crevasse. The crevasse densities in Table 2 for each region are given relative to the density in the entire domain. Table 2 shows that the observed crevasse density is highest in the western marginal area, closely followed by the central area. The observed crevasse density is lowest in the shear zone.
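A sketch of this per-region density computation is given below; the binary crevasse mask and x-coordinates are assumed inputs, and the western limit of the domain is a placeholder value.

```python
import numpy as np

REGIONS = {                       # x-ranges (m) from the text
    "east_margin": (0, 850),
    "shear_zone": (850, 1250),
    "central": (1250, 2500),
    "west_margin": (2500, 3000),  # western limit is a placeholder
}
MORAINE = (1050, 1100)            # excluded from the shear-zone count

def region_density(mask, x_coords, x_range, exclude=None):
    """Fraction of crevasse pixels in an x-band of the rotated grid."""
    sel = (x_coords >= x_range[0]) & (x_coords < x_range[1])
    if exclude is not None:
        sel &= ~((x_coords >= exclude[0]) & (x_coords < exclude[1]))
    return mask[:, sel].mean()

# Relative density, as in Table 2, e.g. for the shear zone:
# region_density(mask, x, REGIONS["shear_zone"], exclude=MORAINE) /
#     region_density(mask, x, (0, 3000))
```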
In sections 4.2 and 4.3 we assess whether HiDEM reproduces the observed crevasse pattern. Subsequently, section 4.4 addresses the influence of undercut geometries on the initiation of the crevasse that was observed to lead to major calving (Figures 1C-E), and other crevasses close to the front that could induce calving.
Control Simulation
All simulations are run for 5 s in the second simulation phase, when fracturing is allowed. The 5 s of modeled glacier dynamics resemble the amount of sliding that is observed in approximately one day (Figure 5D). This is as expected, since the basal friction coefficient is scaled by a factor of 10 −5 and the sliding velocity is approximately inversely proportional to the friction coefficient (Equation S1). As such, we can interpret the modeled fractures as representing the amount of fractures that initiate during approximately one day. HiDEM reproduces the observed high shear close to the moraine (Figure 5D). The velocity distribution is partly characterized by fracture initiation, visible as discontinuities in the modeled velocity field (Figure 5D).
Modeled fracture strain on the surface is shown in Figure 5B. The strain is only shown for broken bonds where the strain is at least ten times the fracture strain. Strain magnitude reflects fracture width: if the strain is 1, this corresponds to a fracture width equal to the original bond length, which is slightly larger than the particle size (2.8 m). Generally, modeled fracture orientation agrees with observations, but the modeled fracture density is much lower than observed. Fractures are mainly initiated in the central area (Table 2), which matches the area where the long, transverse fractures are observed (Figure 5A). However, the fractures do not extend as far to the west as observed. This results in a low modeled fracture density in the western marginal area, which contradicts observations (Table 2). Very few crevasses are modeled in the shear zone, which is consistent with observations, and the few crevasses modeled there are narrow (low strain in Figure 5B). One of the modeled fractures is similar to the crevasse that led to the observed calving event, but only along half of its extent. In the almost stagnant eastern marginal area, a very low fracture density is modeled, contrary to observations, but this should be expected because our model set-up ignores viscous deformation (see sections 4.1 and 5).

(Table 2 note: in the first two rows, densities are given relative to the density in the total domain; for all other rows, densities in each column are given relative to the density in the same column for the control simulation. The observed fracture density is derived from the number of black pixels per m 2 in Figure 5A, excluding the area of the moraine (1050 ≤ x ≤ 1100 m). The modeled fracture densities of each simulation are derived from the number of broken bonds per m 2 . Labels of the simulations are explained in Table 1.)
Basal Conditions
The low friction set-up causes significantly more fractures (Figure 5C): the proportion of broken bonds is almost twice as high as in the control set-up (Table 2). By lowering the friction in the eastern marginal area, the ice velocity is generally higher than in the control simulation (cf. Figures 5D,E). Hence, the increased glacier sliding causes increased fracture initiation. At first sight, the low friction set-up therefore does a better job in reproducing the observed fractures than the control simulation, which showed a lower fracture density. Especially in both marginal areas, more fractures are modeled in comparison to the control simulation (Table 2). However, we do not expect our model set-up to reproduce crevasses in the eastern marginal area. Therefore, we do not interpret the higher modeled fracture density in the east as an improvement compared with the control simulation. Furthermore, the low friction set-up no longer reproduces the almost stagnant area in the east. Almost four times as many fractures initiate in the low friction set-up in the shear zone (Table 2), where very few fractures are observed. The shear zone is of main interest in this study, since calving was observed there. Because the control simulation better reproduces the shear zone, all subsequent simulations assume the friction distribution as in the control simulation (Figure 2A), despite the better reproduction of the western marginal area in the low friction set-up.
Melt-Induced Undercutting
Four different undercut geometries are applied (see Table 1). Moderate (UC = D w /8) or larger (UC = D w /4) distributed undercuts are applied along the entire calving front, as well as a local undercut (UC = D w /4, Figure 4). Finally, a local undercut (D w /4) at the plumes, which gradually decreases to a moderate distributed undercut (D w /8) everywhere else, is applied. For all simulations, the surface strain is shown in Figures 6B-E. Comparing Figure 5D (10-15 s velocity average for the control simulation) and Figure 6F (14-15 s velocity average for the same simulation), the 10-15 s averaged velocity is dominated by smooth sliding, whereas the velocity from 14 to 15 s is dominated by discrete fracturing. Since we are mainly interested in the fracturing for these undercut simulations, the velocity during the final second of simulation is shown, such that the fractures that are actively opening are visualized. The quantities of modeled broken bonds, wide crevasses and crevasses in the shear zone, relative to the control simulation, are given in Table 2.
The model results suggest that the larger distributed undercut (UC = D w /4) destabilizes the entire glacier terminus. Figure 6B not only shows more surface fractures but also higher strain, hence wider fractures (more than forty times as many wide fractures, Table 2). The velocity cross-sections furthermore show that fractures extend to the ice base and ice chunks are in the process of rapidly detaching up to 200 m upstream, across the whole terminus ( Figure 6G). As such, the modeled fractures can be interpreted as a precursor to a very large calving event which spans almost the whole glacier width. On the other hand, the moderate distributed undercut has a very limited effect on fracture initiation (Figure 6C, four times as many wide fractures, Table 2) and velocity ( Figure 6H). Only in the western marginal area, a few fractures are initiated where no fractures were modeled in the control simulation ( Figure 6A), but the fracture density in the west is still lower than observed.
The modeled surface strain shows that a local undercut ( Figure 6D) does not affect fracture initiation much. Fracturing is still limited in the west and the quantity of modeled fractures is very similar to the control set-up (Table 2). However, the combined effect of a local larger undercut, gradually decreasing to a distributed moderate undercut, produces wider fractures than either the local or distributed moderate undercut ( Figure 6E). The total increase of fractures is slightly larger than for the distributed D w /4 undercut, but fewer wide crevasses are modeled (less than half the increase, Table 2). The wider fractures do not extend as far upstream the calving front as for the larger distributed undercut (cf. extent of yellow in Figures 6B,E) and fractures are opening less rapidly (cf. Figures 6G,J). For the combined local and distributed undercut, one wide fracture outlines the observed fracture that lead to calving (Figure 6E). The velocity cross-sections show that a fracture extends to the glacier base near the northwestern plume and the ice chunk near the southeastern plume that was observed to calve off is detaching (Figure 6J).
The model results of the combined local and moderate distributed undercut are compared with observations in Figure 7. The observed and modeled velocity show a similar discontinuity where calving is observed (Figures 7A,B), although the velocity distributions do not agree. Whereas the iceberg is observed to have the highest detachment velocity in the southeast, the modeled velocity is lower there. Figure 7C shows that the modeled fracture does not exactly follow the applied undercut length, but is initiated further upstream, close to where calving is observed (Figure 7D). Both the distributed D w /4 undercut and the combined local and distributed undercut results show a major crevasse in close alignment with the crevasse that was observed to lead to major calving. In order to quantify the difference between the modeled and observed crevasse for both simulations, we calculated the area between the closest modeled crevasse and the observed crevasse. We divided this area by the observed crevasse length to get the average distance between the modeled and observed crevasse. For the distributed UC = D w /4, the closest crevasse is on average 15.3 m away from observations, whereas the combined local and distributed undercuts give a crevasse on average only 6.5 m from the observed one, less than three times the particle size.
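This mismatch metric (area between the two crevasse traces divided by the observed crevasse length) can be computed from the digitized polylines, for instance with shapely; the polygon construction below assumes the two traces do not cross.

```python
from shapely.geometry import LineString, Polygon

def mean_offset(modeled_xy, observed_xy):
    """Average distance (m) between a modeled and an observed crevasse trace:
    area enclosed between them divided by the observed trace length.
    Assumes the two polylines do not intersect each other."""
    observed = LineString(observed_xy)
    ring = list(modeled_xy) + list(reversed(list(observed_xy)))
    return Polygon(ring).area / observed.length
```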
DISCUSSION
In an earlier study (van Dongen et al., 2020), the relative importance of several physical processes that could affect crevasse opening was investigated by comparing ice flow model results to observations. Crevasse water level and thus hydro-fracturing was found to be a first-order control on opening rates. Submarine melt-undercutting was identified as a second-order process, possibly accelerating opening rates. However, the previous study only addressed opening of the crevasse leading to calving, after it had initiated. Here, we investigate fracture initiation, using the elastic-brittle model HiDEM. The simulations serve to increase our understanding of the calving pattern observed at Bowdoin Glacier and to assess the effect of melt-undercutting.
The high-shear zone in the southeast, close to the medial moraine (Figure 1B), has been suggested to influence the calving pattern of Bowdoin Glacier (Jouvet et al., 2017). HiDEM produces high shear when using a dual basal friction distribution with higher friction in the slow-flowing area ( Figure 5D). The almost crevasse-free area in the shear zone is in this case well reproduced ( Figure 5B). On the other hand, if applying low friction everywhere, the fracture density increases by a factor of almost four in the shear zone ( Table 2). These results support the suggested importance of basal conditions to explain the observed fracturing pattern in the shear zone.
Besides geometry and basal friction, the only model input consisted of conceptual ice-cliff profiles, based on locations of plumes at Bowdoin Glacier and measured ice-cliff profiles at other glaciers (Fried et al., 2015; Rignot et al., 2015; How et al., 2019). Due to high computational demands, only a small set of geometries could be tested. We demonstrate that HiDEM nevertheless manages to closely reproduce the fracture initiation prior to the observed large calving of 8 July 2017, on average to within 6.5 m of the observed crevasse, as shown in Figure 7.
The HiDEM results suggest that the modeled fracture initiation and thus the calving behavior are strongly controlled by submarine melt-induced undercuts (Figure 6). When applying a large distributed undercut (UC = D w /4 along the entire calving front), HiDEM predicts collapse of almost the entire width of the ice cliff ( Figure 6G). As such collapse is not observed, we interpret the applied undercut to be unrealistically large, since the simulation suggests that calving would have occurred before the undercut could grow this large. The impact on fracturing is limited when applying a moderate distributed undercut (UC = D w /8) or local undercuts restricted to plumes (UC = D w /4, Figures 6C,D,H,I). However, HiDEM reproduces the observed fracture that lead to calving very closely when the moderate distributed undercut is combined with larger local undercuts (Figures 6E,J, 7), although we cannot exclude that other combinations of local and distributed undercuts not tested here could have produced a similar result. The simulation combining a moderate distributed and larger local undercuts also shows fractures near the western plume that could lead to calving ( Figure 6E). However, these fractures are narrower and do not extend all the way to the front, which suggests that they would not lead to detachment of an iceberg yet ( Figure 6E). Sentinel-2 imagery confirms that calving occurred in this region between July 30 and August 19.
The assumed undercut lengths are in the range of observed undercuts for West Greenlandic glaciers, where the majority of the calving front is undercut (range of distributed UC from D w /10 to D w /3) and plumes cause local deeper undercuts (up to D w , Fried et al., 2015; Rignot et al., 2015). The occurrence of a calving amplifier (O'Leary and Christoffersen, 2013) can be examined by comparing the upstream extent of the applied undercuts to the modeled initiation of wider fractures (strain > 0.2, yellow in Figures 6B-E). In our simulations, increased fracturing is limited in the case of a moderate distributed undercut (D w /8) or larger local undercuts (D w /4, Figures 6C,D). With a larger distributed undercut, wider fractures are initiated over 200 m upstream in the central part of the terminus, more than four times further than the undercut itself (D w /4, Figure 6B). Besides that, the combination of a larger local and a moderate distributed undercut increases fracture initiation over 100 m upstream, both at and in between the local undercuts (Figure 6E). Hence, our results exhibit the calving amplifier effect both upglacier (as in O'Leary and Christoffersen, 2013) and across-glacier (as in Cowton et al., 2019). However, the calving amplifier does not appear in our simulations for local undercuts alone (cf. Figures 6A,D). This is in line with Todd et al. (2018), who used a crevasse-depth calving model to show that distributed undercutting most strongly affects retreat (cf. Figures 6A,B), whereas concentrated melting generally has little influence on fracturing (cf. Figures 6A,D), unless a plume is situated at a "keystone", where stress bridges provide lateral support to the ice front.
The findings of our highly detailed simulations agree with previous HiDEM studies, which showed that undercutting is necessary to explain satellite-derived mean volumetric calving rates for Kronebreen (Svalbard, Vallot et al., 2018) and that sufficiently large undercuts may induce calving lengths of several times the undercut length for conceptual glacier geometries (Benn et al., 2017). Whether an undercut can grow large enough to act as an amplifier may depend on the frequency of low-magnitude calving events (Benn et al., 2017). If small calving events are rare and an undercut is able to grow, instability builds up and the terminus may approach a critical state, which increases the probability of large calving events. This process can also be described by self-organized criticality (SOC, Åström et al., 2014). SOC systems have a sub-critical regime, in this case distinguished by infrequent and small calving events that allow an instability to build up with time, and a super-critical regime, distinguished by large calving events and widespread relaxation of the instability. Our simulations with a small or no undercut show typical sub-critical behavior, characteristic of quiescent periods of calving. In contrast, the larger distributed undercut shows super-critical collapse of the entire calving front, which is unlikely to happen in nature. This explains the behavior of Bowdoin Glacier, which shows infrequent large calving events, such as those observed in recent years (Jouvet et al., 2017; Minowa et al., 2019), that relax its terminus back to sub-criticality and let undercuts grow again by submarine melting, destabilizing the front and yielding a cyclic calving pattern.
Our results thus confirm a complex interaction between a distributed melt-undercut along the entire ice cliff and enhanced melt-undercutting at the locations of plumes. Similar detailed observational and modeling studies are required at more glaciers to quantify the links between melt-undercutting and calving at tidewater glaciers such as Bowdoin.
A major limitation of our modeling approach is the short simulation duration (a few seconds, corresponding to approximately one day of sliding), which does not permit modeling of the entire calving event from fracture initiation to detachment of the iceberg (at least five days according to observations).
A shortcoming of our control simulation is the relatively low modeled fracture density in the western marginal area, both compared with the low friction set-up and with observations. Most likely, this lower modeled fracture density can be explained by the modeled ice velocity, which is lower not only in the eastern marginal area compared with the low friction simulation, but across the terminus (Figure 5E). Comparison of modeled and observed velocity shows that the velocity is underestimated in the western marginal area for both the control and low friction set-up. Whereas the area of high velocity is observed to extend to the western margin, modeled velocities in the west are lower than in the central area, which can presumably explain the low modeled fracture density in the west (compare Figures 5D,E, 1B).
Furthermore, the modeled density of surface fractures is generally lower than observed (Figure 5A), which can have two causes. First, fracture simulations lack history, whereas part of the observed crevasses are formed upstream and advected to the terminus. Future work could employ more detailed observations to distinguish actively opening crevasses from relict crevasses which are not in equilibrium with prevailing stresses. This would allow a more quantitative comparison of observations of active crevasses and modeled fracture initiation, whereas our comparisons remain rather qualitative.
Second, viscous deformation is not included in the simulations. The force imbalance at the ice-cliff is key to understanding the emergence of surface crevasses at a glacier terminus. Because the outward-directed cryostatic pressure is greater than the inward-directed hydrostatic pressure, viscous stretching is required to balance the gradient in longitudinal stress (Benn et al., 2007). Consequently, viscous stretching will increase tensile fracture at the glacier surface. This is especially the case in the almost stagnant eastern marginal area, where only minor sliding is expected, and fractures are presumably mainly induced by viscous stretching. Employing a viscoelastic rheology for ice would therefore improve modeling work of this kind.
CONCLUSIONS
This study investigated the calving mechanisms of Bowdoin Glacier, Northwest Greenland. A major calving event from summer 2017 was studied, by comparing numerical calving model simulations with observations. Using the Helsinki Discrete Element Model (HiDEM), we modeled fracture initiation prior to the calving event. The HiDEM results show that the fracturing pattern is strongly controlled by basal conditions and undercuts induced by submarine melt. The almost stagnant area on the eastern glacier margin creates a shear zone in which very few fractures initiate. On the other hand, we find that submarine melt-induced undercutting may amplify calving. Experiments with various undercut geometries show that the modeled increase in fracture initiation generally reaches further upstream than the applied undercut. However, the interaction between the undercut geometries and fracture initiation is complex. Local undercuts at the plumes alone do not increase fracturing, whereas combination with a smaller distributed, front-wide undercut leads to wider fractures across the terminus, both at and in between the plumes. Therefore, it is complicated, if not impossible, to quantify the modeled interaction between undercutting and fracturing towards a parameterized calving law. Nonetheless, our results show the importance of submarine meltinduced undercutting for calving behavior at grounded tidewater glaciers such as Bowdoin and motivate further detailed studies on this topic.
DATA AVAILABILITY STATEMENT
The input files for HiDEM (geometry and simulation parameters) are available at https://doi.org/10.5281/zenodo.3872912.
AUTHOR CONTRIBUTIONS
ED and GJ designed the study. ED carried out the numerical simulations and prepared the figures with support from JÅ, JT, and DB. MF organized the fieldwork at Bowdoin Glacier. ED drafted the manuscript with support from GJ and JÅ. All authors contributed to the final version.
FUNDING
This research was part of the Sun2ice project (ETH Grant ETH-12 16-2), supported by the Dr. Alfred and Flora Spälti and the ETH Zurich Foundation. Fieldwork was funded by the Swiss National Science Foundation, grant 200021-153179/1. The HiDEM simulations were performed under the Project HPC-EUROPA3 (INFRAIA-2016-1-730897), with the support of the EC Research Innovation Action under the H2020 Programme; in particular, the authors gratefully acknowledge the computer resources and technical support provided by CSC-IT Centre for Science in Finland. DB and JT were funded by NERC grant NE/P011365/1 (CALISMO: Calving Laws for Ice Sheet Models).
ACKNOWLEDGMENTS
We thank the members of the 2017 field campaign on Bowdoin Glacier and in particular Shin Sugiyama for co-organizing the expedition and Andrea Walter for collecting and processing the TRI data and operating the UAV. We thank Fabian Walter for valuable discussions and supervision. We acknowledge Julien Seguinot for providing Sentinel-2A satellite images processed with Sentinelflow (Seguinot, 2018). The simulated graphics in Figure 3 were rendered by Jyrki Hokkanen (CSC-IT Centre for Science). The authors thank the editor and the referees for their constructive comments, which helped to improve the manuscript. | 2020-07-14T13:02:25.530Z | 2020-07-14T00:00:00.000 | {
"year": 2020,
"sha1": "c5a65d18426816bf3b8a25cf7610c3b4f2c77acd",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3389/feart.2020.00253",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "c5a65d18426816bf3b8a25cf7610c3b4f2c77acd",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
} |
239021348 | pes2o/s2orc | v3-fos-license | Targeting Sphingolipids for Cancer Therapy
Sphingolipids are an extensive class of lipids with different functions in the cell, ranging from proliferation to cell death. Sphingolipids are modified in multiple cancers and are responsible for tumor proliferation, progression, and metastasis. Several inhibitors or activators of sphingolipid signaling, such as fenretinide, safingol, ABC294640, ceramide nanoliposomes (CNLs), SKI-II, α-galactosylceramide, fingolimod, and sonepcizumab, have been described. The objective of this review was to analyze the results from preclinical and clinical trials of these drugs for the treatment of cancer. Sphingolipid-targeting drugs have been tested alone or in combination with chemotherapy, exhibiting antitumor activity alone and in synergism with chemotherapy in vitro and in vivo. As a consequence of treatment, the most frequent mechanism of cell death is apoptosis, followed by autophagy. Although all these drugs have produced good results in preclinical studies of multiple cancers, the outcomes of clinical trials have not been similar. The most effective drugs are fenretinide and α-galactosylceramide (α-GalCer). In contrast, minor adverse effects restricted to a few subjects and hepatic toxicity have been observed in clinical trials of ABC294640 and safingol, respectively. For CNLs, SKI-II, fingolimod, and sonepcizumab, limitations and a lack of sufficient clinical studies preclude a demonstration of benefit. The effectiveness or lack of a major therapeutic effect of sphingolipid modulation by some of these drugs as a cancer therapy, and other aspects related to their mechanisms of action, are discussed in this review.
INTRODUCTION
Sphingolipids are key structural components of cellular membranes containing a backbone of sphingosine (an aliphatic amino alcohol) as the base of their structures. They are synthesized, metabolized and trafficked among several cell organelles. Sphingolipids are remarkably diverse and have crucial roles in maintaining barrier function and fluidity, as well as in regulating the cell cycle, cell motility, differentiation, adhesion, and apoptosis (1).
De novo generated ceramide is the central hub of the sphingolipid pathway and subsequently has several fates (Figure 2). It is phosphorylated by ceramide kinase (CK) to form ceramide-1-phosphate, or it can be glycosylated by glucosylceramide synthase to form glycosphingolipids (cerebrosides, globosides, gangliosides). In addition, ceramide can be converted to sulfatides by the action of galactosylceramide synthase followed by cerebroside sulfotransferase (CST). Additionally, ceramide is also converted to sphingomyelin by the addition of a phosphorylcholine headgroup by sphingomyelin synthase (SMS). Finally, ceramide may be degraded by ceramidase (CDase) to form sphingosine. Sphingosine may be phosphorylated by sphingosine kinase 1/2 (SPHK1/SPHK2) to form sphingosine-1-phosphate (S1P), which has a prosurvival role and is critical for immunomodulation (1,4,5) (Figure 2). SPHK1/2 are overexpressed in numerous cancer cell types, but catabolic pathways allow the reversion of S1P to ceramide by sphingosine-1-phosphatase (SPP1/2) and ceramide synthase. The complex glycosphingolipids are hydrolyzed to glucosylceramide and galactosylceramide. These lipids are then hydrolyzed by beta-glucosidases and beta-galactosidases (GCDase) to regenerate ceramide. Similarly, sphingomyelin may be degraded by sphingomyelinase (SMase), and ceramide-1-phosphate by ceramide-1-phosphatase (C1PP), to form ceramide (4) (Figure 2).
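As a compact reference, the conversions just described can be encoded as a directed graph; the sketch below does this in Python. It is only an illustration of the reactions named in the text, not a complete map of sphingolipid metabolism.

```python
# Minimal sketch: the ceramide-centred conversions described above as a
# directed graph, (substrate, product) -> enzyme. Names follow the text.
SPHINGOLIPID_PATHWAY = {
    ("ceramide", "ceramide-1-phosphate"): "ceramide kinase (CK)",
    ("ceramide", "glycosphingolipids"): "glucosylceramide synthase",
    ("ceramide", "sulfatides"): "galactosylceramide synthase + CST",
    ("ceramide", "sphingomyelin"): "sphingomyelin synthase (SMS)",
    ("ceramide", "sphingosine"): "ceramidase (CDase)",
    ("sphingosine", "S1P"): "SPHK1/SPHK2",
    ("S1P", "sphingosine"): "SPP1/2",
    ("sphingosine", "ceramide"): "ceramide synthase",
    ("glycosphingolipids", "ceramide"): "beta-glucosidases/GCDase",
    ("sphingomyelin", "ceramide"): "sphingomyelinase (SMase)",
    ("ceramide-1-phosphate", "ceramide"): "C1PP",
}

def routes_from(metabolite: str) -> list[str]:
    """List the enzymatic steps leaving a given metabolite."""
    return [f"{s} -> {p} via {e}"
            for (s, p), e in SPHINGOLIPID_PATHWAY.items() if s == metabolite]

for step in routes_from("ceramide"):
    print(step)
```

Printing the routes leaving "ceramide" reproduces the five fates listed above, which is why ceramide is described as the hub of the pathway.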
In addition to their roles in the organization of the plasma membrane, sphingolipids also play roles as key molecules in signaling processes [for reviews, see (1,4)]. A classic example is the increase in ceramide and sphingosine levels caused by chemotherapy, radiation, and/or oxidative stress and the subsequent induction of apoptosis by these molecules. In contrast, sphingosine-1-phosphate displays antiapoptotic and prosurvival properties. Because some of these enzymes regulate the abundance of sphingolipids, their aberrant expression or activity contributes to cancer (5). Thus, numerous studies have been performed targeting the enzymes that catabolize ceramide, generate S1P, or regulate sphingolipid levels. Generally, different strategies have been used to exploit the potential antitumor effects of sphingolipids. Among them, we highlight the following biological processes: autophagic cell death; apoptosis induction, including mitochondrial activation (mitophagy); proliferation inhibition and cell cycle arrest; and effects on angiogenesis and migration (Figure 3).
CHEMOTHERAPY AND SPHINGOLIPID-RELATED DRUGS
species that ultimately lead to cell survival (7). In this sense, many inhibitors or modulators of sphingolipid metabolism have been developed to kill tumors and reverse chemotherapy resistance (8). These drugs have been employed in preclinical studies using cancer cell lines and orthotopic mouse models, as well as in clinical trials ( Table 1). In the next sections, this review highlights the drugs most frequently used to target sphingolipid signaling, indicates their mechanisms of action and discusses their successes and limitations in preclinical and clinical trials of cancer treatment. The main results from published preclinical and clinical trials are summarized in Table 1.
FENRETINIDE
Fenretinide (N-(4-hydroxyphenyl)retinamide; 4-HPR) reduces the de novo synthesis of ceramide by targeting dihydroceramide desaturase (DES) while inducing an increase in dihydroceramide levels. This enzyme is responsible for the desaturation of dihydroceramide, the final step in the de novo synthesis of ceramide lipid species from dihydroceramide precursors. Dihydroceramides induce autophagy and inhibit cell growth by inducing cell cycle arrest in cancer cells (215,216). In addition to DES, other enzymes are fenretinide targets (e.g., CerS5).
Preclinical studies have indicated the antitumor activity of fenretinide in vitro and in vivo in several tumor types, in the absence of toxicity in mice. However, clinical trials have reported some mild side effects of fenretinide, such as musculoskeletal complaints (55), diarrhea, reversible night blindness, allergic reaction (21), and dermatological disorders (40). Furthermore, fenretinide lacks antitumor activity in most studies (n=13) but has been shown to stabilize the disease or exert protective effects in some cancers (n=6), mostly breast cancer. Fenretinide preferentially accumulates in fatty tissues, such as the breast, which may contribute to its effectiveness against breast cancer (42). Fenretinide has shown a lack of activity against other cancers. For example, fenretinide does not reduce the time to recurrence of renal carcinoma, consistent with low intratumor drug concentrations (33). Additionally, fenretinide does not substantially modulate the levels of several biomarkers in prostate cancer, including transforming growth factor alpha (TGF-α), insulin-like growth factor 1 (IGF-I), insulin-like growth factor binding protein 3 (IGFBP-3), sex hormone binding globulin (SHBG), and prostate-specific antigen (PSA), which is indicative of insufficient biological activity (36,37). The remarkable hydrophobicity of this drug may be one of the factors responsible for its lack of effectiveness in clinical trials. Better formulations, such as encapsulation into nanocarriers for oral administration, have been reported to be a feasible option to increase its activity (13,217).
However, fenretinide induces a positive hormonal (47) and metabolic profile in premenopausal women (50) and exerts a beneficial effect on total serum cholesterol and HDL levels (53). These beneficial effects have been observed in some cancers, such as breast cancer, but not in others, thereby indicating a possible specificity of fenretinide for this tumor type. Interestingly, there are some correlations between oncogenic alterations and the efficacy of this drug. For example, the sensitivity of Ewing's sarcoma cells to fenretinide-induced cell death is decreased following downregulation of the oncogenic fusion protein EWS-Fli1 and of p38(MAPK) activity (218). In addition, fenretinide induced the expression of the oncogene c-Fos, whereas this effect was not observed in cells resistant to fenretinide-induced apoptosis (219).
Also, the combination of fenretinide and ABT-263 (a Bcl-2 inhibitor) induces apoptosis in a large number of HNSCC cells, regardless of human papillomavirus (HPV) or p53 status. The primary targets of apoptosis induced by these drugs are MCL1 (a Bcl-2 family apoptosis regulator) and Bcl-2-like 1 (Bcl-XL) (220). Remarkably, a nanomicellar combination of lenalidomide and fenretinide suppresses tumor growth in MYCN-amplified neuroblastoma, mediated by increased expression of GD2, a disialoganglioside expressed on tumors of neuroectodermal origin (221). Moreover, treatment with a combination of fenretinide, tocilizumab, and reparixin significantly suppresses IL-6 release, IL-8 release, stem cell gene expression, and invasion in CSC populations (222), which may be due to increased ceramide levels and decreased IL-6 and CXCR1/2 levels.
SAFINGOL
Safingol [(2S,3S)-2-aminooctadecane-1,3-diol] is an inhibitor of SPHK1, PKCβ-I, PKCδ, PKCε, PI3K, and glucose uptake (223). Safingol also affects the balance of ceramide/dihydroceramide levels. The inhibitory effects on signaling, particularly on PKCε and PI3K, concomitant with the presence of ROS (67), synergize to induce apoptosis (decreased Bcl-2 levels and increased caspase cleavage) (59,60,62-65,68) and/or autophagy (63,67) (Figure 3). According to preclinical studies, the combination of safingol with conventional chemotherapy agents, such as doxorubicin (67), irinotecan (66), and mitomycin C (65), potentiates their effects, inducing apoptotic cell death and ROS production in different cell lines. Additionally, the administration of safingol in combination with bortezomib inhibits lung tumor growth and metastasis (through the modulation of NF-κB signaling) in orthotopic syngeneic mouse models (69). Unfortunately, hepatic toxicity, renal toxicity, changes in liver histology, and decreases in body weight have been observed in mice treated with safingol (56,57). Two out of two clinical trials have indicated stable disease or minor responses to safingol in a subgroup of patients (73,74). However, hepatic toxicity has been observed in a clinical trial of safingol (73), resulting in few additional clinical trials of this drug. In resistant cancer types, such as gastroesophageal cancer, treatment with the combination of safingol and other chemotherapeutic agents, such as cisplatin, has been proposed to potentially overcome cytotoxic drug resistance. This conclusion is based on the following observations: i) cisplatin resistance correlates with increased SPHK1 expression and with decreased sphingosine-1-phosphate lyase 1 (SGPL1) expression; and ii) an association with survival is observed in patients treated with chemotherapy prior to surgery, but not in patients treated with surgery alone (70).
ABC294640
In vitro studies have indicated that ABC294640 reduces the proliferation and viability of several cancer cell lines and mouse xenografts without toxic side effects. The decrease in proliferation is mediated by inhibition of SPHK2 activity (82,85,97) and S1P depletion (Figure 3). The combination of ABC294640 with other drugs, such as regorafenib, sorafenib, PDMP, and ABT-199, induces synergistic potentiation of the treatment effect, reducing chemoresistance in various cancer types (98,99,103,104). For example, SPHK2/SPP1 arbitrates regorafenib resistance by activating signal transducer and activator of transcription 3 (STAT3) and nuclear factor kappa light chain enhancer of activated B cells (NF-κB). SPHK2 targeting by ABC294640 significantly reduces resistance to regorafenib in an in vivo model of hepatocellular carcinoma (HCC) (104). Overall, only one clinical trial for ABC294640 has been reported, and some reversible toxicities (nausea, vomiting, diarrhea, fatigue and nervous system disorders) were documented. These side effects are likely due to off-target effects. The efficacy evaluation indicated stable disease in a subgroup of patients (40%), partial response (7%) and progressive disease (53%) (93).
CERAMIDE NANOLIPOSOMES
Ceramide nanoliposomes (CNLs) are lipid-based nanoparticle formulations composed of ceramide encapsulated within nanoliposomes. They induce apoptosis in target cells through lysosomal membrane permeabilization, which leads to the leakage of hydrolytic enzymes into the cytoplasm, or by conferring PI3K- and PKCζ-related tumor-suppressive activities (107,224). Interestingly, CNLs have also been reported to target the Warburg effect in chronic lymphocytic leukemia in vitro and in vivo (106). Ceramide alone is insoluble and has a short half-life; therefore, nanoliposomes increase its solubility and half-life. Upon administration, CNLs accumulate in the tumor environment due to enhanced permeation and retention caused by the 'leakiness' of the tumor vasculature (225). No targeting effect on a tumor marker or tropism of CNLs for a particular tissue has been observed. However, one method for increasing the specificity of ceramide derivatives for mitochondria (to induce apoptosis via cytochrome c release) is the introduction of a positive charge on the fatty acid residue by adding a pyridine structure. Pyridine-ceramides localized more readily to the mitochondria, altering their structures and functions and inducing pancreatic cancer cell death (226). Preclinical assays with cell lines and xenografts show that CNLs potentiate the effect of chemotherapy (114-116, 120); reduce tumor proliferation via apoptosis (increased cleavage of PARP and caspases) (110-112, 114, 116-119, 121, 123), autophagy (increased LC3-II and Atg5 levels) (117,122), necrosis (106), necroptosis (109), anoikis (108), mitophagy (mitochondrial membrane permeabilization) (116, 119, 121-123), and cell cycle arrest (increased p53 expression) (116,119); increase ROS levels (110); inhibit lysosomal function (116,122); inhibit integrin affinity (105,107); and target the CD44 receptor (108), survivin (111), PI3K (107,114), MAPK (105,114), mammalian target of rapamycin (mTOR) (112,121), and Akt and Erk1/2 (110,115) signaling (Figure 3). For example, Shaw et al. indicated that the combination of C6-CNLs with chloroquine (an inhibitor of lysosomal function and therefore an autophagy inhibitor) significantly increases apoptosis in response to ceramide by preventing the repair of mitochondrial damage (122).
To our knowledge, two clinical trials have tested the efficacy of CNLs in cancer. In the first trial, only one patient with cutaneous breast cancer manifested a partial response, yielding a response rate of 4% and a median progression-free survival of 2 months. Topical ceramides were also well tolerated, with no grade 3 or 4 toxicities reported (113). Another clinical trial (phase I) with C6-CNLs concluded that the combination of ceramide and vinblastine is safe and has the potential to treat the heterogeneous nature of acute myelogenous leukemia (AML) through the induction of apoptotic pathways (118); therefore, phase II studies may be conducted.
α-GALACTOSYLCERAMIDE (α-GALCER)
The last decade has revolutionized cancer therapy with the development of immunotherapy, producing good outcomes in patients with a fatal diagnosis. α-GalCer (KRN-7000) is a glycosphingolipid and synthetic ligand of iNKT (invariant natural killer T) cells. Dendritic cells are pulsed with α-GalCer and administered to patients to achieve effective presentation to, and activation of, iNKT cells (172). In other approaches, dendritic cells are mixed with iNKT cells or with peptides derived from cancer antigens (154). Dendritic cells (DCs) capture antigens and present them to several types of T cells for their activation. Invariant natural killer T (iNKT/type I NKT) cells are a subset of T cells endowed with innate and adaptive effector functions. They are characterized by the expression of the invariant T cell receptor chain Vα24-Jα18, which recognizes lipid antigens presented by CD1d (229). They exhibit powerful cytotoxic activity mediated by perforin/granzyme B. In addition to their direct antitumor effect, iNKT cells also regulate the damaging activities of NK cells, CD8+ T cells, B cells and innate cells by releasing a wide variety of pro-inflammatory cytokines (153,154,172).
Attempts to improve the efficacy of iNKT-based treatments have focused on transducing iNKT cells with CARs (chimeric antigen receptors) (NCT03294954; NCT03774654), chemically modifying α-GalCer to stabilize its interactions with CD1d, optimizing presentation through encapsulation in particulate vectors, making structural changes that improve binding to CD1d, and injecting agonists covalently attached to recombinant CD1d. Facilitating the formation of resident memory CD8+ T cells could also find a role in this therapy.
FINGOLIMOD
No clinical trials have assessed the effectiveness of fingolimod in cancer, potentially because it impairs cytotoxic CD8+ T and CD4+ T cell trafficking and activation, which precludes tumor infiltration to kill cancer cells. Fingolimod blocks the immunosurveillance of B cells by suppressing the migration of tumor-specific Th1 cells from lymph nodes to the incipient tumor site, thereby preventing Th1-mediated activation of tumoricidal macrophages (235). Furthermore, it impairs the ability of cytotoxic CD8+ T cells to kill their target cells and reduces IFN-γ and granzyme B levels in splenic CD8+ T cells (236,237). Thus, an effective action of this drug in clinical trials is not anticipated, as T cells are the main cells involved in the immune response to tumors.
SONEPCIZUMAB
Sonepcizumab (LT1009) is a humanized monoclonal antibody against S1P. Sonepcizumab slows tumor progression in murine models with orthotopic tumors by blocking the function of proangiogenic growth factors (decreased VEGF, bFGF, and IL-8 levels) and inducing apoptosis (increased caspase cleavage). Additionally, sonepcizumab inhibits tumor vascularization in vitro and in vivo, and it neutralizes S1P-induced stimulation of proliferation in multiple cell lines (213) (Figure 3). A phase II study of sonepcizumab was terminated because it failed to meet its primary progression-free survival endpoint in patients with metastatic renal cell carcinoma who had received three prior therapies. However, researchers were encouraged by the overall survival (21.7 months) and safety profile of sonepcizumab, and they advised "further investigation in combination with VEGF-directed agents or checkpoint inhibitors". Ten percent of patients achieved a partial response, with a median duration of response of 5.9 months. No grade 3/4 treatment-related adverse events were observed in >5% of patients (214).
An increase in systemic S1P concentrations was detected following sonepcizumab treatment, suggesting that S1P signaling was still active, which might explain the limited efficacy of the drug in the clinic. Thus, future studies are needed to improve the neutralization of S1P signaling. In addition, studies testing the efficacy of this drug in combination with SPHK1/2 inhibitors or S1PR2 antagonists are warranted (1).
CONCLUSIONS
Sphingolipid-targeting drugs have been tested against several hematological malignancies and solid tumors, alone or in combination with chemotherapy, and have produced some encouraging results (42,47,48,50,52,54). Treatments targeting sphingolipids exhibit antitumor activity in vitro and in vivo, inducing apoptosis or occasionally autophagy, as well as several other mechanisms of cell death. Among these agents, the most effective and promising treatments in clinical trials are fenretinide and α-galactosylceramide. Some plausible explanations for the partial success of these safe drugs in clinical trials have been proposed. Fenretinide accumulation in breast tissue, along with the induction of apoptosis or autophagy (in caspase-defective breast cancer cells) by dihydroceramide, may be responsible for its success. Researchers presumed that its accumulation in breast tissue (and not in other tissues) might be related to hormone-associated pathways that are active in these cancer types. Regarding α-galactosylceramide, the induction of an antitumor immune response mediated by iNKT, NK, T and B cells is the functional mechanism. Among several anticancer therapies, immune checkpoint inhibitors occupy a relevant place because of the activation of the antitumor function of T cells (238), which indirectly indicates an important role for the adaptive immune system in the efficacy of anticancer treatments. However, despite different proposals (mutations that prevent T cells from entering the tumor, inhibition of T cell activation pathways, etc.), researchers have not yet clearly determined why immunotherapy is not efficient against some types of tumors.
Current research gaps for the other drugs are associated with side effects, modest findings or the absence of clinical trials. For example, safingol and ABC294640 induced side effects in humans in clinical trials, which may be the main reason for the limited number of clinical trials. Safingol is an inhibitor of several enzymes (SPHK1, PKCβ-I, PKCδ, PKCε, and PI3K) and of glucose uptake (223), which are needed for the proper function of normal tissues. Targeted therapy against protein kinases relies on the upregulation/activation of these molecules in particular tumors. For example, imatinib is a specific inhibitor of the constitutively active Bcr-Abl tyrosine kinase and is used to treat leukemia with the Philadelphia chromosome (Bcr-Abl) (239). Therefore, we infer that off-target effects of safingol, due to the inhibition of several enzymes and of glucose uptake, are likely responsible for the hepatic toxicity observed in mouse and human studies. Potential developments in this field to alleviate this limitation might include chemical modifications designed to increase the specificity for SPHK1 or targeting an upregulated sphingolipid in a specific tumor. Nevertheless, their use is expected to vary depending on the type of cancer, which in turn is determined by the levels of aberrant sphingolipids expressed in each type of tissue, among other factors. In addition, glucose uptake is a universal and vital step for obtaining ATP through glycolysis and oxidative phosphorylation.
CNLs are already being investigated in clinical trials, but the results were more modest than expected, potentially because of a lack of CNL tropism for a specific tumor tissue type (i.e., breast). No clinical trials for SKI-II and fingolimod have been reported. For the latter, an effective action in cancer clinical trials is not expected, as this immunosuppressive drug impairs the tissue infiltration and activation of cytotoxic CD8+ T and CD4+ cells, which are the most relevant cells involved in the immune response to tumors. Clinical studies support this conclusion, as spontaneous regression of T cell lymphoma has been observed in patients with multiple sclerosis after discontinuing fingolimod (240).
With respect to sonepcizumab, an increase in systemic S1P concentrations was observed in a clinical trial (214), even though it is a monoclonal antibody against S1P. Treatment with this drug resulted in a reduction in absolute serum lymphocyte levels, which was expected based on the known effect of S1P blockade on peripheral lymphocyte trafficking (214). Moreover, upregulation of the S1PR1-STAT3 pathway enables myeloid cells to intravasate and mediate tumor proliferation and metastasis (241). In addition, S1PR1 signaling in T cells drives Treg accumulation in tumors, limits CD8+ T cell recruitment and activation, and promotes tumor growth (242,243). Therefore, sonepcizumab does not provide effective S1P blockade in clinical trials, and the potential tumor infiltration of Tregs and myeloid cells, together with the reduction in lymphocyte numbers, fosters tumor growth.
Exhaustive characterization at several levels, including immunity, pharmacodynamics, pharmacokinetics, dosing and metabolomics, is required in preclinical studies before entering clinical trials. The most relevant factor associated with side effects is the presence of off-target effects, which might be mitigated by chemical modification of these drugs or by new syntheses that increase specificity. For this task, molecular docking based on three-dimensional protein structures could help develop new and more specific drugs. In addition, the lack of tissue-specific targeting and the hydrophobicity of these drugs preclude an effective action. The use of aberrant sphingolipids in specific tumors as targets, and of nanocarriers or chemical modifications, are solutions to these issues.
Aberrant sphingolipid signaling is a consequence (not the cause) of carcinogenesis due to mutations in crucial oncogenes and tumor suppressor genes. Hence, effective treatment with sphingolipid-modulating drugs should be based on multiple therapeutic combinations, including immunotherapy (which activates antitumor CD4+ and CD8+ T cells) and conventional chemotherapy. Interestingly, some conventional chemotherapy agents (e.g., tamoxifen) are active against SPHK1 and GCS; thus, the use of tamoxifen might be beneficial in patients who have acquired resistance mediated by these enzymes. One opportunity is based on the fact that many chemotherapeutic agents modulate ceramide levels; therefore, the rational use of these agents with sphingolipid inhibitors could increase ceramide to lethal levels that are more effective at killing the tumor. Overall, an increased understanding of the mechanisms by which sphingolipids control cancer cell signaling, together with in-depth studies using animal models, will fill these gaps and improve future anticancer therapy based on these compounds.
AUTHOR CONTRIBUTIONS
OC wrote the manuscript. CM designed the figures according to the literature. YG-M revised the manuscript. ML revised and rewrote the manuscript. All authors contributed to the article and approved the submitted version.
ACKNOWLEDGMENTS
We thank T. Moline and R. Somoza from the VHIR. We thank J.A. Leal for his reading assistance and comments. | 2021-10-19T13:17:06.109Z | 2021-10-19T00:00:00.000 | {
"year": 2021,
"sha1": "42bf7999dc68ed9c81bd24827a766cdbac2bacf9",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2021.745092/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "42bf7999dc68ed9c81bd24827a766cdbac2bacf9",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255988259 | pes2o/s2orc | v3-fos-license | Attraction and oviposition preferences of Phlebotomus papatasi (Diptera: Psychodidae), vector of Old-World cutaneous leishmaniasis, to larval rearing media
As part of a project aimed at developing oviposition attractants for the control and surveillance of Phlebotomus papatasi (a vector of Old-World cutaneous leishmaniasis), we tested the hypothesis that gravid sand flies are attracted to chemical cues emanating from the growth medium of conspecific larvae (predominantly larvae-conditioned host feces), which represents a suitable oviposition site. We report the results of a systematic assessment of media from various developmental stages of the sand fly using oviposition and olfactometer behavioral assays. We conducted multiple-choice oviposition assays in 500 mL Nalgene jars. Six treatments were placed on separate filter paper discs at the bottom of each jar: 2nd/3rd larval instar medium, 4th larval instar/pupae medium, frass from expired colonies, larval food (aged rabbit chow and rabbit feces mix), rabbit feces, and a solvent (water) control. Fifty gravid females were introduced into each jar. The cumulative number of eggs laid on each filter paper per jar was counted at different time intervals from digital images. Attraction of gravid sand flies to these six treatments was assayed with a 3-chamber linear olfactometer. Twenty gravid females were transferred to the middle chamber of the olfactometer and their distribution in the treatment and control chambers was recorded after 3 h. Almost no eggs were oviposited during the first 72 h following a blood-meal. Cumulative egg deposition increased drastically in the next 24 h (hours 73-96), with a slight non-significant increasing trend thereafter. Comparing mean cumulative egg deposition among the six treatments, we found that significantly more eggs were oviposited on 2nd/3rd larval rearing medium, followed by 4th instar/pupae rearing medium. Oviposition preference did not vary over time. The olfactometer results were consistent with the oviposition assays, with 2nd/3rd larval rearing medium being the most attractive, followed by 4th instar/pupae rearing medium. The key finding of this study is that gravid, laboratory-reared Ph. papatasi sand flies are significantly more attracted to the rearing medium of the most biologically active larval stages (2nd/3rd instar and 4th instar/pupae). This finding indicates that sand fly-digested host food and feces is attractive to gravid females and suggests that the larvae and the larval gut microbiome may be involved in conditioning the oviposition substrate and possibly in the production of oviposition attractants and stimulants.
Background
Phlebotomine sand flies can transmit protozoan parasites (Leishmania spp.), as well as bacterial (Bartonella bacilliformis) and viral pathogens (e.g., sand fly fever) [1-4]. Most significant are the human leishmaniases that, following malaria and dengue, are the most pervasive vector-borne diseases [5,6]. Unfortunately, cost, access, and side effects limit the applicability of existing therapeutic treatments. Therefore, given that no vaccine yet exists, reducing exposure to sand fly bites is the most prevalent disease prevention approach [5,6]. Sand fly control comprises three general approaches: personal protection (e.g., repellents, insecticide-treated clothing or bed-nets), reservoir host control (e.g., rodent removal using rodenticides or burrow plowing), and residual spraying with insecticides [7,8]. The most common approach is residual spraying of insecticides; however, the effectiveness of this approach is highly variable, non-specific, and can drive the evolution of insecticide resistance [7,9,10]. Source reduction using biolarvicides is often used to control some mosquito species, but since sand fly larvae are terrestrial this approach is not practical [11]. Unlike most biting Nematocera, sand flies develop in terrestrial habitats where eggs are typically laid in soil rich in organic material on which the larvae feed and develop through four instars before pupation and adult emergence. The difficulty of finding breeding sites for sand fly control is an important constraint limiting the application of larvicides [12-14]. Hence, a more focused, biologically-based, and targeted control method is urgently needed [8].
An alternative approach to delivering the insecticide to the vector is to bring the vector to the insecticide using attractants [15]. This attract-and-kill approach is commonly used to control agricultural pests and disease vectors using sex pheromones, host odors, sugar meal sources, and bacterially mediated oviposition site attractants [16-19]. In the context of controlling disease vectors, oviposition-site attractants are expected to be the most effective because they lure physiologically older females that have blood-fed at least once and are, therefore, more likely to be infected with pathogens [18,20]. Therefore, by targeting gravid females, control efforts can simultaneously reduce pathogen transmission and control population growth [20].
Most research on oviposition attractants of disease vectors has focused on mosquitoes [20,21]. With sand flies, most research has focused on Lutzomyia longipalpis (Diptera: Psychodidae), the main vector of New-World visceral leishmaniasis [3]. In a series of experiments, conspecific eggs were found to enhance oviposition, and dodecanoic acid was identified as the active compound from eggs [22]. Organic matter also stimulates oviposition in Lu. longipalpis, and hexanal and 2-methyl-2-butanol were isolated from fresh chicken or rabbit feces as the active compounds [23-26]. In contrast, only a few studies have examined oviposition in Old-World sand flies. Phlebotomus papatasi, the main vector of Old-World cutaneous leishmaniasis (due to Leishmania major), is distributed from Morocco to the Indian subcontinent and from southern Europe to central and eastern Africa [3,4,27]. It was shown to lay more eggs on substrates containing conspecific eggs [28,29] or organic matter of various sources [29,30]. For example, Wasserberg and Rowton [28] compared the relative effectiveness of conspecific eggs and organic matter (frass extract) and found frass to be a much more potent oviposition stimulant than eggs; they also found that the combination of eggs and frass was not more effective than frass alone. Schlein et al. [30] showed in the field that cow manure is highly attractive to gravid and non-gravid Ph. papatasi females. Chelbi et al. [31] demonstrated that rabbit feces was highly attractive to Ph. papatasi in peridomestic environments in Tunisia. Wasserberg [32] used fresh rabbit feces as bait and was able to attract Ph. papatasi from as far as 250 m from the nearest potential source. Radjame et al. [33] isolated soil bacteria from a variety of putative sand fly breeding sites (human dwellings, termite mounds, cow sheds) and tested their effect on the oviposition responses of gravid females. In bioassays of soil bacterial isolates, Bacillus licheniformis and Staphylococcus saprophyticus were shown to enhance the Ph. papatasi oviposition response.
Our general goal is to discover, develop and optimize a lure that attracts oviposition-site seeking gravid females and that could be used for surveillance and control of Ph. papatasi sand flies. Because larval sand flies are coprophagic [1,3,12], we hypothesized that gravid sand flies are attracted to chemical cues associated with the decomposition of organic matter of (predominantly) fecal origin as indicators of suitable oviposition sites. Specifically, given our previous observation that larval rearing substrate is substantially more effective than conspecific eggs in inducing egg deposition [28], our goal in this study was to compare the attraction and oviposition response of gravid Ph. papatasi females among rearing substrates of pre-larval, larval, and post-larval stages in order to identify the most attractive and oviposition-stimulating source material. In this paper, we report the results of the screening of these potential attractant sources using oviposition and olfactometer behavioral assays.
Insects and colony maintenance
Phlebotomus papatasi sand flies originating from Abkük, Turkey (37.39103°N 27.43853°E), were colonized at the Walter Reed Army Institute of Research (Silver Spring, Maryland) and maintained at the University of North Carolina in Greensboro. Rearing of Ph. papatasi sand flies followed the mass-rearing methods described by Modi and Rowton [34] and flies were blood-fed on live anesthetized ICR mice (Harlan) (UNCG IACUC protocol 14-07). Sand flies were maintained in incubators (Model: 6030-1, Caron®, Marietta, Ohio) at 26°C, 80 % RH, and 12:12 light:dark cycle. Colonies were maintained in 500 mL Nalgene jars (Nalgene™, Model 81063, diameter = 11 cm) with a 2.2 cm layer of Whip-Mix® Orthodontic Plaster (Model: 5577352, Henry Schein Inc., Melville, New York) on the bottom to ensure moist substrate and drainage. Larval food was prepared by mixing fresh rabbit feces (New Zealand White strain) and rabbit chow (Purina) at a 1:1 ratio, which was fermented for 3 weeks in a dark chamber, air-dried and ground to a powder.
Treatments for oviposition and olfactometer assays
Source material included rearing substrate of two pre-larval stages, two larval stages, and one post-larval stage. Pre-larval stage substrates included fresh ground rabbit feces (RF) and unused larval food (LF) (see description above). Larval stage substrates included substrate containing mainly larvae at the 2nd and 3rd instars (2nd/3rd substrate) or 4th instar and pupal stages (4th/pupae substrate). Post-larval substrate was rearing medium of a colony jar from which all sand fly pupae had eclosed (hereafter "expired").
Oviposition assays
We conducted multiple-choice behavioral assays using 500 mL Nalgene jars (similar to the rearing jars) modified for 6-choice assays. Each jar was placed in water for 12 h prior to the start of an experiment to equilibrate the moisture level of its plaster floor. We simultaneously tested the five source materials described above and a solvent (water-only) negative control treatment (Fig. 1). To minimize the potential for cross-contamination, 1 mg of each of these materials (SE = 0.1 mg) was placed on a filter paper disc (2.5 cm diameter) (Model: 09-801-AA, ThermoFisher Scientific®, Waltham, Massachusetts) at equal distance from the center of the cup. Three drops (~0.15 mL) of deionized water were then added to each filter paper. Each experimental session (n = 9 replicate sessions conducted between 3/1/2013 and 4/17/2013) consisted of 7 oviposition jars. During the first 24 h post blood-meal, sand flies were left undisturbed in their holding cage so as not to interrupt the development of the peritrophic matrix around their recently acquired blood-meal [34]. Then, fifty gravid females were transferred into each of the 7 bioassay jars using a mouth aspirator. Jars were then returned to the rearing incubator. To obtain a time-course of oviposition, the assays were terminated, one jar at a time, 1-7 days after transfer (or 2-8 days post blood-meal) by releasing the females into a separate holding cage. We photographed the filter papers with a Canon T3i camera fitted with a 100 mm macro lens. Eggs laid on each filter paper were counted from high-quality digital photos using the counting tools in Adobe Photoshop (Adobe Photoshop CS5 2010, Adobe™, San Jose, California).
Fig. 1 Six-choice oviposition assays. Each assay jar was constructed of a 500 mL cup with 2.5 cm diameter filter paper discs distributed at equal distances. Six source materials were placed on the filter papers: control (water only); rabbit feces (RF); larval food (LF); rearing medium from 2nd and 3rd larval instars (2nd/3rd); rearing medium from 4th larval instars and pupae (4th/pupae); and rearing medium and frass from an expired colony (Expired)
Attraction bioassays
Attraction of gravid sand flies to various source materials was assayed with a 2-choice olfactometer (Fig. 2). Briefly, the olfactometer consisted of a cylindrical Plexiglas® apparatus made of three in-line chambers (each chamber: 9.4 cm inner diameter, 10.1 cm outer diameter, 15 cm length). A section of polyvinyl chloride (PVC) pipe (2.5 cm length, 10.15 cm inner diameter), glued to either side of a white Plexiglas square partition (11.4 × 11.4 cm, 3 mm thickness), coupled the middle chamber to the outer two chambers. Holes in the center of each partition held a 6 cm long (1 cm inner diameter) tube extending 3 cm into the central chamber and 3 cm into an outer chamber. In each olfactometer, test material was placed in one side-chamber and the control material in the other side-chamber. Test material (0.5 g) was placed on a 7.5 mL weigh boat containing 1.2 mL of orthodontic plaster and tested against a blank negative control (a similar plaster-bottomed weigh boat with 3 water drops [ca. 0.15 mL]). In each experimental session (n = 10 replicate sessions conducted between 12/4/2013 and 2/2/2014), we used six olfactometers with source materials including 2nd/3rd substrate, 4th/pupae substrate, 'expired' colony substrate, LF, and RF, as well as one olfactometer with blank (water) controls on both sides to test for potential directionality bias. A treatment weigh boat was placed on a plastic stage at one end of the olfactometer, and the other end received a control weigh boat. The ends of the side-chambers were then covered with a fine mesh screen secured with rubber bands (Fig. 2). Twenty gravid Ph. papatasi females (72 h post blood-meal) were transferred to the middle chamber of the olfactometer. The middle chamber was then connected to a vacuum pump (Air Admiral® Cole-Parmer, Vernon Hills, IL) that delivered a total volumetric flow of 1.05 L/min (~7.5 cm/s through each outer chamber). The vacuum pump remained off for the first 60 min of the bioassay and then on for 2 h. The olfactometer was then placed into a −20°C freezer to kill the flies, and subsequently the number of females in each chamber was counted. Before each bioassay, olfactometers were cleaned using an odorless cleaning detergent (RBS-35, Model: 27950, ThermoFisher Scientific, Waltham, Massachusetts). All bioassays were conducted in a controlled-environment room with temperature and humidity identical to those of the rearing colony incubator. Assays were conducted in the scotophase, 3-8 h after lights-off. The olfactometers were randomly assigned locations within the room to avoid directional bias. The treatment side was rotated among replicate sessions.
Statistical analysis
Oviposition assays
In these experiments, data represented the cumulative number of eggs laid over a specified number of days until experimental termination of oviposition. To analyze the oviposition time-course, we used the cumulative number of eggs per female per jar. Data were analyzed using the Kruskal-Wallis test. To compare cumulative egg numbers among the six source materials within each jar (treatments clustered within jars) and to account for the nature of the data (overdispersed count data), we used random-intercept negative-binomial multiple regression [34]. Specifically, we tested for the effect of source material (as dummy variables), time since blood-meal, and their interaction on the cumulative number of eggs laid per filter paper disc.
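The authors fitted this model in Stata. As an illustration only (not the authors' code), a related clustered analysis could be set up in Python with statsmodels, using generalized estimating equations with a negative-binomial family to account for discs clustered within jars; the file and column names below are hypothetical.

```python
# Sketch of a clustered negative-binomial analysis of the egg counts.
# Hypothetical data frame columns: jar, substrate, days, eggs.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("oviposition_counts.csv")

model = smf.gee(
    "eggs ~ C(substrate, Treatment(reference='control')) * days",
    groups="jar",                            # discs are clustered within jars
    data=df,
    family=sm.families.NegativeBinomial(),   # overdispersed count data
    cov_struct=sm.cov_struct.Exchangeable(), # common within-jar correlation
)
result = model.fit()
print(result.summary())
```

GEE estimates population-averaged effects rather than the jar-level random intercept of the original model, but it addresses the same two features of the data: overdispersion and within-jar clustering.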
Attraction assays
An Oviposition Attraction Index (OAI) [35] was used to evaluate and compare the responses of gravid sand flies to source materials of different types. This index was calculated as OAI = (Nt − Nc)/(Nt + Nc), where Nt and Nc are the numbers of females found in the test and control chambers of the olfactometer, respectively. We used linear regression to test the effect of the different source materials (treated as dummy variables) on OAI. Since the statistical distribution of OAI is truncated between −1 and +1, we used a robust estimate of the standard error that accounts and corrects for possible violations of normality [34]. For all analyses, a significance level of P < 0.05 was used. Analysis was conducted using Stata software (StataCorp., College Station, TX).
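A minimal sketch of how the index and the robust regression could be reproduced follows (the authors used Stata; the file and column names here are hypothetical).

```python
# OAI computation and a robust-error regression, mirroring the analysis above.
import pandas as pd
import statsmodels.formula.api as smf

def oviposition_attraction_index(n_treatment: int, n_control: int) -> float:
    """OAI = (Nt - Nc) / (Nt + Nc); ranges from -1 (repellent) to +1 (attractive)."""
    total = n_treatment + n_control
    return (n_treatment - n_control) / total if total else float("nan")

trials = pd.read_csv("olfactometer_trials.csv")  # columns: substrate, n_t, n_c
trials["oai"] = [
    oviposition_attraction_index(t, c) for t, c in zip(trials.n_t, trials.n_c)
]

# Substrate effect on OAI with heteroskedasticity-robust (HC3) standard
# errors, in the spirit of the robust errors used in the paper.
fit = smf.ols("oai ~ C(substrate)", data=trials).fit(cov_type="HC3")
print(fit.summary())
```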
Preferences of oviposition substrate
Since almost no eggs were oviposited during the first 72 h following the blood-meal, statistical analysis was performed on data from the subsequent days (days 4 to 8 following blood-meal). Significantly more eggs were oviposited on each of the tested substrates than on the water-only control (Table 1). The highest number of eggs was oviposited on 2nd/3rd larval rearing medium, followed by 4th/pupae rearing medium (Table 1, Fig. 4). There was no significant difference between these top two preferred substrates, but 2nd/3rd larval rearing medium had significantly more eggs than the three lower-ranking substrates (Table 1, Fig. 4). There were no significant differences among the three lower-ranking substrates (Table 1). As indicated by a non-significant treatment-by-time interaction term (random-intercept model: Z = 1.61, P = 0.11), the relative preference for the different substrates did not change over time.
Table 1 Oviposition preferences in multiple-choice assays. Random-intercept negative-binomial regression table of the effect of different oviposition substrates on the cumulative number of eggs oviposited per filter paper disc in 6-choice oviposition assays. The table also presents means (±SE) of egg numbers oviposited per filter paper disc for each substrate type. Test materials included larval rearing media of different types and stages: fresh rabbit feces (RF), fresh larval food (LF), rearing medium containing frass of 2nd-3rd instar larvae (2nd/3rd), rearing medium containing frass of 4th instar larvae and pupae (4th/pupae), frass of rearing cups from which all larvae had eclosed (expired), and a negative (water) control. Rearing media of 2nd/3rd and 4th/pupae (bolded) induced the highest oviposition response
Olfactometer attraction assays
We tested the attractiveness of the five substrates in olfactometer assays. Only data from bioassays in which ≥25 % of the females responded were included. No significant bias was found for olfactometers with water-only controls on both sides, as a mean of 4.33 (SE = 0.31) flies chose the right-side chamber and 4.25 (SE = 0.51) flies were in the left-side chamber of the olfactometer (paired t = 0.134, P = 0.895) (Table 2; note that in these control runs the right chamber was assigned as "treatment" and the left chamber as "control", but both received water only). Overall, 44 % of the flies responded (i.e., moved to the two sides of the chambers), while 66 % remained in the central chamber; there were no significant differences among treatments, except that the 'larval food' treatment elicited a higher total response than the other treatments (Z = 2.13, P = 0.033) (Table 2). Sand fly females were significantly attracted to four of the five tested materials. As in the 6-choice experiment, 2nd/3rd larval rearing substrate was the most attractive material, followed by 4th/pupae rearing substrate (Fig. 5, Table 2). 'Rabbit feces' was the next most attractive substrate. However, 'larval food' and 'expired colony' substrates did not significantly differ from the control (Fig. 5). Furthermore, the effect of the expired colony substrate was not statistically (albeit marginally) significant (Table 2).
Discussion
The key finding of this study is the observation that gravid Ph. papatasi females were attracted to and stimulated to oviposit in rearing medium of the most biologically active larval stages (2nd/3rd and 4th/pupae). Our experiments clearly indicate that untreated rabbit feces were less attractive and stimulated fewer oviposition events than 2nd/3rd larval substrate. Furthermore, adding rabbit chow and fermenting this mix for 3 weeks (the larval food preparation process) also did not enhance attraction or oviposition. Only when larval substrate was conditioned through ingestion by foraging larvae were both attraction and oviposition enhanced. This ingestion-mediated conditioning suggests the involvement of digestive processes and the gut microbiome in enhancing the attractiveness of this substrate [36]. Gut microbes are not known to be vertically transmitted in sand flies [37], and the source of their gut microbial community is the environment [36,38,39]. In our experiments, gut microbes likely originated from rabbit feces and larval food. Nonetheless, the larval gut can shape the microbial community, as it facilitates the proliferation of some microbes and inhibits others. The idea that bacteria contribute to the attraction of gravid sand flies is further supported by our preliminary analysis showing that a mixture of bacterial isolates from this substrate is as attractive to females as the solid substrate (Kakumanu et al., unpublished data). Furthermore, some of the most attractive bacterial isolates belong to taxa that include insect gut bacteria. Our ongoing research aims to determine the relative contributions of substrate aging and its conditioning by larvae to the attractiveness of the substrate to gravid sand flies. We do not yet understand the underlying evolutionary reasons for the patterns of oviposition site selection observed here. However, in most species with relatively sedentary larvae, females tend to seek oviposition sites that maximize larval survival, most often host plants or suitable food resources. Given that decomposing organic matter is the main food source for sand fly larvae [1,3], we hypothesized that natural selection has molded oviposition site-seeking females to detect and orient to olfactory cues that signal the availability of food for their larvae. Indeed, almost all organic matter media that we tested were more attractive, and stimulated females to oviposit more, than the water control. But not all organic substrates were equally attractive. Larval substrate became more attractive as larvae matured, but its attractiveness then gradually declined as larvae further matured, pupated and eclosed. This initial increase in attraction might appear maladaptive, as older sand fly larvae might be cannibalistic [40]. Yet, as suggested by Wasserberg et al. [21] with respect to mosquitoes, the intraspecific regulation of oviposition site selection is a complex process involving trade-offs between attraction at low-to-medium conspecific densities, where the presence of conspecifics indicates site suitability, and repellence/deterrence at high densities that indicate potential adverse competitive effects. This results in a hump-shaped (inverted parabola) curve describing the relationship between attraction and conspecific densities.
It is possible that a similar process occurs here in relation to the ecological succession of microbes in the rearing medium, with 2nd/3rd stage substrate occurring at the optimal successional time-point.
The time-course of oviposition following a blood-meal indicated that Ph. papatasi sand flies did not start laying eggs within 72 h after a blood-meal (Fig. 3). Subsequently, oviposition sharply increased, followed by a slight increasing trend until day 8. Thus, sand flies laid most of their egg clutch once they became physiologically capable of doing so (72-96 h post blood-meal) and then laid an additional 1.8 eggs (per capita) approximately every 24 h. Schlein et al. [41] did not observe a sudden increase in oviposition on a particular day, but they did observe continuously increasing cumulative egg numbers between days 7 and 14 post blood-meal. Per-capita egg deposition observed in our study (13.25 per female) is lower than the 15-20 eggs per female previously reported by Wasserberg and Rowton [28] or the 33.44 eggs per female observed by T. Rowland (personal observations) for individually reared Ph. papatasi. Given that this experiment took place between early January and mid-April, these lower egg numbers might be related to photoperiodic fluctuation in oviposition activity, as previously observed by Schlein et al. [41], who found substantially lower egg-deposition levels during the late fall to early winter period compared with those observed during Ph. papatasi's typical activity period (May to October). It is also interesting to note that time-to-oviposition as observed here is shorter than that observed by Volf and colleagues [40,42], who report 7 days to first oviposition under similar rearing conditions.
We used the oviposition time-course results to guide our olfactometer experiments, where we used only females 72-96 h post blood-meal to ensure females were gravid and at the stage where they would be seeking a suitable oviposition site and should therefore be responsive to olfactory cues. In addition, we observed that the oviposition-substrate preference of gravid Ph. papatasi females did not change significantly over time. This finding is in contrast to Elnaiem et al. [24], who noted that oviposition preference switched from rabbit feces to the water control for Lu. longipalpis between days 3-4 and day 5 post blood-feeding. Nevertheless, our experimental design using cumulative oviposition might not be ideally suited for detecting temporal changes in oviposition-substrate preference, and day-specific bioassays with a fixed oviposition time window might be better suited.
In conclusion, the sensory acuity with which female sand flies distinguish between apparently very similar rearing media of different larval developmental stages is quite remarkable. Our results suggest the involvement of the larval gut microbial community in the production of oviposition attractants. Indeed, Peterkova-Koci et al. [39] showed that Lu. longipalpis prefers laying eggs on rabbit feces containing its original bacterial assemblage compared with sterile feces. Furthermore, they showed that these bacteria are beneficial to the sand fly in terms of larval growth and survival. Yet, de-coupling the effect of substrate aging by itself from its conditioning by feeding larvae still warrants further study. The chemical- and microbial-ecology processes driving this behavior are still not understood and are currently being investigated by our group. Finally, once optimal attractive blends are formulated, we will test them in the field.
Conclusion
We found that rearing medium of 2nd/3rd instar Ph. papatasi larvae is substantially more attractive than the pre-larval (rabbit feces, fresh larval food) or post-larval (expired colony medium) rearing media. These results suggest that larval digestion, and possibly the larval gut microbial community, contribute to the production of oviposition attractants. Identifying these microbes and the attractive compounds they produce would lead the way for the development of an attractive lure to be used for the surveillance and control of sand flies. | 2023-01-19T22:07:39.410Z | 2015-12-01T00:00:00.000 | {
"year": 2015,
"sha1": "ac1ae8cbacb6b97862ce276d9f9b8e9c58076b05",
"oa_license": "CCBY",
"oa_url": "https://parasitesandvectors.biomedcentral.com/track/pdf/10.1186/s13071-015-1261-z",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "ac1ae8cbacb6b97862ce276d9f9b8e9c58076b05",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": []
} |
260665683 | pes2o/s2orc | v3-fos-license | Dielectric Stability of Triton X-100-Based Tissue-Mimicking Materials for Microwave Imaging
Microwave imaging is an emerging technology, and has been proposed for various applications, namely as an alternative diagnostic technology. Microwave imaging explores the dielectric contrast of target tissues, enabling diagnosis based on the differences in dielectric properties between healthy and diseased tissues, with low cost, portability and non-ionizing radiation as its main advantages, constituting an alternative to various imaging technologies for diagnosing and monitoring. Before clinical trials of microwave imaging devices, phantoms made of tissue-mimicking materials that simulate the dielectric properties of human tissues are used for device validation. The purpose of this work was to prepare and perform dielectric characterization of mimicking materials for the development of an anthropomorphic phantom of the human ankle with realistic dielectric and anatomic properties. The biological tissues targeted in this investigation were the skin, muscle, cortical bone, trabecular bone and fat, with the mimicking materials prepared using Triton X-100, sodium chloride and distilled water. The dielectric characterization was performed using a coaxial probe operating at frequencies between 0.5 and 4.0 GHz. Since the stability of the dielectric properties of mimicking materials is one of their main properties, the dielectric characterization was repeated after 15 and 35 days.
Introduction
In recent years, progress in microwave imaging technology has motivated extensive studies on the development of dielectrically accurate tissue-mimicking materials, since the dielectric properties of biological tissues, specifically the dielectric constant and conductivity, characterize the interaction of electromagnetic waves with tissue [1,2].
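Although this excerpt does not specify a dispersion model, tissue dielectric spectra over such bands are commonly summarized with a single-pole Debye model; the sketch below, with placeholder parameter values rather than measured ones, shows how the dielectric constant and effective conductivity would be evaluated across 0.5-4.0 GHz.

```python
# Single-pole Debye model for complex relative permittivity (illustrative
# placeholder parameters; not measured values from this work).
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def debye_permittivity(f_hz, eps_inf, delta_eps, tau_s, sigma_s):
    """Complex relative permittivity eps* = eps' - j eps''."""
    w = 2 * np.pi * np.asarray(f_hz)
    return eps_inf + delta_eps / (1 + 1j * w * tau_s) - 1j * sigma_s / (w * EPS0)

f = np.linspace(0.5e9, 4.0e9, 8)  # the band characterized in the paper
eps = debye_permittivity(f, eps_inf=4.0, delta_eps=46.0, tau_s=7e-12, sigma_s=0.7)
for fi, ei in zip(f, eps):
    sigma_eff = -ei.imag * 2 * np.pi * fi * EPS0  # effective conductivity (S/m)
    print(f"{fi / 1e9:4.2f} GHz: eps' = {ei.real:5.1f}, sigma_eff = {sigma_eff:4.2f} S/m")
```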
Microwave imaging technology uses the dielectric contrast of the targeted tissues, enabling diagnosis based on the differences in the dielectric properties between healthy and unhealthy tissues, presenting as its main advantages low cost, portability and the nonionizing nature of radiation, constituting an alternative to various imaging technologies used for diagnosis and monitoring [1,[3][4][5].
Microwave imaging has made significant progress as a future alternative imaging technique for medical applications, particularly for breast cancer and cerebrovascular accident diagnosis, but it has also been proposed for in vivo measurement of the dielectric properties of the human calcaneus and wrist for osteoporosis monitoring, and of the tibia for fracture monitoring [1,4,[6][7][8].
Before clinical trials on humans or animals, microwave imaging systems are tested on artificial phantoms that have the same anatomy and dielectric properties as human tissues. Therefore, tissue-mimicking materials are essential to the evaluation of the repeatability, stability, imaging quality and resolution of a microwave imaging system [1,2,5]. An earlier study developed a phantom mimicking human skin at four discrete frequencies between 64 and 400 MHz. The results obtained were promising; however, the stability of the dielectric properties of the phantom was only evaluated after one week.
In 2005, Lazebnik et al. [9] created a phantom that was characterized at frequencies from 500 MHz to 20 GHz, with the aim of reproducing the dielectric properties of various biological tissues. This material had a gelatinous base and contained different kerosene and safflower oil solutions, n-propanol, p-toluic acid, formaldehyde and a surfactant. According to the authors, an important feature of these materials was their capacity to create heterogeneous and anthropomorphic configurations with lasting stability in terms of their mechanical and electrical properties.
Pinto et al. [23], in 2014, described the use of gelatine as a phantom in electrical impedance spectroscopy measurements, performed from 100 kHz to 15 MHz. They used unflavored edible gelatine diluted in distilled water. Phantoms were also manufactured with different salt concentrations. In all solutions, formaldehyde was used to increase the melting temperature of the gelatine and extend its durability. In this study, the authors concluded that gelatine can be applied as a skin phantom in electrical impedance spectroscopy measurements.
Liquid mixtures based on Triton X-100, water and sodium chloride were adopted by Joachimowicz et al. [4] in 2017, Savazzi et al. [7] in 2020 and Amin et al. [1,24] in 2020 and 2021. In the first work mentioned [4], the authors mimicked the dielectric properties of cerebrospinal fluid, brain, blood, bone and muscle, in a frequency range from 0.5 to 6 GHz. Savazzi et al. [7] developed an anatomically and dielectrically accurate phantom of the axillary region, to be applied in experimental imaging evaluation of axillary lymph nodes using microwave imaging technology. Finally, Amin et al. [24] mimicked the dielectric properties of cortical bone, trabecular bone and skin, at frequencies between 0.5 and 8.5 GHz, to develop a three-dimensional phantom of the structure of the human heel.
Many of the described materials adequately simulated the dielectric properties of tissues, but only in the narrowband spectra for which they were designed. On the other hand, Triton X-100 has been described as an exceptional candidate for a liquid-based phantom. Mixtures based on Triton X-100 and sodium chloride solutions are able to simulate the dielectric properties of various human tissues over a large frequency range. The liquid nature of Triton X-100 solutions guarantees that complex three-dimensional structures can be filled while avoiding air bubbles. Another advantage of these mixtures is that their electromagnetic parameters can be predicted as a function of the Triton X-100 and sodium chloride concentrations, using binary fluid mixture models such as Böttcher's formula. Finally, the tissue-mimicking materials are easily produced and time-stable [1,4,5,7,24].
The interactions of electromagnetic radiation with matter can be quantified through the complex relative permittivity, ε*, where the permittivity of the medium describes its tendency to be polarized when an electromagnetic field is applied:

$$\varepsilon^* = \varepsilon' - j\varepsilon'', \qquad \sigma = \omega\,\varepsilon_0\,\varepsilon'' \tag{1}$$

where ε′ is the real part of the complex permittivity, generally known as the dielectric constant or relative permittivity, and is related to the ability of the material to store energy from the applied electric field; ε″ is the imaginary part of the complex permittivity, also identified as the loss factor, and reflects the dissipative nature of the material, which converts a fraction of the absorbed energy into heat; the conductivity, σ, is related to the imaginary part of the complex permittivity via the relationship expressed in Equation (1); ω is the angular frequency and ε0 is the permittivity of vacuum [25][26][27][28].
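As a quick numerical illustration of Equation (1), the sketch below converts a loss factor into conductivity. The numbers are illustrative only and are not values from this study.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # permittivity of vacuum, F/m

def conductivity_from_loss(eps_imag, freq_hz):
    """Equation (1): sigma = omega * eps0 * eps'' (S/m)."""
    omega = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    return omega * EPS0 * np.asarray(eps_imag, dtype=float)

# Illustrative example: a loss factor of 10 at 2.45 GHz gives ~1.36 S/m
print(conductivity_from_loss(10.0, 2.45e9))
```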
Several mathematical functions have been proposed to model the dielectric behavior of polar materials and biological tissues. With these models, it is possible to calculate the dielectric constant and conductivity values over the specific frequency range for which the relaxation equation is suitable.
One of the models most frequently applied to replicate the electrical behavior of organic tissues or aqueous electrolytic solutions is the Cole-Cole model:

$$\varepsilon^*(\omega) = \varepsilon_\infty + \frac{\varepsilon_s - \varepsilon_\infty}{1 + (j\omega\tau)^{1-\alpha}} + \frac{\sigma_s}{j\omega\varepsilon_0} \tag{2}$$

where ε∞ represents the permittivity at infinite frequency due to electronic polarizability and εs the static (low-frequency) permittivity; σs represents the static conductivity, related to charge movements; τ is the relaxation time of the material, which is the time taken for the molecules or dipoles to return to their original random orientation; and α is an empirical variable that measures the broadening of the dispersion, the magnitude of which is described by εs − ε∞ [5,26,28,29].
The real part of the permittivity, ε′, is given by [30]:

$$\varepsilon'(\omega) = \varepsilon_\infty + \frac{(\varepsilon_s - \varepsilon_\infty)\left[1 + (\omega\tau)^{1-\alpha}\sin(\alpha\pi/2)\right]}{1 + 2(\omega\tau)^{1-\alpha}\sin(\alpha\pi/2) + (\omega\tau)^{2(1-\alpha)}} \tag{3}$$

and the conductivity by [16]:

$$\sigma(\omega) = \sigma_s + \frac{\omega\varepsilon_0(\varepsilon_s - \varepsilon_\infty)(\omega\tau)^{1-\alpha}\cos(\alpha\pi/2)}{1 + 2(\omega\tau)^{1-\alpha}\sin(\alpha\pi/2) + (\omega\tau)^{2(1-\alpha)}} \tag{4}$$

However, Equation (2) describes a single relaxation, and biological tissues are generally described in terms of multiple Cole-Cole poles, where each pole of the equation describes the effect of a particular dispersion region. Therefore, if the dielectric behavior of a material is studied over a wide frequency range, the totality of the dielectric relaxations taking place over that range must be considered, and more poles should be introduced to adequately describe the material. In this case, the Cole-Cole equation is rewritten accordingly:

$$\varepsilon^*(\omega) = \varepsilon_\infty + \sum_{n=1}^{N}\frac{\Delta\varepsilon_n}{1 + (j\omega\tau_n)^{1-\alpha_n}} + \frac{\sigma_s}{j\omega\varepsilon_0} \tag{5}$$

where N is the number of fitting poles of the equation [5,27,29,30].
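For readers who want to reproduce reference curves, the sketch below implements the multi-pole Cole-Cole model of Equation (5) and recovers the dielectric constant and conductivity via Equations (1), (3) and (4). It is a generic implementation, not the authors' code; the example parameters are the single-pole deionized-water values quoted later in the text.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # permittivity of vacuum, F/m

def cole_cole(freq_hz, eps_inf, d_eps, tau, alpha, sigma_s):
    """Multi-pole Cole-Cole model (Equation (5)).

    d_eps, tau and alpha hold one entry per pole; returns the complex
    relative permittivity eps*(omega)."""
    omega = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    eps = np.full_like(omega, eps_inf, dtype=complex)
    for de, t, a in zip(d_eps, tau, alpha):
        eps += de / (1.0 + (1j * omega * t) ** (1.0 - a))
    eps += sigma_s / (1j * omega * EPS0)
    return eps

freq = np.linspace(0.5e9, 4.0e9, 8)
# Single-pole water parameters quoted later in the text:
# eps_inf = 4.22, eps_s = 79.9, tau = 8.8 ps, alpha = 0.013, sigma_s ~ 0
eps_star = cole_cole(freq, 4.22, [79.9 - 4.22], [8.8e-12], [0.013], 0.0)
eps_prime = eps_star.real                           # Equation (3)
sigma = -2.0 * np.pi * freq * EPS0 * eps_star.imag  # Equations (1)/(4)
```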
For the dielectric characterization of biological tissue, the 4-pole Cole-Cole model has been proposed [2,5,29].
The main goal of the work presented in this paper is the preparation, dielectric characterization and study of the dielectric property stability of tissue-mimicking materials relevant to human wrist and ankle anthropomorphic phantoms. The aim is to enable the non-invasive diagnosis and monitoring of bone diseases, such as osteoporosis, through a microwave imaging system.
Bone mineral density, usually measured via dual-energy X-ray absorptiometry, is considered a primary parameter in the diagnosis of osteoporosis. However, recent developments suggest a correlation between the dielectric properties of bone and its mineral density [3]. Amin et al. [3] presented results suggesting that the mean relative permittivity of the femoral head in a patient diagnosed with osteoarthritis is higher than that in an osteoporotic patient. Ruchikerketta et al. [6] showed that as bone mineral density decreases with the onset of osteoporosis, the dielectric constant and conductivity increase proportionately. Therefore, the study of bone dielectric properties is of great relevance for the development of microwave radiation-based medical devices for diagnosing osteoporosis or even assessing bone fracture risk [3,31].
The studied samples, composed of Triton X-100, deionized water and sodium chloride, were prepared to mimic the dielectric properties of skin, fat, muscle, cortical bone and trabecular bone.
The dielectric measurements were performed in a frequency range of 0.5-4.0 GHz, at room temperature, using the open-ended coaxial probe technique.
Materials and Methods
Several procedures are available in the literature to prepare liquid mimicking materials of human tissues [32]. For the present application, the target tissues were skin, fat, muscle, cortical bone and trabecular bone.
To simulate the dielectric properties of skin, fat, muscle, cortical bone and trabecular bone, several mixtures containing Triton X-100 (Rohm and Haas Co., Philadelphia, PA, USA), deionized water and sodium chloride (Pronalab) were prepared.
The authors used Böttcher's model [4], given by Equation (6), to set the Triton X-100 and sodium chloride quantities required to produce the mixtures that mimic the various tissues, by fitting the binary mixture equation to the Debye or Cole-Cole models:

$$\frac{\varepsilon_m^* - \varepsilon_1^*}{3\,\varepsilon_m^*} = v_2\,\frac{\varepsilon_2^* - \varepsilon_1^*}{\varepsilon_2^* + 2\,\varepsilon_m^*} \tag{6}$$

where the subscripts m, 1 and 2 stand for mixture, Triton X-100 and sodium chloride solution, respectively, and v2 is the volume fraction of the sodium chloride solution.
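Since Equation (6) is implicit in ε*m, it helps to note that it rearranges into a quadratic in ε*m. The sketch below solves that quadratic; it is a generic illustration of Böttcher's rule under the subscript convention above, not the authors' fitting code.

```python
import numpy as np

def bottcher_mixture(eps1, eps2, v2):
    """Solve Bottcher's binary-mixture rule (Equation (6)) for the mixture
    permittivity eps_m, given the complex permittivities of Triton X-100
    (eps1) and of the NaCl solution (eps2), and the volume fraction v2 of
    the NaCl solution. Equation (6) rearranges to the quadratic:
        2*eps_m**2 + b*eps_m - eps1*eps2 = 0,
        with b = eps2 - 2*eps1 - 3*v2*(eps2 - eps1).
    """
    b = eps2 - 2.0 * eps1 - 3.0 * v2 * (eps2 - eps1)
    disc = np.sqrt(b * b + 8.0 * eps1 * eps2)  # complex square root
    roots = ((-b + disc) / 4.0, (-b - disc) / 4.0)
    return roots[0] if roots[0].real > 0 else roots[1]  # physical root

# Sanity checks: v2 = 0 must return eps1 and v2 = 1 must return eps2
print(bottcher_mixture(4.0 + 0j, 78.0 - 10j, 0.0))  # -> (4+0j)
print(bottcher_mixture(4.0 + 0j, 78.0 - 10j, 1.0))  # -> (78-10j)
```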
The composition of the prepared tissue-mimicking materials is presented in Table 1. For each composition, an aqueous solution of sodium chloride was prepared and added to the Triton X-100, according to the stipulated volume percentages. The obtained materials were stirred until homogeneity was achieved, and then stored at room temperature and protected from light. The dielectric measurements were performed using an Agilent 85070E probe connected to an HP 8753D Network Analyzer, in a frequency range of 0.5-4.0 GHz.
Prior to the measurements, the system was calibrated using open, short and deionized-water standards, as is standard procedure [33].
The dielectric properties of the studied materials can be determined by inserting the probe into the sample, with no special fixtures or containers being required. The measurements are non-destructive and can be made in real time [34].
The open-ended coaxial probe was built as a section of a straight rigid coaxial transmission line. One extremity was assembled as the input port, in the form of a coaxial connector, and the other extremity, after being cut off and machined as an open end, constituted the probe tip. The probe tip was inserted into the sample, and the electric field lines formed between the electrodes of the probe tip changed as they penetrated the material. Consequently, the reflected signal, in the form of the reflection coefficient R*, could be measured at the probe input port using the Network Analyzer. This parameter is a complex quantity defined by its real and imaginary parts. Since R* is a function of ε*, and vice versa, the complex relative permittivity of the sample can be calculated from R* through the software provided by the probe manufacturer, which comprises calculations based on the measurements previously performed during the probe calibration: open (without any sample, with the open end in air), short (open end shorted by a conductive material) and a known liquid (deionized water in the present case) [35].
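The conversion from R* to ε* performed by the manufacturer's software is proprietary. Purely as an illustration, a frequently cited first-order approximation treats the probe aperture as a lumped capacitance; the sketch below implements that simple model, with c0 and cf as hypothetical calibration-derived capacitances. It is not the algorithm used by the Agilent software.

```python
import numpy as np

Z0 = 50.0  # characteristic impedance of the coaxial line, ohm

def permittivity_from_reflection(R, freq_hz, c0, cf):
    """First-order capacitive model of an open-ended coaxial probe:
    the aperture admittance Y = j*omega*(cf + c0*eps*) is obtained from
    the measured complex reflection coefficient R at the probe plane."""
    omega = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    admittance = (1.0 - R) / (Z0 * (1.0 + R))
    return (admittance / (1j * omega) - cf) / c0
```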
The conductivity of the sample was subsequently calculated from the imaginary part of the complex relative permittivity using the relationship defined in Equation (1).
The schematic representation of the measurement setup is depicted in Figure 1.
Since the open-ended coaxial probe technique is very sensitive to Network Analyzer drifts and inappropriate handling of the probe [7], the calibration of the equipment was tested by measuring the dielectric properties of deionized water and comparing the experimental data with the values obtained using the Cole-Cole model, through Equations (3) and (4). The optimized parameters used were ε∞ = 4.22, εs = 79.9, τ = 8.8 ps and α = 0.013 [34,37].
Figure 2 shows the dielectric constant and the conductivity (inset) of deionized water, measured at room temperature, and the data predicted by the Cole-Cole model.
The average percentage differences between the dielectric constant and conductivity of the measured values in relation to the data estimated by the Cole-Cole model were found to be 0.33% and 5.65%, respectively.
Besides the deionized water measurements and comparison with the Cole-Cole model, one of the tissue-mimicking materials was measured three times after the equipment calibration. The maximum average percentage differences between the dielectric constant and conductivity of the measured values in relation to the mean value were 3.36% and 1.06%, respectively.
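A minimal sketch of the percentage-difference metric used in these calibration checks is given below; the function name and the example numbers are ours, and the metric is one straightforward reading of the "average percentage difference" reported in the text.

```python
import numpy as np

def mean_percentage_difference(measured, reference):
    """Average of |measured - reference| / |reference|, in percent."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(measured - reference) / np.abs(reference))

# e.g., dielectric constant of water: measured vs. Cole-Cole prediction
print(mean_percentage_difference([77.8, 76.9, 75.6], [78.0, 77.2, 75.9]))
```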
Results
The measured dielectric properties of the tissue-mimicking materials are shown in Figure 3. Besides the measured values, and for comparison, the values proposed by the IT'IS database [38] and those calculated using the Cole-Cole model are also presented.
As previously mentioned, the four-pole Cole-Cole model was adopted, with the optimized parameters obtained from [15].
The dielectric parameters used in the IT'IS database are based on the Gabriel [39] dispersion relationships; for that reason, in four of the five tissue-mimicking materials, the data from the database and those estimated through the Cole-Cole model are very similar, or even coincident in the case of the dielectric constant values.
The dielectric constants of the tissue-mimicking materials are well aligned with the predicted values. Nevertheless, there is a significant deviation between the conductivity of the fat- and cortical-bone-mimicking materials and the reference data.
In the case of the cortical-bone-mimicking material, this mismatch occurs only at higher frequencies. An increase in the sodium chloride content could promote an increase in conductivity with a minor effect on the dielectric constant [40]; however, this adjustment would only benefit the high-frequency measurements, having a negative impact in the low-frequency zone. Moreover, the low-frequency band ranging from 0.5 to 2.4 GHz offers greater electromagnetic field penetration depth, which is reduced significantly above 3 GHz. Thus, this band is observed to be more feasible for microwave imaging applications [1].
Regarding the fat sample, composed only of Triton X-100, the experimental values are higher than the predicted ones, which means that the addition of sodium chloride is not the solution. However, other aspects must be taken into consideration, such as the higher viscosity of the samples with a higher percentage of Triton X-100, which can complicate the dielectric property measurements [40], or the fact that Böttcher's model does not work for every mixture [4].
Joachimowicz et al. [40] reported, for a fat-tissue-mimicking material, a percentage difference of 75% in the conductivity measured at 2.45 GHz relative to the Debye model, with the experimental value also being higher than the predicted one.
The average percentage differences between the experimental values and the reference data are presented in Table 2. As one can see, in all the samples, the experimental and reference data show better alignment for the dielectric constant. In the case of the fat- and cortical-bone-mimicking materials, the deviation between the experimental and reference values cannot be disregarded.
To analyze the stability of the dielectric properties over time, they were measured three times, with a break of 15 days between the first and the second measurements and a break of 20 days between the second and third measurements.
Figure 4a depicts the comparison between the three measured values of the dielectric constants. There is no major difference between the values measured on the first two dates. Upon analyzing the third measurement, accentuated differences are observed in the case of the muscle and the skin, with the dielectric constant showing lower values. In the cortical-bone-mimicking material, the dielectric constant stability is more accentuated.
Regarding the conductivity, presented in Figure 4b, there is higher coherence between the values measured on the three dates.
Tables 3 and 4 show the dielectric constant and the conductivity, respectively, of the three measurements, at 2.45 GHz. Moreover, the percentage differences between the first and the second measurements, and between the first and the third measurements, are also presented.
These results are promising, since one of the fundamental characteristics of the mimicking materials is the stability of their dielectric properties over time, which guarantees the repeatability of the measurements.
To better understand the relationship between the composition of the tissue-mimicking materials and the stability of the dielectric properties over time, Figure 5a shows the percentage differences for the dielectric constant and conductivity as a function of the Triton X-100 percentage, and Figure 5b as a function of the NaCl content, for the first and third measurements, performed at 2.45 GHz. Both the dielectric constant and conductivity show the same trend, with the skin-mimicking material presenting the highest percentage differences. Since the high content of NaCl in the muscle-mimicking material did not compromise its dielectric stability, and the high percentage of Triton X-100 did not compromise the dielectric stability of the materials mimicking the fat or the bone, it is valid to infer that it is the high content of NaCl combined with the 40% Triton X-100 percentage that makes the preservation of the dielectric properties over time unfeasible.
Figure 1. Experimental apparatus used for the measurement of the dielectric constant and dielectric losses (adapted from [36]).
Figure 2. Frequency dependence of dielectric constant and conductivity (inset) of deionized water: measured values and those predicted by the Cole-Cole model.
Figure 3. Frequency dependence of dielectric constant and conductivity (inset) of the tissue-mimicking materials measured, predicted by the Cole-Cole model and proposed by the IT'IS database: (a) fat; (b) cortical bone; (c) trabecular bone; (d) skin; (e) muscle.
Figure 4. Frequency dependence of (a) dielectric constant and (b) conductivity of the tissue-mimicking materials, measured on different dates.
Figure 5. Dielectric constant and conductivity percentage differences between the first and third measurements, performed at 2.45 GHz, as a function of (a) Triton X-100 percentage and (b) sodium chloride content.
Table 1. Composition of the prepared tissue-mimicking materials.
Table 2. Average percentage differences between the measured and the reference values proposed by the IT'IS database.
Table 3. Dielectric constant values, measured at 2.45 GHz, and respective percentage differences between the first and second measurements and the first and third measurements.
Table 4. Conductivity values, measured at 2.45 GHz, and respective percentage differences between the first and second measurements and the first and third measurements. | 2023-08-07T15:34:37.558Z | 2023-08-03T00:00:00.000 | {
"year": 2023,
"sha1": "ac89badd3c2db2e6c1e9bd2c52141b22c7869ff9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2813-446X/1/2/7/pdf?version=1691042117",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "aad69f01e78dfa2b9de3234ee2f14f27f595d9d5",
"s2fieldsofstudy": [
"Medicine",
"Materials Science",
"Engineering"
],
"extfieldsofstudy": []
} |
264585309 | pes2o/s2orc | v3-fos-license | Children with developmental coordination disorder are less able to fine-tune muscle activity in anticipation of postural perturbations than typically developing counterparts
The majority of children with developmental coordination disorder (DCD) struggle with static and dynamic balance, yet there is limited understanding of the neuromechanical mechanisms that underpin poor balance control in these children. Eighteen children with DCD and seven typically developing (TD) children aged 7-10 years stood with eyes open on a moveable platform progressively translated antero-posteriorly through three frequencies (0.1, 0.25 and 0.5 Hz). Myoelectric activity of eight leg muscles, whole-body 3D kinematics and centre of pressure were recorded. At each frequency, postural data were divided into transition-state and steady-state cycles. Data were analyzed using a linear mixed model with follow-up Tukey's pairwise comparisons. At the slowest frequency, children with DCD behaved like age-matched TD controls. At the fastest frequency, children with DCD took a greater number of steps, had greater centre of mass variability, had a greater centre of pressure area, and tended to activate their muscles earlier and for longer than TD children. Children with DCD did not alter their postural response following prolonged exposure to platform movement; however, they made more non-structured postural adjustments in the medio-lateral direction as task difficulty increased. At the faster oscillation frequencies, children with DCD adopted a different muscle recruitment strategy to TD children. Activating their muscles earlier and for longer may suggest that children with DCD attempt to predict and react to postural disturbances; however, the resulting anticipatory muscle excitation patterns do not seem as finely tuned to the perturbation as those demonstrated by TD children. Future work should examine the impact of balance training interventions on the muscle recruitment strategies of children with DCD, to ensure optimal interventions can be prescribed.
Introduction
Developmental coordination disorder (DCD) is a movement disorder characterized by reduced motor competence and poor motor coordination, in the absence of other identifiable neurological and/or medical disorders (American Psychiatric Association, 2013). Affecting 5-6% of school-aged children (Zwicker et al., 2012), children with DCD experience significant problems in their fine and/or gross motor skills (Geuze et al., 2001). Most children with DCD also experience significant difficulties with both static and dynamic balance, which can lead to secondary issues such as non-participation in physical activity (Fong et al., 2011) and an increased risk of tripping and falling (Scott-Roberts and Purcell, 2018). As balance is integral to the successful performance of most functional skills (Huxham et al., 2001), it is essential to study the underlying mechanisms that may underpin poor balance control in children with DCD, to ensure that optimal interventions can be prescribed.
It is well established that, even for highly repetitive or simple balance tasks, human movement patterns are varied (Hausdorff, 2007; Turnock and Layne, 2010). However, this variation is not random, with quantifiable patterns evident in the changes that occur. This time-based organization of variation, or structure, in movement patterns is recognized as an important feature of a neuromuscular system that can adapt to perturbations and changes in the surrounding environment (Bolton, 2015). Variation in the walking characteristics of typically developing (TD) children (age 3-14 years) is less structured (more random) than that of adults (Hausdorff et al., 1999). Therefore, studying structure within movement patterns can reveal variations in the growth and maturation of the motor control system. Structure also exists in the muscle activation and coordination that drives movements (Hodson-Tole and Wakeling, 2017; Wakeling and Hodson-Tole, 2018). These structures can change in response to postural control challenges (Ferrari et al., 2020), highlighting the importance of neuromuscular drive in determining motor behaviors.
Postural control can be divided into reactive (feedback) and anticipatory (feedforward) responses, whereby postural adjustments are made either subsequent, or prior, to a balance perturbation. Responses to postural disturbances also scale to the level of postural threat (Adkin et al., 2000) and depend on the size of the perturbation. For instance, during smaller perturbations, an ankle strategy is often effective, whereby torque generated about the ankle joint is sufficient to maintain balance (Massion, 1994). In larger perturbations, a more severe response may be required, such as a hip strategy, whereby large, rapid movements are generated about the hips to regain centre of mass (COM) equilibrium (Horak and Nashner, 1986). As we develop across the lifespan, we learn to adapt to different perturbations through mechanisms that are dynamic and flexible (Haddad et al., 2013). However, individuals with DCD often present with a poor organization of body movements in relation to the global environment (Green and Payne, 2018); therefore, it is important to assess the postural responses of those with DCD during balance perturbations.
Reactive and anticipatory mechanisms of postural control have been described previously for single discrete perturbations in children with DCD. During unexpected perturbations, Cheng et al. (2018) found that children with DCD reacted later than TD children to a forward push, whereas Fong et al. (2015) reported no group differences when reacting to a backward-moving platform. During planned movements, children with DCD presented with fewer anticipatory muscle activations when kicking a ball and climbing stairs (Kane and Barden, 2012), and had a shorter duration between muscle activity onset time and peak activation than TD children during a Y-balance test, which was suggested to be a potential mechanism to compensate for a less-effective feedforward control system (Yam and Fong, 2019). Whilst knowledge of postural control during single perturbations is important, it is also essential to assess movement strategies during continuous dynamic situations (such as a moving base of support), to fully understand the underlying mechanisms that may contribute to poor balance control (Horak et al., 2009). The oscillating platform paradigm causes both reactive and anticipatory postural control strategies to be generated to overcome the same perturbation (Mills and Sveistrup, 2018).
While these reactive and anticipatory postural control strategies have been studied in children with other motor impairments (e.g., cerebral palsy; Mills et al., 2018), to our knowledge, they have not been studied in children with DCD during continuous dynamic movement. Additionally, no previous work has studied the structure of postural sway characteristics in children with DCD, nor evaluated the association with muscle activation and coordination. Therefore, the primary aim of this study was to compare postural responses to continuous platform oscillations between children with DCD and TD children. The secondary aim of this study was to determine if children with DCD were able to modify postural responses after prolonged exposure to platform movement. We hypothesized that children with DCD would be less able to adapt their postural responses compared to TD children after prolonged exposure to platform movement.
Participants
Eighteen children with DCD and seven TD children participated in this study. Children with DCD were recruited through parental support groups on social media (e.g., Facebook). TD children were recruited via social media and convenience sampling (e.g., sibling of a child with DCD). Participant characteristics are shown in Table 1. Children in the DCD group satisfied the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) criteria for DCD. Parents/guardians completed a questionnaire (Wilson et al., 2009) to confirm that their child had significant movement difficulties that interfered with balance, did not suffer from any general medical condition known to affect sensorimotor function, and had no diagnosed learning difficulties (DSM-5 criteria B, C, D). If any known medical conditions or learning difficulties were identified, these children were excluded from the study. Children with DCD were required to score ≤ 5th percentile (overall), reflecting definite motor impairment (DSM-5 criterion A), and ≤ 15th percentile (balance subscale), reflecting 'risk' of motor impairment, on the Movement Assessment Battery for Children, Second Edition (MABC-2; Henderson et al., 1992). TD children were required to score > 15th percentile (balance subscale), reflecting no motor impairment. Parents/guardians also completed the Attention Deficit Hyperactivity Disorder (ADHD) Rating Scale-IV (DuPaul et al., 1998). The institutional research ethics committee granted ethical approval. Written informed consent was obtained from parents/guardians and written assent given by children, in accordance with the Declaration of Helsinki.
Experimental protocol
The experimental protocol for this study was adapted from others described previously (Bugnariu and Sveistrup, 2006; Mills and Sveistrup, 2018). Participants stood upright with eyes open and feet shoulder-width apart in the centre of a moveable platform. The platform was driven by electromagnetic propulsion, controlled via custom-written software (LabVIEW v19 SP1, National Instruments, Austin, Texas) through a DAQ card (USB-6210, National Instruments). Participants were instructed to maintain balance and avoid taking steps unless necessary. If steps were taken, participants were instructed to return to their initial position as quickly as possible. The platform translated 10 cm peak-to-peak in the antero-posterior direction. Two trials of ten sinusoidal oscillations at a frequency of 0.1 Hz, twenty oscillations at 0.25 Hz, and forty oscillations at 0.5 Hz (Figure 1A) were presented, with frequency changes presented sequentially and automatically. Participants were aware that platform frequency would increase; however, they were not informed as to when this would occur.
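For concreteness, the sketch below generates one trial's commanded platform displacement as described above (10 cm peak-to-peak, with the stated cycle counts per frequency presented sequentially). It is an illustration of the protocol, not the LabVIEW control code used in the study.

```python
import numpy as np

def platform_profile(fs=100.0, amplitude=0.05):
    """Antero-posterior platform displacement (m): 10 cm peak-to-peak
    sinusoids, 10 cycles at 0.1 Hz, 20 at 0.25 Hz and 40 at 0.5 Hz,
    sampled at the motion-capture rate fs (Hz)."""
    segments = []
    for freq, n_cycles in ((0.1, 10), (0.25, 20), (0.5, 40)):
        t = np.arange(0.0, n_cycles / freq, 1.0 / fs)
        segments.append(amplitude * np.sin(2.0 * np.pi * freq * t))
    return np.concatenate(segments)

displacement = platform_profile()  # each frequency segment ends at zero phase
```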
Full-body kinematics were collected at 100 Hz using a 10-camera motion analysis system (Qualisys v2021.1, Gothenburg, Sweden). Passive retro-reflective markers (n = 47) were positioned on all body segments (modified Plug-in Gait model). Two additional markers were positioned on the oscillating platform to record its position. For the outcome measures described below, head and trunk angle, and whole-body COM were calculated in Visual 3D (v2021.06.2, C-Motion, Rockville, MD). Bilateral surface electromyography (EMG; Delsys Inc., Natick, United States) from rectus femoris (RF), biceps femoris (BF), tibialis anterior (TA), and medial gastrocnemius (MG) muscles was collected at 1000 Hz in Qualisys. Centre of pressure data were collected using a Kistler force plate (Type 9281B, Kistler Instrument Corp., Winterthur, Switzerland) at 1000 Hz. Force data were recorded in BioWare software (v5.4.3.0), synchronized to motion data by the Qualisys trigger.
Outcome measures
At each platform frequency, the number of cycles containing a step was manually counted at the time of data collection and verified using motion capture data. Centre of pressure (COP) area was calculated using a 90% confidence ellipse. COM displacement variability in the antero-posterior and medio-lateral directions was assessed in terms of each signal's standard deviation (SD) and the timescale over which short-term fluctuations in the signal persisted, calculated as the Entropy Halflife (EnHL). In the antero-posterior direction, both absolute and adjusted data are presented, whereby platform displacement was subtracted from the COM data. To calculate the EnHL, the COM in the antero-posterior and medio-lateral directions was split into equal-length epochs containing all cycles within a single platform oscillation frequency. Each signal was high-pass filtered (2nd order Butterworth, 10 Hz cut-off) to attenuate temporal oscillations imposed by the platform movement (Figure 2A). The filtered signal was standardized (mean = 0, SD = 1) and a reshape-timescale approach (Zandiyeh and Von Tscharner, 2013) used to generate restructured time series with increasing time intervals (1 ms - 6 s) between consecutive data points (Figure 2B). The sample entropy (SampEn) of each reshaped signal was calculated using freely available software (Goldberger et al., 2000), with m = 1 and r = 0.2. SampEn provides the conditional probability that a time series of m data points remains affiliated, with a tolerance of r, if a data point is added to it (Richman and Moorman, 2000). Resulting SampEn values increase (indicating less regularity) as the reshape scale increases, reflecting the breakdown of short-term signal fluctuations (Figure 2B). The series of SampEn values produced was normalized to the maximum SampEn calculated for the original time series (when m = 0 and r = 0.2). This normalization means that the reshape timescale at which SampEn = 0.5 represents the timescale at which the signal transitions from containing regular, structured fluctuations to being random, called the EnHL (Zandiyeh and Von Tscharner, 2013). These analyses were completed using custom-written code in Wolfram Mathematica (version 11.1.1).
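A compact sketch of this EnHL pipeline is shown below. The sample-entropy routine is a standard O(n²) implementation, and the reshape step is one simple reading of the reshape-timescale method; neither is the authors' Mathematica code.

```python
import numpy as np

def sample_entropy(x, m=1, r=0.2):
    """SampEn(m, r) of a standardized signal (tolerance r in SD units)."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def n_matches(mm):
        if mm == 0:
            return n * (n - 1) // 2  # every pair of empty templates matches
        templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist < r))
        return count

    b, a = n_matches(m), n_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def reshape_timescale(x, k):
    """Rearrange the series so consecutive samples lie k points apart
    (one reading of Zandiyeh and Von Tscharner, 2013)."""
    return np.concatenate([x[start::k] for start in range(k)])

def entropy_halflife(x, fs, max_scale=600):
    """Timescale (s) at which normalized SampEn first reaches 0.5."""
    x = (x - np.mean(x)) / np.std(x)
    sampen_max = sample_entropy(x, m=0)  # normalization reference (m = 0)
    for k in range(1, max_scale + 1):
        if sample_entropy(reshape_timescale(x, k)) / sampen_max >= 0.5:
            return k / fs
    return None
```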
Head anchoring index (AI) was calculated using Equation (1) (Mills and Sveistrup, 2018) to determine the stabilization strategy of the head in relation to both the global environment and the trunk segment:

$$AI = \frac{\sigma_r^2 - \sigma_a^2}{\sigma_r^2 + \sigma_a^2} \tag{1}$$

where σa is the SD of the absolute head angle relative to the global coordinate system, and σr is the SD of the head angle relative to the trunk segment. A positive AI indicates a head-stabilized-in-space strategy; a negative AI indicates a head-stabilized-to-trunk strategy.
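A direct translation of Equation (1) into code, assuming the variance-based form reconstructed above and hypothetical arrays of head angles in degrees:

```python
import numpy as np

def anchoring_index(head_global_deg, head_rel_trunk_deg):
    """Head anchoring index (Equation (1)). AI > 0.2 suggests a
    head-stabilized-in-space strategy; AI < -0.2 suggests a
    head-stabilized-to-trunk strategy (thresholds per Figure 5)."""
    var_a = np.var(head_global_deg)     # sigma_a squared (global frame)
    var_r = np.var(head_rel_trunk_deg)  # sigma_r squared (relative to trunk)
    return (var_r - var_a) / (var_r + var_a)
```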
To calculate muscle activity onset latencies, EMG signals were decomposed into time-frequency space using an EMG-specific wavelet analysis approach (Von Tscharner, 2000). Specifically, a filter bank of 11 non-linearly scaled wavelets (k = 0, ..., 10) with central frequencies spanning 6.90-395.44 Hz was used to resolve the EMG signal intensities into time/frequency space. Total intensity was calculated as the sum of the signal power contained within wavelets 1 ≤ k ≤ 10, providing a representation of the signal power at each time point whilst removing the effects of low-frequency signal components (i.e., those contained within the first wavelet, k = 0).
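The central frequencies of this filter bank follow the published scaling rule, which the sketch below reproduces; the full wavelet transform is not reimplemented here, and the final comment only indicates how the total intensity would be formed from per-wavelet intensity traces.

```python
import numpy as np

def von_tscharner_centre_freqs(n_wavelets=11, scale=0.3, q=1.45, r=1.959):
    """Central frequencies of the non-linearly scaled wavelet filter bank
    (Von Tscharner, 2000): cf(k) = (k + q)**r / scale. With these defaults
    the bank spans ~6.90 Hz (k = 0) to ~395.44 Hz (k = 10)."""
    k = np.arange(n_wavelets)
    return (k + q) ** r / scale

cfs = von_tscharner_centre_freqs()
# Total intensity would sum per-wavelet intensities over 1 <= k <= 10,
# discarding the low-frequency wavelet k = 0, e.g.:
# total_intensity = intensities[:, 1:11].sum(axis=1)
```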
The occurrence of muscle activity with respect to the relevant platform change of direction was identified manually using the ginput function in MATLAB (R2022a, MathWorks Inc., Natick, MA, USA). To be considered for inclusion as muscle activity, EMG intensity had to meet or exceed two SDs above baseline (defined as the quiet period prior to trial start) and last for more than 50 ms (Mills and Sveistrup, 2018). For RF and TA, this was when the platform transitioned from the backward to the forward direction. For BF and MG, this was when the platform transitioned from the forward to the backward direction (Figures 1B-D). To remove the subjectivity of this method, a custom MATLAB script was subsequently used. Firstly, the EMG intensities at the manually identified muscle activity onset times were determined and averaged for each muscle to calculate an onset threshold. Activity onset times were then automatically adjusted using the script, so that all activity onsets for a given participant occurred when EMG intensity surpassed their defined muscle threshold. Lastly, the total activity time of each muscle 'burst' was calculated as the time between activity onset and the first subsequent instance that the EMG intensity envelope dropped below the onset threshold. All muscle activity data were expressed as a percentage of half-cycle time, to allow for comparisons between different platform frequencies. Muscle activity bursts were coded as anticipatory where they occurred before the change of direction, and as reactive where they occurred after the change of direction.
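A simplified version of the initial thresholding rule (before the participant-specific refinement described above) might look like the following; array and parameter names are hypothetical.

```python
import numpy as np

def detect_onsets(intensity, fs, baseline, min_duration_s=0.05):
    """Flag candidate activity onsets: intensity must meet or exceed the
    baseline mean + 2 SD and stay above it for more than 50 ms."""
    threshold = np.mean(baseline) + 2.0 * np.std(baseline)
    above = np.asarray(intensity) >= threshold
    min_len = int(min_duration_s * fs)
    onsets, i = [], 0
    while i < len(above):
        if above[i]:
            j = i
            while j < len(above) and above[j]:
                j += 1            # extend the supra-threshold run
            if j - i > min_len:   # keep runs longer than 50 ms
                onsets.append(i)
            i = j
        else:
            i += 1
    return np.array(onsets)
```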
For AI and EMG data, platform frequencies were sub-divided into 'transition-state' and 'steady-state'. Transition-state was defined as the first 3 cycles at 0.1 Hz, and the first 5 cycles at 0.25 and 0.5 Hz. Steady-state was defined as a period within the last half of each frequency that contained 5 cycles without stepping at 0.1 Hz, and a period of 8-10 cycles without stepping at 0.25 and 0.5 Hz, whereby the movement of the platform is predictable (Bugnariu and Sveistrup, 2006).
Statistical analysis
All statistical analyses were completed using RStudio (RStudio 1.3.959). Descriptive statistics (Table 1) are reported as mean ± standard deviation (SD). A linear mixed model (LMM; lme4 package; Bates et al., 2015) was developed to quantify differences for each outcome measure (number of steps, COM SD, COM EnHL, COP area, head anchoring index, muscle onset latency and total excitation time) between groups (DCD vs. TD), platform frequencies (0.1 Hz vs. 0.25 Hz vs. 0.5 Hz) and platform states (transition vs. steady-state) (fixed effects). Participant ID was included as a random effect. Assumptions of linearity and normality of the model were checked visually, and homogeneity of variance was assessed using Levene's test (p > 0.05; Levene, 1960). Estimated means for each variable were derived from the model using the emmeans package, and are reported as mean ± standard error (SE). To identify between-group and between-state differences, Tukey's pairwise comparisons were conducted.
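The model itself was fitted in R with lme4 and estimated means extracted with emmeans; as a rough Python analogue (with hypothetical, randomly generated data, and without the Tukey follow-up, which would require additional tooling), a random-intercept model could be specified as follows.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x frequency x state
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "participant_id": np.repeat(np.arange(25), 6),
    "group": np.repeat(["DCD"] * 18 + ["TD"] * 7, 6),
    "frequency": np.tile(np.repeat([0.1, 0.25, 0.5], 2), 25),
    "state": np.tile(["transition", "steady"], 75),
    "cop_area": rng.normal(10.0, 2.0, 150),
})

# Random intercept per participant, fixed effects as in the text
model = smf.mixedlm("cop_area ~ group * C(frequency) * state",
                    data=df, groups=df["participant_id"])
print(model.fit().summary())
```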
Anchoring index
Despite some individual participants adopting a head-stabilized-in-space strategy or a head-stabilized-to-trunk strategy, average data indicate no clear head stabilization strategy in either group (AI of < -0.2 or > 0.2; Figure 5). There were no group differences detected during transition- or steady-state cycles at any platform frequency (unclear ESs; p > 0.05), and no state differences detected in either group (unclear ESs; p > 0.05).

At 0.25 Hz, children with DCD generally activated their muscles at a similar time between platform states (except that TA excitation occurred later in steady-state); however, excitation duration was longer in steady-state cycles for the TA (small ES: 0.49 ± 0.61) and MG (small ES: 0.52 ± 0.58) than in transition-state cycles. During steady-state cycles, TD children tended to activate their muscles earlier and for shorter durations than in transition-state cycles; however, all effect sizes were unclear. At 0.5 Hz, no clear trends were observed in muscle excitation onset time or excitation duration between platform states in either group. Full ES comparisons can be found in Supplementary Figures S1, S2.
Discussion
This study is the first to assess the postural and neuromuscular responses of children with DCD using a continuous balance perturbation paradigm. As expected, children with DCD were generally more unstable than TD children, particularly at the highest platform frequency. An increase in the number of children who took steps at 0.5 Hz reflects the increased difficulty of the task for both groups (Streepey and Angulo-Kinzler, 2002). However, children with DCD took steps more often than TD children to maintain balance (large ES). Children with DCD also had a greater COM variability (SD) than TD children in both the antero-posterior (large to very large ESs) and medio-lateral (moderate to large ESs) directions (Figure 3), indicating greater postural sway. This was further supported by the greater COP area covered by children with DCD (large ES). Despite the reduced stability of children with DCD, there was no detected difference in their global stabilization strategy compared to TD children. Children with DCD showed no preference for either a head-stabilized-to-trunk strategy or a head-stabilized-in-space strategy (Figure 5), whereas other populations with known balance deficits, such as children with cerebral palsy (Mills et al., 2018) and adults with Parkinson's disease (Mesure et al., 1999), adopt a head-stabilized-to-trunk strategy. This may be explained by a poor organization of body movements in relation to the global environment, often associated with DCD (Green and Payne, 2018).
Children with DCD did, however, adopt a different neuromuscular strategy to TD children at the faster platform frequencies. Generally, the organization of muscle excitation was distal to proximal in children with DCD, indicating that an ankle strategy was implemented to maintain balance (Massion, 1994). Whilst this was also the case for the anterior muscles of TD children, there were some instances whereby average posterior muscle excitation was ordered proximal to distal (Table 2). This may indicate that TD children were able to switch between an ankle and a hip strategy to maintain balance (Horak and Nashner, 1986). Children with DCD tended to activate their muscles earlier and for longer than TD children, regardless of platform state (Table 2). Whilst this does suggest that children with DCD attempt to predict and react to postural disturbances (Cordo and Nashner, 1982), the resultant anticipatory muscle excitations are different from those demonstrated by TD children. Thus, a lack of appropriate muscular reactions to balance perturbations may explain poor dynamic balance control in children with DCD.
Previous work has shown that children with DCD do not make postural adaptations when exposed to repeated discrete perturbations (Cheng et al., 2022). During our continuous perturbations, neither group made postural adjustments with prior knowledge of platform movement at the fastest platform frequency, as both muscle excitation onset time and total excitation duration remained similar between transition-state and steady-state cycles. However, this likely reflects the increased difficulty of the task at 0.5 Hz, as TD children were able to make postural adjustments with prior knowledge of platform movement at 0.25 Hz (Table 2).

While children with DCD exhibited greater postural sway than TD children (Figure 3), the structural organization of the antero-posterior COM variability (EnHL) did not differ between groups (Figure 4). This suggests that, to maintain balance, the control strategies adopted by children with DCD resulted in a similar temporal organization of the antero-posterior COM movement as in TD children, possibly explaining the similarity in the global kinematic outcome measures described above. However, surprisingly, the EnHL of the medio-lateral displacement of children with DCD became shorter as platform difficulty increased, whereas there was no change in TD children. This suggests that children with DCD made more, non-structured (random), postural adjustments in a plane orthogonal to platform movement as task difficulty increased. Previous work has shown those with DCD to explore more action space during a defined task by increasing the available degrees of freedom (Golenia et al., 2018). Therefore, this increased, less structured variability in the medio-lateral plane may be a compensatory mechanism resulting from the way children with DCD manage the degrees-of-freedom problem (Latash et al., 2007). It may also be explained by a lack of stiffening and/or appropriately organized recruitment of hip ab/adductor muscles, which are important for medio-lateral stability (Winter et al., 1996). However, as we did not measure muscle activity in these muscles, further work is required to confirm or deny this notion.

Some limitations should be acknowledged. Firstly, our sample size is small and does not include an even distribution of male/female participants. While sex differences in postural control have been shown previously in TD children (Smith et al., 2012), exploring sex differences between and within children with DCD and TD children was outside the scope of the current manuscript. Furthermore, it was not possible to accurately explore sex differences due to the insufficient amount of data per sub-level (e.g., TD male participants, n = 2). Future work with larger sample sizes is needed. EMG data were only collected for eight lower-limb muscles, yet conclusions are generalized to whole-body postural control. Further, our assumption that postural movement in the antero-posterior direction would be solely controlled by flexor/extensor muscles meant that all eight muscles considered for analysis were flexor/extensor muscles. Future work should therefore consider collecting EMG data from more muscles, and consider the role that ab/adductor and rotational muscles may play in ensuring postural stability in the antero-posterior direction. Future work should also consider assessing the EnHL of EMG data, to identify whether there are any differences in the temporal organization of muscle activity.
To conclude, data from the current study indicate that while children with DCD were not able to perform the task as well as TD children (more unstable), they were able to complete the task, actively working toward making similar global postural adjustments as TD children. However, to achieve a similar global stabilization strategy, children with DCD generated this response with a different neuromuscular strategy, activating their muscles earlier and for longer than TD children. Children with DCD also made more non-structured movements in a plane orthogonal to platform displacement as task difficulty increased, suggesting they utilize more degrees of freedom to overcome balance perturbations than TD children. Future work should examine the impact of balance training interventions on the muscle excitation patterns and coordination strategies of children with DCD, to ensure that appropriate interventions to improve balance can be prescribed. Future work should also consider the role of attentional deficits of children with DCD on postural control during continuous balance perturbations.
FIGURE 1
(A) Platform oscillation frequencies. (B) Platform oscillations at 0.5 Hz and corresponding EMG intensities from the rectus femoris (RF), tibialis anterior (TA), bicep femoris (BF), and medial gastrocnemius (MG) during transition-state and steady-state cycles. (C) Identification of anterior muscle activity onset. (D) Identification of posterior muscle activity onset. Solid vertical lines indicate platform change of direction. Dashed vertical lines indicate muscle activity onset. Δt indicates muscle onset latency.
One child with DCD took steps during 1 cycle at 0.1 Hz and 0.25 Hz. Three children with DCD took steps during 1 cycle at 0.25 Hz. No TD children took any steps at either 0.1 Hz or 0.25 Hz. At 0.5 Hz, 16 out of 18 children with DCD, and six out of seven TD children, took steps throughout the trial.
FIGURE 2
(A) An example medio-lateral COP displacement signal, from 0.25 Hz platform oscillation, as recorded (left) and after filtering (right). (B) Filtered signal reshaped at timescales of 3 ms (top left), 6 ms (lower left), 16 ms (lower right) and 40 ms (top right). Note the original repeating pattern of fluctuations is reduced as the reshape timescale increases. The normalized sample entropy values (SampEn) for each of these signals, and for all other reshape timescales, are shown in the central graph (log scale on x-axis). The timescale at which the normalized SampEn = 0.5 is highlighted (red), defining the EnHL for this signal as 13.78 ms.
FIGURE 3
Linear-mixed model estimated centre of mass variability, based on signal standard deviation, in the (A) medio-lateral, (B) absolute antero-posterior, and (C) antero-posterior direction adjusted for platform movement. Solid horizontal black lines indicate group averages. Effect sizes with 90% confidence intervals from (D) medio-lateral, (E) absolute antero-posterior, and (F) adjusted antero-posterior centre of mass variability comparisons. Positive/negative effect sizes in (D-F) represent smaller/greater variability for the 2nd comparator of each pairing. *Significant difference (p < 0.05). DCD, children with developmental coordination disorder; TD, typically developing children; ML, medio-lateral; AP, antero-posterior; COM, centre of mass.
FIGURE 4
Linear-mixed model estimated centre of mass entropy halflife (EnHL; expressed here in milliseconds) in the (A) medio-lateral, (B) absolute antero-posterior, and (C) antero-posterior direction adjusted for platform movement. Solid horizontal black lines indicate group averages. Effect sizes with 90% confidence intervals from (D) medio-lateral, (E) absolute antero-posterior, and (F) adjusted antero-posterior centre of mass EnHL comparisons. Positive/negative effect sizes in (D-F) represent shorter/longer EnHL for the 2nd comparator of each pairing. *Significant difference (p < 0.05). DCD, children with developmental coordination disorder; TD, typically developing children; ML, medio-lateral; AP, antero-posterior; COM, centre of mass; EnHL, entropy halflife.
FIGURE 5
Linear-mixed model estimated head anchoring index during transition-state (A) and steady-state (B) cycles. Dashed lines at ±0.2 indicate the threshold for a given strategy. Effect sizes with 90% confidence intervals from transition-state (C) and steady-state (D) cycles. DCD, children with developmental coordination disorder; TD, typically developing children; HSSS, head-stabilized-in-space strategy; HSTS, head-stabilized-to-trunk strategy.
TABLE 1 Mean ± standard deviation participant characteristics.
LMM estimated muscle activity data for transition-state and steady-state cycles are shown in Table 2. In general, both groups tended to activate their muscles earlier and for longer as task difficulty increased. At 0.25 Hz, muscle excitation occurred earlier in children with DCD than in TD children in the RF (moderate ES: 1.08 ± 1.07), TA (large ES: 1.62 ± 1.32) and MG (large ES: 1.49 ± 0.85) during transition-state cycles, and in the MG (moderate ES: 1.07 ± 0.85) during steady-state cycles. Muscle excitation duration of the MG was longer in children with DCD (moderate ES: 0.93 ± 1.03) than in TD children during steady-state cycles. At 0.5 Hz, muscle excitation of the MG occurred earlier in children with DCD during both transition-state (moderate ES: 1.13 ± 0.79) and steady-state cycles (large ES: 1.90 ± 0.81), and for longer in the BF (moderate ES: 1.02 ± 1.04) and MG (large ES: 1.58 ± 0.92) during transition-state cycles, and in the BF (moderate ES: 1.05 ± 1.04) and MG (large ES: 1.31 ± 0.94) during steady-state cycles, than in TD children.
Differences between platform states were also observed (Table 2). At 0.25 Hz, TD children activated their muscles earlier and for a shorter duration during steady-state cycles, which may suggest that they were better able to anticipate platform movement compared to transition-state cycles. In contrast, there were no changes in muscle excitation onset times between platform states in children with DCD, and muscle excitation duration was indeed longer in steady-state cycles. Overall, data from the current study indicate an altered neuromuscular coordination in children with DCD, which should be considered in future training interventions to improve balance control.
TABLE 2 Linear-mixed model estimated mean ± standard error timing of muscle activity during transition-state and steady-state cycles. Negative onset latencies indicate muscle excitation occurred before the platform change of direction. *Small, **moderate or ***large effect size difference between DCD and TD. †Small, ††moderate, †††large, or ††††very large effect size difference between transition-state and steady-state cycles. RF, rectus femoris; TA, tibialis anterior; BF, bicep femoris; MG, medial gastrocnemius; DCD, children with developmental coordination disorder; TD, typically developing children. | 2023-10-30T15:17:48.904Z | 2023-10-27T00:00:00.000 | {
"year": 2023,
"sha1": "1a5d0dfab510a0d4f45d51de9fdb0881e5a752ea",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "31f2df303471ba41c476ffce6008bc52f10b1d42",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235546444 | pes2o/s2orc | v3-fos-license | The Water–Energy–Food Nexus Discovery Map: Linking Geographic Information Systems, Academic Collaboration, and Large-Scale Data Visualization
The Water–Energy–Food (WEF) Nexus framework for holistic sustainable development has spawned independent and academic communities around the globe that utilize the framework in research, implementation, policy development, and technological advancement. These communities, however, are geographically and topically segmented and lack large-scale databasing that clearly catalogs and classifies their work. Recognizing this need, the WEF Nexus Strategic Initiative program at The Pennsylvania State University has developed the WEF Nexus Discovery Map utilizing the Arc Geographic Information Systems' (GIS) Online Dashboard creation toolkit. In real time, users are able to select from 5040 different combinations of filters with the ease of a few button pushes and see projects pop up or disappear from the map located on the dashboard. Projects can then be clicked on to view their specific information, such as the institution that produced the work, local collaborators, relevant web page, and point of contact. The WEF Nexus Discovery Map demonstrates the early new age of data resource management with the intersection of visuals, advanced search with built-in filters, and community-driven data collection to provide users with exact needs and connections to better facilitate and deploy the holistic sustainability framework of the WEF Nexus.
Introduction
Complexity in the global challenge to reduce and reverse the impacts of human environmental interference has spawned a myriad of outlooks and means of modeling complex systems that quantify the combined impacts of technology, culture, economics, and government. Additionally, human population growth, climate change, and high-density urbanization have drawn attention to the challenges of providing basic resources for survival [1]. The Water-Energy-Food (WEF) Nexus model proposes a prioritization on the interconnections in its three forenamed aspects as a means to model and deploy sustainable development with equitable discretion [2]. The WEF Nexus model drives equity by encouragement of interdisciplinary collaboration, minimization of disruptions to resource security, acknowledgement of often-disregarded stakeholders, and the large-scale mapping of system interactions. The WEF Nexus framework has been utilized to quantify linkages of resources around the globe, from emphasizing targets for sustainable intervention in South Africa, to mitigating the urban-rural resource conflict in India, to promoting synergy in wastewater management and energy production in Germany [3][4][5]. Thus, the WEF Nexus model has grown in popularity in academia, Non-Governmental Organizations (NGOs), and industry [6].
However, the networks and initiatives established by the users of the WEF Nexus model are often contained within themselves. Commonly, user groups digitally host in-depth and curated resources, models, infographics, media, and invitations for collaborations. Currently, a large aggregation of WEF-related material is maintained by Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ), a firm specialized in international corporate collaboration, in a data repository called the "Knowledge Hub", which hosts resources, training, videos, and projects from initiatives around the globe, all filterable by category, topic, region, language, and year [7]. While this repository accomplishes many user needs of accessibility and material depth, it only allows filtering of resources broadly by geographical region and lacks the ability to view country-specific projects, which may be a beneficial feature to improve searchability, especially for large continental regions. Since WEF modeling is commonly done with geographic specificity, a map-based visualization would provide an enhanced user experience. The WEF Nexus Discovery Map was conceptualized by the WEF Nexus Strategic Initiative program at The Pennsylvania State University to bring together a data repository with geographical visualization in an easily accessible web tool that encourages community contribution. Developed and hosted on ArcGIS Online, the interface welcomes users from academia, government, and industry to a geographically visualized database of WEF-based policy guides, research, emerging technologies, and implementation projects. In addition to the visualization of projects, a set of customizable filters enable viewing of projects by their category, relation to WEF aspects, region, and land class, which assist the user in searching for their desired resource and help facilitate connections with other relevant members of the WEF community. This paper provides background on the goals and creation of the WEF Nexus Discovery Map as well as insights into its limitations and potential for future work.
Dashboard Development
One criterion at the forefront for development of the WEF Nexus Discovery Map was ease of creation, as the development team sought creation tools that did not require extensive knowledge of HTML, Python, or JavaScript programming. The inspiration to utilize the ArcGIS Online dashboard creation toolkit came from viewing Johns Hopkins' COVID-19 Dashboard produced by their Center for Systems Science and Engineering (CSSE), in which they created a successful display of large-scale data equipped with easy-to-use clickable filters to focus visualized data [8]. Other examples of dashboards highlighted by ArcGIS showcase the range of possible uses of displays from active building permits in Utah to active fires around the world [9,10]. One example dashboard that shares similarities with the goals of this project is that produced by the NGO The Nature Conservancy on sustainable production and conservation initiatives [11]. That dashboard's core traits of displaying a repository of projects on a map represented by custom icons, and the use of clickable filters that work in real time to sort projects, solidified our selection of the ArcGIS Online dashboard creation toolkit for this work.
The ArcGIS cloud-based online platform allows users to create and host map content ranging from simple data point maps to website-like StoryMaps. Similar to ArcGIS, data entry onto a map is commonly done through the use of comma separated (.CSV) files. ArcGIS Online additionally makes many of the map making processes easy by taking user input from its commonly used desktop software. The input includes sets of readily available base maps, libraries of shapefiles for countries or counties, and presets of labels. Development of a customized dashboard follows the nature of ArcGIS Online's map creation in that the capabilities of a dashboard are pre-prepared elements. Elements such as "text box", "image", "filter", "map", and "counter" are simply added and placed into their desired position in the dashboard workspace. Inserting these elements into the dashboard is generally formulaic, in that the designer chooses a map, and then selects fields to either filter or quantify the data. Thus, while no new lines of code were required by our team in the creation of the WEF Nexus Discovery Map, concepts such as data management and debugging were necessary as we developed the webtool.
Input Data Generation
For the purpose of developing the WEF Nexus Discovery Map and showcasing its features to encourage community contribution, 100 sample projects from a variety of geographic locations were collected and added to the dashboard for its initial launch (Figure 1). These projects were collected from WEF Nexus databases hosted by WEF-focused groups, the WEF "Knowledge Hub", and university faculty projects. However, not all selected projects were previously explicitly categorized as WEF nexus related: projects that were determined to be WEF in the nature of their framework and analysis were also added to the database. For example, works that captured key elements of WEF such as detailing and managing interconnections of resource sectors, equitable access to resources, and stakeholder inclusion were regarded as works that embody the WEF framework and thus were added as projects. To maintain the accuracy and reliability in the inner workings of the WEF Nexus Discovery Map, the development of filters and refinement of data entry were concurrently considered during the collection of the 100 sample projects curated for the Discovery Map's initial launch. During this process, it was established that projects would fall into the filterable categories of policy guide, implementation, research, and emerging technology. Additional metadata on projects include their relation and interconnections in water-energy-food, point of contact, institution(s), local collaborator(s), keyword tags, year of publication or years active, land class of the project's geography, likely user groups, and coordinates of the project location. To meet the criterion of providing accurate geographical visualization, project locations were logged utilizing a spherical coordinate system. Projects specific to a city or town were recorded with their respective coordinates; however, projects regarding a country or region were recorded with their centroid for best representation. These centroid location data were often taken from Google's Developers published dataset [12]. A single file of comma-separated values (CSV) was created using the metadata from all projects and subsequently uploaded onto ArcGIS Online.
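The data-entry step just described can be sketched in a few lines. The snippet below is a minimal illustration of how project records with centroid coordinates might be assembled into a CSV for upload to ArcGIS Online; the field names and sample values are illustrative assumptions, not the Discovery Map's actual schema.

```python
import csv

# Illustrative project records; the field names are assumptions.
projects = [
    {
        "title": "Wastewater-to-energy synergy study",
        "category": "research",   # policy guide / implementation /
                                   # research / emerging technology
        "water": 1, "energy": 1, "food": 0,
        "institution": "Example University",
        "contact": "jane.doe@example.edu",
        "year": 2020,
        "land_class": "urban",
        # Country-level project: logged at the country centroid
        # (values here follow Google's published centroid dataset).
        "latitude": 51.1657, "longitude": 10.4515,
    },
]

with open("wef_projects.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=projects[0].keys())
    writer.writeheader()
    writer.writerows(projects)
```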
The CSV file was used to create the ArcGIS Online map that exists on the dashboard. This was achieved by selecting the location metadata and specifying the field information used for icon generation. Icons for projects serve to inform the user of both the project's category and combination of WEF traits; thus, custom icons were developed to represent the iconography of the category, and the colors blue, yellow, and green were utilized to represent water, energy, and food, respectively. The created map was then transferred to the dashboard creator in ArcGIS Online.
Dashboard Features
As previously described, the dashboard creation toolkit offers a variety of features that can be added to the interface by adding "elements" and simply dragging them to a section of the dashboard. The elements utilized in creation of the WEF Nexus Discovery Map included maps generated outside the dashboard, programmable filters usable on the metadata of each project, embeddable text and images, programmable counters, and a map element list. Embeddable content was used to host images for the map legend and to incorporate the "WEF Nexus Dashboard Guide" as a portable document format (pdf) file, which aids in the user experience. To foster the community aspect of the map, the "indicator" element was utilized to display the available projects in large font when logging onto the dashboard, which changes in real time as filters are applied. Users also have access to a directly linked Google Form where they can submit their own project(s) to be added to the WEF Nexus Discovery Map. The Google Form asks the user to provide the categorical metadata of their project (Figure 2). Additional categorization of the project may be facilitated by the WEF Nexus Strategic Initiative program as needed to uphold consistency and accuracy in categorization of projects.
WEF Nexus Discovery Map Visualization
The WEF Nexus Discovery Map represents a new intersection in databasing that incorporates visualization not only as a means for accessibility, but also to enhance both searching effectiveness and the resulting analytics provided to the user. As of this publication, the WEF Nexus Discovery Map dashboard hosts 100 selected projects with 31 filters to facilitate 5040 different filter combinations. Upon opening the dashboard, users see a world map in the middle of the screen and assorted information on the side panels, including a guide to operate the dashboard, a legend, a list of projects, a description about the selected project, an option to submit their own project, and an assortment of filters (Figure 3). These filters allow users to filter for a project's category, alignment with WEF, region, and land class. As users click these filters, projects on the map simultaneously disappear and reappear to match those of the filters chosen. Additionally, on the right side of the screen, a scrollable list of projects with their title, location, and WEF alignment changes with filters as well. Projects on the map are displayed with circular icons, each containing a symbol that represents its category: policy guide, research, emerging technology, or implementation. The WEF aspects of the project are visualized by the color of the symbol background, with either a gradient or a solid fill of blue, yellow, or green representing water, energy, and food, respectively. For example, a project showcasing the interconnection between water and energy has a gradient background of blue and yellow. Similarly, a project on just food has a solid green background. Thus, users can see core aspects of a project by visual inspection and receive validation that the filters they have selected are working properly.
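The icon-coloring rule described above can be expressed compactly. The sketch below is a hypothetical rendering of that rule; the hex values and the gradient string format are assumptions, since the actual icons were drawn as custom graphics.

```python
# A minimal sketch of the icon-color rule; hex values are illustrative.
COLORS = {"water": "#1f77d0", "energy": "#ffd21f", "food": "#2ca02c"}

def icon_background(water: bool, energy: bool, food: bool) -> str:
    """Return a solid fill for a single WEF aspect, or a gradient spec
    (comma-separated stops) for a combination of aspects."""
    stops = [COLORS[k] for k, on in
             (("water", water), ("energy", energy), ("food", food)) if on]
    if not stops:
        raise ValueError("a project must relate to at least one WEF aspect")
    return stops[0] if len(stops) == 1 else "gradient(" + ", ".join(stops) + ")"

print(icon_background(water=True, energy=True, food=False))
# -> gradient(#1f77d0, #ffd21f)  (blue-yellow for a water-energy project)
```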
Once a project is clicked on in the map or project list, a small "project info" window pops up in the center map displaying the project's details. This includes the project's name, location, point of contact, institution, local collaborators, link to website or publication, year published or years active, and category (Figure 4). Once another project is clicked, the previous project's information window disappears and is replaced with the newly clicked project. This enables users to quickly click on a project, find its information, then move on to another project from either the map area or the project list. Accessibility was deliberately chosen to be at the forefront of this work to attract a variety of stakeholders who impact investment and policy actions related to sustainability. These stakeholders now have access to concisely categorized work in which results can be extracted to best fit their needs. For example, if a stakeholder is unable to find research on integration of sustainable water management in the Gobi desert plains of China, the stakeholder can utilize the WEF Nexus Discovery Map to filter projects related to water and deserts to find pieces of work highly related to their project, thus increasing the potential to bring greater awareness of existing resources that can be used to advance the field and improve the likelihood of successful implementation.
The WEF Index Map
A second map incorporated into the dashboard is the "WEF Index Map" developed with a dataset on WEF country indexing conducted by the European Commission Joint Research Centre Competence Centre on Composite Indicators and Scoreboards [13]. This dataset utilizes a total of 21 indicators to index a country's performance in water, energy, and food, in addition to an aggregate "nexus score". The "WEF Index Map" on the dashboard utilizes this data by creating a map with the layers of index scores represented by colored gradients, giving the user an idea of how countries in the world are performing in each facet of WEF and where they stand in the security of their nexus of resources. For example, activating the "Water Index" layer will color countries on the map with a blue gradient, with darker blues corresponding to an index score closer to 100 and the lighter blues having a score closer to 0. Countries can then be clicked on individually, similar to clicking a project on the "Project Discovery" map, for a pop-up to appear detailing the country name and the score of the active index layer.
A third map, "Project Discovery and WEF Index Map", is also available on the dashboard and is a combination of the information included in the "Project Discovery" and "WEF Index" maps, with both the country gradient indices and project icons available to be clicked and filtered (Figure 5). This enhances the user experience, as by visual inspection, one can see which areas of the world lack projects in WEF sectors and their current status in WEF category or nexus.
Given the extent of project classifications combined with the WEF Index data, there is potential to conduct meta-analysis on the repository to provide an evaluation of geographically based trends and thereby better identify places of opportunity for application of the WEF Nexus framework. This meta-analysis could be utilized in instances in which a geographic location is identified for a WEF project, and similar geographic locations can be data mined and projects filtered to provide further insight into local project development. Clearly, the capability to do this would be best served by data that are entirely open source, so an effort has been made to link projects to open source data where possible.
Limitations and Future Scope
There are a few in-built limitations of the ArcGIS dashboard creation toolkit that required workarounds to enhance the functionality of the WEF Nexus Discovery Map. One of the most pertinent limitations is the inability for a "reset filters" button to be programmed into the dashboard. Such a feature would greatly increase usability and prevent users from having to refresh the page to reset filters, potentially losing their zoomed-in spot. Another limitation of the ArcGIS dashboard creation toolkit is that its search feature is highly sensitive in that words must be searched explicitly in the search bar for them to be retrieved. For example, a user searching for "ocean"-related content would likely pull no results even if projects have been tagged as "off-shore wind power" or "seaweed management." For this reason, even though projects are currently tagged with keywords, the ArcGIS open search function has been taken out of the WEF Nexus dashboard to remove user confusion. Similarly, projects in the dashboard are not filterable by multiple strings of words that describe a project. The most apparent limitation of this is the inability to simply filter projects by "water, energy, and food" as the dashboard does not have a function to filter by the condition of a string of text in the metadata. To work around this, each aspect of WEF has been given an independent binary category in the comma separated file that is imported into ArcGIS Online. Three separate filters are then created on the dashboard to create three "string conditional" filters available to users. It is understandable that the ArcGIS dashboard cannot accommodate all the needs of a database search tool, as it is not ordinarily made for this use, but rather serves as the best means to create the tool for a basic level of web design capability. Thus, there is opportunity for a web-tool to be built from the ground up to better meet the needs of a geographical-visualized repository.
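The binary-column workaround described above can be illustrated with a short sketch; the column names and sample rows below are hypothetical, but the logic mirrors the three independent "string conditional" filters on the dashboard.

```python
import pandas as pd

# Each WEF aspect gets its own binary column (the workaround described
# above); column names and rows are illustrative.
df = pd.DataFrame({
    "title": ["A", "B", "C"],
    "water":  [1, 1, 0],
    "energy": [1, 0, 1],
    "food":   [1, 0, 0],
})

# The three independent filters reduce to simple boolean masks;
# selecting projects tagged water AND energy AND food:
mask = (df["water"] == 1) & (df["energy"] == 1) & (df["food"] == 1)
print(df[mask])   # only project "A" survives the combined filter
```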
Conclusions
The forward-thinking and holistic structure of the WEF Nexus framework brings the opportunity to revitalize the nature of a data repository into an interactive user experience. The WEF Nexus Discovery Map captures the traditional data repository features of filtering classified and categorized resources, and also reflects the characteristics of a new-age data repository with the introduction of visual impact from the icons of the projects, real-time filtering, surface-level identifiable trends, and a customized user-oriented experience. Users from academia, industry, and nonprofit sectors can access data in an aesthetically pleasing manner by visualizing the number of projects subdividing as filters are applied. Additionally, the project icons on the map and the counter displayed on the dashboard provide interactive images to entice users to submit their own projects to grow the repository, expose their work to wider audiences, and find potential collaborators. The future for the WEF Nexus Discovery Map is a community-building tool for academics, non-governmental organizations, and industry to accelerate research and implementation utilizing the WEF Nexus framework. By welcoming community submission of additional projects, the WEF Nexus Strategic Initiative program at The Pennsylvania State University additionally hopes to grow an evolving global database of experts who are engaged and at the forefront within the WEF Nexus space.
While the WEF Nexus Discovery Map being hosted on the ArcGIS platform may not facilitate an exhaustive suite of search and filter capabilities, the tool is relatively seamless for web development that does not require programming. The resulting web-tool is user-friendly while presenting vast amounts of data with in-depth metadata available for filtering and viewing. At the time of this publication, the dashboard features an initial set of 100 projects from around the world viewable through 31 filters, culminating in 5040 total combinations for filtering by project category, WEF, region, and land class that work in real time to add or remove viewable projects. Users can click on projects for additional information, such as the institution that produced the work, local collaborator(s), relevant web page, and point of contact. Additionally, users can utilize the WEF Index data imported onto the dashboard to see geographical trends for places of opportunity for implementation of the WEF framework. The new WEF Nexus Discovery Map manages its data repository through an intersection of visuals, advanced search tools with built-in filters, and community-driven data collection to provide users with resources and connections to facilitate advancement of the WEF Nexus for holistic, sustainable development. | 2021-06-22T17:54:37.984Z | 2021-05-07T00:00:00.000 | {
"year": 2021,
"sha1": "b852edc91e52354ef19107b0ea971b7e5d32c408",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/9/5220/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "88d16723bb8be16cddaf2923f687afe3b07823e1",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
67907951 | pes2o/s2orc | v3-fos-license | Acta Polytechnica
The paper deals with the examination of basic methods of evaluation of sensor signals in terms of the information content of the given method and the used technical means. In this respect, methods based on classical analog systems, digital systems in the time domain of signal processing, hybrid systems and digital systems evaluating signal in the frequency domain are compared. A significant increase in entropy in individual systems is demonstrated in the case of a more complex signal evaluation. For each measuring system, the experimental setups, results, and discussions are described in the paper. The issue described in the article is particularly topical in connection with the development of modern technologies used in the processes and subsequent use of information. The main purpose of the article is to show that the information content of the signal is increased because the signal is more complexly processed.
At present, we often encounter the concept of "data mining", i.e., the process of obtaining information from the available data. In this "mining", various methods and procedures are used, employing, among other things, modern information technologies. This concept is no longer foreign to the field of control of production and technological processes. It is in this area that information, as a basis for decision-making in the choice of an appropriate intervention in the process, is of fundamental importance. With the increasing complexity of processes as controlled objects, with increasing computing and communication technology, and with progress in scientific disciplines such as control theory and artificial intelligence, in addition to the classical exact control methods, new or modern methods are offered. These methods are based on the acquisition of qualitatively new types of information about the controlled and monitored process.
We can define the sketched problem of "data mining" from sensor signals in terms of information theory and signal theory.
Information, from the viewpoint of information theory, eliminates uncertainty (i.e., entropy). The measure of information is the increment of probability after receiving the message.

If we accept the information A that we can expect with the probability p(A), then we receive the amount of information (in bits) in the sense of Shannon's entropy theorem [1]:
I(A) = −log2 p(A) (bit). (1)
If we quantify the information according to Shannon's theorem [2,3], then it is valid that:

p(A_i) → 1 ⇒ I(A_i) → 0 (bit), p(A_i) → 0 ⇒ I(A_i) → ∞ (bit).
From (1) and from Figure 1 it follows that when we accept less probable information [4], we obtain a larger amount of information [5,6].
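A minimal numerical illustration of equation (1), assuming nothing beyond the formula itself:

```python
import math

def info_bits(p: float) -> float:
    """Amount of information I(A) = -log2 p(A) carried by a message of
    probability p, per equation (1)."""
    return -math.log2(p)

for p in (1.0, 0.5, 0.1, 0.001):
    print(f"p = {p:>6}: I = {info_bits(p):.2f} bit")
# p -> 1 gives I -> 0 bit; rarer messages carry more information.
```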
The theory of information, for the purposes of an active and periodically repeated process of receiving information, defines the information source [7]. With some simplification, based on information theory and probability theory, we can define the information source as a probabilistic space [8]. We can write this space in mathematical formalism as follows:
φ = (X*, P), (2)
where X is the finite set of elements X = {x_1, x_2, . . ., x_n}, which we call the source alphabet and whose elements are the letters of the source; X* is the set of all finite sequences of the elements of X and represents the set of possible source messages; P is a probability function defined on the set X*. The function P has these properties:

P = {p(x_i); x_i ∈ X}, p(x_i) ∈ ⟨0, 1⟩.
Obviously, the longer the transmitted message, the more information can be sent. Therefore, as a measure of the information content of an information source (the entropy of an information source), the average entropy of the source per message (the probability average) is used [9]. For stationary and ergodic sources of information, we then obtain:
H(φ) = − Σ_{x∈X*} P(x) log2 P(x) (bit). (3)
By analyzing relation (3), it is possible to come to a serious conclusion: the highest amount of information is obtained when the messages are generated with a uniform probability. That is, the size of the probability space (i.e., the number of elements of the set X*) directly determines the "content" of a specific information source [10,11].
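A short sketch illustrating equation (3) and the conclusion above, namely that a uniform distribution maximizes the source entropy; the example probabilities are arbitrary:

```python
import math

def source_entropy(probs):
    """Average entropy of a discrete source per message, equation (3)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]
print(source_entropy(uniform))  # 2.00 bit -- the maximum for 4 messages
print(source_entropy(skewed))   # ~1.36 bit -- a less informative source
```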
From a functional point of view, we can divide the information acquisition process into several basic functions. It is clear that the key role, in terms of the adequacy of the information obtained and in terms of its quantity, is played by the sensor during the measurement [12,13].

The subsequent processes can only damage the acquired information or destroy it altogether. Consequently, it is not possible to add relevant information to the measured variable through these processes [14]. The problem lies in how to "data mine" and then "use" the maximum information contained in the signal from the sensing element.
Thus, the output analog signal of the sensor in operation can be understood as the bearer of the information, i.e., as a continuous information source. It is demonstrated in the literature [15,16] that the maximum amount of information is contained in a sensor signal that has a limited average power P_m and whose amplitude probability density distribution p(x) is given by the Gaussian distribution on the interval x ∈ ⟨x_min, x_max⟩:
p(x) = (1/√(2πP_m)) exp(−x²/(2P_m)). (4)
The information content of the signal with density (4) then acquires the maximum value:

max H_a = (1/2) log2(2πeP_m) (bit). (5)

The information content given by (5) is only a theoretical value, as it presumes the ability of the sensor to generate at its output infinitely many amplitude levels of the signal from the interval x ∈ ⟨x_min, x_max⟩. With a real sensor, this is not possible due to its limited sensitivity and its inaccuracy.

A sensor with the accuracy class δ can generate a signal of about n = 1/(2δ) + 1 amplitude levels. This then causes a decrease of the information content of the sensor below the theoretical value (5), in accordance with (3).
Another important moment that essentially decides how much of the maximum amount of information contained in the analog signal of the sensor can be "mined" is the signal evaluation process itself. At present, we can talk about two basic ways:
• evaluation of the amplitude of the analog signal in the time domain by a standard analog or modern digital system;
• evaluating the amplitude of the analog signal in the frequency domain using a digital measurement system.
As mentioned above, it follows from (3) that the sensor as a discrete information source has a higher information content the more amplitude levels of its output signal x(t) we can distinguish.

With certain simplifications, when we neglect the sensitivity and the accuracy class of the real sensor, we can deduce the entropy H_a (equations (4)-(5)) of the analog signal that is continuous both in time and in amplitude on a finite amplitude range.
Analog measuring system
The measurement system generally represents a summary of the elements that perform the measurement task. The behaviour of the measured signal is interpreted mainly by using signal analysis at certain points from the amplitude, time and frequency points of view. From these characteristics, it is possible to obtain information about a process that could not be captured using the basic signal processing functions. This includes the processing of average data values, the determination of their distribution, correlations, transformations, and also the functions necessary to describe deterministic or stochastic signals in static processes or in transition processes [17]. Signal analyses are most often solved by an external host computer without requiring a real-time operation [18]. As an example, the determination of the sampling period of a process variable based on the analysis of the frequency spectrum of the measured signal according to the Shannon theorem can be used [19].

The processing of this signal in the time domain deals with the analysis of its overall amplitude. In the past, and in many cases even today, this is the most common way of evaluating the measurement of physical variables [20]. The visual display of the corresponding amplitude of the one-way signal of the sensor is realized by means of an analog apparatus calibrated in the corresponding physical units (see Figure 2).
As mentioned above, analog measurement systems are classified based on the accuracy class δ (%), e.g., 0.01, 0.02, 0.05, 1.0, 1.5 and 2.5. For the accuracy class, it is valid that δ = ±(Δ_max/X_range) · 100%, where Δ_max is the maximum absolute error and X_range is the measuring range. It causes a so-called uncertainty band of relative width ε = 2δ around the result of a measurement. An analog measuring system with a relative error δ provides n distinguishable amplitudes of the measured physical variable, with regard to the following equation:
n = 1/(2δ) + 1 = 1/ε + 1. (6)
If, for simplicity, we assume a uniform distribution of the probability density of the measured quantity, i.e., all values have the same probability of occurrence p = 1/n, the differential entropy of an analog measuring system with a given accuracy class δ follows from (3) in the form:

H_aδ = log2 n = log2(1/(2δ) + 1) (bit). (7)
This equation gives the maximum boundary value of information that one measurement can contain. If, for example, the relative error of the analog measuring system is δ = 1%, it can measure 51 different values on a given range. Then, according to (7), the information content of this measurement system is H_aδ = log2 51 = 5.67 (bit).
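A minimal sketch of equations (6)-(7), reproducing the worked example above; the listed accuracy classes follow those mentioned in the text:

```python
import math

def analog_entropy(delta: float) -> float:
    """Entropy of an analog measuring system with accuracy class delta
    (relative error as a fraction), equations (6)-(7)."""
    n = 1 / (2 * delta) + 1   # distinguishable amplitude levels
    return math.log2(n)

for delta in (0.025, 0.01, 0.001):    # accuracy classes 2.5 %, 1 %, 0.1 %
    print(f"delta = {delta:.3f}: H = {analog_entropy(delta):.2f} bit")
# delta = 0.01 gives n = 51 levels and H = 5.67 bit, as in the text.
```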
Digital measuring system
Nowadays, in practical applications of the theory of automatic control or digital signal processing [21], we very often encounter the issue of communication of discrete technical devices with a continuous environment [22]. The bridges which enable us to connect the digital and continuous worlds are digital-to-analog (DAC) and analog-to-digital (ADC) converters. At present, the evaluation of the measurement of a variable by digital systems prevails.
Digital measurement systems are based on the digitization [23] of the analog signal by an m-bit analog-to-digital converter. If the width of the AD converter is m bits, then this converter will distinguish, on the interval x ∈ ⟨x_min, x_max⟩, a total of n = 2^m amplitude signal levels. The differential entropy of this sampled signal is generally given by (3). In the case of a uniform distribution of signal probability, the simplified equation applies:
H_DIGm = log2 n = log2 2^m = m (bit). (8)
Figure 3. Digital signal evaluation by digital system.
If, for example, we consider a 12-bit AD converter, common in technical practice, this allows us to distinguish up to n = 2^12 = 4096 different levels, i.e., measured values, on the given signal range. Assuming a uniform distribution of the probability of the measured values, we receive the information content of this measuring system H_DIG12 = log2 4096 = 12 (bit).
From the comparison, it follows that in practice H_aδ < H_DIGm. Thus, digital methods achieve significantly higher entropy; they have better static properties, but the price is worse dynamic properties. An illustrative diagram of a signal evaluation by a digital system is shown in Figure 3.
Hybrid measuring system
Another type of digital measurement system is one based not on the processing of the sampled sensor signal but on the evaluation of the analog signal of the sensor itself. The analog signal of the sensor is evaluated by a special set of analog and digital circuits. This is a hybrid measurement system, although its foundation is the use of special programmable digital circuits. To calculate the differential entropy of such measuring systems, we usually have to approach them individually.
As an example of a digital or hybrid measuring system, it is possible to mention a device for measuring the Young's modulus of elasticity of steel ropes [24]. This is a method of indirectly measuring the elasticity modulus of a steel rope under traction based on the measurement of the propagation velocity of a longitudinal wave caused by a mechanical shock. From a physical point of view, the method relies on the known dependence between the rate of sound propagation in the material v (m s⁻¹) and the modulus of elasticity E (MPa) of a steel rope whose mass density of material is ρ (kg m⁻³) [25,26]. For a more accurate idea of the dependence, we also present the following equation:
E = v²ρ (MPa). (9)
The velocity of the propagation of the longitudinal acoustic wave in the steel rope can be converted into two time-shifted pulses using suitable amplifiers [27]. The time shift τ is the time period after which the mechanical shock passes from one cross-section of the rope to the other. An implemented flip-flop circuit converts these two time-shifted pulses into one width-modulated pulse. The counter is controlled by this pulse so that it only works for the duration τ, commensurable with the wave propagation velocity and thus also with the indirectly measured modulus of elasticity of the steel rope (see Figure 4). The presented hybrid measuring system is implemented in practice and is still functional for the purpose of assessing the quality and damage of steel ropes [28].
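As a hypothetical worked example of equation (9): if the distance between the two sensed cross-sections of the rope is known, the wave velocity follows from the measured time shift τ, and E = v²ρ. The distance, the time shift, and the steel density below are illustrative assumptions, not measured values from the cited device.

```python
RHO_STEEL = 7850.0   # kg/m^3, assumed mass density of the rope material

def youngs_modulus(distance_m: float, tau_s: float) -> float:
    """Indirect Young's modulus per equation (9): the shock travels the
    known distance between the two rope cross-sections in time tau,
    giving the wave velocity v, and E = v^2 * rho. Returns E in MPa."""
    v = distance_m / tau_s          # longitudinal wave velocity (m/s)
    return v**2 * RHO_STEEL / 1e6   # Pa -> MPa

# Hypothetical numbers: a 10 m section traversed in 2 ms gives
# v = 5000 m/s and E ~ 196e3 MPa, a plausible value for steel.
print(youngs_modulus(10.0, 2e-3))
```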
Counting the pulses generated for the counter during the time τ may cause an inaccuracy of one unit. This means that in one measurement we obtain an uncertainty of ε = (τ f_G)⁻¹ for the τ f_G counted impulses. The maximum distinguishable number of levels n of the measured variable is:

n = 1/ε + 1 = τ f_G + 1, (10)

where the value of one represents the zero amplitude value. This basic equation (10) is valid for a digital measurement and shows that an increase in the number of distinguishable levels, and thus in the entropy of the measurement, will be achieved by increasing the clock frequency f_G of the generator. It is assumed that this frequency f_G of the used generator is determined without an error.
where the value of one represents a zero amplitude value.This basic equation ( 10) is valid for a digital measurement and shows that an in nguishable levels and thus the entropy of the measurement will be achieved by increasing the clock frequency f G of the generator.It is assumed that this frequency f G of the used generator is determined without an error.If, for example, we use the generator with a frequency f G = 10 (MHz), then on a unit scale τ ∈ 0, 1 in seconds, we can distinguish n = 10 7 levels, which correspond to an entropy H HYB = log 2 10 7 = 23.25 (bit).
Processing the signal in the frequency domain
The basic method of signal processing in the frequency domain is the analysis of the signal spectrum (see Figure 5). It is based on the fact that a sequence of N samples (i.e., the record x_s) of any real signal can be expressed as an approximation by the sum of a unique series of N harmonic components, each of which has its complex amplitude F_k, frequency f_k and phase shift φ_k, k = 0, 1, 2, . . ., N − 1; the following equation is valid:

x_s(t) = Σ_{k=0}^{N−1} F_k e^{i(2πk f_1 t + φ_k)}, t ∈ ⟨0, T⟩. (11)
Equation (11) is valid for a real signal that is limited by the highest frequency component f_s/2, while for the first frequency component in the spectrum (the so-called base frequency) f_1 and for the frequency resolution Δf in the spectrum, it holds that

f_1 = Δf = 1/T = f_s/N,

where T is the length of the record of the analysed sensor signal in time units. The record length T depends on the number of samples N and the sampling frequency f_s: T = N/f_s. In equation (11), the coefficient k was limited to the range 0 to N − 1, because in the sense of the discrete Fourier transform (DFT), the number of spectrum lines must correspond to the number of samples in the record. The spectrum is complex, thus comprised of the amplitude spectrum and the phase spectrum. The number of spectral lines represented in the spectrum is equal to the number N of samples in the analysed signal recording. Due to aliasing and the symmetry of the discrete spectrum around the axis f_s/2, the usable part of the complex spectrum reaches only up to the Nyquist frequency f_s/2. Therefore, for frequency analysis and industrial practice, the usable number of discrete complex spectrum lines is, according to (11), N/2 (see Figure 6).

To assess the amount of information contained in the signal spectrum, we must build on the number n_|F| of possible shapes of the amplitude spectrum and also on the number n_φ of possible phase spectra. Due to the discrete signal evaluation, this is a finite count. For simplicity, consider only the amplitude spectrum analysis, which is more common in practice. When calculating the number of possible amplitude signal spectra, we must realize that this spectrum consists of N/2 spectral lines, each of which can have one of 2^m values. From a combinatorial point of view, these are variations of class N/2 from 2^m elements with repetition. Each of the amplitude levels can occur across multiple spectral lines. Then it is valid that:
n_|F| = V_{N/2}(2^m) = (2^m)^{N/2}. (12)
Then the entropy H_f (bit) of the measurement based on the amplitude spectrum examination of the sensor signal is given by:

H_f = log2 n_|F| = (N/2) m (bit). (13)
If, for example, we used an m = 12 (bit) AD converter to digitize the analog signal of the sensor and we evaluated a record of length N = 1024, then the entropy of such a measurement would be H_f = (1024/2) · 12 = 6144 (bit). In Figure 6, as an example from practice, a two-sided complex amplitude spectrum of the accompanying acoustic signal generated during the disintegration of rock by rotary drilling is shown [29]. The entropy of the spectrum has a value of 6144 (bit). The measurement was carried out on a horizontal laboratory drilling stand. A record of N = 1024 samples obtained at a sampling frequency of f_s = 18 (kHz) from the microphone signal was evaluated using an m = 12 (bit) AD converter. The purpose of analysing this acoustic signal is to find the information in the signal that can be used for an optimal control of the drilling process [30]. The basic criteria for optimizing the process are, in this case, the minimal specific energy of disintegration and the maximum drilling speed [31].
In practice, in some cases, spectrum changes are examined depending on the change of a given variable. For example, in the technical diagnostics of rotary machines, it is interesting to observe the change of the spectrum of their vibration when increasing the revolutions (rpm). We then talk about a so-called spectrogram, i.e., the dependence of the spectrum on time (or, in this example, on the increasing revolutions).
Let us assume that we have measured a number s of spectra corresponding to the time instants 0, 1, 2, . . ., s − 1. This sequence of spectra represents the spectrogram as a highly integrative information source. In calculating its entropy as a potential information content, we must calculate the number n_|F|s of possible spectrograms consisting of s spectra.
n_|F|s = V_s(n_|F|) = n_|F|^s. (14)
Then the entropy H_|F|s (bit) of a measurement based on the investigation of the spectrogram of the sensor signal is given by the equation:

H_|F|s = log2 n_|F|s = log2 n_|F|^s = s log2 n_|F| = s (N/2) m (bit). (15)
If, for example, we used an AD converter with a width of m = 12 (bit) to digitize the analog signal of the sensor and we evaluated a spectrogram containing s = 10 complex amplitude spectra, each of which was generated by analysing N = 1024 samples, then the entropy of such a measurement would, according to (15), have the value H_|F|s = 10 · (1024/2) · 12 = 61440 (bit). As an example of the spectrogram investigation, we can present the spectral analysis of the acoustic signal of the accompanying noise in the rock drilling process [32,33]. The aim of the analysis is to obtain information on the actual conditions of the rock disintegration by rotary drilling in terms of an optimal control of this process (see Figure 7) [34][35][36][37].
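A short sketch of equations (13) and (15), reproducing both numerical examples above:

```python
def spectrum_entropy(n_samples: int, m_bits: int, n_spectra: int = 1) -> int:
    """Potential entropy of amplitude-spectrum evaluation, equations
    (13) and (15): each of N/2 spectral lines takes one of 2^m levels,
    so H = s * (N/2) * m bits for a spectrogram of s spectra."""
    return n_spectra * (n_samples // 2) * m_bits

print(spectrum_entropy(1024, 12))                # 6144 bit, one spectrum
print(spectrum_entropy(1024, 12, n_spectra=10))  # 61440 bit, spectrogram
```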
Thus, compared to the classical analog technique as well as to the time-domain digital technique, the increase of the entropy is significant in the case of signal evaluation in the frequency domain. This is illustrated in Table 1.
To highlight the differences between the measurement systems, the potential entropy values of the individual signal processing methods were recalculated to the decimal logarithm log10 H(φ), as shown in Figure 8.
Summary and conclusions
Table 1 shows the comparison of the individual measuring systems. Based on the entropy values of the sensor signal evaluation, it can be seen that the analog measurement system has the lowest information value. This is understandable, because this system belongs to the classical measurement systems, but it is still used at the lowest procedural level of control. The digital measuring system is an extension of the analog system by a part which ensures the conversion of the analog variable into a number in a form suitable for subsequent processing. The hybrid system is an example of a measurement system in which the benefits of both systems are interconnected. The signal processing of the sensor in terms of entropy in the frequency domain has a high information value. This is confirmed by numerous uses in industrial practice and in various areas ranging from mining (e.g., processing of signals from geological survey wells), through the automotive industry (e.g., processing of signals generated by the car and their influence on the driver), to medicine (e.g., processing of ECG cardiac signals or EEG brain signals). The successful implementation of the developed experimental measuring systems, and thus their practical applicability, is always decided by a deployment in a real environment.
It is necessary to say that the current industrial distributed control systems feature increasingly more complex and more extensive transmission and processing of data. Distributed control systems use a variety of communication buses. This means that at the lower levels of control, the necessary technical means are used, with digital processing of information from intelligent sensors and analyzers to PLC systems and workstations. At this lower level, the current state is characterized by the use of classic measurement systems along with intelligent or smart elements that are capable of cooperating through industrial communication networks.
The described problem is so serious when implementing new measurement systems or signal processing that it deserves increased attention.
Verification of the correctness and effectiveness of the presented measuring systems was carried out in the framework of research activities and problem-oriented projects.
Figure 1. Relationship between the probability of information and its entropy.
Figure 2. Signal evaluation by analog measuring instrument.
Figure 4. Hybrid measurement system with a pulse width modulation signal.
Figure 5. Signal evaluation by a digital system in the frequency domain.
Figure 6. The two-sided complex amplitude spectrum of an acoustic signal from the rock drilling process.
Figure 7. Spectrogram of acoustic signal from the rock drilling process. | 2019-08-01T00:02:38.032Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "468101cea71ef6545d19806e5360d375b730780f",
"oa_license": "CCBY",
"oa_url": "https://ojs.cvut.cz/ojs/index.php/ap/article/download/4862/4864",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "468101cea71ef6545d19806e5360d375b730780f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
14540618 | pes2o/s2orc | v3-fos-license | The Core Competitiveness of the Wisdom Tourism Food Analyses Based on the Internet of Things
As the world’s largest industry, the tourism Food industry of the world has rapidly developed in recent years. Transferring “digital food” to “wisdom food” means new opportunities and challenges that the sustainable development of the food should face. However, the informational development of this industry still lags behind, the exploitation and utilization of information resources haven’t owned an effective platform, which lack of a benign circulation and interactive mechanism, therefore, to combine the IOT with the food about wisdom has always been a new tendency. This study constructed framework of the tourism Food core competitiveness based on IOT. Then, this study proposed a measurement model for the tourism Food core competitiveness and test the model through exploratory and confirmatory factor analysis, the indicators demonstrate that the model is effective and present that resource protection ability, operation management ability, service ability and Tourism Food service chain integration ability all have influence on the tourism Food core competitiveness.
INTRODUCTION
Along with promoting the development of the regional economy, the tourism Food industry has also brought a large negative impact on the ecological environment and social culture. How to realize the coordinated development of economic, social and ecological effectiveness, namely the realization of the sustainable development of the food, has become a hot topic in the world. In order to improve the core competitiveness of this industry, mitigate and solve the numerous problems the industry faces and then realize sustainable development, this study not only analyzes the relationship between the resources of the food and core competitiveness, and between the technology of the IOT and competitive advantage, but also constructs a framework for the core competitiveness of the wisdom tourism Food spot based on the IOT, so as to provide a route for improving the ability of environmental protection and resource allocation, reducing the consumption of resources and energy, and enhancing the core competitive ability of tourism Food spots, and then realizing their sustainable development.
To sum up, in the new historical period, any country's orientation for the tourism Food industry also determines the positioning of the industries of the food, which has brought about good opportunities for the development of tourism Food and tourism Food spots. However, new opportunities also raise new challenges for the administrative staff of the food. Only by combining advanced science technology and management technology can we effectively enhance the development level of the food and then solve the strategic problems of its development. Therefore, it is of great importance to research the application of the IOT to protection and management in the food.
MATERIALS AND METHODS
The current situation of research about the application of the Internet of Things: The main problems the tourism Food industry needs to solve are as follows: paying equal attention to protecting tourism Food resources and developing tourism Food, strengthening the construction of tourism Food infrastructure, highlighting the special tourism Food, promoting the diversification of tourism Food products, vigorously developing and promoting ecological tourism Food, and promoting the development of cultural tourism Food and red tourism Food. These problems are closely related to the development of the food and need to be addressed through the development of the food. To strengthen the industry service system and to improve the service quality also puts forward new requirements for the development of the tourism Food spot. Therefore, the food must bear the burden of developing the national tourism Food industry, regarding the development of the food as the core point, to promote the sustainable development of China's tourism Food industry.
As the core point of tourism Food's development, developing the food under the new circumstances also entails new responsibilities and obligations: how to effectively solve the issue of environmental protection versus economic development, how to move the management level of the food to a higher stage, how to further improve the service quality of the tourism Food spot, and how to fundamentally ameliorate the industry's consumption environment. Solving this series of problems to meet the requirements of the development of the tourism Food industry is an important task for administrators in the food.
An important way to improve the core competitiveness of an industry is to comprehensively improve its level of informatization. The IOT is a collection of information technologies, and promoting the development of IOT technology is an effective way to improve the informatization level. We should approach the promotion of IOT technology from two aspects: one is to accelerate research on IOT technology and the formulation of related standards; the other is to promote the application of IOT technology in critical foods. To impel the application of IOT technology in the tourism Food industry could point out a good direction for the development of the food and also provide new ideas for developing strategies.
Regarding informatization construction as the main approach and improving the efficiency of travel services are directions of tourism Food development. Along with the development of science and technology, transforming the industrial structure of tourism Food by means of information technology and improving the efficiency of the tourism Food industry has become a tendency under the conditions of new technologies. The revolution in management modes brought by new technology can effectively tap the formats and consumption demands of tourism Food, which will greatly promote the transformation of service forms and the innovation of business models.
Information technology has become the technological basis for the sustainable development of tourism Food. The transformation and upgrading of the tourism Food industry needs informatization; the efficiency of information and the service levels of tourism Food require the effective support of informatization; and the harmonious development of regional tourism Food requires the reliable guarantee of informatization. The continuous development of information technology has revolutionary effects on the management, marketing and service of tourism Food, which provide strong support and forceful power for the development of integrated services in all segments of tourism Food activities.
The concept of the IOT was first proposed by the Massachusetts Institute of Technology (MIT) in 1999 at an international conference on mobile computing and networks in the United States, and in 2005 the International Telecommunication Union (ITU) issued the report "ITU Internet Reports 2005: The Internet of Things" (ITU, 2005), which proposed that the IOT is mainly composed of four key technologies: RFID technology, sensor technology, intelligent technology and nanotechnology. In recent years, as the IOT has become a popular research topic, many scholars have studied it, while its research and development are still in their infancy; its definition and characteristics have not yet been unified in China, and since its system model and structure have not yet formed a standard, there are many technological problems to solve. Seen from the developing process of the IOT, the current developing stage of the IOT is a single-network stage, and a large number of separately constructed single IOTs are the starting point for construction and the basic elements of the IOT. Only after the single IOT has fully developed can we realize the cross-domain collaboration and deep links of the Internet of Things.
In 2006, America's Great Wolf Resorts Company introduced RFID technology into tourism Food management and established an RFID wristband system at its resort in the Pocono Mountains. Wearing the RFID wristbands can not only confirm the identity of visitors, but also help visitors pay fees in the tourist zone. Therefore, tourists in the tourist zone need not carry anything such as cash or keys, and they can enjoy a pleasant experience in tourism Food activities. Bi et al. (2010) point out that in China the RFID technology has been widely used in other industries, but its application to the tourism Food industry has just started, mostly staying at the stage of electronic tickets; moreover, monitoring of crowd density and research on tourism Food traceability are still lacking. At present, the information construction of tourism Food spots in China has developed rapidly, digital building has gradually expanded, and the applications of RFID have developed from the original idea of ticketing to the idea of touring. The establishment of the digital tourism Food spot still stays at the level of video monitoring; the RFID technology has not been fully used. Lin (2011) applied the Internet of Things to tourism Food management, introduced the principles and technologies of the IOT, and illustrated the design of an intelligent monitoring system. Yao and Lin (2011) think that the key point is to integrate resources to promote the tourism Food informatization level in Hainan with the application of Internet of Things technology; combining the tourism Food environment of Hainan island with its own situation, the authors analyze the SWOT issues of the application of IOT technology in Hainan island. Zhou et al. (2012) applied IOT technology to design the virtual Forbidden City and realized a full simulation of the time and space of the virtual landscape of the Forbidden City, which helps visitors experience the real landscape through this system. Based on an analysis of the current situation of the tourism Food information service in China, Du (2012) researched the developing level of tourism Food informatization in China; he points out that the informatization development of China's tourism Food industry lacks overall awareness and that we should build an intelligent public service platform with mobile communication technology and IOT technology.
Judging from the literature, applied research on the IOT has made certain progress in areas such as warehousing logistics, medical care, remote monitoring, remote sensing systems, urban planning and intelligent transportation systems, which shows its great application value and potential. As the IOT has developed, its core research has gradually shifted from basic research to research on solutions and products.
The Tourism Food industry is a complex system, mainly comprising six key elements: food, accommodation, transportation, traveling, shopping and entertainment, which cover several aspects of social life such as warehousing, logistics, medical care, food, transportation, environmental protection and safety. The application of the IOT technology in other industries has laid a good foundation for its application to the Tourism Food industry. Because food is the core element of the Tourism Food industry, applying the IOT technology to the Tourism Food spot is all the more urgent, and the Tourism Food spot is also an ideal application scene for the IOT technology.
At present, the application of the IOT to the Tourism Food industry has just started, so there is still large development space. Comprehensively introducing the IOT technology into the Tourism Food industry, and especially reforming the traditional and backward Tourism Food industrial structure with this technology, still has a long way to go. Therefore, research concerning the IOT's application to Tourism Food spots will be an inevitable trend in research on the application of the Internet of Things.
The wisdom tourism food spot: The construction of the wisdom food is an intricate engineering system, which needs to make use of modern information technology and to be integrated with scientific management theory. Constructing wisdom for the Tourism Food spot means fully enhancing its hard and soft power and creating a learning organization that owns an internal knowledge base for decision-making and reasoning. Informatization construction and the optimization of business processes can help the food gain more thorough awareness and wider inter-connectivity and improve managerial efficiency and tourists' satisfaction (Barney, 1991). Only by combining advanced science and technology with managerial technology can the development level of foods be effectively improved and the strategic problems of their development be solved. The IOT provides an information platform for building the wisdom of Tourism Food spots; thus research on its application modes to Tourism Food spot protection and management is particularly important.
The core competitiveness of the wisdom tourism food spot: According to Barney's classification of enterprise resources (Barney, 1991), the author believes that the resources of foods can be divided into material capital resources, human capital resources and organizational capital resources. Material capital resources in foods include the natural environment, tourist attractions, tourist facilities and production management techniques; human capital resources of the food include those related to people's thoughts, intelligence, experience, training and so on; organizational capital resources include management patterns, plans, control and coordination systems, as well as informal contacts between Tourism Food spots and foods.
The tourist resources in the food are the core material capital, which contains tourist attractions, the natural environment and Tourism Food infrastructure. The quality of Tourism Food resources directly affects tourists' attitude towards the food and is the foundation of the survival and development of the food; therefore, the reasonable development and protection of Tourism Food resources is of vital importance to developing the food, and protective abilities for Tourism Food resources have become the first and core competitiveness of the food.
An important part of operating and managing the food is to make human capital resources and organizational capital resources become strategic resources of the Tourism Food spot. Operation and management abilities affect whether the organization, planning, control and coordination systems can operate normally and efficiently, which also determines whether human and organizational capital resources will become strategic resources for developing the food. Therefore, the abilities to operate and manage the Tourism Food spot are among the important core competences for developing the Tourism Food spot.
Service facilities and service projects provided by foods directly affect tourists' on-the-spot experience and their satisfaction with the Tourism Food spot, which in turn affects the competitive ability of the Tourism Food spot. Thus, providing good tourist service facilities and service content is an important way to improve competitive ability, and service abilities in the Tourism Food spot are also a core competitiveness necessary for development.
According to Prahalad's view, the real source of an enterprise's competition is its core competence. Short-term competitive advantages come from the control of cost and quality of products, whereas long-term advantages come from establishing the core system earlier than competitors and at lower cost. The core competence of an enterprise is its ability to integrate technology, organize work and transmit value. Only when the staff in the food have the ability to integrate technology and production skills into core competitiveness can they grasp opportunities to make changes in time, and this is the true source of competitive advantage. On the basis of the above discussion, protection abilities, operation and management abilities and service abilities are important sources of competitive advantage.
The IOT and the core competitiveness in the tourism food spot: Protection in the tourism food spot and the IOT: To protect Tourism Food resources and the natural environment, the first problem to solve is the monitoring of Tourism Food resources and the environment. Effective protection requires timely monitoring of the protected objects and real-time transmission, collection, analysis and processing of the data, which cannot be accomplished by human forces alone. Besides, changes such as plant diseases, insect pests, water quality and air quality are not very obvious, and people cannot detect them in a timely manner; by the time such a change becomes noticeable, it may already have caused serious damage to the protected object. So protection must be achieved by means of comprehensive information technology. At present, there are already many domestic and overseas cases of the IOT in environmental protection, which have played a huge role in natural disaster monitoring, pollution control and ecological protection, and the IOT technology has now become a basic support in the field of environmental protection.
Operation and management in the tourism food spot and the IOT: In the new type of management mode, information management is the technological essence of management; the structure, classification and access of management information decide the organizational structure and managerial patterns under the circumstances of new technology. Operational management in foods involves scheduling and allocating large amounts of manpower and material resources, which produces a large amount of information, and the speed at which information transfers within the food determines the efficiency of management in foods. Introducing the IOT technology can accelerate the transmission of information inside foods, allow timely responses to the external environment and provide support for the staff in foods. Therefore, this technology is a necessary guarantee for improving the efficiency of operation and management in foods.
Service in the tourism food spot and the IOT: Foods offer visitors services in ticketing, narration, complaint handling, security, diet, accommodation, transport and other aspects; convenient ticket purchase, accurate guidance, good complaint channels and quick complaint disposal, security guarantees, and the timely release of accommodation and traffic information can effectively improve tourists' satisfaction. All of these require foods to improve the speed of obtaining, transmitting and processing information. In order to lessen the time tourists spend crowded and waiting in line in the Tourism Food spot, foods need the ability to forecast and dispatch, which can be achieved with the IOT technology.
Integration of the tourism food service chain and the IOT: The Tourism Food service chain integration mode based on the Internet of Things is another important supplement to the models for building the internal resources and internal capacity of the food. In the vertical integration strategy supported by the Internet of Things technology, and on the Internet of Things technology platform, the Tourism Food spot shares information with catering, accommodation, transportation, shopping mall, entertainment and other Tourism Food-related businesses across all aspects of the process of "eat, live, walk, travel, shopping, entertainment", so as to guide and regulate the various behaviors of tourists, understand the relevant information about tourists, provide visitors with a comprehensive range of services, improve the satisfaction of tourists and form the core competitive edge of the Tourism Food spot.
The framework of the core competitiveness of the wisdom foods based on the IOT: The framework of the core competitiveness of wisdom food based on the IOT is shown in Fig. 1.
Equipment layer of wisdom food:
This layer is the nerve ending of the wisdom food, including all kinds of environmental-parameter sensor nodes, RFID, mobile phone apps, PDAs, surveillance cameras, 3S (GPS, GIS, RS) technology, etc.

The core competitiveness layer of wisdom food: The purpose of constructing the wisdom food based on the IOT is to improve the core competitiveness of foods, including the protection abilities of foods, operation and management abilities, service abilities of foods and the integration ability of the Tourism Food service chain, finally realizing the strategic target of sustainable development in Tourism Food spots.
RESULTS AND DISCUSSION
Model: The Tourism Food core competitiveness consists of resource protection ability, operation management ability, service ability and the integration ability of the Tourism Food service chain. The Tourism Food can improve its core competitiveness from these four aspects, which can expand the influence of the Tourism Food, enhance its attraction and service ability for tourists and then improve tourists' satisfaction. All of this can promote the development of the Tourism Food, enhance its advantages and, furthermore, realize the strategic objective of sustainable development. The Tourism Food core competitiveness model is shown in Fig. 2.

Participants: This paper studies the various influencing factors of Tourism Food core competitiveness through an anonymous questionnaire survey: 341 questionnaires were distributed and 322 recovered, a recovery rate of 94.4%. After eliminating 21 invalid questionnaires, the effective recovery rate was 93.8%. The main respondents were Tourism Food researchers, food managers, Tourism Food enterprise managers, enterprise management personnel, etc. A five-point Likert scale was adopted, with scores from 1 to 5; the higher the score, the higher the influence value and the stronger the feeling.
Exploratory factor analysis: Principal Axis Factoring was performed on the full set of items intended to measure Integration Ability of Tourism Food Service Chain, Resource Protection Ability, Operation Management Ability and Service Ability; Principal Axis Factoring was judged the most appropriate approach. Bartlett's test of sphericity was 1318.428 (p<0.001) and the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was 0.825, which is well above the commonly recommended minimum, indicating that the data were suitable for factor analysis.
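As an illustration of this workflow, a minimal sketch with the Python factor_analyzer package follows. The paper does not state which software or rotation was used, so the promax rotation is an assumption, and questionnaire_items.csv stands for a hypothetical respondent-by-item table of the Likert scores.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical respondent-by-item data frame of Likert scores (1-5)
df = pd.read_csv("questionnaire_items.csv")

# Suitability checks reported in the text: Bartlett's sphericity test and KMO
chi_square, p_value = calculate_bartlett_sphericity(df)
kmo_per_item, kmo_overall = calculate_kmo(df)
print(f"Bartlett chi2={chi_square:.3f}, p={p_value:.4f}, KMO={kmo_overall:.3f}")

# Principal Axis Factoring with 4 factors, matching the four abilities;
# the promax rotation is an assumption, not stated in the paper
fa = FactorAnalyzer(n_factors=4, method="principal", rotation="promax")
fa.fit(df)
print(pd.DataFrame(fa.loadings_, index=df.columns))
```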
CONCLUSION
This study proposes a new Tourism Food core competitiveness measurement model, in which resource protection ability, operation management ability, service ability and Tourism Food service chain integration ability are the main factors of core competitiveness. Through exploratory and confirmatory factor analysis, it can be concluded that the model is valid.
The direction of the food is to build the wisdom food based on the IOT. New technology can bring new management modes, new forms of the Tourism Food industry and new consumption and demand, and information construction of foods has become a basic link in the construction of foods. The IOT technology provides the technical support for the wisdom food, which needs more thorough perception, a wider range of connectivity and more intelligent wisdom technology, and it can thus provide a guarantee for maintaining and developing the core competitiveness of the food.
Fig. 1: The framework of the core competitiveness of wisdom food based on the IOT.

Technical support layer of the basic network of wisdom food: This layer includes wireless sensor networks, P2P grids, grid computing networks and cloud computing, and is the guarantee of the fusion of network and communication technology.

Infrastructure network layer of wisdom food: This layer refers to the Internet, the wireless local food network and the 3G mobile communication network.

Application layer of wisdom food: This layer includes the applications of mobile law enforcement, surface monitoring, emergency scheduling and remote monitoring in Tourism Food spots.
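As a compact restatement of the layered framework just described, the following illustrative sketch encodes it as a data structure. The component names are taken from the text; the dictionary structure itself is mine, not the paper's formal model.

```python
# Layered framework of the wisdom food (Fig. 1), restated as plain data.
WISDOM_FOOD_FRAMEWORK = {
    "equipment_layer": ["environmental-parameter sensor nodes", "RFID",
                        "mobile phone apps", "PDA", "surveillance cameras",
                        "3S (GPS, GIS, RS)"],
    "technical_support_layer": ["wireless sensor network", "P2P grid",
                                "grid computing network", "cloud computing"],
    "infrastructure_network_layer": ["Internet", "wireless local network",
                                     "3G mobile communication network"],
    "application_layer": ["mobile law enforcement", "surface monitoring",
                          "emergency scheduling", "remote monitoring"],
    "core_competitiveness_layer": ["protection ability",
                                   "operation and management ability",
                                   "service ability",
                                   "service chain integration ability"],
}
```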
Fig. 2: Tourism Food core competitiveness model.
Fig. 4: Estimates of the measurement model of Tourism Food core competitiveness.
Table 3: Average Variances Extracted (AVE) of the constructs.

Average Variances Extracted (AVE) range from 0.5363 to 0.6605, which are also above the acceptable value of 0.50. From all the above tests of our model, it is clear that all the tests are satisfied, so we can conclude that our model is acceptable.

REFERENCES
Yao, X.D. and M. Lin, 2011. The SWOT analysis on application of IOT technology about construction of Hainan international tourism food Island [J]. Proceedings of the International Conference on Information, Services and Management Engineering (ISME 2011), Vol. 3.
Zhou, Y.B., L.L. Yang and M. Shen, 2012. Embedded applications of experience about virtual tourism food landscape design and IOT technology - from the virtual Forbidden City to the dream city of Elf [J]. Tourism Food BBS, 5(3): 27-31. | 2016-01-11T18:29:14.669Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "f1a4648b6c4f06787c3420f21a915f7fa21b59d1",
"oa_license": "CCBY",
"oa_url": "https://www.maxwellsci.com/announce/AJFST/9-52-59.pdf",
"oa_status": "HYBRID",
"pdf_src": "Crawler",
"pdf_hash": "54c5ff3a12b9bad47dee952c9dc82508d1681c7b",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
14867175 | pes2o/s2orc | v3-fos-license | Screening for chronic respiratory diseases in Georgia
Chronic respiratory diseases are an important cause of morbidity and mortality worldwide. Tuberculosis is responsible for 2 million deaths annually, COPD for another 3 million, with an ever increasing trend. It is estimated that COPD will be the third most important cause of all deaths in 2020 and the fifth most important cause of DALYs. COPD, asthma and chronic rhinitis are important causes of morbidity resulting in high costs to society, both direct medical costs and indirect social costs.
The World Health Organisation makes estimates of mortality and morbidity in every country, relying on available, usually official, data. There is, however, a general lack of epidemiological studies in most of the countries of Central and Eastern Europe and the countries that formed the previous Soviet Union.
The Global Alliance against Respiratory Disease (GARD), a recently created organization under the auspices of the WHO, promotes recognition of the importance of respiratory diseases worldwide and also collects available data on the prevalence of chronic respiratory diseases [1].
The study by Chkhaidze et al in the current issue of Monaldi Archives of Chest Diseases [2] was performed within the scope of GARD initiatives. It is an important contribution that tries to present the real-life prevalence of chronic respiratory diseases in Georgia.
The study they have undertaken is a pilot study of two health districts near the capital Tbilisi, comprising a population of approximately 70,000. A physician-administered questionnaire was applied to patients aged 5 years and older attending primary health care centres. It requested information on the physician's diagnosis of tuberculosis and asthma, and on symptoms of asthma, chronic bronchitis, allergy and allergic rhinitis. In a sample of patients with symptoms of chronic cough and sputum of three years' duration, spirometry was performed. Additional questions assessed demographics, smoking status, occupational exposures and past respiratory infections (pneumonia). The results of the study were compared with official statistics on the aforementioned diseases.
A total of 3646 patients were studied; 41% were males and 15% were aged between 5 and 14 years. Most of the patients had secondary school education. A total of 733 patients were ever smokers, 712 men and 24 women.
Asthma was diagnosed in 4.8%, tuberculosis in 2%, chronic cough in 20%, COPD in 7% and allergic rhinitis in 4%. Among the 92 patients with chronic cough and phlegm in whom spirometry was performed, COPD was diagnosed in 62 (67%). The prevalence of asthma, tuberculosis and allergic rhinitis complied with official data on these diseases. The incidence (notification) rate of tuberculosis is high, similar to the majority of former Soviet Union countries except the Baltic states [3].
However, government data grossly underestimated the prevalence of COPD. The most detailed part of the survey concerned COPD. All subjects reporting symptoms of chronic bronchitis complying with the current definition of the disease underwent post-bronchodilator spirometry. Almost all were more than 40 years of age. Signs of not completely reversible obstruction were found in 24% of them. This is a very high figure, most probably influenced by the fact that a highly selected population was studied, i.e. a population with a very high probability of the disease [4]. Interestingly, the frequency of the different stages of COPD in the Chkhaidze study was very close to that found in a family physician setting in Poland [5]. The vast majority of subjects had a mild or moderate stage of the disease. The Georgian study supports the idea of spirometric screening for COPD in populations at high risk for COPD, to prevent progression of COPD to a severe stage [6].
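For readers unfamiliar with the spirometric criteria referred to here, a minimal sketch follows. The editorial does not spell them out, so this assumes the conventional GOLD definitions: "not completely reversible obstruction" as a post-bronchodilator FEV1/FVC below 0.70, with stages by FEV1 % predicted (the respiratory-failure qualifier for stage IV is omitted for simplicity).

```python
def classify_copd(fev1_fvc_postbd: float, fev1_pct_predicted: float) -> str:
    """Simplified GOLD spirometric classification (assumed, not from the
    editorial): obstruction = post-bronchodilator FEV1/FVC < 0.70,
    then staging by FEV1 as a percentage of the predicted value."""
    if fev1_fvc_postbd >= 0.70:
        return "no airflow obstruction"
    if fev1_pct_predicted >= 80:
        return "GOLD I (mild)"
    if fev1_pct_predicted >= 50:
        return "GOLD II (moderate)"
    if fev1_pct_predicted >= 30:
        return "GOLD III (severe)"
    return "GOLD IV (very severe)"

print(classify_copd(0.62, 65))  # -> "GOLD II (moderate)"
```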
The prevalence of smoking was low (20%) in the population studied. Among the total population aged 14+ years, the prevalence of tobacco smoking was 24%. One has to note that the prevalence among men was 48% and among women 0.01%. According to The Tobacco Atlas, the prevalence of smoking in Soviet Union satellite countries was 50-60% in males and less than 20% in females [7]. The study has several limitations. It was not an epidemiological study, but an assessment of the diagnoses of patients attending their primary care physicians. No data on occupational exposure were presented.
Countries of Central and Eastern Europe and Central Asia that, since the end of the Second World War or since 1918, were separated from the free world present with much higher mortality rates than Western European countries. It is our view that health organisations responsible for global world health should focus more on these countries, sharing the western world's experience in conducting proper epidemiological studies and helping to introduce preventive measures against non-communicable diseases and to control tuberculosis [10].
"year": 2009,
"sha1": "598c58ee86247262ce7a391d6eb782b720daea00",
"oa_license": "CCBYNC",
"oa_url": "https://www.monaldi-archives.org/index.php/macd/article/download/344/332",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "598c58ee86247262ce7a391d6eb782b720daea00",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258397099 | pes2o/s2orc | v3-fos-license | Association of Multitrajectories of Lipid Indices With Premature Cardiovascular Disease: A Cohort Study
Background The multitrajectory model can identify joint longitudinal patterns of different lipids simultaneously, which might help better understand the heterogeneous risk of premature cardiovascular disease (CVD) and facilitate targeted prevention programs. This study aimed to investigate the associations between multitrajectories of lipids with premature CVD. Methods and Results The study enrolled 78 526 participants from the Kailuan study, a prospective cohort study in Tangshan, China. Five distinct multitrajectories of triglyceride, low‐density lipoprotein cholesterol (LDL‐C), and high‐density lipoprotein cholesterol over 6‐year exposure were identified on the basis of Nagin's criteria, using group‐based multitrajectory modeling. During a median follow‐up of 6.75 years (507 645.94 person‐years), 665 (0.85%) premature CVDs occurred. After adjustment for confounders, the highest risk of premature CVD was observed in group 4 (the highest and increasing triglyceride, optimal and decreasing LDL‐C, low and decreasing high‐density lipoprotein cholesterol) (hazard ratio [HR], 2.13 [95% CI, 1.36–3.32]), followed by group 5 (high and decreasing triglyceride, optimal and increasing LDL‐C, low and decreasing high‐density lipoprotein cholesterol) (HR, 2.07 [95% CI, 1.45–2.98]), and group 3 (optimal and increasing triglyceride, borderline high and increasing LDL‐C, optimal and decreasing high‐density lipoprotein cholesterol) (HR, 1.90 [95% CI, 1.32–2.73]). Conclusions Our results showed that the residual risk of premature CVD caused by increasing triglyceride levels remained high despite the fact that LDL‐C levels were optimal or declining over time. These findings emphasized the importance of assessing the joint longitudinal patterns of lipids and undertaking potential interventions on triglyceride lowering to reduce the residual risk of premature CVD, even among individuals with optimal LDL‐C levels.
Dyslipidemia is characterized by increased levels of total cholesterol (TC), triglyceride, low-density lipoprotein cholesterol (LDL-C), or decreased high-density lipoprotein cholesterol (HDL-C). Globally, dyslipidemia is conventionally considered to play an essential role in the progression of premature cardiovascular disease (CVD). [1][2][3] The World Health Organization estimates that >75% of premature CVD is preventable, and risk factor amelioration can help reduce the growing CVD burden on both individuals and health care systems. 4 Therefore, early identification of and intervention on premature CVD risk factors, such as dyslipidemia, has important implications for public health, because early identification could prevent CVD events before they occur.
One of the major limitations of existing measurements of CVD risk with lipid indices was that the lipid indices were measured at baseline only, ignoring how lipid indices varied within individuals over time and the subsequent effect on CVD. [5][6][7] In this regard, some epidemiological evidence suggests age-related changes in TC, LDL-C, and triglyceride, such that they increase up to middle age and then decrease. [8][9][10] Several studies have investigated the separate roles of 2-time-point changes, visit-to-visit variability, and cumulative exposure of each lipid component in the development of CVD, but reported inconsistent conclusions among different populations or with different types of lipid indices. [11][12][13][14][15][16] Additionally, the approaches used in these studies may oversimplify the heterogeneity and complex patterns of longitudinal changes in lipids, and most studies did not consider the correlation and joint effect of different types of lipid indices. Thus, the individual effects of the lipid indices TC, LDL-C, HDL-C, and triglyceride are known, but little is known about their combined impact on premature CVD risk. Identifying distinct longitudinal patterns of different lipids may explain the variation of lipid indices over time and facilitate targeted cardiovascular prevention programs, which may carry important implications for improving premature CVD prevention. 17 The person-centered, multitrajectory approach can identify and monitor various lipid index progressions simultaneously, as well as incorporate and account for the correlation of lipids within the same participant and over time. 18 Based on a large, prospective, community-based study with repeated measurements of lipid indices, we aimed to identify the jointly developed multitrajectories of triglyceride, LDL-C, and HDL-C over time, and further to examine their associations with subsequent risk of premature CVD among the Chinese population.
Data Availability
The data sets used and analyzed during the current study are available from the corresponding author on reasonable request.
Study Population
Data were obtained from the Kailuan study, which is an ongoing prospective cohort study launched in the Kailuan community in Tangshan, China. The details of the study design and procedures have been described elsewhere. 14,15,19,20 In brief, a total of 101 510 participants aged >18 years who agreed to participate in the study were enrolled in the first survey during June 2006 and October 2007 and completed questionnaires, physical examinations, and laboratory tests. These participants were followed up face-to-face every 2 years until their death or December 31, 2019. In the current study, multitrajectories of lipid indices were developed from 2006 to 2012 to predict premature CVD after 2012. Participants were excluded if they had myocardial infarction (MI) or stroke in or before 2012, or if they had <2 measurements of lipid indices during 2006 to 2012. Following these criteria, a total of 78 526 participants were enrolled in our analysis ( Figure 1). The baseline characteristics of included participants and excluded participants are shown in Table S1, and the number of participants for different lipid measurements is presented in Table S2. The study was performed according to the guidelines of the Helsinki Declaration and was approved by the Ethics Committee of Kailuan General Hospital (approval number: 2006-05) and Beijing Tiantan Hospital (approval number: 2010-014-01). All participants provided informed written consent.
Measures of Lipid Indices
Fasting blood samples were collected in the morning after an 8- to 12-hour overnight fast and transfused into vacuum tubes containing EDTA. Serum was separated immediately and stored at 4°C. The analysis was conducted within 4 hours of blood sample collection using an auto-analyzer (Hitachi 747; Hitachi, Tokyo, Japan).
CLINICAL PERSPECTIVE
What Is New?
• This cohort study with 78 526 participants jointly identified 5 distinct multitrajectory groups of triglyceride, low-density lipoprotein cholesterol, and high-density lipoprotein cholesterol during a 6-year exposure period. • The results showed that among individuals with high and increasing triglycerides, the risk of premature CVD remains high despite the levels of low-density lipoprotein cholesterol decreasing over time.
What Are the Clinical Implications?
• The findings highlighted the importance of monitoring longitudinal lipid levels in the clinic and undertaking potential interventions to control triglyceride levels within an optimal range to reduce the residual risk of premature CVD.
Assessment of Outcomes
The primary outcome in the present study was incident premature CVD, defined as the time to the first occurrence of a composite event of stroke and MI before the age of 55 years in men and 65 years in women. [22][23][24] We used International Classification of Diseases, Tenth Revision (ICD-10) codes to identify premature CVD cases (I21 for MI, I60 to I63 for stroke). All participants were linked to the Municipal Social Insurance Institution and the Hospital Discharge Register for incidence of premature CVD, which covered all of the Kailuan study participants. An expert panel was set up to review and confirm the medical records of participants suspected of premature CVD events. Outcome information was updated annually during follow-up. Incident stroke was diagnosed on the basis of neurological signs, clinical symptoms, and neuroimaging tests, including computed tomography or magnetic resonance, according to the World Health Organization criteria. 25 MI was diagnosed according to the criteria of the World Health Organization on the basis of the clinical symptoms, changes in the serum concentrations of cardiac enzymes and biomarkers, and electrocardiographic results. 26
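To make the outcome definition concrete, here is a minimal helper based on the ICD-10 codes and age cutoffs stated above; the function and field names are illustrative, not from the paper.

```python
# Premature CVD per the Methods: first MI (I21) or stroke (I60-I63)
# before age 55 in men or 65 in women.
STROKE_CODES = ("I60", "I61", "I62", "I63")

def is_premature_cvd(icd10: str, age_at_event: float, sex: str) -> bool:
    """Return True if this event counts as premature CVD."""
    is_cvd = icd10.startswith("I21") or icd10.startswith(STROKE_CODES)
    age_cutoff = 55 if sex == "male" else 65
    return is_cvd and age_at_event < age_cutoff

print(is_premature_cvd("I63.9", 52, "male"))    # True: premature stroke
print(is_premature_cvd("I21.0", 60, "male"))    # False: MI after the male cutoff
print(is_premature_cvd("I21.0", 60, "female"))  # True: before the female cutoff
```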
Assessment of Covariates
Data on other related variables were collected through questionnaires, basic anthropometric measurements, and blood tests every 2 years. Body mass index (BMI) was calculated by dividing body weight by the square of height. Active physical activity was defined as physical activity ≥4 times per week and ≥20 min at a time.
Blood pressure was measured in the seated position using a mercury sphygmomanometer, and the average of 3 measurements of the systolic blood pressure (SBP) and diastolic blood pressure (DBP) were recorded. All the blood samples were analyzed using an auto-analyzer (Hitachi 747; Hitachi) on the day of the blood draw. The biochemical indicators tested included fasting blood glucose (FBG), serum lipids, serum creatinine, and high-sensitivity C-reactive protein. Hypertension was defined as SBP ≥140 mm Hg or DBP ≥90 mm Hg, any use of antihypertensive drugs, or a self-reported history of hypertension. Diabetes was defined as FBG ≥7.0 mmol/L, any use of glucoselowering drugs, or a self-reported history of diabetes. Dyslipidemia was defined as any self-reported history or use of lipid-lowering drugs, or total cholesterol (TC) ≥5.17 mmol/L.
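The covariate definitions above translate directly into derived variables. The sketch below encodes them with the thresholds given in the text; the record field names are hypothetical.

```python
def derive_covariates(p: dict) -> dict:
    """Derive covariates from one participant record using the definitions
    in the text; all field names are hypothetical."""
    bmi = p["weight_kg"] / p["height_m"] ** 2  # weight over height squared
    return {
        "bmi": bmi,
        "active_physical_activity": (p["sessions_per_week"] >= 4
                                     and p["minutes_per_session"] >= 20),
        "hypertension": (p["sbp"] >= 140 or p["dbp"] >= 90
                         or p["antihypertensive_use"] or p["hypertension_history"]),
        "diabetes": (p["fbg_mmol_l"] >= 7.0
                     or p["glucose_lowering_use"] or p["diabetes_history"]),
        "dyslipidemia": (p["tc_mmol_l"] >= 5.17
                         or p["lipid_lowering_use"] or p["dyslipidemia_history"]),
    }
```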
Statistical Analysis
Baseline characteristics were expressed as mean±SD for continuous variables and frequencies with proportion for categorical variables. Differences in means and proportions between groups were compared using Student's t-test, ANOVA, or the chi-squared test, as appropriate.
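For illustration, the baseline comparisons described above map onto standard scipy.stats calls. The data frame and column names here are hypothetical.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("participants.csv")  # hypothetical table with a 'group' column (1-5)

# Continuous variable across the 5 trajectory groups: one-way ANOVA
groups = [df.loc[df["group"] == g, "age"] for g in range(1, 6)]
f_stat, p_anova = stats.f_oneway(*groups)

# Two-group contrast of a continuous variable: Student's t-test
t_stat, p_t = stats.ttest_ind(df.loc[df["group"] == 1, "age"],
                              df.loc[df["group"] == 2, "age"])

# Categorical variable: chi-squared test on the contingency table
chi2, p_chi2, dof, expected = stats.chi2_contingency(
    pd.crosstab(df["group"], df["sex"]))
```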
Group-based multitrajectory modeling (GBMTM) was applied to explore the joint longitudinal changes of triglyceride, LDL-C, and HDL-C, given that these lipids together completely determine TC, with adjustment for age. Group-based multitrajectory modeling was implemented by using Traj in Stata software version 15 (StataCorp, College Station, TX). 27 In brief, this model is a new application of group-based trajectory modeling. Group-based multitrajectory modeling is a semiparametric mixture model, which allows the joint modeling of the trajectories of multiple outcomes. This model identifies latent clusters of individuals who follow similar patterns through multiple outcomes using a maximum likelihood method. 18,27,28 In group-based trajectory models, each individual is assumed to belong to only 1 group, where each group has a distinct trajectory. We applied a censored normal model to identify distinct trajectories of lipid concentrations. Varied group-based trajectory models were run before selecting the best model regarding the number of groups and trajectory shapes (eg, linear, quadratic, cubic). 18,29 First, to identify the optimal number of distinct groups to describe heterogeneity in the longitudinal development of lipid indices, various models using 3 to 7 distinct groups with fixed slope variances within groups were fitted. Then, different slopes (eg, linear, quadratic, cubic) were added to the model, allowing for curved developmental patterns. The improvement in model fit gained by adding more groups or shape parameters was assessed on the basis of the Bayesian information criterion (BIC). When comparing 2 models with different groups or trajectory shapes, the Bayes factor was also estimated by exp(BIC1 - BIC2), where BIC1 and BIC2 represent the BIC values for model 1 and model 2. A 10-fold difference in the Bayes factor is considered a significant difference. A model with the lowest BIC, higher average posterior probabilities, and sufficient sample size in each multitrajectory group was chosen as the best model. 30 Finally, 4 model-fit diagnostic criteria were assessed to test whether our chosen model fit the data well, as follows: (1) average posterior probability of assignment for each group j equal to 0.7 or greater for all groups; (2) the odds of correct classification ≥5 for all groups; (3) similarity between the proportion of the sample assigned to a specific group and the group probabilities estimated from the model; and (4) narrow CIs of the estimated proportions.
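As a rough illustration of the BIC-based selection of the number of groups described above, the sketch below fits Gaussian mixture models with 3 to 7 components to participant-level lipid data. This is a deliberate simplification of GBMTM (it ignores the polynomial-in-time trajectory structure that Traj fits), and the input array is hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical (n_participants, n_features) array stacking the repeated,
# standardized lipid measurements (triglyceride, LDL-C, HDL-C at each exam).
X = np.load("lipid_visits.npy")

bics = {}
for k in range(3, 8):  # candidate numbers of groups, as in the paper
    gm = GaussianMixture(n_components=k, covariance_type="diag",
                         n_init=5, random_state=0).fit(X)
    bics[k] = gm.bic(X)  # sklearn BIC: lower is better

k_best = min(bics, key=bics.get)
# Note: Traj/Nagin report BIC on the log-likelihood scale (higher is better),
# so the paper's Bayes-factor approximation exp(BIC1 - BIC2) corresponds to
# exp(-(bic1 - bic2) / 2) under sklearn's BIC convention.
bayes_factor_4_vs_5 = np.exp(-(bics[4] - bics[5]) / 2)
```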
Person-years were computed from the date of the 2012 survey to the date of premature CVD diagnosis, death, or the end of the follow-up (December 31, 2019), whichever came first. The premature CVD probabilities were estimated by the Kaplan-Meier method and compared by the ordinary log-rank test. Cox proportional hazards regressions were used to examine the association of the multitrajectory group with premature CVD, premature stroke, and premature MI, taking group 2, with the lowest incidence of outcomes, as the reference. Hazard ratios (HRs) and 95% CIs were reported. The models met the proportional hazards assumption according to Schoenfeld residuals and log-log inspection. Three models were fitted. Model 1 was adjusted for age and sex. Model 2 was further adjusted for education, income, smoking status, drinking status, physical activity, BMI, SBP, DBP, and FBG. Model 3 was further adjusted for history of hypertension, diabetes, dyslipidemia, antihypertensive agents, antidiabetic agents, and lipid-lowering agents.
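A hedged sketch of this Cox modeling step using the Python lifelines package follows (the paper itself used Stata and SAS); the data frame and its column names are hypothetical, and only a subset of the model 3 covariates is shown.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical one-row-per-participant table; group 2 is the reference,
# so indicator columns exist for groups 1, 3, 4, and 5 only.
df = pd.read_csv("analysis_dataset.csv")
cols = ["followup_years", "premature_cvd",          # time and event
        "grp1", "grp3", "grp4", "grp5",             # trajectory-group dummies
        "age", "male", "bmi", "sbp", "dbp", "fbg"]  # subset of model covariates

cph = CoxPHFitter()
cph.fit(df[cols], duration_col="followup_years", event_col="premature_cvd")
cph.check_assumptions(df[cols])  # Schoenfeld-residual-based PH diagnostics
print(cph.summary[["exp(coef)",                   # HRs
                   "exp(coef) lower 95%",
                   "exp(coef) upper 95%"]])       # and their 95% CIs
```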
To test the robustness of our findings, several sensitivity analyses were performed. First, the Fine-Gray competing risk model was fitted, considering non-premature CVD deaths as competing risk events. Second, to reduce the possibility of reverse causality, a lag analysis excluding incident CVD with onset during the first year of follow-up was performed. Third, participants treated with lipid-lowering agents during the exposure period were further excluded. Fourth, we adjusted for all covariables in model 3 as measured in 2012. Fifth, to control for the regression-to-the-mean influence, we adjusted for average BMI, SBP, DBP, and FBG during the exposure period. Additionally, we also adjusted for baseline LDL-C, triglyceride, and HDL-C to explore whether the effect of trajectories was independent of baseline levels. Subgroup analyses stratified by age, sex, BMI, hypertension, diabetes, and dyslipidemia were performed to evaluate whether the multitrajectory of lipid indices exhibits a different effect on premature CVD in special populations. Interactions between stratified variables and trajectories were tested using the likelihood ratio test.
All analyses were conducted using Stata software version 15 (StataCorp) and SAS version 9.4 (SAS Institute Inc., Cary, NC). A 2-sided P < 0.05 was considered statistically significant.
Baseline Characteristics
A total of 78 526 participants were enrolled in the current study, with a mean age of 49.82±11.93 years. We identified a 5-group multitrajectory model from all investigated models. The average posterior probability of the 5 trajectories was 0.93. Figure 2 shows the plot of the multitrajectory groups of lipid indices and expected group percentages for each of the groups. The descriptions of each of the multitrajectory groups according to 2016 Chinese Guideline for the Management of Dyslipidemia in Adults 31 are presented in Table 1. Groups 1 and 2 have optimal values of triglyceride and LDL-C, but group 2 had higher levels of HDL-C. In group 3, the LDL-C levels were borderline high and increased over time, and triglyceride and HDL-C had optimal values. Group 4 had the worst value of triglyceride of all the groups. The triglyceride levels were apparently high and increasing over time, while the LDL-C and HDL-C levels were decreasing over time. The triglyceride levels were still high in group 5 but decreased over time.
The LDL-C and HDL-C levels were optimal, with an increasing and decreasing pattern, respectively.
Baseline characteristics among the 5 groups were significantly different (P<0.05). Participants with the highest and increasing triglyceride levels, optimal and decreasing LDL-C, low and decreasing HDL-C over time (group 4), and those with high and decreasing triglyceride, optimal and increasing LDL-C, low and decreasing HDL-C over time (group 5) were more likely to be young; men; current smokers and drinkers; have hypertension, diabetes, and dyslipidemia; more likely to take antihypertensive agents, antidiabetic agents, and lipid-lowering agents; and have a higher level of BMI, SBP, DBP, and FBG, compared with other groups ( Table 2).
Multitrajectory Groups of Lipid Indices and Premature CVD
During a median follow-up of 6.75 years, there were 665 (0.85%) cases of premature CVD identified, including 566 (0.72%) premature strokes and 100 (0.13%) premature MIs. The highest incidence rate of premature CVD was observed in participants with the highest and increasing triglyceride levels, optimal and decreasing LDL-C, low and decreasing HDL-C (group 4) (incidence rate, 2.85 [95% CI, 2.11-3.86] per 1000 person-years), while the lowest incidence rate was observed in participants with optimal and stable triglyceride, optimal and slightly increasing LDL-C, and high and increasing HDL-C (group 2) (incidence rate, 0.69 [95% CI, 0.51-0.95] per 1000 person-years). The Kaplan-Meier curves also showed that participants in group 4 experienced a higher risk of premature CVD, stroke, and MI than those in other groups during the 6.75-year follow-up (P<0.0001 for log-rank test; Figure 3). After adjustment for potential confounding factors, the highest risk of premature CVD was observed in individuals with the highest and increasing triglyceride, optimal and decreasing LDL-C, and low and decreasing HDL-C over time (group 4) (HR, 2.13 [95% CI, 1.36-3.32]), followed by those with high and decreasing triglyceride, optimal and increasing LDL-C, and low and decreasing HDL-C over time (group 5) (HR, 2.07 [95% CI, 1.45-2.98]) and those with optimal and increasing triglyceride, borderline high and increasing LDL-C, optimal and decreasing HDL-C over time (group 3) (HR, 1.90 [95% CI, 1.32-2.73]). However, participants with optimal and stable triglyceride, optimal and increasing LDL-C, and optimal and decreasing HDL-C (group 1) did not have a significantly high risk of premature CVD (HR, 1.17 [95% CI, 0.84-1.64]), compared with those with optimal and stable triglyceride, optimal and increasing LDL-C, and high and increasing HDL-C over time (group 2) (Table 3). In the subtype analyses of CVD, similar results were yielded for premature stroke and premature MI: participants with the highest and increasing triglyceride, optimal and decreasing LDL-C, and low and decreasing HDL-C were at higher risk than those in other groups, with a 1.93-fold risk of premature stroke (HR, 1.93 [95% CI, 1.31-2.83]) and a 3.20-fold risk of premature MI (HR, 3.20 [95% CI, 1.04-9.82]) (Table 3).
Sensitivity analyses with the competing risk model (Table S3), the 1-year lagged analysis (n=77 373; Table S4), excluding those using lipid-lowering agents during the exposure period (n=76 858; Table S5), adjusting for covariates at exam 4 (year 2012; Table S6), and adjusting for mean values of variables (Table S7) all generated findings similar to the primary analysis (Figure 4). Additionally, adjustment for baseline lipid levels did not change the associations materially (Table S8). Subgroup analyses showed the associations between multitrajectory groups of lipid indices and premature CVD were consistent across different subgroups. There was no significant interaction between stratified variables and multitrajectory groups in relation to the risk of premature CVD (all P values for interaction were >0.05; Table S9).
DISCUSSION
This prospective cohort study identified 5 distinct combined multitrajectory groups of triglyceride, LDL-C, and HDL-C during a 6-year exposure period. The results showed the following: participants with the highest and increasing triglyceride levels, optimal and decreasing LDL-C, and low and decreasing HDL-C over time exhibited the highest risk of premature CVD, followed by those with high and decreasing triglyceride, optimal and increasing LDL-C, and low and decreasing HDL-C over time, and those with optimal and increasing triglyceride, borderline high and increasing LDL-C, and optimal and decreasing HDL-C over time; the lowest risk of premature CVD was observed in those with optimal and stable triglyceride, optimal and increasing LDL-C, and high and increasing HDL-C over time. Similar patterns were observed for premature stroke and premature MI. The trend remained robust among stratified analyses and multiple sensitivity analyses.
The identified trajectories in this study extended the results from previous studies in this field, which have explored the longitudinal change of each lipid component and its CVD risk separately. Data from the Korean National Health Insurance showed that increased cholesterol levels over time were associated with high CVD risk. 13 Another prospective study found that 2-time-point increases in LDL-C and decreases in HDL-C were significantly associated with risk of MI, while the association was not observed for change in triglyceride levels. 12 Wang et al reported that high visit-to-visit HDL-C and LDL-C variability were associated with an increased incidence of ischemic stroke and hemorrhagic stroke, respectively, while the significant associations attenuated to an insignificant level regarding TC and triglyceride variability. 14 Additionally, previous studies using trajectory modeling only investigated the development of each lipid component separately. To our knowledge, there were 2 studies considering the trajectories of these lipids jointly. Dayimu et al 17,32 enrolled 9726 participants and found 3 distinct trajectory classes (U-shape class, progressing, and inverse U-shape), and Koohi et al 17,32 enrolled 14 373 US participants and identified 7 multitrajectory lipid groups. The sample size was relatively small in these investigations. However, these investigations confirmed the application of this method in epidemiological studies. In the current study, we enrolled 75 609 Chinese participants and jointly analyzed 3 lipids (triglyceride, LDL-C, and HDL-C) as a whole, which overcame some limitations of previous studies. A novel finding in our study was that these lipid indices can be jointly characterized into 5 distinct trajectory classes over time.
The most striking finding of our study was that individuals with different combinations of lipid indices showed a different risk of premature CVD. Participants with the highest triglyceride, optimal and decreasing LDL-C, and low and decreasing HDL-C over time had the highest risk of premature CVD. Moreover, when we excluded those who had been receiving lipid-lowering medications, the trend remained relatively unchanged. Our results indicated that even after decreases in LDL-C, a considerable amount of premature CVD risk remains, suggesting the impact of the increasing trend of triglyceride on residual cardiovascular risk. The findings were consistent with the study conducted by Fatemeh et al, which also showed that despite a decline in LDL-C over time, a significant amount of residual risk for CVD remains. Elevated concentrations of triglyceride-rich lipoproteins or remnant cholesterol, generally marked by elevated triglyceride, have been shown to be associated with the risk of CVD in both observational and genetic studies. [33][34][35] Triglyceride-rich lipoproteins and the remnant cholesterol carried in these particles have the capacity to cross the arterial wall and are taken up by macrophages and smooth muscle cells. 36 The accumulation in the arterial wall of the remnant cholesterol may play a causal role in atherosclerosis development, similar to LDL-C. 37 Previous evidence has confirmed that elevated triglyceride levels even within the optimal range (<150 mg/dL) were also associated with increased CVD. [38][39][40] This evidence suggests that a biologically "optimal" level may be even lower for triglyceride, as the American Heart Association also indicated that an "optimal" fasting triglyceride level is <100 mg/dL. 41 In our present study, the optimal range of triglyceride was defined as <150 mg/dL, and longitudinally high and increasing triglyceride above 150 mg/dL was significantly associated with the risk of premature CVD. Taken together, our current data emphasized the importance of controlling triglyceride levels within an optimal range over time in the prevention of premature CVD. Additionally, our study showed that compared with participants with the highest and increasing triglyceride, optimal and decreasing LDL-C, and low and decreasing HDL-C, the risk of premature CVD was lower in participants with high and decreasing triglyceride, optimal and increasing LDL-C, and low and decreasing HDL-C over time, and even lower in those with optimal and increasing triglyceride, borderline high and increasing LDL-C, and optimal and decreasing HDL-C over time. This finding was supported by the results from the Brigham and Women's Hospital Research Patient Data Repository, which showed that high triglyceride and low HDL-C levels contribute strongly and synergistically to coronary heart disease when LDL-C is well controlled. These findings suggested that although LDL-C is appropriately the principal lipid target for treatment, triglycerides might have greater importance in participants with optimal rather than greater LDL-C concentrations. 39 Recently, treatment for hypertriglyceridemia yielded conflicting results. Clinical trials of agents that lower triglycerides, specifically fenofibrate and niacin, have failed to demonstrate a reduction in CVD outcomes when administered in addition to appropriate medical therapy. 42,43 A recent study of N-3 fatty acid products did not show a benefit in patients receiving statin therapy. 44
However, the Reduction of Cardiovascular Events With Icosapent Ethyl-Intervention Trial study indicated that the risk of major ischemic events, including CVD death, was significantly lower with icosapent ethyl compared with placebo in patients with elevated triglyceride levels. 45 Another systematic review and meta-regression analysis of randomized controlled trials also showed a significant association between triglyceride lowering and cardiovascular risk reduction, even after adjusting for LDL-C lowering. 46 Considering the high residual CVD risk caused by high triglyceride levels, more investigations are needed to explore effective strategies for triglyceride lowering, which may be an important target to reduce premature CVD.
The key strength of the present study is the use of an innovative multitrajectory modeling technique to identify subgroups of longitudinal lipid index trajectories on the basis of multiple lipids, and to estimate their associations with premature CVD. The multitrajectory analysis can incorporate the intercorrelations among multiple lipid indices to improve the accuracy of individual-specific probabilities of group membership, whereas conventional group-based trajectory modeling clusters longitudinal trajectories on the basis of 1 index. Several limitations also need to be noted. First, we measured multitrajectory groups of lipid indices within the first 4 waves and did not investigate long-term trajectories of multiple lipid concentrations. This design was chosen to maximize the number of participants with lipid measurements before 2012 and to allow a longer follow-up period to capture the occurrence of premature CVD. Second, we collected information only on premature stroke and MI, and we may have underestimated the prevalence and incidence rates of premature CVD, which has broader subtypes. Third, the use of latent class analysis creates subgroups with very different sizes; for instance, the proportion of participants with the highest triglyceride, optimal and decreasing LDL-C, and low and decreasing HDL-C over time was small, making it difficult to compare subgroups in terms of statistical power. However, the results were robust in multiple sensitivity analyses, indicating that the small sample size has no important effect on the results. Additionally, whether a different outcome is linked to a certain trajectory over a longer time span remains unknown, which may limit generalization. Fourth, we cannot exclude the possibility of residual or unmeasured confounding given the observational study design of the present analysis. Fifth, our study was conducted in northern China, which might limit the generalization of the findings; thus, our results need further validation in another cohort/population. Finally, since our findings are exploratory and do not address treatment questions, future research should focus more on the effects of triglyceride-lowering strategies in reducing residual cardiovascular risk, even among those with optimal LDL-C levels.
CONCLUSIONS
In conclusion, we jointly identified 5 distinct multitrajectory groups of lipid indices, including triglyceride, LDL-C, and HDL-C, during a 6-year exposure period. The results showed that among individuals with high triglycerides, the risk of premature CVD remains high despite LDL-C levels decreasing over time. The findings demonstrated the impact of an increasing trend of triglyceride on the risk of premature CVD, highlighting the importance of monitoring longitudinal lipid levels in the clinic and undertaking potential interventions to control triglyceride levels within an optimal range to reduce the residual risk of premature CVD.
Table footnote (subgroup analyses): Adjusted for age, sex, education, income, smoking status, drinking status, physical activity, body mass index, systolic blood pressure, diastolic blood pressure, fasting blood glucose, hypertension, diabetes mellitus, dyslipidemia, antihypertensive agents, antidiabetic agents, and lipid-lowering agents, other than the variables used for stratification. | 2023-04-30T06:17:25.843Z | 2023-04-29T00:00:00.000 | {
"year": 2023,
"sha1": "636f192eddd8d60703f2604b2387a2703d82d064",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1161/jaha.122.029173",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d34e89de3f35d68a8218ee685a7127a326f36efc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
232771549 | pes2o/s2orc | v3-fos-license | Association of sonic hedgehog signaling pathway genes IHH, BOC, RAB23a and MIR195-5p, MIR509-3-5p, MIR6738-3p with gastric cancer stage
Gastric cancer is one of the leading causes of cancer-related mortality worldwide. Given the importance of gastric cancer in public health, identifying biomarkers associated with disease onset is an important part of precision medicine. The hedgehog signaling pathway is considered one of the most significant and widespread pathways of intracellular signaling in the early events of embryonic development. This pathway also contributes to the maintenance of the pluripotency of cancer stem cells. In this study, we analyzed the expression levels of sonic hedgehog (Shh) signaling pathway genes IHH, BOC, RAB23a and their regulatory miRNAs, including MIR-195-5p, MIR-509-3-5p, MIR-6738-3p, in gastric cancer patients. In addition, the impact of infection status on the expression level of those genes and their regulatory miRNAs was investigated. One hundred samples taken from 50 gastric cancer patients (50 tumoral tissues and their adjacent non-tumoral counterparts) were included in this study. There was a significant difference in all studied genes and miRNAs in tumoral tissues in comparison with their adjacent non-tumoral counterparts. The lower expression of IHH, BOC, RAB23, miR-195-5p, and miR-6738-3p was significantly associated with more advanced cancer stage. Additionally, IHH upregulation was significantly associated with CMV infection (P < 0.001). Also, receiver operating characteristic (ROC) curve analysis indicated that miR-195 was significantly related to several clinicopathological features, including tumor stage, grade, age, gender, and infection status of gastric cancer, and can be considered a potential diagnostic biomarker for gastric cancer. This study confirms the important role of Shh signaling pathway genes in gastric cancer tumorigenesis and their potential as novel molecular biomarkers and therapeutic targets.
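The abstract mentions ROC curve analysis of miR-195 as a diagnostic biomarker, but the computation is not shown in this excerpt. A generic scikit-learn sketch follows; the labels and expression values are hypothetical, and the sign convention (lower expression scoring as more tumor-like) is an assumption based on the reported downregulation in tumors.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical inputs: y = 1 for tumoral tissue, 0 for adjacent non-tumoral;
# expr = relative miR-195-5p expression in the same samples.
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])
expr = np.array([0.3, 0.5, 0.4, 0.6, 1.0, 0.9, 1.2, 0.8])

# miR-195-5p is downregulated in tumors, so lower expression should score
# "more tumor-like"; negate the expression to get an increasing score.
score = -expr
auc = roc_auc_score(y, score)
fpr, tpr, thresholds = roc_curve(y, score)
best_cutoff = -thresholds[np.argmax(tpr - fpr)]  # Youden's J, back on expression scale
print(f"AUC = {auc:.2f}, optimal expression cutoff = {best_cutoff:.2f}")
```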
There was a significant association between the sex and age of gastric cancer patients (P = 0.0325). Tumors were mainly located at the proximal portion of the stomach (cardia, fundus, and body). Overall, 45.5% of tumors were located in the cardia region, and the body and antrum regions were the second and third most common sites, with 29.5% and 22.7% abundance, respectively. Overall, 74% of tumors were in stages I/II, and 26% in stages III/IV.
Expression levels of Shh signaling pathway genes and their regulatory miRNAs.
Expression levels of Shh signaling pathway genes (IHH, BOC, and RAB23) and their regulatory miRNAs (miR-195-5p, miR-6738-3p, and miR-509-3-5p) were evaluated in 50 gastric cancer patients using comparative relative real-time PCR, comparing expression in tumor tissues with that in their paired normal counterpart tissues. The results indicated that IHH, BOC, and RAB23 mRNA expression was significantly downregulated (Fig. 1). Similarly, miR-195-5p, miR-6738-3p, and miR-509-3-5p expression was also decreased significantly in gastric cancer tissues. The mean tumoral-tissue expression levels for IHH, BOC, and RAB23 were 0.71, 0.68, and 0.57, respectively, and those for miR-195-5p, miR-6738-3p, and miR-509-3-5p were 0.46, 0.7, and 0.57, respectively. Scatter plot analysis indicated that IHH, BOC, and RAB23 were significantly downregulated in 52%, 58%, and 50% of tumoral tissues in comparison with their adjacent non-tumoral counterparts, respectively. Also, out of 50 GC patients, 70%, 54%, and 58% showed statistically significant downregulation of the three regulatory miRNAs, respectively.
[Table 1. Sequences of primers used for evaluation of Shh signaling genes and their regulatory microRNAs; the primer, stem-loop, and 5′-tail sequences are not reproduced here.]
Association of clinicopathological features with Shh signaling pathway genes and their regulatory miRNAs.
The association between the expression of Shh signaling pathway genes and their regulatory miRNAs and clinicopathological features in gastric cancer patients is illustrated in Fig. 4. A significant trend was observed between the expression of the studied genes and their regulatory miRNAs and TNM stage. The expression of IHH, BOC, RAB23, miR-195-5p, and miR-6738-3p decreased significantly with more advanced cancer stage, and miR-509-3-5p expression was significantly decreased during early stages in gastric cancer patients (P < 0.05). Also, IHH expression was associated with histological type, being significantly lower in well differentiated gastric adenocarcinomas in comparison with moderately or poorly differentiated adenocarcinomas (P < 0.01). Furthermore, miR-6738-3p and miR-509-3-5p were significantly downregulated in poorly differentiated and moderately/poorly differentiated tumors, respectively (P < 0.05) (Fig. 4). Moreover, analysis by age group indicated that IHH and miR-6738-3p expression was significantly (P < 0.05) decreased in gastric cancer patients aged less than 65 years, while miR-195-5p was significantly (P < 0.05) downregulated in gastric cancer patients older than 65 years. In addition, a statistically significant decrease in miR-6738-3p expression was observed in male gastric cancer patients (P < 0.05) (Fig. 5).
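Relative expression values such as those reported above are commonly derived with the 2^(-ΔΔCt) method. The sketch below assumes that method and made-up Ct values, since the text specifies only "comparative relative real-time PCR".

```python
# Minimal sketch of the 2^(-ΔΔCt) relative-expression calculation, assuming
# paired Ct values for a target gene and a reference gene in tumor and
# adjacent non-tumor tissue. All numbers are made up for illustration.
def fold_change(ct_target_tumor, ct_ref_tumor, ct_target_normal, ct_ref_normal):
    delta_ct_tumor = ct_target_tumor - ct_ref_tumor      # normalize to reference gene
    delta_ct_normal = ct_target_normal - ct_ref_normal
    delta_delta_ct = delta_ct_tumor - delta_ct_normal    # tumor relative to paired normal
    return 2 ** (-delta_delta_ct)

# Example: a fold change < 1 indicates downregulation in the tumor sample.
print(fold_change(26.1, 18.0, 24.9, 18.2))  # ≈ 0.38
```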
Association of infection status with Shh signaling pathway genes and their regulatory miRNA expression.
Furthermore, we investigated the effects of infections on the expression levels of Shh signaling pathway genes and their regulatory miRNAs in gastric cancer. As shown in Fig. 6, IHH expression was significantly increased in HCMV-positive patients (P < 0.001). Also, patients without H. pylori infection showed lower BOC and RAB23 expression in gastric tumoral tissues in comparison with H. pylori-positive patients (P < 0.05). EBV and HHV6 infections had no significant effect on the expression of Shh signaling pathway genes and their regulatory miRNAs in gastric cancer tissues.
Furthermore, miR-195-5p expression was significantly decreased in gastric cancer tissues of EBV-positive patients (P < 0.001). In addition, there was a significant difference in miR-509-3-5p expression between H. pylori-positive and H. pylori-negative gastric cancer patients (P < 0.01). HCMV infection had no significant effect on the expression of Shh signaling pathway regulatory miRNAs (Fig. 7).
Receiver operating characteristic (ROC) curve analysis. ROC curve analysis was used to assess whether the studied genes and their regulatory miRNAs can serve as diagnostic biomarkers (Fig. 8) [35]. The total areas under the curves (AUCs) of RAB23 (AUC = 0.63, sensitivity 71% and specificity 51%, P = 0.02) and miR-195-5p (AUC = 0.68, sensitivity 80% and specificity 54%, P = 0.002) were > 60%, suggesting that RAB23 and miR-195-5p can serve as diagnostic biomarkers for distinguishing patients with gastric cancer from healthy controls.
Discussion. Gastric cancer is among the five most frequently diagnosed cancers and is highly heterogeneous. Accumulating evidence strongly indicates that aberrant activation of multiple signaling pathways can contribute to gastric cancer development, and cancer stem cells (CSCs) are key driving cells for the growth and metastasis of this tumor type. It has been demonstrated that the Shh signaling pathway is implicated in maintaining the pluripotency of CSCs, and aberrant activation of this pathway is associated with the development and progression of various types of cancer. In this study, we investigated the clinicopathological features of gastric carcinoma as well as the expression levels of Shh signaling pathway genes and their regulatory miRNAs in gastric cancer patients. Although gastric cancer is common in both sexes, its incidence is higher in males, and it is more frequently observed in younger female patients [36][37][38]. Our demographic findings are consistent with these reports. We also investigated the expression levels of the Shh signaling genes IHH, BOC, and RAB23 in gastric cancer patients. Remarkably, we observed that IHH expression was decreased in tumoral tissues in comparison with adjacent non-tumoral tissues. Also, IHH expression strongly correlated with the stage and grade of malignancy as well as with CMV infection. IHH is one of the three protein ligands in the mammalian hedgehog signaling pathway and plays an essential role in bone growth and differentiation. IHH expression was found to be upregulated in certain tumors such as basal cell carcinoma, pancreatic cancer, and medulloblastoma [39]. Also, an immunohistochemical study indicated that IHH expression was increased in pancreatic ductal adenocarcinoma in comparison with paracancer tissue and benign lesions, and this expression was associated with tumor grade, lymph node metastasis, tumor invasion, and poor overall survival [40]. In contrast, loss of IHH expression can promote the development of dysplasia in colon carcinogenesis via the Wnt signaling pathway [41]. Also, epidermal deletion of IHH promotes squamous skin tumor formation and increases malignant tumor progression and metastasis, while prolonged loss of IHH expression leads to inflammation and mucosal damage [42]. In addition, stromal activation of Hh signaling by IHH suppresses tumor growth and metastases through angiogenesis and reduction of reactive oxygen species (ROS) activity. Thus, the tumor-suppressor or oncogenic role of IHH in cancer remains controversial. Similarly, our data demonstrated a decrease in BOC mRNA levels in tumoral tissues, and this expression was associated with tumor stage and H. pylori infection. BOC is a co-receptor in the Shh signaling pathway and a component of a cell-surface receptor complex that mediates cell-cell interactions. However, its relative contribution to cancer risk is currently unknown.
The hedgehog co-receptors GAS1, CDON, and BOC modulate the level of Hh responsiveness in pancreatic fibroblasts, and loss of BOC and GAS1 was shown to reduce Hh activity while promoting pancreatic tumor growth through the induction of angiogenic factors [43]. In addition, BOC may induce DNA damage and promote the progression of early medulloblastoma to advanced tumors by increasing the incidence of loss of heterozygosity (LOH) of its co-receptor PTCH1 [44]. Moreover, our results indicated that RAB23 was downregulated in 42% of gastric cancer tissues while it was overexpressed in 34% of cases. Also, the expression of RAB23 was significantly associated with more advanced cancer stage and H. pylori infection. RAB23 is recognized as a negative regulator of the Shh signaling pathway and as a target of many proteins involved in cancer development. However, it remains controversial whether RAB23 acts as an oncogene or a tumor suppressor. Although more and more studies have identified RAB23 as an oncogene in a variety of human cancers, there is some evidence indicating that RAB23 plays a tumor-suppressive role during carcinogenesis. RAB23, through interaction with SUFU, can inhibit GLI transcriptional activities and its nuclear localization. In addition, overexpression of RAB23 inhibits breast cancer cell viability and proliferation and induces apoptosis [45]. Moreover, transient overexpression of miR-367 in medulloblastoma cells caused decreased RAB23 expression, resulting in increased medulloblastoma cell proliferation [46].
It is now established that both genetic and epigenetic alterations contribute to gastric cancer development. Changes in miRNA expression, as epigenetic modulators, via regulation of cancer-related genes have a primary role in cancer onset and progression. We studied the Shh signaling pathway regulatory miRNAs by in silico analysis and experimentally validated their expression levels in gastric cancer patients. Through in silico analysis, we identified three miRNAs, miR-195-5p, miR-6738-3p, and miR-509-3-5p, that could bind to the 3′ UTRs of Shh signaling genes and modulate the hedgehog pathway. While there is no prior evidence supporting a role for miR-6738-3p in cancer development, our findings showed that miR-6738-3p expression was decreased in tumoral tissues in comparison with adjacent non-tumoral tissues, and its expression was associated with the clinicopathological features of gastric cancer patients, including stage, grade, gender, and age. Our in silico analysis predicted that a miR-195-5p binding site is located in the 3′ UTR of IHH, and our experimental data indicated that miR-195-5p was significantly downregulated in tumor samples in comparison with their adjacent non-tumoral tissues. Also, miR-195-5p expression was strongly associated with advanced cancer stage and the age of gastric cancer patients, and correlated with EBV infection. miR-195-5p is one of the most well-studied miRNAs and is strongly connected with various types of cancer, including those of the digestive system, respiratory system, urinary system, reproductive system, bone, brain, head and neck, skin, and endocrine system. Nevertheless, despite its strong tumor-suppressive effects, there is evidence that miR-195 has an oncogenic role in some cancers; therefore, whether miR-195-5p functions as a tumor suppressor or an oncogene is still under debate. It has been reported that miR-195-5p has significant effects on oncogenicity in various types of cancer through binding to complementary sequences in crucial genes of signaling pathways; in stomach cancer, similar evidence has been reported. miR-509 is one of the anti-oncogenic miRNAs that has been reported to be downregulated in many prevalent human cancers. miR-509 functions as a tumor suppressor in pancreatic and breast cancer via targeting the MDM2 proto-oncogene (MDM2) and superoxide dismutase 2 (SOD2) [50,51]. Also, miR-509, as an epithelial-mesenchymal transition miRNA, was shown to induce the expression of E-cadherin and inhibit cell motility and invasion [52]. In addition, miR-509 can inhibit cell motility and invasion through targeting tribbles pseudokinase 2 (TRIB2) in osteosarcoma [53]. Our findings showed that miR-509 was significantly downregulated in tumor tissues in comparison with their adjacent non-tumoral tissues, and lower miR-509 expression in tumoral tissues was associated with high tumor grade, stage, and H. pylori infection. In line with our results, a similar study demonstrated that lower miR-509-3-5p expression was associated with advanced tumor stage and poor differentiation in gastric cancer tissues; additionally, this lower expression promoted the migration and invasion abilities of gastric cancer cells by targeting the podocalyxin like (PODXL) gene [54]. However, our finding disagrees with another study that indicated miR-509 upregulation in H. pylori-negative gastric cancer [55]. Gastric cancer can thus be considered a complex disease that is influenced by multiple genes, miRNAs, and environmental factors.
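As context for the biomarker evaluation discussed next, the following is a sketch of the kind of ROC analysis used in this study. The data are simulated, and the threshold is chosen via Youden's J, which is an assumption: the paper does not state its threshold rule.

```python
# Sketch of a ROC analysis for a candidate expression biomarker (e.g., miR-195-5p).
# Data are simulated; sklearn computes the curve and AUC.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# 0 = healthy-like, 1 = tumor; lower expression in tumors, as reported above.
y_true = np.r_[np.zeros(50), np.ones(50)]
expression = np.r_[rng.normal(1.0, 0.3, 50), rng.normal(0.6, 0.3, 50)]

# Use negative expression as the score so that lower expression => higher tumor score.
score = -expression
fpr, tpr, thresholds = roc_curve(y_true, score)
auc = roc_auc_score(y_true, score)

# Youden's J picks the threshold that best trades off sensitivity and specificity.
j = np.argmax(tpr - fpr)
print(f"AUC = {auc:.2f}, sensitivity = {tpr[j]:.2f}, specificity = {1 - fpr[j]:.2f}")
```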
Finding valuable biomarkers for the diagnosis and prognosis of gastric cancer is difficult; therefore, deep convolutional neural network learning methods are a promising way to predict disease risk based on biomarkers [56]. Our ROC curve analysis indicated that miR-195 can be considered a good biomarker, as it was significantly related to several clinicopathological features of gastric cancer, including stage, grade, age, gender, and infection status. In conclusion, the present study demonstrated that the expression levels of Shh signaling pathway genes and their regulatory miRNAs were significantly associated with gastric cancer. Also, the expression levels of these genes were significantly associated with clinicopathological features, including tumor stage, grade, age, gender, and infection status, in gastric cancer patients. Although Hh signaling inhibitors have been developed, and two of them, vismodegib and sonidegib, have been approved by the U.S. Food and Drug Administration (FDA) to treat basal cell carcinoma and medulloblastoma, none has been approved for gastric cancer. Thus, more research to elucidate the factors regulating the Shh signaling pathway, as well as their detailed mechanisms of action in gastric cancer, is necessary. Also, it is well known that immune regulation may affect gastric cancer stage development, and patients with common variable immunodeficiency (CVID) have a high risk of gastric cancer [57]. Recently, an in vivo study indicated that immunodeficiency can promote adaptive alterations of the host gut or tissue-based microbiome [58]. Therefore, additional studies on alterations of gastric mucosal immunity and microbiota and their effects on signaling pathways are needed to better understand gastric carcinogenesis. | 2021-04-04T06:16:31.925Z | 2021-04-02T00:00:00.000 | {
"year": 2021,
"sha1": "3b0a1efcf5e1e665e7dd321c7e70f48862932825",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-86946-0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "42e117ce2edc4ce508bee37f88b1d96001467457",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268139092 | pes2o/s2orc | v3-fos-license | Vagus Nerve Stimulation Modulates Inflammation in Treatment-Resistant Depression Patients: A Pilot Study
Vagus nerve stimulation (VNS) is used for the treatment of epilepsy and medically refractory major depression. VNS has neuropsychiatric effects and systemic anti-inflammatory activity. The objective of this study was to measure the clinical efficacy of VNS and its immunomodulatory impact in depressive patients. Six patients with refractory depression were enrolled. Depression symptoms were assessed with the Montgomery–Asberg Depression Rating Scale, and anxiety symptoms with the Hamilton Anxiety Rating Scale. Plasma was harvested prospectively before the implantation of VNS (baseline) and up to 4 years or more after continuous therapy. Forty soluble molecules were measured in the plasma by multiplex assays. Following VNS, the reduction in the mean depression severity score was 59.9% and the response rate was 87%. Anxiety levels were also greatly reduced. IL-7, CXCL8, CCL2, CCL13, CCL17, CCL22, Flt-1 and VEGF-C levels were significantly lowered, whereas bFGF levels were increased (p values ranging from 0.004 to 0.02). This exploratory study is the first to focus on the long-term efficacy of VNS and its consequences for inflammatory biomarkers. VNS may modulate inflammation via an increase in blood–brain barrier integrity and a reduction in inflammatory cell recruitment. This opens the door to new pathways involved in the treatment of refractory depression.
Introduction
Recent hypotheses on the etiopathogenesis of major depressive disorder have proposed that proinflammatory cytokines may play an important part in the onset and persistence of depressive symptoms. Studies report that the plasma levels of proinflammatory cytokines such as interleukin-1β (IL-1β), IL-6, IL-12, CC chemokine ligand 2 (CCL2), Tumor Necrosis Factor (TNF)-α, and prostaglandin E2 are increased in patients suffering from depression [1]. Vagus nerve stimulation (VNS) is an approved adjunctive therapy for treatment-resistant depression (TRD). The vagus nerve is composed of afferent fibers sending sensory information to the brain (80%) and efferent fibers relaying data from the brain back into the body (20%). It is also closely related to the cortical-limbic-thalamic-striatal neural circuit involved in emotional and cognitive functions. Although its exact mechanisms remain unclear, VNS is thought to influence microglial cells directly or indirectly through complex brain-immune system interactions [2]. It was shown that VNS influences neurotransmitters implicated in mood disorders, such as serotonin and norepinephrine, and increases brain-derived neurotrophic factor (BDNF), which is also increased by pharmacological antidepressants. Interestingly, recent reports have also suggested that VNS has an impact on systemic inflammation via the modulation of cytokines (IL-1β, IL-6, and TNFα) [3], such as in inflammatory bowel diseases. Furthermore, the vagus nerve efferent pathway also influences immune cells in the spleen, generating an immunosuppressive environment [3]. This new rationale has led to a better understanding of approved therapies for TRD. Many reports have demonstrated changes in inflammatory proteins measured in the blood of depressive patients after pharmacological treatment, including chemokines relevant to inflammatory cell recruitment, proteins relevant to blood-brain barrier integrity, and acute inflammatory cytokines. Although there is growing interest in the study of inflammation in depressive patients, very few studies have examined inflammatory biomarkers in treatment-resistant depressive patients. To our knowledge, no study has explored whether VNS treatment induces inflammatory changes in TRD patients.
In this limited and exploratory study, we measured the clinical efficacy of VNS treatment in TRD and inflammatory proteins at the time of implantation and later in the course of the disease. Here, despite a low number of patients, we describe significant modulations of several inflammatory proteins after more than 4 years of continuous VNS treatment, suggesting a modulation of inflammation in TRD.
VNS Induced a Significant and Sustained Clinical Response in TRD Patients
Table 1 presents the baseline clinical characteristics of our patients. Supplemental Table S1 includes all the other clinical and biological characteristics and the VNS settings of the patients. The mean baseline MADRS score was 24.5 (±7.2), representing a moderate level of depression. The duration of the current depressive episode at the time of VNS surgery was, on average, 43 months (18-72 months). At the time of the last blood harvest, we observed a 76.3% drop in the mean MADRS score, with 5 out of 6 patients achieving a clinical response (MADRS reduction of at least 50%). Anxiety symptoms were also reduced significantly (59.9%) (see Table 2). Among the 40 molecules measured, we found significant changes between the pre-implantation and post-implantation levels of several chemokines, pro-inflammatory cytokines, and related proteins. VNS stimulation reduced the levels of IL-7, CXCL8, CCL2, CCL13, CCL17, CCL22, Flt-1 and VEGF-C, whereas it increased bFGF levels (Table 3 for patients' and healthy donors' values and Figure 1 for individualized variations). Interestingly, we did not observe any significant variation in the levels of inflammatory proteins previously reported in depressive patients, namely TNF-α, IL-6 and IFNγ (Figure 2). Moreover, IL-1β levels were undetectable. All the other measured molecules, not presented here, did not vary significantly before or after treatment. We found no correlation between the variation of inflammatory proteins in the plasma and changes in the MADRS score.
Discussion
We measured inflammatory proteins in the plasma of TRD patients treated with VNS and highlighted significant changes between pre- and post-implantation levels. VNS treatment resulted in a significant modulation of inflammation, reducing pro-inflammatory cytokine and chemokine plasma levels. IL-7, CXCL8, CCL2, CCL13, CCL17 and CCL22 were reduced after more than 4 years of VNS treatment. Interestingly, these inflammatory mediators are different from the ones previously reported in depression (TNF-α, IL-1β and IL-6). The latter are usually involved in acute inflammation, and they were studied in patients with major depression, responding or not responding to treatment. Since we recruited only treated TRD patients, they may suffer from a more chronic illness, explaining why we did not observe changes in these markers after VNS implantation. IL-7 is crucial for T cell homeostasis, whereas CXCL8 is a chemokine crucial for promoting neutrophil recruitment. Both cytokines were found to be elevated in depressive patients and reduced to the levels of healthy controls after pharmacologic treatment. Interestingly, we observed the same effect on IL-7 and CXCL8 after VNS therapy, suggesting that antidepressive therapies such as drugs and VNS may exert their effects partly through a reduction in these pro-inflammatory mediators [4].
We also found that CCL2, CCL13 and CCL17 levels were reduced with VNS therapy. These chemokines have been shown to promote immune cell infiltration in various central nervous system diseases [5], such as multiple sclerosis [6]. High levels of CCL2 and CCL13 were observed in patients with depression compared to healthy controls, evoking the possibility that immune cell mobilization may be involved in the disease [7]. Thus, VNS is likely to act as a modulator of inflammation in treatment-resistant major depression patients by modulating leukocyte recruitment into the brain.
CCL22 is a chemokine involved in the pathophysiology of infectious and neoplastic diseases. One recent study suggested that CCL22 could be used as a marker of treatment response, with levels increasing six weeks after the beginning of pharmacological therapy. Increased levels of CCL22 were also initially linked to better responsiveness to pharmacological treatment [8]. In our study, we found a decrease in CCL22 with VNS therapy. Since we studied inflammatory protein levels after a few years of VNS treatment, this suggests that CCL22 may have a transitory role in promoting response to anti-depressive treatment.
VEGF-C and sFlt-1 levels also diminished, whereas bFGF levels increased with VNS. The blood-brain barrier is thought to play a crucial role in major depression [9]. Several molecules maintain its integrity and homeostasis. VEGF-A and VEGF-C promote blood-brain barrier permeability, whereas sFlt-1 counteracts this effect [10] and bFGF promotes blood-brain barrier integrity [9]. The role of VEGF in major depressive disorder is still ambiguous, but low levels are associated with greater vulnerability to stress, and thus a higher risk of developing depressive symptoms. Conversely, low levels are also associated with a better response to pharmacological treatment. Studies show contradictory reports regarding VEGF levels in depression [11]. This disparity may be explained by variations in the severity/chronicity of the disease and the conditions leading to the development of depression. On the other hand, bFGF levels increased with VNS treatment. Previous studies have linked depression with lower levels of circulating bFGF [12]. VNS may play a role in the recovery of blood-brain barrier integrity via bFGF and sFlt-1 modulation. Based on our observations, we suggest that VNS promotes the recovery of blood-brain barrier integrity by reducing the regulator VEGF-C, leading to a compensatory reduction in sFlt-1 levels and an increase in bFGF levels. This would result in a decrease in blood-brain barrier permeability and brain infiltration of leukocytes.
The efficacy of VNS therapy in our group of 6 patients was high, above the rates reported in the literature. Notably, a recent 5-year longitudinal registry study of almost 500 patients found strong evidence for long-term benefits of VNS therapy compared with over 300 patients in the treatment-as-usual (TAU) group, as well as a lower relapse rate, rekindling interest in this technology in TRD [13].
Admittedly, this represents a pilot study with heterogeneity in sample collection after VNS implantation and very few patients. Despite these limitations, we highlighted statistically significant differences between the levels of various inflammatory markers.
The patients in this exploratory study also show prolonged VNS anti-depressive effects, associated with a modulation of inflammation via several mechanisms. To our knowledge, this is the first study demonstrating a possible impact of VNS on inflammatory proteins related to blood-brain barrier integrity and inflammatory cell recruitment in the brain. However, because of several limitations of the current work (small sample size, referral bias, no TRD control group), further studies will be needed to better characterize these changes. First, the small sample size meant that we could detect only large effects on plasma proteins; smaller variations may have been missed due to a lack of statistical power. It was also impossible to define clear cut-off levels distinguishing patients from normal donors. Statistical analyses of non-responders versus responders could not be performed due to the low number of patients. Also, we could not show a significant correlation between the MADRS score and inflammation because of insufficient statistical power. A larger cohort would be needed to determine whether baseline inflammatory protein levels (low/high) are predictive of response to therapy, and to find out whether modulation of other inflammation markers is at play with this treatment.
Patients
This study was approved by the local ethics committee and performed in accordance with the Tri-Council policy statement. All patients provided written informed consent. Six patients with TRD, with a mean age at implantation of 48.8 ± 5.8 years (range 41-57), who underwent standard neurosurgical VNS implantation (Cyberonics Model 102 pulse generator, LivaNova, Boston, MA, USA) at the Centre Hospitalier de l'Université de Montréal (CHUM) between 2007 and 2010 were recruited. No complications were observed in our patients. Inclusion criteria were previously described, broadly defined as unipolar or bipolar disorder with a partial response or no response to at least 4 antidepressant medications. Exclusion criteria included a history of auto-immune disease, treatment with immunomodulatory medication, ongoing infection, spontaneous remission before surgery, other psychiatric conditions, and medical contraindications such as clinically relevant cardiovascular disease, active cancer, and pregnancy [14]. All patients were assessed using the Montgomery-Asberg Depression Rating Scale (MADRS) and the Hamilton Anxiety Rating Scale (HAM-A) to measure depressive and anxious symptoms at baseline, 12 months, 24 months, and at the time of the second blood harvest. Response to therapy in depression was defined as a reduction of at least 50% on the MADRS. Blood samples were obtained at 8 AM before implantation (baseline) and up to 4 years or more after continuous VNS therapy (range 56-93 months). We also obtained blood samples from 6 healthy donors. EDTA blood was centrifuged and stored at −80 °C.
Statistics
Analyses were performed using IBM SPSS Statistics version 25 (IBM Corporation, Armonk, NY, USA). Data obtained from our patients were analyzed using descriptive statistics and repeated-measures multivariate ANOVAs (MANOVA), as Pearson correlations revealed several variations in inflammatory proteins. Differences with p < 0.05 were considered statistically significant.
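The analysis itself was run in SPSS. Purely as an illustrative analog, a paired pre/post comparison for a single analyte might look like the sketch below: all values are simulated, and a paired t-test stands in for the repeated-measures MANOVA actually used.

```python
# Illustrative analog of a paired pre/post comparison for one plasma analyte.
# Concentrations are made up; scipy's paired t-test replaces the SPSS MANOVA.
import numpy as np
from scipy import stats

baseline = np.array([310.0, 295.0, 420.0, 380.0, 265.0, 340.0])   # pg/mL, simulated
post_vns = np.array([240.0, 250.0, 310.0, 300.0, 230.0, 280.0])

t_stat, p_value = stats.ttest_rel(baseline, post_vns)
print(f"mean change = {np.mean(post_vns - baseline):+.1f} pg/mL, p = {p_value:.3f}")
```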
Conclusions
Here, we suggest that VNS therapy is associated with modulation of the immune system, inflammation, and blood-brain barrier function. Several chemokines associated with leukocyte recruitment, such as CCL2, CCL13 and CCL17, were decreased in the plasma of TRD patients with ongoing VNS therapy for at least 4 years. Proteins related to blood-brain barrier homeostasis, integrity, and permeability were also modulated (VEGF, sFlt-1 and bFGF). Thus, we believe that VNS may reestablish blood-brain barrier integrity and reduce the recruitment of inflammatory cells into the brain. This could lead to improvements in depressive symptoms or remission. This study provides pilot data that may warrant and compel future consortia to further test this hypothesis.
Figure 2. Levels of inflammatory proteins previously reported to be modulated in depressive patients. Individualized variations for TNF-α, IL-6 and IFNγ in VNS patients were not significant. Squares are pre-treatment values, black circles are post-treatment values.
Author Contributions: P.L. and J.-F.C.: involved in the conceptualization/methodology, extracted the data, analyzed the data, wrote and revised the manuscript. V.D.J.: extracted the data, analyzed the data, revised the manuscript. D.D., F.R. and N.A.: analyzed the data, wrote and revised the manuscript. J.-P.M., C.L.-P., P.T. and R.L.: extracted the data, analyzed the data, revised the manuscript. M.-P.F.-G.: extracted the data and revised the manuscript. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the Chair Claude-Bertrand in neurosurgery of the Université de Montréal (#2013-A0016-0081-595), Université de Montréal, QC, Canada.
Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Ethics Committee of the CHUM (protocol # 14.235 approved on 3 November 2014).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study (ERB #2015-5720, CE 14.235-CA).
Data Availability Statement: Details on the data supporting the results presented here are available upon reasonable request to the corresponding author. The data are not publicly available due to lack of platform availability and funding.
Table 1. Characteristics and baseline clinical evaluation of patients treated with VNS.
Table 2. Depression and anxiety scores of patients treated with VNS.
Table 3. Cytokine and chemokine levels in healthy controls and patients treated with VNS (baseline and post-treatment), with quantified differences. The p-value refers to the difference between baseline and post-treatment means; values in parentheses are standard deviations. | 2024-03-03T19:49:27.571Z | 2024-02-26T00:00:00.000 | {
"year": 2024,
"sha1": "ecae5bdfd07b4c5d1ed57fa4ab9545423af1eb0f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/25/5/2679/pdf?version=1708940635",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dab0581b72592ebc2027d644b4ea08039e51da10",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
230540474 | pes2o/s2orc | v3-fos-license | Finite-time fuzzy output-feedback control for p-norm stochastic nonlinear systems with output constraints
This paper investigates the finite-time control problem of p-norm stochastic nonlinear systems subject to output constraints. Combining a tan-type barrier Lyapunov function (BLF) with the adding a power integrator technique, a fuzzy state-feedback controller is constructed. Then, an output-feedback controller design scheme is developed from the constructed state-feedback controller and a reduced-order observer. Finally, both rigorous analysis and simulation results demonstrate that the designed output-feedback controller not only guarantees that the output constraint is not violated, but also ensures that the system is semi-globally finite-time stable in probability (SGFSP).
Introduction
Over the past decades, a variety of control design strategies have been proposed for different nonlinear systems [1][2][3][4][5][6][7]. In particular, many approximation-based control schemes have been developed for uncertain nonlinear systems using neural networks (NNs) or fuzzy logic systems (FLSs) [8][9][10][11][12][13][14][15][16][17][18][19]. Among these studies, research on stochastic systems has attracted much attention (see, e.g., [16][17][18][19] and the references therein) due to their wide application. It is worth noting that the aforementioned NN-based or FLS-based control strategies have not taken output constraints into account. In fact, many practical systems are required to satisfy an output constraint during operation for performance or safety reasons [20,21]. It is well known that BLF-based approaches are useful tools for the controller design of output-constrained nonlinear systems; see references [22][23][24][25][26][27] for instances. In the recent literature on constrained control, many kinds of adaptive neural or fuzzy control design methods have been presented. In this paper, we consider the p-norm stochastic nonlinear system

dx_i = x_{i+1}^p dt + φ_i(x̄_i) dt + g_i^T(x̄_i) dω, i = 1, · · · , n − 1,
dx_n = u^p dt + φ_n(x) dt + g_n^T(x) dω,
y = x_1,

where ω is an r-dimensional standard Wiener process; x = (x_1, · · · , x_n)^T ∈ R^n is the system state vector; u ∈ R and y ∈ R are the control input and output, respectively; the fractional power p ∈ R_odd^{≥1} := {m/k | m ≥ k, m and k are positive odd integers}; for i = 1, · · · , n, x̄_i = (x_1, · · · , x_i)^T ∈ R^i; and φ_i : R^i → R and g_i : R^i → R^r are unknown continuous functions satisfying φ_i(0) = 0 and g_i(0) = 0. The system output y = x_1 is measurable and constrained in Π_1 = {y(t) ∈ R, |y(t)| < ε} with a constant ε > 0, while the other states x_2, · · · , x_n are all unmeasurable.
The objective of this paper is to design a finite-time fuzzy output-feedback controller for system (2.1) such that: 1) the output does not violate the given constraint boundary; 2) all signals of the closed-loop system converge to a small compact set around the origin in finite time in probability, despite the unknown nonlinearities and the unmeasured states x_i (i = 2, · · · , n).
Firstly, some concepts and lemmas are presented as preliminaries. Consider the stochastic nonlinear system dx = φ(x)dt + g^T(x)dω, where φ(x) and g(x) are continuous functions satisfying φ(0) = g(0) = 0.
Remark 1. As stated in [17], Eq. (2.4) implies that there exists a stochastic settling-time function.
Lemma 4. [8] Let p ∈ (0, ∞); for any ζ_i ∈ R, i = 1, · · · , n, one has (|ζ_1| + · · · + |ζ_n|)^p ≤ max{n^{p−1}, 1}(|ζ_1|^p + · · · + |ζ_n|^p).

Lemma 5. [36] If ζ, η ∈ R and p > 1 is an odd number, then |ζ − η|^p ≤ 2^{p−1}|ζ^p − η^p|.

In this paper, the nonlinear functions φ_i(·) and g_i(·) are all unknown. These unknown functions will be approximated by FLSs based on the following lemma.
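As a quick sanity check, the power inequality in Lemma 4 can be verified numerically on random inputs; the sketch below is illustrative only.

```python
# Numerical spot-check of (|z1|+...+|zn|)^p <= max(n^(p-1), 1) * (|z1|^p+...+|zn|^p),
# the power inequality commonly used in the adding-a-power-integrator literature.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10_000):
    n = rng.integers(1, 6)
    p = rng.uniform(0.1, 5.0)
    z = rng.normal(size=n)
    lhs = np.sum(np.abs(z)) ** p
    rhs = max(n ** (p - 1), 1.0) * np.sum(np.abs(z) ** p)
    assert lhs <= rhs + 1e-9
print("inequality held on all random trials")
```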
Remark 2. In view of Lemma 7, any function F(X) that is defined and continuous on a compact set Π_0 can be approximated by an FLS in the form F(X) = W^T S(X) + ε(X), where W is the weight vector, S(X) is the fuzzy basis function vector, and ε(X) is the FLS approximation error satisfying |ε(X)| < δ.
State-feedback controller design
In this section, a fuzzy state-feedback controller will be explicitly designed for system (2.1) by combining a tan-type BLF and the FLSs into the adding a power integrator technique.
First of all, we introduce the coordinate transformation χ_i = x_i / H^{q_i}, i = 1, · · · , n, where q_1 = 0, q_j = (q_{j−1} + 1)/p (j = 2, · · · , n + 1), and H > 1 is a constant to be determined later. Based on (3.1), system (2.1) is transformed into the equivalent system (3.2). In what follows, a fuzzy state-feedback controller will be designed through n steps based on the equivalent system (3.2). Define the error variables ξ_i = χ_i − β_{i−1} with β_0 = 0, where the β_i's are the virtual signals to be constructed later.
Step 1. From (3.1), we can obtain the dynamics of ξ_1. Choose the first Lyapunov function V_1, where b_1 > 0 is an adjustment parameter, α̃_1 = α_1 − α̂_1 is the estimation error, and α̂_1 is the estimate of the parameter α_1.
Here, V_B(ξ_1) is a tan-type BLF adopted to deal with the system output constraint. Compared with the log-type BLF, V_B(ξ_1) possesses the characteristic that, as ε → ∞, it reduces to a standard constraint-free Lyapunov function term, which implies that the proposed method is also applicable to systems without output constraints.
Substituting (3.11) and (3.12) into (3.10), and combining the resulting estimates, yields the Step 1 inequality.

Remark 5. According to Lemma 6, one gets α̂_1 ≥ 0 for all t ≥ 0. This property will be applied in each subsequent design step.
Step 2. From (3.3) and Itô's formula, we obtain the dynamics of ξ_2. Combining the definition of β_1 with the properties of f_1(χ_1) and h_1(χ_1) implies that β_1 is well defined and continuous.
Choose the second Lyapunov function V_2, where b_2 > 0 is an adjustment parameter, α̃_2 = α_2 − α̂_2 is the estimation error, and α̂_2 is the estimate of the parameter α_2. Applying (2.3), (3.14) and (3.16), and then Lemma 3, yields the corresponding estimates, where σ_{12} > 0 is an adjustment parameter.
Substituting (3.25) into (3.22), one obtains the Step 2 inequality.

Step k (3 ≤ k ≤ n). In view of the above two steps, we can deduce the following property, whose proof can be found in the Appendix.

Proposition 1. For the kth Lyapunov function V_k : Π_k → R_+, there exist a virtual controller β_k and an adaptive law for α̂_k such that the corresponding estimate holds.
Selection of the observer gains
In this section, we will analyze the appropriate values of the gains γ_i (i = 2, · · · , n) and some constant parameters in the output-feedback controller.
Now, a proposition is provided to help determine the gain constants; its proof is given in the Appendix.
Stability analysis
To state the main result, the following theorem is presented.
i) the output constraint of system (2.1) is not violated in the sense of probability; ii) all the signals in the closed-loop stochastic nonlinear system (2.1) are SGFSP.
Proof. i) Let μ_0 = min{ℓ_1, d_1, · · · , d_n, ℓ̄_2, · · · , ℓ̄_n, θ̄_2γ, · · · , θ̄_nγ} and π_0 = Q̄. Then, Eq. (3.51) can be expressed as (3.52). We can easily get from Eq. (3.52) that, for x(0) = (x_1(t_0), · · · , x_n(t_0))^T satisfying x_1(t_0) ∈ Π_1, the mean of V(t) is bounded, which implies that V is bounded in probability. It can be directly deduced from the definition of V that P{V_B(ξ_1) < ∞} = 1 (3.54). Consequently, it is clear that P{|y(t)| < ε} = P{|ξ_1(t)| < ε} = 1, which demonstrates that the output constraint of system (2.1) is not violated in the sense of probability. ii) For any 0 < ς_0 < 1, it is easy to get from Lemma 3 that (3.55) holds. Then, substituting (3.55) into (3.52) yields the corresponding bound, where χ(0) = (χ_1(t_0), · · · , χ_n(t_0))^T, ẽ(0) = (ẽ_2(t_0), · · · , ẽ_n(t_0))^T, α̃(0) = (α̃_1(t_0), · · · , α̃_n(t_0)), and 0 < l_0 < 1 is a constant. It then follows from Lemma 1 that, for all t ≥ t_0 + T*, E[V^{1−ς_0}(χ, ẽ, α̃)] ≤ π̄_0/(μ_0(1 − ς_0)), which means that all the signals in the closed-loop system are semi-globally finite-time stable in probability.

Remark 6. In this paper, we construct an output-feedback controller rather than the state-feedback controllers designed in existing results on output constraints. On the other hand, it should be pointed out that the considered constraint is symmetric rather than asymmetric, so the proposed scheme cannot be directly employed or extended to the case of asymmetric constraints. However, a control scheme based on a new BLF could be developed for asymmetric output constraints in a way similar to this paper. In addition, another limitation is that all of the fractional powers are equal to p; if the p_i's take different values, the proposed strategy seems not applicable. In the future, we will address these two issues.
Simulation example
The effectiveness of the proposed strategy is verified with the following system.
where the output y = x_1 is measurable and constrained by Π_1 = {y(t) ∈ R, |y(t)| < 1}, and the state x_2 is unmeasurable. According to the controller design procedure, we can design the finite-time output-feedback controller, the adaptive laws, and the observer, respectively; the resulting controller is denoted by (4.2). Figure 1 provides the trajectory of x_1(t), which indicates that the system output constraint is not violated under controller (4.2). Meanwhile, the trajectories of x_2(t) and x̂_2(t) are given in Figure 2, which shows that x_2(t) is well estimated by x̂_2(t). Moreover, the trajectory of the controller u is displayed in Figure 3. Finally, Figure 4 shows the curves of the adaptive parameter vector under the developed strategy. One can clearly observe from these figures that all the signals of system (4.1) are semi-globally finite-time stable in probability under controller (4.2).
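Simulations like those in Figures 1-4 are typically generated by Euler-Maruyama discretization of the stochastic dynamics. The sketch below is generic: the drift, diffusion, and fractional-power feedback law are placeholders chosen for illustration, not the exact system (4.1) or controller (4.2).

```python
# Generic Euler-Maruyama sketch for simulating a controlled SDE
# dx = f(x, u) dt + g(x) dw. All functions below are placeholders.
import numpy as np

def f(x, u):
    return np.array([x[1] ** 3, u ** 3 + 0.1 * np.sin(x[0])])  # p = 3 style powers

def g(x):
    return np.array([0.05 * x[0], 0.05 * x[1]])

def controller(y):
    return -2.0 * np.sign(y) * np.abs(y) ** (1 / 3)  # fractional-power feedback

rng = np.random.default_rng(0)
dt, steps = 1e-3, 20_000
x = np.array([0.5, -0.3])
for _ in range(steps):
    u = controller(x[0])                        # feedback uses the output y = x1
    dw = rng.normal(scale=np.sqrt(dt), size=2)  # Wiener increments
    x = x + f(x, u) * dt + g(x) * dw
print("final state:", x)
```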
Conclusion
In this paper, the output-feedback controller design problem is investigated for a class of p-norm stochastic nonlinear systems with output constraints. Using a tan-type BLF, an adaptive fuzzy state-feedback controller is proposed via the adding a power integrator technique. Then, a finite-time fuzzy output-feedback controller is constructed by combining the proposed state-feedback controller with a reduced-order observer. Both the rigorous proof and the simulation example verify that the designed controller ensures that the system output constraint is satisfied and that all signals are semi-globally finite-time stable in probability. In the future, we will consider asymmetric constraints, different fractional powers, and multi-input multi-output stochastic nonlinear systems.
In addition, we have γ Hẽ | 2020-12-17T09:11:20.953Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "5a183195bb692dc06d5b688be7fdbe64e1c312ee",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3934/math.2021136",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "af8c5666003b2e47daf5c93c26a8ad259bf163d8",
"s2fieldsofstudy": [
"Engineering",
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
250496900 | pes2o/s2orc | v3-fos-license | Optimal Strategic Mining Against Cryptographic Self-Selection in Proof-of-Stake
Cryptographic Self-Selection is a subroutine used to select a leader for modern proof-of-stake consensus protocols, such as Algorand. In cryptographic self-selection, each round $r$ has a seed $Q_r$. In round $r$, each account owner is asked to digitally sign $Q_r$, hash their digital signature to produce a credential, and then broadcast this credential to the entire network. A publicly-known function scores each credential in a manner so that the distribution of the lowest scoring credential is identical to the distribution of stake owned by each account. The user who broadcasts the lowest-scoring credential is the leader for round $r$, and their credential becomes the seed $Q_{r+1}$. Such protocols leave open the possibility of a selfish-mining style attack: a user who owns multiple accounts that each produce low-scoring credentials in round $r$ can selectively choose which ones to broadcast in order to influence the seed for round $r+1$. Indeed, the user can pre-compute their credentials for round $r+1$ for each potential seed, and broadcast only the credential (among those with a low enough score to be the leader) that produces the most favorable seed. We consider an adversary who wishes to maximize the expected fraction of rounds in which an account they own is the leader. We show such an adversary always benefits from deviating from the intended protocol, regardless of the fraction of the stake controlled. We characterize the optimal strategy; first by proving the existence of optimal positive recurrent strategies whenever the adversary owns less than $38\%$ of the stake. Then, we provide a Markov Decision Process formulation to compute the optimal strategy.
Introduction
Proposed by Nakamoto in 2008, Bitcoin was one of the major innovations in peer-to-peer networks for electronic transactions [15]. Bitcoin is a decentralized currency without an administrator, where anyone is free to join and submit or validate transactions in a public ledger. To modify the state of the ledger, users must publicly broadcast transactions. Those transactions are included in a block by miners and validated via a Proof-of-Work (PoW). The significant computational resources required to validate a block, coupled with the reward for validating blocks, create an economic incentive for miners to validate blocks correctly.
Unfortunately, Bitcoin is not without limitations. The proof-of-work consensus mechanism was designed to be energy-intensive: the global energy consumption from all Bitcoin miners exceeds that of all but 26 countries [1]. Moreover, the economies of scale from designing and purchasing large quantities of specialized hardware for proof-of-work mining have demonstrated that Bitcoin is more prone to centralization than initially thought [2]. In an attempt to address the limitations of proof-of-work, many alternative blockchain designs have been proposed [17,11,13,6].
In particular, proof-of-stake blockchains replace proof-of-work with a randomized mechanism that selects a miner as the leader who proposes a new block. To avoid sybil attacks, where an adversary impersonates multiple identities, the mechanism hopes to choose a miner with probability proportional to their fraction of owned coins (commonly referred to as their stake in the system). The main challenge for such systems is to sample a miner without sacrificing decentralization and, at the same time, preserve the security and economic properties of the blockchain.
Several proposals for "Bitcoin-like" longest-chain proof-of-stake protocols have been made, but several drawbacks to this approach still exist. For example, [8] considers a longest-chain proof-of-stake blockchain that requires a randomness beacon [16,12], and proves that this qualitatively preserves the mining incentives of Bitcoin, but the need for a trusted external randomness beacon is prohibitive in most settings. Without a trusted external randomness beacon, proof-of-stake implementations often rely on using the blockchain itself as a source of pseudorandomness. Because miners can often predict the randomness in such protocols, Brown-Cohen et al. [3] showed that a large class of longest-chain proof-of-stake protocols are vulnerable to profitable deviations that they term "predictable selfish mining." Thus, current longest-chain proof-of-stake cryptocurrencies must either: (a) propose a trusted randomness beacon, or (b) propose clever applications of cryptography to minimize the ability of strategic miners to manipulate the blockchain pseudorandomness. While active research agendas aim to address both (a) and (b), these are currently notable barriers to longest-chain proof-of-stake protocols.
One alternative design that relies on neither a trusted beacon nor even an underlying longest-chain protocol is cryptographic self-selection. This procedure is adopted in blockchains like Algorand [10,6]. In cryptographic self-selection, each round r has a seed Q_{r−1}, used to sample the leader ℓ_r who proposes the r-th block in the blockchain. In the ideal case where Q_{r−1} is unbiased, a clever cryptographic construction ensures that a miner owning an α ∈ [0, 1] fraction of the coins has probability α of becoming the leader.
Unlike longest-chain blockchains, blockchains using cryptographic self-selection are immutable in the sense that once the leader validates B_r, B_r cannot be modified. Nevertheless, one limitation of such protocols is that the leader for round r may have some influence over the seed Q_{r+1} for round r + 1. For example, Chen and Micali [6] note that it is possible for an adversary to bias seeds in future rounds of Algorand's cryptographic self-selection. In this work, we study quantitatively the limits of how much these deviations might benefit an economically-motivated adversary. Specifically, we assume that the adversary wishes to maximize the fraction of rounds during which they are the leader. If the adversary were honest, this would be exactly an α fraction. We seek to understand f(α) ≥ α, the maximum fraction of rounds that an adversary with an α fraction of the stake can lead in expectation.¹
Overview of Results and Roadmap
Our main contributions are as follows: • First, we provide a formal stochastic process that captures the game played by strategic players who want to be leader as often as possible in a cryptographic self-selection protocol. We provide a detailed, formal description of the game in Section 2, and prove several basic facts in Section 3.1. These sections provide a clean stochastic process whose analysis directly informs the rewards achievable by strategic players in blockchains based on cryptographic self-selection.
• In this game, it is a priori possible that an extremely strong strategy exists that lets a player with an α < 1 fraction of the stake win unboundedly many rounds in a row in expectation. We prove that when α < (3 − √5)/2 ≈ 0.38, this is not possible, and the optimal fraction of rounds won by a strategic player with α < (3 − √5)/2 is < 1. Note that in many protocols, including [6], that use cryptographic self-selection, owning α > 1/3 of the stake is already enough to subvert consensus, so α < 1/3 is the most relevant range where strategic mining is a concern. We prove this in Section 4.
• We pose a simple strategy, the 1-Lookahead strategy, which strictly outperforms the honest strategy for all α. We fully analyze the expected reward of this strategy in Section 5.
• Finally, we describe how the optimal strategy can be found by solving a series of MDPs. As the MDPs are infinite, this unfortunately does not immediately give an efficient algorithm. This appears in Section 6.
Related Work
Seminal work of [7] established that strategic mining in Bitcoin is possible. Hundreds of follow-up works pushed their ideas in various directions. One notable work, which is similar
¹ This is the same objective as in prior work [7,19,3,8]. In prior work, this objective function was motivated by the block reward associated with the creation of each block. Even if a protocol has no block reward, there is still some economic incentive associated with creating a block. This could be due to transaction fees [9,18], or side contracts that the leader is able to execute in deciding what to include. We do not explicitly model the direct connection between being a leader in a round and the monetary reward, and treat this per-block incentive as exogenous.
in spirit to ours, is [19], who nail down the optimal achievable reward for a miner who has an α fraction of the computational power. Follow-up works such as [3,8] consider similar questions for proof-of-stake instead of proof-of-work, but to the best of our knowledge, all prior work in this agenda considers longest-chain protocols. In comparison to this line of work, ours is the first (to our knowledge) to consider a formal model of strategic behavior in cryptographic self-selection, which is used in protocols based on Byzantine consensus.
Chen and Micali [6] develop the theoretical protocol for Algorand, and propose that manipulations of the cryptographic self-selection protocol may be possible. They do not propose a concrete manipulation, but do upper bound the maximum fraction of rounds that certain kinds of adversaries can possibly be the leader. In comparison to their work, our work proposes a formal model to capture the entire strategy space in cryptographic self-selection protocols, and our 1-Lookahead strategy also provides the first concrete profitable manipulation.
Background and Setup
In this section, we provide our formal model and some preliminary observations. Our model captures the cryptographic self-selection protocol of [6], but we remind the reader that our model only concerns leader selection; this process is independent of block creation, consensus, etc.
Blockchain Protocols with Finality
Many modern proof-of-stake blockchain protocols, such as Algorand [6,10], differ significantly from longest-chain protocols like Bitcoin. All blockchain protocols maintain a ledger, which is a sequence of blocks B_0, B_1, . . . , B_t, . . .. Protocols with finality differ from Bitcoin in that there are no forks. Once round t has concluded, there is a single well-defined B_t, which will stay fixed throughout eternity.
To produce the block for round t, blockchain protocols with finality run an underlying consensus protocol. These consensus protocols often require a leader ℓ_t, selected based on B_1, . . . , B_{t−1}, who gets to propose the block which could potentially be ratified as B_t. Note that because there are no forks, the leader ℓ_t is well-defined.
Cryptographic Self-Selection to Determine a Leader
One problem that any blockchain protocol with finality must resolve is how to determine ℓ_t as a function of B_1, . . . , B_{t−1}, and this must be done with care. For example, if there are N coins indexed from 1 to N, one naive proposal might simply declare the owner of coin t (mod N + 1) to be the leader at round t. This is vulnerable to a predictability attack: an attacker knows exactly which coins they need to own in which rounds, and can be solely responsible for block proposal for many rounds in a row. Another naive proposal might declare the owner of coin HASH(B_{t−1}) (mod N + 1) to be the leader at round t. This is vulnerable to a grinding attack: ℓ_{t−1} has many options for the contents they include in B_{t−1}, and can try arbitrarily many contents until HASH(B_{t−1}) results in a coin they own (a toy simulation of this grinding advantage appears after the list below). In general, the goal of a leader-selection protocol is to pick a leader in a manner so that: • When every user honestly follows the intended protocol, the distribution of each ℓ_t is proportional to the stake, and i.i.d. across rounds. That is, each user with an α fraction of the total stake is selected as the leader in round t with probability α, independently across rounds.
• A self-interested user has little ability to predict future rounds in which they could become the leader, or to increase the fraction of rounds in which they are the leader.
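To make the grinding attack concrete, here is a minimal sketch (our illustration, not from the paper; the coin count, the set of owned coins, and the nonce padding are all invented for the example) of a round-(t−1) leader trying block contents until the hash lands on a coin they own:

```python
# Toy grinding attack against "leader = owner of coin HASH(B_{t-1}) (mod N + 1)".
# All names here (N, my_coins, the nonce field) are illustrative assumptions.
import hashlib

N = 100                  # total number of coins, indexed 1..N
my_coins = {7, 42, 99}   # coins the round-(t-1) leader happens to own

def coin_from_hash(payload: bytes) -> int:
    """Map candidate block contents to the coin index that would lead round t."""
    digest = hashlib.sha256(payload).digest()
    return int.from_bytes(digest, "big") % N + 1

# The leader "tries arbitrarily many contents" by varying a free-form field.
for nonce in range(10_000):
    payload = b"block-contents|" + nonce.to_bytes(4, "big")
    if coin_from_hash(payload) in my_coins:
        print(f"grind succeeded at nonce {nonce}; our coin leads round t")
        break
```

With 3 of 100 coins, each attempt succeeds with probability about 3%, so a handful of hash evaluations typically suffices; this is exactly why hashing the previous block is an unsafe source of leader randomness.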
Cryptographic self-selection is a clever approach, used by Algorand [6,10], to select a leader. Before defining the full protocol, we need two basic tools.
Tools for Cryptographic Self-Selection
One useful cryptographic tool towards cryptographic self-selection is a verifiable random function [14]. For the purposes of this paper, we'll use an ideal verifiable random function.
Definition 2.1 (Ideal Verifiable Random Function). An Ideal Verifiable Random Function (Ideal VRF) satisfies the following properties:

• Setup. There is an efficient randomized process to produce a secret key sk and a public key pk that parameterize the function f_sk(·).

• Private Computability. There is an efficient algorithm A such that for all sk, A(x, sk) = f_sk(x) (that is, f can be efficiently computed with knowledge of the secret key sk).

• Perfect Randomness. For all inputs x ≠ y, the random variables f_sk(x) and f_sk(y) are independent, and uniformly drawn from [0, 1], conditioned on knowledge of pk. In particular, this implies that f_sk(x) is distributed uniformly on [0, 1] to any user who sees only pk, even after that user has seen any number of pairs (x_1, f_sk(x_1)), ..., (x_k, f_sk(x_k)), where x_i ≠ x for all i.

• Verifiable. There exists an efficient algorithm V that takes as input x, y, pk and outputs yes if and only if y = f_sk(x).
Intuitively, an Ideal VRF allows a holder of sk to draw a random number uniformly from [0, 1] in a way that is unpredictable to anyone without knowledge of sk, yet also in a verifiable manner. The distinction between an Ideal VRF and a VRF lies in perfect randomness: it is generally not possible to have the random variables {f_sk(x_1), ..., f_sk(x_k)} be statistically indistinguishable from independent uniformly random draws from [0, 1] conditioned on pk. VRFs used in practice instead provide that the distribution of {f_sk(x_1), ..., f_sk(x_k)} be computationally indistinguishable from independent uniformly random draws from [0, 1], conditioned on pk.²

² We omit a formal definition of (non-Ideal) VRFs, which is cumbersome and not relevant to our results. In particular, all proposed deviations work even when the protocol has access to an Ideal VRF, and therefore they also work when the protocol instead uses a VRF.

To have a simple example of a (non-Ideal) VRF in mind, consider any digital signature scheme and hash function. On input x, first digitally sign x to obtain SIG(x), and then hash it (this is the VRF used in [6]). Indeed, with the secret key, a user can efficiently compute a digital signature of any x and hash it. Similarly, correct computation of the hash function can be efficiently verified by anyone, and correct computation of the digital signature can be efficiently verified by anyone with the public key. Any input SIG(x) to the hash function is mapped to a uniformly-random draw from [0, 1], independently of all other inputs, and the digital signature scheme ensures that anyone without the secret key cannot guess SIG(x), even with knowledge of x and any number of input/output pairs to SIG.
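As a concrete illustration of this sign-then-hash construction, here is a minimal sketch (ours, not the paper's code). It assumes the third-party `cryptography` package; Ed25519 is chosen because its signatures are deterministic, which makes f_sk a well-defined function of x:

```python
# Sketch of a sign-then-hash VRF: f_sk(x) = SHA-256(SIG_sk(x)) mapped to [0, 1).
# The signature doubles as the proof that the output was computed correctly.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class ToyVRF:
    def __init__(self):
        self.sk = Ed25519PrivateKey.generate()
        self.pk = self.sk.public_key()

    def evaluate(self, x: bytes) -> tuple[float, bytes]:
        """Return (y, proof) with y in [0, 1); only the key holder can do this."""
        sig = self.sk.sign(x)                         # deterministic SIG(x)
        digest = hashlib.sha256(sig).digest()         # hash the signature
        return int.from_bytes(digest, "big") / 2**256, sig

    @staticmethod
    def verify(pk, x: bytes, y: float, sig: bytes) -> bool:
        """Anyone with pk can check that y really is f_sk(x)."""
        try:
            pk.verify(sig, x)                         # raises if sig is forged
        except InvalidSignature:
            return False
        digest = hashlib.sha256(sig).digest()
        return y == int.from_bytes(digest, "big") / 2**256

vrf = ToyVRF()
y, proof = vrf.evaluate(b"round-7-seed")
assert ToyVRF.verify(vrf.pk, b"round-7-seed", y, proof)
```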
A second tool we will need is a concept that enables the leader to be selected proportional to the stake, rather than uniformly at random among all accounts.

Definition 2.2 (Balanced Scoring Function). A scoring function S(·, ·) is balanced if for all n ∈ ℕ and all α_1, ..., α_n ∈ ℝ_{>0}^n:

Pr_{X_1,...,X_n i.i.d. ~ U([0,1])}[ arg min_j {S(X_j, α_j)} = i ] = α_i / (Σ_{j=1}^n α_j), for all i ∈ [n].

Observe that, if ties in arg min are broken lexicographically, this implies that for all α, the distribution of S(X, α) when X is drawn uniformly from [0, 1] must have no point-masses.

Intuitively, one can think of arg min_i {X_i} as the winner of a random process when each X_1, ..., X_n is drawn independently from the uniform distribution on [0, 1], denoted by U([0, 1]), and each player is equally likely to win. A balanced scoring function allows us to redistribute the probability of winning to be proportional to α_i instead.
Cryptographic Self-Selection Protocol
Now, we define the cryptographic self-selection protocol, the leader-selection protocol analyzed throughout our paper.
Definition 2.3 (Cryptographic Self-Selection Protocol A). The Cryptographic Self-Selection Protocol A (CSSPA) is the following:

• Every account i sets up an Ideal VRF with secret key sk_i and public key pk_i. α_i ∈ [0, 1] refers to the fraction of the total stake that account i owns.

• Q_r denotes the seed for round r. The initial seed is a uniformly random number in [0, 1] constructed via a coin tossing protocol [4].

• In round r, each user i computes their credential Cred_i^r := f_{sk_i}(Q_r). Every user can either broadcast, or not broadcast. B_r denotes the set of users who broadcast in round r.

• There is a publicly-known balanced scoring function S. The leader ℓ_r for round r is arg min_{i∈B_r} {S(Cred_i^r, α_i)}.

• Q_{r+1} := Cred_{ℓ_r}^r. That is, the seed for round r + 1 is the credential of the leader for round r.
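The following self-contained sketch (ours; a seeded PRNG stands in for each account's Ideal VRF, which is of course not cryptographically sound) simulates a few honest rounds of CSSPA, using the scoring function S(x, α) = −ln(x)/α introduced later in this section:

```python
# Simulate honest CSSPA rounds: everyone broadcasts, the lowest score leads,
# and the leader's credential becomes the next seed.
import math, random

def honest_round(stakes: list[float], seed: float) -> tuple[int, float]:
    """Return (leader index, next seed Q_{r+1}) when all accounts broadcast."""
    # random.Random(f"{i}:{seed}") is a deterministic stand-in for f_{sk_i}(Q_r).
    creds = [random.Random(f"{i}:{seed}").random() for i in range(len(stakes))]
    scores = [-math.log(c) / a for c, a in zip(creds, stakes)]
    leader = min(range(len(stakes)), key=scores.__getitem__)
    return leader, creds[leader]          # Q_{r+1} := leader's credential

seed, stakes = 0.12345, [0.5, 0.3, 0.2]
for r in range(5):
    leader, seed = honest_round(stakes, seed)
    print(f"round {r}: leader = account {leader}, next seed = {seed:.6f}")
```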
We note a few quick observations about CSSPA:

• Aside from network/security/cryptography attacks, which are not the focus of this paper, the action space of a single account in each round is binary: broadcast your credential, or don't. A single player may own multiple accounts. Therefore, the actions a single player may take in our game are to: a) decide how to divide their stake among multiple accounts, and b) pick which subset of credentials to broadcast.

• We'll refer to the honest strategy as the one which announces all credentials in every round.

• Assuming all players are honest, each leader is drawn i.i.d. and proportional to α. This follows immediately from the definition of an Ideal VRF and a balanced scoring function.

• Assuming that all players are honest, the protocol is robust to Sybil attacks. That is, a player who truly controls an α_i fraction of the total stake can put all of their funds into a single account, or split their funds arbitrarily over any number of accounts. No matter how they divide their funds, the probability that an account owned by this player is selected as leader is exactly α_i.

• Much analysis of CSSPA can be done agnostically to the particular balanced scoring function. For example, Proposition 2.1 establishes that our analysis holds for a wide class of "canonical" balanced scoring functions. In particular, our analysis chooses a particularly simple balanced scoring function for the benefit of tractability, but our analysis holds for the balanced scoring function used in [6] as well, via Proposition 2.1.
Throughout our paper, we'll use the balanced scoring function S(x, α_i) := −ln(x)/α_i. This allows us to leverage basic facts about independent draws from exponential distributions.

Definition 2.4 (Exponential Distribution). The exponential distribution with rate α is the distribution with Cumulative Density Function (CDF) F_α(x) := 1 − e^{−αx}, for all x ≥ 0. We refer to Exp(α) as one independent sample from the exponential distribution with rate α. For simplicity of notation in later calculations, we will denote by Exp(0) a point-mass at +∞. Exponential distributions have many relevant properties that we remind the reader of in Appendix A.

We claim that S(x, α) := −ln(x)/α is indeed a balanced scoring function.

Proof. We first show that S(x, α_i) is distributed according to Exp(α_i) when x is drawn uniformly from [0, 1]. This follows essentially because −ln(x)/α_i is the inverse reverse-CDF of Exp(α_i). To see the claim, we compute the probability that S(x, α_i) > y, for any y:

Pr[S(x, α_i) > y] = Pr[−ln(x)/α_i > y] = Pr[x < e^{−α_i y}] = e^{−α_i y}.

This means that the CDF of S(x, α_i), when x is drawn uniformly from [0, 1], is exactly 1 − e^{−α_i y}, and therefore this distribution is equal to Exp(α_i). Now, the fact that S(·, ·) is a balanced scoring function follows from Corollary A.1 (which states that the minimum of X_1, ..., X_n, when each X_i is drawn independently from Exp(α_i), is equal to X_i with probability α_i / Σ_j α_j, for all i).
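The claim is easy to check empirically. The quick Monte Carlo sketch below (our illustration, not part of the paper) draws uniform credentials, scores them with S(x, α) = −ln(x)/α, and confirms that each account wins at a rate close to its stake:

```python
# Monte Carlo sanity check that S(x, alpha) = -ln(x)/alpha is balanced.
import math, random
from collections import Counter

stakes = [0.5, 0.3, 0.2]
trials = 100_000
wins = Counter()
for _ in range(trials):
    scores = [-math.log(random.random()) / a for a in stakes]
    wins[scores.index(min(scores))] += 1

for i, a in enumerate(stakes):
    print(f"account {i}: stake {a:.2f}, empirical win rate {wins[i]/trials:.3f}")
```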
We conclude this section by formally establishing that our analysis extends to a broad class of scoring functions.
Before proceeding, we give quick context for each bullet. Balanced scoring functions where S(·, α) is not monotone decreasing exist, but assuming that S(·, α) is monotone decreasing is w.l.o.g. Indeed, for any α, let F_α denote the CDF of the random variable S(X, α) when X is drawn uniformly from [0, 1]. Now consider redefining S'(x, α) := F_α^{−1}(1 − x). Then the distributions of S(X, α) and S'(X, α) are identical, but S'(·, α) is monotone decreasing. We conjecture that all balanced scoring functions satisfy the second two bullets, but we suspect that rigorously establishing this will require significant analysis. As this is not the focus of our paper, we instead treat these bullets as reasonable assumptions. Indeed, the balanced scoring function used by Algorand is canonical.

Proposition 2.1. The game induced by CSSPA with a canonical balanced scoring function is independent of the particular canonical balanced scoring function used. Formally, for two distinct canonical balanced scoring functions S, S', the games induced by CSSPA are identical. Specifically, for all players i, there is a bijective mapping f_i from strategies of player i in the CSSPA with S to strategies of player i in the CSSPA with S'. For all i, the payoff that player i receives in the CSSPA with S under strategy profile s⃗ is equal to the payoff that i receives in the CSSPA with S' under strategy profile (f_i(s_i))_i.
A complete proof of Proposition 2.1 appears in Appendix B.
Our Model: Strategic Mining in Cryptographic Self-Selection
This section formally defines our model and, in particular, the optimization problem considered by a strategic player. Like prior work [7,5,8], we consider a single strategic player who is best responding to a profile of honest players. The purpose of this analysis, like in prior work, is to understand the maximum disruption that can be caused when a 1 − α fraction of the stake is owned by honest players, and an α fraction of the stake is owned by strategic players. We now formalize the strategy space of the strategic player.
Definition 3.1 (Strategy Space in CSSPA). CSSPA is parameterized by α, the fraction of stake owned by the strategic player, α⃗, the distribution of the remaining stake among honest players, and β ∈ [0, 1], the network connectivity strength of the strategic player. We'll refer to this as a β-strong player. When β = 1, we'll simply refer to the player as strong, and when β = 0 we'll refer to the player as weak. The strategic player knows α, β, α⃗.

In round r, the strategic player in CSSPA has the following information and makes the following decisions, in order:

1. The strategic player can distribute their total stake of α arbitrarily among as many accounts as they desire. Refer to this set as A.

2. The strategic player knows Q_r, and knows that all other players are honest.

3. For a set of accounts B such that B ∩ A = ∅ and Σ_{j∈B} α_j = β · (1 − α), the strategic player learns Cred_j^r for all accounts j ∈ B. The strategic player does not learn Cred_j^r for any j ∉ A ∪ B (that is, the player only knows that each S(Cred_j^r, α_j) will be drawn from Exp(α_j), independently).

4. Observe that the strategic player can compute Cred_i^r and also S(Cred_i^r, α_i), for all accounts i ∈ A.

5. Observe further that for all j ∈ A ∪ B, Cred_j^r is a possible seed for Q_{r+1}. So the player can also pre-compute a hypothetical Cred_i^{r+1}, assuming Q_{r+1} = Cred_j^r, for each account i ∈ A and j ∈ A ∪ B. But observe that the strategic player cannot execute this computation for i ∉ A (because they cannot compute the Ideal VRF for accounts outside A).

6. More generally, for any k, and any list of accounts i_0, ..., i_k such that i_0 ∈ A ∪ B, and i_j ∈ A for all j > 0, the player can also pre-compute what Cred_{i_k}^{r+k} would be, assuming that ℓ_{r+j} = i_j for all j ∈ {0, ..., k − 1}.

7. The strategic player selects a subset A_r ⊆ A, and broadcasts all credentials in A_r.
We will consider optimal strategies for all α, β, α⃗. Note that the role of β determines how much information the strategic player has about other players' credentials before deciding which credentials of their own to broadcast. Before getting into our main analysis, we prove some basic facts about optimal strategies in this model.
Basic Facts on Optimal Strategies
First, we define the reward achieved by a particular strategy π, which the strategic player aims to optimize. A priori, the reward can depend on α, β, and the distribution of the remaining (1 − α) fraction of stake, α⃗.

Definition 3.2 (Reward of a Strategy). A strategy π prescribes an action to take during each round. X_r^{α,β,α⃗}(π) is an indicator random variable for the event that the strategic player is the leader during round r, when the game with parameters α, β, α⃗ is played. The reward of a strategy π is simply the expected fraction of rounds where the strategic player is the leader. We drop the superscript and write X_r(π) whenever α, β, α⃗ is clear from context. Formally:

Rev^{α,β,α⃗}(π) := lim inf_{T→∞} E[ Σ_{r=1}^T X_r^{α,β,α⃗}(π) ] / T.

The expectation is taken over the randomness in the Ideal VRFs in every round, assuming that all non-strategic miners are honest. We use the notation val(α, β, α⃗) := sup_π {Rev^{α,β,α⃗}(π)}. We say that a strategy π is ε-optimal for parameters α, β, α⃗ if Rev(π) ≥ val(α, β, α⃗) − ε.

Next, we produce a series of refinements concerning ε-optimal strategies, which will allow us to greatly simplify the analysis of strategies in CSSPA. First, we observe that the strategic player need not consider any set with |A_r| > 1.
Proof. Observe that the strategic player can compute S(Cred_i^r, α_i) for all i ∈ A. If they broadcast a set A_r ≠ ∅, then the leader will be i* := arg min_{i∈A_r} {S(Cred_i^r, α_i)} if and only if S(Cred_{i*}^r, α_{i*}) < S(Cred_j^r, α_j) for all j ∉ A. Observe that this is exactly what would happen if the strategic player instead broadcast only {i*}. So π' will broadcast only {i*}, and this results in the same leader as using π.

If instead the strategic player chooses to broadcast A_r = ∅, then π' will broadcast ∅ as well. This clearly results in the same leader as using π, because the actions are identical.

The leader is the same in both cases, and π' only ever broadcasts (at most) a single credential.
Next, we show that optimal strategies split their stake among as many accounts as possible.

Lemma 3.1. Consider a strategy π where the strategic player divides their stake into n wallets with stake α_i > 0, for i ∈ [n]. Then there is a strategy π' where the strategic player instead divides their stake into 2n wallets with stake α'_i > 0, for all i ∈ [2n], and Rev(π') = Rev(π).

Proof. The strategy π' defines 2n wallets, with stake α'_i := α_i/2 for i ≤ n and α'_i := α_{i−n}/2 for n < i ≤ 2n.

Observe that, conditioned on Q_r, S(Cred_i^r, α_i) is distributed according to Exp(α_i), independently for all i. Similarly, S(Cred'^r_j, α'_j) is distributed according to Exp(α'_j), independently for all j. Define now the random variable j(i) := arg min{S(Cred'^r_i, α'_i), S(Cred'^r_{n+i}, α'_{n+i})}, and denote by Y_i^r := S(Cred'^r_{j(i)}, α'_{j(i)}). Then by Lemma A.1, Y_i^r is distributed according to Exp(α'_i + α'_{n+i}) = Exp(α_i), independently for all i ∈ [n]. Therefore, Y_i^r and S(Cred_i^r, α_i) are identically distributed. Therefore, we can couple executions of π and π' so that Y_i^r = S(Cred_i^r, α_i) for all r, i, and also so that S(Cred_j^r, α_j) is identical for all r and all j ∉ A. Consider now the strategy π' that does the following. If π does not broadcast a credential, then π' also does not broadcast a credential. If π broadcasts i*, then π' broadcasts j(i*). Observe now that the scores of the credentials broadcast by π and π' are identical (due to the coupling), and the scores of credentials broadcast by the honest players are also identical. Therefore, ℓ_r = i* under π if and only if ℓ_r = j(i*) under π'. Moreover, Q_{r+1} is identical under both executions. We have therefore coupled the executions of π and π' so that X_r^{α,β,α⃗}(π) = X_r^{α,β,α⃗}(π') for all r, and therefore Rev^{α,β,α⃗}(π) = Rev^{α,β,α⃗}(π').
Next, we argue that it is w.l.o.g. to consider two honest players, one with a β · (1 − α) fraction of the stake, and the other with a (1 − β) · (1 − α) fraction of the stake.

Definition 3.3 (Refined CSSPA). The refined CSSPA is parameterized by α, the fraction of stake owned by the strategic player, and β ∈ [0, 1], the network connectivity strength of the strategic player. There are two honest players B and C. B owns a β · (1 − α) fraction of the stake, and C owns a (1 − β) · (1 − α) fraction of the stake.

Let α⃗' denote this two-player distribution of honest stake. For a round r in the execution with α⃗, let Y_B denote the minimum score among the honest accounts whose stakes total β · (1 − α), and Y_C the minimum score among those whose stakes total (1 − β) · (1 − α). By Lemma A.1, Y_B is distributed according to Exp(β · (1 − α)) and Y_C according to Exp((1 − β) · (1 − α)), so we may couple the two executions so that Y_B and Y_C equal S(Cred_1^r, α'_1) and S(Cred_2^r, α'_2) in the execution with α⃗'. Observe, now, that the seed for round r + 1 in the execution with α⃗ will be the minimum of Y_B, Y_C, and the score of the credential broadcast by the strategic player. In the execution with α⃗', the seed for round r + 1 will be the minimum of S(Cred_1^r, α'_1), S(Cred_2^r, α'_2), and the score of the credential broadcast by the strategic player. Therefore, Q_{r+1} is the same in both executions. Moreover, we also have X_r^{α,β,α⃗}(π) = X_r^{α,β,α⃗'}(π). We have therefore coupled the executions with α⃗ and α⃗' so that X_r^{α,β,α⃗}(π) = X_r^{α,β,α⃗'}(π) for all r, and therefore Rev^{α,β,α⃗}(π) = Rev^{α,β,α⃗'}(π).

We also observe that the strategic player may defer computation in rounds where their action cannot matter.

Proof. Observe that if min_{j∈B} {S(Cred_j^r, α_j)} < min_{j∈A} {S(Cred_j^r, α_j)}, then the seed Q_{r+1} will be equal to the credential of the minimum-score honest node, no matter what the strategic player chooses to broadcast. So no matter what they do this round, they cannot affect Q_{r+1}. Because the strategic player's actions during round r have no impact on the game, they can shift any computation they originally planned to do in round r to round r + 1. This results in a strategy π' that induces identical seeds in every round as π, but that does no computation during rounds where its action has no impact.
We now state the strategy space of the refined CSSPA. In round r, the strategic player in CSSPA has the following information and makes the following decisions, in order:

1. The strategic player can distribute their total stake of α arbitrarily among as many accounts as they desire. Refer to this set as A.

2. The strategic player knows Q_r, and knows that all other players are honest.

3. The strategic player learns Cred_B^r. The strategic player does not learn Cred_C^r (that is, the player only knows that S(Cred_C^r, (1 − β) · (1 − α)) will be drawn from Exp((1 − β) · (1 − α))).

4. Observe that the strategic player can compute Cred_i^r and also S(Cred_i^r, α_i), for all accounts i ∈ A ∪ {B}. For any k, and any list of accounts i_0, ..., i_k such that i_0 ∈ A ∪ {B} and i_j ∈ A for all j > 0, the player can also pre-compute what Cred_{i_k}^{r+k} would be, assuming that ℓ_{r+j} = i_j for all j ∈ {0, ..., k − 1}. If the strategic player learned in Step (3) that S(Cred_B^r, β · (1 − α)) < min_{j∈A} {S(Cred_j^r, α_j)}, then the player does no computation.

5. The strategic player selects an account i* to broadcast, or decides not to broadcast.
Existence of Optimal Recurrent Strategies
Recall we bootstrap the initial seed Q_0 to be drawn from U([0, 1]) via a distributed coin tossing protocol. Hence Q_0 is an unbiased seed, since it does not favor any player. Formally, we say a seed Q_{r−1} is unbiased if substituting Q_{r−1} with a fresh independent sample from U([0, 1]) results in the same distribution for X_r(π), X_{r+1}(π), ..., conditioned on all the queries to f_{sk_i} for all i up to round r − 1. Another interpretation is that the adversary did not query any f_{sk_i} on Q_{r−1} before round r begins, which suggests the adversary is indifferent about replacing Q_{r−1} with a fresh sample from U([0, 1]).

The adversary has probability at most α of becoming the leader for round r if Q_{r−1} is unbiased, because the probability that an honest miner samples the lowest-scoring credential is equal to 1 − α; the adversary can only reduce their chances of being a leader by not broadcasting their credentials.

How can the adversary build a biased Q_r provided Q_{r−1} is unbiased? For some intuition, suppose β = 1, and the adversary has the lowest-scoring credential for round r. In other words, the adversary observes the credentials of all honest miners and knows that if they broadcast some credential Cred_{i*}^r, then i* ∈ A becomes the leader for round r. However, the adversary also has the option to not broadcast any credential, in which case some account B becomes the leader. Note that the adversary already knows Cred_B^r before deciding whether they will broadcast Cred_{i*}^r or not (the assumption β = 1 implies the adversary is well connected and gets to see all other credentials before taking any action). Then, the adversary queries f_{sk_i} on Cred_{i*}^r and Cred_B^r for all i ∈ A and observes which seed would be more favorable for round r + 1 (would allow the adversary to sample credentials with the lowest scores for round r + 1). This concludes our example; in Section 5, we provide a complete description of one such strategy. As a takeaway, the adversary can bias the seed Q_r unless the credential with the lowest score comes from an account j ∉ A. It will be convenient to ask when the game reaches a round τ ≥ 1 where Q_{τ+1} is unbiased given that Q_0 is unbiased.

Definition 4.1 (Stopping Time). We call a round τ a stopping time if, for all possible strategies π, the distribution of {X_r(π)}_{r>τ}, conditioned on Q_τ and all information the adversary has during round τ, is identical to the distribution of {X_r(π)}_{r>τ} after replacing Q_{τ+1} with a uniformly random draw from [0, 1]. That is, τ is a stopping time if the game effectively resets at round τ + 1, because the adversary was unable to bias the distribution of Q_{τ+1}.
We now state the main way in which stopping times arise.

Observation 4.1. Let τ be a round such that the adversary does not query any VRF on Q_{τ+1} during any round ≤ τ. Then τ is a stopping time.

Proof. Because the adversary has not queried Q_{τ+1} on any VRF, the adversary currently believes that every future query to any VRF on Q_{τ+1} is independently drawn from U([0, 1]) (by definition of a VRF). Replacing Q_{τ+1} with any other seed that has not been queried by the adversary yields exactly the same distribution. In particular, with probability 1, a uniformly random draw from [0, 1] has not been queried by the adversary in any previous round, and therefore τ is a stopping time.

Let τ_0, τ_1, ... be a sequence of stopping times. Since we can assume the adversary's strategy resets whenever a stopping time is reached, τ_1 − τ_0, τ_2 − τ_1, ... and Σ_{r=τ_0+1}^{τ_1} X_r(π), Σ_{r=τ_1+1}^{τ_2} X_r(π), ... are sequences of i.i.d. random variables. The following result simplifies the expression for the revenue of positive recurrent strategies:

Lemma 4.1. For any positive recurrent strategy π, Rev(π) = E[ Σ_{r=1}^{τ} X_r(π) ] / E[τ], where τ is a stopping time.

Proof. Let τ_0 = 0, τ_1, τ_2, ... be the sequence of stopping times and let N(t) be the index of the most recent stopping time by time t. Then

(1/T) Σ_{r=1}^{T} X_r(π) ≈ [ (1/N(T)) Σ_{k=1}^{N(T)} Σ_{r=τ_{k−1}+1}^{τ_k} X_r(π) ] / [ (1/N(T)) Σ_{k=1}^{N(T)} (τ_k − τ_{k−1}) ].

Since N(T) → ∞ as T → ∞, the statement follows from the strong law of large numbers (Lemma A.4).
Lemma 4.1 provides a nice characterization of the revenue of positive recurrent strategies, which will be critical when studying optimal strategies. In the rest of this section, we aim to show a sufficient condition for the existence of optimal positive recurrent strategies by proving the following informal claim: for any strategy π, let τ ≥ 1 be the first round where arg min_{i∈[n]} S(Cred_i^τ, α_i) ∉ A; then τ is a stopping time, and E[τ] is finite whenever α < (3 − √5)/2 ≈ 0.382.

Definition 4.3 (Forced stopping time). Consider round r with seed Q_r. If arg min_{i∈[n]} S(Cred_i^r, α_i) ∉ A, we say r is a forced stopping time with respect to Q_r.

Lemma 4.2. If r is a forced stopping time, then r is a stopping time.

Proof. The leader ℓ_r ∉ A, because both B and C always broadcast their credentials, and one of them has the lowest score. Let j* refer to the account in {B, C} with minimum score. Then Q_{r+1} = Cred_{j*}^r regardless of the adversary's action. Now, observe that the probability that Q_{r+1} equals any credential from any previous round < r is 0 (because all credentials are drawn uniformly from [0, 1] when drawn). Moreover, because ℓ_r ∉ A, the adversary cannot possibly have known Q_{r+1} prior to round r. This is because the adversary cannot compute the VRF of ℓ_r, and ℓ_r only broadcasts Q_{r+1} during round r. Finally, the adversary did not query Q_{r+1} after learning Q_{r+1} during round r, because either the minimum account was B (in which case, by definition of Step (4), the adversary did not query it), or the minimum account was C (in which case, the adversary does not have a step in which to query any VRFs during round r after learning C's credential). Therefore, the adversary certainly did not query Q_{r+1} after learning Q_{r+1}. The only remaining possibility is that the adversary had previously decided to query Q_{r+1} at a point when all they knew is that Q_{r+1} is drawn independently from U([0, 1]), conditioned on inducing the minimum credential for round r. As this distribution is continuous (even after any conditioning), the probability that it outputs any particular credential is 0. Therefore, assuming that the adversary queries a finite number of inputs across all previous rounds, the probability that it has previously queried any VRF on Q_{r+1} during any previous round is also 0.

Therefore, Q_{r+1} has not been queried by the adversary in rounds ≤ r, and r is a stopping time.
The Branching Process
Next, we aim to show that the expected value of the forced stopping time is finite whenever the adversary owns at most 38% of the stake. Fix the seed Q_{r−1} and let j* = arg min_{j∉A} S(Cred_j^r, α_j), the honest account with the lowest score when the seed is Q_{r−1}. Let W(Q_{r−1}) denote all the accounts that could become leaders during round r when the seed is Q_{r−1}:

W(Q_{r−1}) := {i ∈ A : S(Cred_i^r, α_i) < S(Cred_{j*}^r, α_{j*})} ∪ {j*}.

The distribution of |W(Q_{r−1})| is related to the growth distribution in a Galton-Watson branching process [20]. To see this, consider a tree Tree(Q_0) where each node stores a seed. We give a recursive definition for Tree(Q_0). Initialize the tree to contain only the root Q_0, which we color black. Then, while Tree(Q_0) contains some black node Q: if |W(Q)| = 1 (so that Q induces a forced stopping time), color Q red without appending new edges; otherwise, append a new edge from Q to a black child node storing the seed Cred_i for each i ∈ W(Q), and then color Q red. Intuitively, a node is colored red without appending new edges whenever that node is a forced stopping time. A node is colored red after appending new edges if it is not a forced stopping time (and then we need to recurse on each possible subgame induced by each possible seed).

The height of Tree(Q_0) gives an upper bound for how long it takes for a game starting with seed Q_{r−1} to reach a forced stopping time. To see this, consider an omniscient adversary, who knows all secret keys (and therefore can query all VRFs in any round). Even this omniscient adversary can bias the next k ≥ 1 rounds if and only if |W(Q_{r−1})| ≥ 2 (the adversary has at least two options for the seed Q_r) and there is a value for Q_r ∈ {Cred_i^r : i ∈ W(Q_{r−1})} such that the omniscient adversary can bias the next k − 1 rounds. In other words, the omniscient adversary can bias k rounds if and only if there is a path Q_0, Q_1, ..., Q_k in the tree.

A real adversary cannot search over the entire tree for the longest path, since the real adversary cannot compute the VRFs of accounts they do not own in future rounds (they can still compute VRFs for their own accounts in hypothetical future rounds, which provides statistical information about what the tree might look like in future rounds, but they do not know the precise tree as the omniscient adversary does). However, the performance of the omniscient adversary is clearly an upper bound on the performance of the real adversary, so Tree(Q_0) provides an upper bound for the number of rounds the adversary can bias. Hence, showing that the expected height of Tree(Q_0) is finite implies that any strategy played by a strategic miner is positive recurrent.
First, we will characterize the distribution of |W(Q_{r−1})|. Formally, we will show that

Pr[ |W(Q_{r−1})| = j + 1 ] = α^j · (1 − α), for all j ≥ 0.

The notation min^{(i)}{S} refers to the i-th smallest element of S for i ≥ 1, and min^{(0)}{S} := 0. As a technical tool, we recall a useful property of exponential distributions: for all k ≥ 1, the k-th smallest element of {S(Cred_i^r, α_i)}_{i∈A} is distributed as a sum of k independent samples from Exp(α), where Exp(α) refers to an independent sample from the exponential distribution with rate α. We defer the proof to Appendix C.

Lemma 4.3. Let X_1, X_2, ... be i.i.d. copies of an exponentially distributed random variable such that min_{n∈ℕ} X_n is exponentially distributed with rate α. Then, for all i ∈ ℕ, the random variable min^{(i)}{X_1, X_2, ...} − min^{(i−1)}{X_1, X_2, ...} is an independent sample from Exp(α).

Remark. Lemma 4.3 provides a useful tool to reduce the computational cost of sampling only the best credentials for our adversary. If one wants to sample the k lowest scores among accounts in A, a naive approach would require taking |A| samples from Exp(α/|A|), sorting them in increasing order, and outputting the first k. However, from Lemma 4.3, it suffices to sample and output the sequence of running partial sums of k i.i.d. samples from Exp(α) (see the sketch below).

We now prove that the probability that the adversary has j options for the seed of round r given Q_{r−1} is α^j (1 − α).

Lemma 4.4. Let X_1, X_2, ... be i.i.d. exponentially distributed random variables such that min_{n∈ℕ} X_n is exponentially distributed with rate α. Let W be exponentially distributed with rate 1 − α, independently. Let S = {i ∈ ℕ : X_i < W}. Then Pr[|S| = j] = α^j · (1 − α) for all j ≥ 0.
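Here is a small sketch (ours) of the shortcut described in the remark: to sample the k lowest adversary scores, emit running partial sums of i.i.d. Exp(α) draws instead of sampling and sorting one score per account:

```python
# Sample the k lowest scores among the adversary's (infinitely divided) stake
# of alpha, using Lemma 4.3: successive minima differ by independent Exp(alpha).
import random

def k_lowest_scores(alpha: float, k: int) -> list[float]:
    scores, running = [], 0.0
    for _ in range(k):
        running += random.expovariate(alpha)   # next spacing is Exp(alpha)
        scores.append(running)
    return scores                              # already sorted increasingly

print(k_lowest_scores(0.3, 5))
```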
Extinction in the Branching Process
Next, we derive sufficient conditions for the expected height of Tree(Q_0) to be finite. This result will imply the existence of optimal positive recurrent strategies.

Lemma 4.5. Let Q_0 be an unbiased seed and let τ be the first forced stopping time. Then Pr[τ ≥ k] ≤ (α(2 − α)/(1 − α))^k for all k ≥ 0.

Proof. Clearly τ is upper bounded by the height of Tree(Q_0), so the event τ ≥ k implies that the height of Tree(Q_0) is at least k + 1. For all k ≥ 0 and Q ∈ [0, 1], let E_{k,Q} denote the event that Tree(Q) has height at least k + 1. Note Pr[E_{0,Q}] = 1. Then, for k ≥ 1, the event E_{k,Q} holds if and only if |W(Q)| ≥ 2 and, for some child seed Q' induced by W(Q), the sub-tree Tree(Q') has height at least k − 1. Let A_k := sup_Q Pr[E_{k,Q}]. Taking a union bound over the children and using Pr[|W(Q)| = j + 1] = α^j(1 − α),

A_k ≤ Σ_{j≥1} (j + 1) · α^j (1 − α) · A_{k−1} = (α(2 − α)/(1 − α)) · A_{k−1}.

The last line observes that the geometric series converges to α(2 − α)/(1 − α). To conclude, we prove by induction that A_k ≤ (α(2 − α)/(1 − α))^k. The base case is clear: A_0 ≤ 1. For k ≥ 1, the inductive assumption gives A_k ≤ (α(2 − α)/(1 − α)) · A_{k−1} ≤ (α(2 − α)/(1 − α))^k, as desired. This proves the statement.
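To get a feel for the bound, the sketch below (our illustration, not from the paper) simulates the branching process directly, using the offspring rule implied by Lemma 4.4 (a node spawns j + 1 children with probability α^j(1 − α) when j ≥ 1, and none when j = 0), and estimates the mean tree height; heights stay small for α well below the threshold and grow rapidly as α approaches (3 − √5)/2 ≈ 0.382:

```python
# Estimate the expected height of Tree(Q_0) via the Galton-Watson view.
import random

def tree_height(alpha: float, cap: int = 100_000) -> int:
    frontier, height = 1, 0
    while frontier and height < cap:
        children = 0
        for _ in range(frontier):
            j = 0
            while random.random() < alpha:   # Pr[j winners] = alpha^j (1-alpha)
                j += 1
            if j >= 1:                       # |W| = j + 1 >= 2: node branches
                children += j + 1
        frontier, height = children, height + 1
    return height

for a in (0.2, 0.3, 0.38):
    mean = sum(tree_height(a) for _ in range(500)) / 500
    print(f"alpha = {a}: mean height ~ {mean:.1f}")
```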
Theorem 4.1. Consider any strategy π and let Q_0 be an unbiased seed. Let τ ≥ 1 be the first forced stopping time. Then E[τ] ≤ Σ_{k≥0} (α(2 − α)/(1 − α))^k, which is finite for all α < (3 − √5)/2; in particular, every strategy is positive recurrent in this regime.

The last inequality observes that the geometric series converges for α < (3 − √5)/2. As an application of Theorem 4.1, we derive a theoretical upper bound on the revenue of any strategy: for all α < (3 − √5)/2, every strategy π satisfies Rev(π) ≤ 1 − 1/E[τ]. Figure 1 compares the curve for the theoretical upper bound with the revenue of the honest strategy.

Proof. From Theorem 4.1, for α < (3 − √5)/2, there is an optimal positive recurrent strategy π. Let τ ≥ 1 be a forced stopping time. From Lemma 4.2, τ is a stopping time, and from Lemma 4.1, Rev(π) = E[ Σ_{r=1}^{τ} X_r(π) ] / E[τ]. From linearity of expectation, Rev(π) ≤ E[τ − 1]/E[τ] = 1 − 1/E[τ]. The first inequality observes that if the adversary cannot choose Q_τ, then the adversary does not create block B_τ, so X_τ(π) = 0.
The analysis in this section shows the following: for all α < (3 − √5)/2, there is an optimal positive recurrent strategy, and the revenue of any strategy is at most 1 − 1/E[τ].

Figure 1: Maximum revenue attained by any strategy. On the left, we plot the revenue for the honest strategy and our upper bound for the maximum revenue. On the right, we plot the maximum revenue improvement relative to the honest strategy.
The 1-Lookahead strategy
This section defines the 1-Lookahead strategy for a strong adversary (β = 1), which outperforms the honest strategy for any value of α. Recall that the adversary divides their stake equally among an arbitrarily large number of accounts A. Note that this is a concrete strategy that can be used in CSSPA, and therefore its reward gives a lower bound on val(α, 1).

Definition 5.1 (1-Lookahead strategy). The strategy proceeds as follows:

1. Let r be the current round. Let W(Q_{r−1}) = {i ∈ A : S(Cred_i^r, α_i) < min_{j∉A} S(Cred_j^r, α_j)} be the collection of potential winners for the adversary.

2. If |W(Q_{r−1})| = 0, broadcast nothing this round and return to Step 1.

3. If |W(Q_{r−1})| ≥ 1, for each potential winner i ∈ W(Q_{r−1}) and each account j ∈ A, sample the credential Cred_{i,j}^{r+1} = f_{sk_j}(Cred_i^r). Let j(i) = arg min_{j∈A} Cred_{i,j}^{r+1}.

4. Let i* ∈ W(Q_{r−1}) be the potential winner whose induced seed is most favorable for round r + 1, i.e., i* = arg min_{i∈W(Q_{r−1})} Cred_{i,j(i)}^{r+1}.

5. Broadcast Cred_{i*}^r at round r and Cred_{j(i*)}^{r+1} at round r + 1. Return to Step 1.
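The strategy is simple enough to simulate. The sketch below (ours) estimates its revenue for a strong adversary using two facts from this paper's toolkit: with infinitely divided stake, the adversary's successive lowest scores arrive as partial sums of i.i.d. Exp(α) draws (Lemma 4.3), and the best honest score is an independent Exp(1 − α) (Lemma A.1):

```python
# Monte Carlo estimate of the 1-Lookahead revenue (beta = 1), by renewal cycles:
# a cycle is 1 round if the adversary has no winner, else 2 rounds.
import random

def one_lookahead_revenue(alpha: float, cycles: int = 500_000) -> float:
    wins, rounds = 0, 0
    for _ in range(cycles):
        honest = random.expovariate(1 - alpha)     # best honest score, round r
        j, s = 0, random.expovariate(alpha)
        while s < honest:                          # count adversary winners
            j, s = j + 1, s + random.expovariate(alpha)
        if j == 0:
            rounds += 1                            # forced stopping time
            continue
        wins, rounds = wins + 1, rounds + 2        # round r is surely won
        # Round r+1: the best of j re-seedings has min score ~ Exp(alpha * j).
        if random.expovariate(alpha * j) < random.expovariate(1 - alpha):
            wins += 1
    return wins / rounds

for a in (0.1, 0.25, 0.4):
    print(f"alpha = {a}: 1-Lookahead ~ {one_lookahead_revenue(a):.4f}, honest = {a}")
```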
Theorem 5.1. For every α ∈ (0, 1), the revenue of the 1-Lookahead strategy for a strong adversary strictly exceeds α, the revenue of the honest strategy.

Proof. For our strategy, consider the stopping time τ where τ = 1 when |W(Q_0)| = 0 and τ = 2 when |W(Q_0)| ≥ 1. Observe that this is indeed a stopping time, as: (a) when |W(Q_0)| = 0, τ = 1 is a forced stopping time, and (b) while the adversary queries f_{sk_j}(Q_1) for multiple possible values of Q_1, they do not query f_{sk_j}(Q_2) for any of the possible values of Q_2. Therefore, from the perspective of the adversary, the distribution of any VRF evaluated on Q_2 is just U([0, 1]), and this distribution is identical if we replace Q_2 with a fresh seed. Therefore, τ = 2 is indeed a stopping time when |W(Q_0)| ≥ 1. Now let's compute E[τ]. The probability that |W(Q_0)| ≥ 1 is equal to the probability that min_{i∈A} {S(Cred_i^r, α_i)} < min_{j∉A} {S(Cred_j^r, α_j)}. The first term is exponentially distributed with rate α (Lemma A.1) while the second is exponentially distributed with rate 1 − α. Hence the probability that the adversary has at least one winner is α (Lemma A.2), and E[τ] = (1 − α) · 1 + α · 2 = 1 + α. In that case, the adversary wins round r, since they always reveal a winning credential. Moreover, for round r + 1, they reveal a credential with score min_{i∈W(Q_0),k∈A} S(Cred_{i,k}^{r+1}, α_k), which is exponentially distributed with rate α · |W(Q_0)| (Lemma A.1). From Lemma A.2, the probability that the adversary wins round r + 1, given |W(Q_0)| = j, is jα/(jα + 1 − α). Summing over the distribution of |W(Q_0)| and dividing by E[τ] yields a revenue strictly greater than α, as desired. This concludes the proof.
Figure 2 shows the revenue of the 1-Lookahead strategy (Theorem 5.1) against the revenue of the honest strategy. Observe that it is always more profitable than the honest strategy, as expected.

Figure 2: Revenue of the 1-Lookahead strategy. On the left, we plot the absolute revenue of the honest and the 1-Lookahead strategies. On the right, we plot the percentage revenue improvement from 1-Lookahead relative to the honest strategy.
Let Tree(Q) be the graph obtained when we take k → ∞ in Tree_k(Q). Recall the basic facts for an optimal strategy π from Section 3.1: (1) π divides its stake α among an infinite number of wallets; (2) π broadcasts at most one credential each round. Then, without loss of generality, a strategy π maps Tree(Q) to at most one credential from {f_{sk_i}(Q)}_{i∈A}, corresponding to the credential π broadcasts in a round with seed Q. If the strategy outputs no credential, we write π(Tree(Q)) = ⊥.

Definition 6.1 (Value Function). Let π be a positive recurrent strategy, and let ρ be a positive constant. For a tree Tree(Q), define

V_π^ρ(Tree(Q)) := E[ Σ_{r=1}^{τ} (X_r(π) − ρ) | Tree(Q) ],

where τ is a stopping time. Taking the expected value with respect to Tree(Q) gives V_π^ρ := E_{Tree(Q)}[ V_π^ρ(Tree(Q)) ].

We can derive a recursive formula for the value function as follows:

Proposition 6.1. For any positive recurrent strategy π, positive constant ρ, and tree Tree(Q),

V_π^ρ(Tree(Q)) = X_1(π) − ρ + 1[round 1 is not a stopping time] · V_π^ρ(Tree(Q_1)),

where Q_1 is the seed that π's action on Tree(Q) induces for the next round.

Theorem 6.1. Let π be a positive recurrent strategy. Then:

• V_π^ρ = 0 if and only if ρ = Rev(π).

• V_π^ρ ≥ 0 if and only if ρ ≤ Rev(π).

Proof. From Lemma 4.1 and the assumption that π is positive recurrent, Rev(π) = E[ Σ_{r=1}^{τ} X_r(π) ] / E[τ]. From linearity of expectation,

V_π^ρ = E[ Σ_{r=1}^{τ} X_r(π) ] − ρ · E[τ] = E[τ] · (Rev(π) − ρ).

The chain of equalities proves V_π^ρ = 0 when ρ = Rev(π), as desired. For the other direction, observe that V_π^ρ is a strictly decreasing function of ρ, so there is a unique value of ρ at which V_π^ρ vanishes. This proves the first bullet. The second bullet follows from the fact that V_π^ρ is strictly monotone decreasing in ρ.
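Theorem 6.1's first bullet can be checked numerically for a concrete positive recurrent strategy. The sketch below (ours) samples 1-Lookahead renewal cycles as in Section 5 and illustrates the identity V^ρ = E[τ] · (Rev(π) − ρ): the estimate is positive at ρ = α (the honest revenue, which 1-Lookahead beats) and zero at ρ = Rev(π) (exactly zero here, since we plug in the empirical revenue of the same samples):

```python
# Empirical check of V^rho = E[ sum_{r<=tau} (X_r - rho) ] on 1-Lookahead cycles.
import random

def cycle(alpha: float) -> tuple[int, int]:
    """Return (wins, length) of one renewal cycle of 1-Lookahead (beta = 1)."""
    honest = random.expovariate(1 - alpha)
    j, s = 0, random.expovariate(alpha)
    while s < honest:
        j, s = j + 1, s + random.expovariate(alpha)
    if j == 0:
        return 0, 1                                  # forced stopping time
    win_next = random.expovariate(alpha * j) < random.expovariate(1 - alpha)
    return 1 + win_next, 2

alpha, n = 0.3, 400_000
samples = [cycle(alpha) for _ in range(n)]
rev = sum(w for w, _ in samples) / sum(t for _, t in samples)   # Rev(pi)
for rho in (alpha, rev):
    v = sum(w - rho * t for w, t in samples) / n
    print(f"rho = {rho:.4f}: V ~ {v:+.5f}")   # positive at alpha, 0 at rev
```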
Corollary 6.1. A strategy π* satisfies π* ∈ arg max_π Rev(π) if and only if π* ∈ arg max_π V_π^{Rev(π*)}.

Proof. First we prove that if π* ∈ arg max_π Rev(π), then π* ∈ arg max_π V_π^{Rev(π*)}. From Theorem 6.1, for every strategy π',

V_{π'}^{Rev(π*)} ≤ V_{π'}^{Rev(π')} = 0 = V_{π*}^{Rev(π*)},

where the first and second equalities are the first bullet in the theorem; the inequality is the second bullet and the fact that Rev(π*) ≥ Rev(π'). Since the inequality holds for any π', we have π* ∈ arg max_π V_π^{Rev(π*)}. This proves the first part. For the second part, we prove that if π ∈ arg max_{π'} V_{π'}^{Rev(π*)}, then π ∈ arg max_{π'} Rev(π'). We already proved that V_{π*}^{Rev(π*)} = 0, so max_{π'} V_{π'}^{Rev(π*)} ≥ 0 and hence V_π^{Rev(π*)} ≥ 0, which by Theorem 6.1 gives Rev(π) ≥ Rev(π*) and proves that π is optimal (Theorem 6.1).
The following is equivalent to Bellman's principle of optimality.

Corollary 6.2. Let π be an optimal strategy. Then V_π^{Rev(π)}(Tree(Q)) = max_{π'} V_{π'}^{Rev(π)}(Tree(Q)) for all Tree(Q).

Proof. Let π_r refer to the action of strategy π at round r. From Corollary 6.1, the fact that π is optimal implies

V_π^{Rev(π)} = max_{π'} V_{π'}^{Rev(π)} = max_{π'} E[ X_1(π') − Rev(π) + 1[round 1 is not a stopping time] · V_{π'}^{Rev(π)}(Tree(Q_1)) ] = E[ max_{π'_1} ( X_1(π'_1) − Rev(π) + 1[round 1 is not a stopping time] · max_{π'} V_{π'}^{Rev(π)}(Tree(Q_1)) ) ].

The second equality is Proposition 6.1. The third equality observes that the optimal strategy for the sub-game starting with seed Q_1 is independent of the action taken at round 1. Hence V_π^{Rev(π)}(Tree(Q)) = max_{π'} V_{π'}^{Rev(π)}(Tree(Q)) for all Tree(Q).
To compute the optimal strategy π* ∈ arg max_π Rev(π), we can use a binary search algorithm similar to that of Sapirshtein et al. [19]. We pick some ρ ∈ [0, 1] as our guess for Rev(π*) and maximize the Markov Decision Process max_π V_π^ρ. Let π be the strategy the solver outputs. Then one of the following cases tells us whether ρ is a lower bound or an upper bound on the optimal revenue:

• The case V_π^ρ ≥ 0 witnesses that Rev(π*) ≥ ρ. To see this, recall V_π^{Rev(π)} = 0 (Theorem 6.1). Because V_π^ρ is a strictly decreasing function of ρ, we conclude ρ ≤ Rev(π) ≤ Rev(π*), as desired.

• The case V_π^ρ < 0 witnesses that Rev(π*) < ρ. Indeed, since π maximizes V_·^ρ, we have V_{π*}^ρ ≤ V_π^ρ < 0, and because V_{π*}^ρ is strictly decreasing in ρ with V_{π*}^{Rev(π*)} = 0, this forces ρ > Rev(π*).
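Schematically, the search looks as follows (our sketch; `solve_mdp` is a hypothetical stand-in for a solver returning max_π V_π^ρ, which for the true, infinitely-sized MDP is exactly the part that remains open):

```python
# Binary search for Rev(pi*) using the sign of max_pi V^rho_pi as an oracle.
def estimate_optimal_revenue(solve_mdp, tol: float = 1e-4) -> float:
    lo, hi = 0.0, 1.0            # Rev(pi*) is a fraction of rounds in [0, 1]
    while hi - lo > tol:
        rho = (lo + hi) / 2      # current guess for Rev(pi*)
        if solve_mdp(rho) >= 0:  # V >= 0 witnesses Rev(pi*) >= rho
            lo = rho
        else:                    # V < 0 witnesses Rev(pi*) < rho
            hi = rho
    return (lo + hi) / 2
```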
Conclusion
We propose a stylized model to study optimal strategic mining in Cryptographic Self-Selection leader election protocols. We consider rational miners who wish to maximize the fraction of blocks they create. The same adversary has been studied in the context of Proof-of-Work blockchains since the discovery of the selfish mining attacks against Bitcoin [7].

Prior work largely classifies existing protocols into two camps: those where sufficiently small miners cannot profitably deviate (longest-chain proof-of-work protocols with block rewards and longest-chain proof-of-stake protocols with a randomness beacon), and those where arbitrarily small miners can still profitably deviate (longest-chain proof-of-work protocols with transaction fees, longest-chain proof-of-stake protocols without a randomness beacon). Our work places blockchains based on cryptographic self-selection in the latter group: we give a closed-form representation for a strategy that outperforms the honest strategy for any amount of stake.

The key open question left by our work is to nail down the optimal fraction of rounds that a β-strong strategic miner with an α fraction of the stake can earn. While our work shows that this quantity can in principle be determined by performing binary search over infinitely-sized MDPs, significant innovation seems to be required to actually perform this search, or even to approximate it computationally efficiently.
A Probability Theory Background
Lemma A.1. Let X_1, X_2, ..., X_n be independent random variables where X_i is a copy from Exp(α_i), where α_i is a positive constant. Then min_{i∈[n]} {X_i} is identically distributed to Exp(Σ_{i=1}^n α_i).

Proof. The proof follows from computing the probability Pr[min_i X_i ≤ x]:

Pr[min_i X_i ≤ x] = 1 − Pr[X_i > x, ∀i] = 1 − Π_{i=1}^n e^{−α_i x} = 1 − e^{−(Σ_{i=1}^n α_i) x}.

The last line witnesses that min_i X_i is exponentially distributed with rate Σ_{i=1}^n α_i.

Lemma A.2. Let X and Y be drawn independently from exponential distributions with rates α_X and α_Y, respectively. Then Pr[X < Y] = α_X / (α_X + α_Y).

Proof. We have:

Pr[X < Y] = ∫_0^∞ Pr[X < y] · α_Y e^{−α_Y y} dy = ∫_0^∞ (1 − e^{−α_X y}) · α_Y e^{−α_Y y} dy = 1 − ∫_0^∞ α_Y e^{−(α_Y + α_X) y} dy = 1 − α_Y/(α_X + α_Y) = α_X/(α_X + α_Y).

Corollary A.1. Let X_1, ..., X_n be drawn independently from exponential distributions with rates α_1, ..., α_n, respectively. Then Pr[X_i = min_{j∈[n]} {X_j}] = α_i / Σ_{j=1}^n α_j.

Proof. We prove this by induction, using Lemmas A.1 and A.2. As a base case, the claim is clearly true when n = 1, for all α_1. Now, as an inductive hypothesis, assume that the claim is true for some n and all α_1, ..., α_n. We now consider the case of n + 1 and any α_1, ..., α_{n+1}.

By Lemma A.1, min_{j∈[n+1]\{i}} {X_j} is distributed according to an exponential of rate Σ_{j≠i} α_j. By Lemma A.2, the probability that X_i = min_{j∈[n+1]} {X_j} is α_i / Σ_{j=1}^{n+1} α_j, as desired. This argument holds for any i, and completes the inductive step.
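A quick numerical check of Corollary A.1 (our illustration): among independent exponentials, each X_i attains the minimum with probability proportional to its rate:

```python
# Empirical check: Pr[X_i = min_j X_j] ~ alpha_i / sum_j alpha_j.
import random

rates = [1.0, 2.0, 3.0]
trials = 100_000
hits = [0] * len(rates)
for _ in range(trials):
    draws = [random.expovariate(r) for r in rates]
    hits[draws.index(min(draws))] += 1
print([round(h / trials, 3) for h in hits])   # expect ~[0.167, 0.333, 0.500]
```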
| 2022-07-12T20:48:17.595Z | 2022-07-12T00:00:00.000 |
"year": 2022,
"sha1": "5b4f037f46e8bbc77be85879e70b0ca94345a958",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/2207.07996",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "42b7cfb45868efcb56dc90b65462bcab07cec768",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Economics",
"Mathematics"
]
} |
216247790 | pes2o/s2orc | v3-fos-license | Study on the Ways to Observe, Record, and Analyze Children’s Behavior
—The observation and recording of children's behavior is a basic part of preschool teachers' daily work. Observing and analyzing children's behavior helps teachers see children themselves and the reasons behind their behavior, and the ability to integrate the adult's perspective with the child's perspective through dialogue is a core element of preschool teachers' professionalism. Effective observation requires making full preparations for observation and recording, seeking out the clues of observation and recording, attending to details, and striving to record objectively. The analysis of children's behavior should combine developmental theory, behavioral clues, and the specific situation to produce a comprehensive, holistic interpretation. Only in this way can teachers provide effective developmental support within children's zone of proximal development.
INTRODUCTION
Observing children's behavior is the beginning of understanding children, the starting point for carrying out various activities, an embodiment of teachers' professional quality, and the basis of high-quality early childhood education. Is observing young children's behavior merely a matter of looking? How should teachers observe and record children's behavior? Is the interpretation of children's behavior merely an analysis of the behavior itself? How can children's behavior be interpreted appropriately? To observe young children, one must first understand how to observe and record their behaviors, and second, understand how to interpret their mental activities and developmental levels through those behaviors.
II. THE OBSERVATION, RECORDING AND ANALYSIS OF CHILDREN'S BEHAVIOR IS THE CORE ACCOMPLISHMENT OF CHILDREN'S TEACHERS
The observation, recording, and analysis of children's behavior are not only part of preschool teachers' daily work, but also an embodiment of their core qualities. A core accomplishment is both profession-specific and foundational: the core accomplishment of kindergarten teachers is unique to kindergarten teachers and is not possessed by teachers of other age groups, and at the same time it underpins the other qualities of preschool teachers.
In the "Professional Standards for Kindergarten Teachers", it is mentioned that teachers should have "the planning and implementation of educational activities", "encouragement and evaluation, communication and cooperation", "support and guidance of game activities" and "reflection and development ability". Although there is no mention of teachers' ability to observe children's behavior, the development of the above-mentioned ability is based on teachers' ability to observe children's behavior. The reason why the observation, recording and analysis of children's behaviors are regarded as the core accomplishment of children's teachers is determined by the object of children's educationchildren. Children's age characteristics determine their learning methods and characteristics. Children's learning is integrated in activities and life through perception, experience and operation. This requires teachers to pay attention to observing children's behavior and the causes of behavior in one day's life. Teachers are urged to constantly "discover" children and "interpret" children in practical activities. On this basis, teachers are required to reflect on the suitability of their own educational behavior and improve the quality of education. Therefore, the purpose of observing and recording children's behavior is not to simply record children's behavior, but to reflect on their own teaching behavior and promote their professional development on the basis of reading children, and finally realize high-quality teaching work. How to scientifically observe and record children's behavior so that the observation record text is faithful to the children's on-site activities? How can the interpretation of young children meet the real level of development? This requires teachers to learn how to observe records and master the main points of analyzing and interpreting children's behavior.
At present, kindergarten teachers mostly use written observation and recording methods, such as the anecdotal record and the running (live detailed) record, which are simple, easy to operate, and provide original, detailed, and open material. The written method has therefore become the most common way for kindergarten teachers to observe children's behavior. The author combines the cases above to analyze how to produce and analyze written observation records.
A. Adequate preparation is a prerequisite for observation and recording

To understand children's behavior more completely and accurately, teachers should prepare fully before observing and recording. First, the teacher must determine the observation goal, the observation object, and the specific observation time. In general, the anecdotal record does not fix the object of observation in advance, while the continuous record method requires the object to be determined beforehand; either way, the record must state basic information about the object and the time of observation, such as the child's name, age, and sex, and the start and end times of the observation. Second, the teacher should prepare the necessary pen, paper, and supporting materials. If charts or other symbols will be used to represent the activity site or the children's activities, teachers should define them in advance so that later recording and analysis can better capture the behavior; note, however, that using multimedia is no substitute for the teacher's timely on-site recording. Finally, the teacher should ascertain the children's environmental information in advance. Environmental information includes both physical and situational information. Physical information refers to the setting in which the behavior takes place: for example, whether it is an outdoor or indoor area, and what facilities and materials the area contains. Situational information refers to the social context of the behavior: for example, the number of children participating in the activity and the way the teacher organizes it.
B. The search for clues is the key to the observational record
Preschool teachers face a wide variety of behavioral events every day. How do they screen out the events they need to observe? This requires the ability to recognize observation clues: the typical observation points within an observation. The main points differ across learning domains, and the emphases differ across forms of activity. However, children's learning and development are grounded in, and displayed through, activities, so observation clues can be sought from the following points. First, teachers should identify activities that embody a domain of learning and development; if the teacher cannot identify the learning domains that permeate the activity context, the observations can only be recorded as a diary-style journal. Third, children's occasional behaviors or specific performances should be treated as clues. Theories of child development describe the general laws of children's growth and thus describe an "abstract" child, while in real life every child is a "concrete" child, and children of the same age differ greatly in their development within the same domain; we therefore want to take as a clue how the general laws of development are reflected in a specific child. For example, a child of 2 years and 8 months who can eat with chopsticks differs from the typical behavior for that age group provided in the guide. The teacher must be aware of this individuality, attend to the specific performance, and link it to the relevant domain of learning and development; only by grasping such clues can a teacher truly "see" the child. Finally, the difficulties children encounter in activities, and how they resolve them, should also become observation clues. Children meet various problems and difficulties in the process of interacting with materials and people, and the way they solve them reflects their developmental level and comprehensive abilities. Teachers should take children's problem-solving performance as an observation clue and, by observing and recording children in problem situations, find the children who are "different" from the others.
C. Objectivity is at the heart of the observational record
Objectivity requires teachers to record children's behavior, or its development over a period of time, objectively, clearly, accurately, and faithfully, using various means and under natural conditions. To keep prior experience from coloring the record, teachers can write in a "line-drawing" style: a concise, unexaggerated style that describes things directly. Teachers should use descriptive language, record the scene without addition or omission, make no subjective guesses, and strictly distinguish "the child" from "the child in your eyes". Without the observer's interpretive assessment, the scene remains true. What we advocate is recording what teachers see and hear, faithful to the situation. Emphasizing the objectivity of observation does not mean that teachers may not think about or form views on children's behavior; it means that recording can be separated from evaluation and analysis during the recording process.
D. Attention to detail is fundamental to the observational record
As the saying goes, details determine success or failure, and this holds for the observation and documentation of children's behavior as well. Written observation records require a detailed, accurate, and complete account of children's behavior, showing as far as possible the process, background, and results of children's interactions with people, objects, and events. In symbolic observation records, we likewise advocate supplementing the symbols with text to provide detailed, informative material. Attention to detail can show a vivid child: it captures the child's language, actions, and expressions, provides the teacher with the clues needed to understand the behavior, supplies evidence for gauging the child's developmental level, and encourages teachers to move back and forth between professional practice, cognition, and behavior. For example, when an observer records an 8-month-old baby repeatedly dropping a spoon and then watching his mother pick it up, the movements themselves are external and objective, so what do they mean? Without a theory of child development, a common interpretation is the parent's exclamation: "this little guy is bad and naughty." With developmental theory, however, we can explain that the child is in the sensorimotor stage of Piaget's theory of cognitive development, practicing simple motor skills while repeating an interesting action; the parent's reaction becomes reinforcement for the child's repeated action, and the parent-child interaction is a bond of parent-child attachment. The analysis of children's behavior should thus be based on the corresponding theories of child development, while also going beyond them to view children's growth from a developmental perspective. To ensure reasonable expectations for every child, in 2012 the state promulgated "The Learning and Development Guide for Children Aged 3-6 Years". The guide sets out reasonable expectations of what children at the end of the 3-4, 4-5, and 5-6 age bands should know, what they can do, and what level of development they can reach. These age goals and typical performances can serve as dimensions and references for analyzing children's behavior. However, age goals must not be used as a yardstick to measure whether a child's development is "ahead" or "slow", nor should "typical performance" be turned into activities designed to train children in specific skills. Under the guidance of the goals in the guide, we should analyze and interpret children's zone of proximal development with the help of the "typical performances".
B. Focusing on "relationship" and analyzing infant behavior with the help of specific situations
Realism holds that things do not stand in simple dualistic relations, but in relations of mutual dependence, even mutual determination. The relationships between children and their environment, people, and things are interdependent, or even mutually determining. Bronfenbrenner holds that children's behavior is influenced by the microsystem, mesosystem, exosystem, and macrosystem. The interactions among people, objects, and the environment within these "relationships" influence children's behavior in various ways. Therefore, teachers should pay attention to the relationships between children and their parents, teachers, peers, teaching materials, and environment.
First of all, teachers should pay attention to the interaction between children and materials. This interaction not only reflects children's current developmental level but also promotes their development. For example, there is a great deal of repetitive manipulation in the interaction between younger children and materials, which reflects the cognitive level of this age group in Piaget's theory of cognitive development and also shows that materials can promote children's development. Third, teachers should pay attention to the people connected with children. Among the many "human" relationships, teachers should attend to two kinds: the vertical relationship between children and adults, and the parallel relationship between children and peers. In both kinds, attention should be given not only to the explicit interaction of language, facial expression, and body movement between children and teachers, but also to the implicit relationships hidden within them. For example, Le Le, in the junior class, tried to eat with a spoon but kept dropping food. The teacher saw this and said: "Le Le has been doing very well; eat slowly, and it will get better and better." Le Le smiled and went on eating. The teacher's concern here is not Le Le's mastery of eating skills as such, but cultivating the initiative that will promote that mastery. Finally, teachers should pay attention to the environments in which children live, including the socio-cultural environment, the community, the family, and the kindergarten. These constitute the child's larger environment, which may not change much over a short period; but a child's specific "environment", the concrete situation of the child's activities, changes constantly. The same behavior may call for different explanations in different situations, and children's performance in the same situation is not identical. For example, among new junior-class children, some cry because of separation anxiety, others because of poor self-care ability. Therefore, when teachers observe and analyze children's behavior, they must understand it as behavior occurring under particular conditions. Only then can teachers interpret children and truly "see" them.
C. Centering on the "clues" to analyze infant behavior

Clues are the key to observation and recording, and children's behavior can be analyzed along those clues. In combining clues with analysis, we can use the point-analysis method and the whole-analysis method. The point-analysis method extracts one core element from the many observation records as the basis of analysis and assigns it to the corresponding domain. This lets the teacher lock directly onto a specific aspect when interpreting the child, making the interpretation efficient. For example, in the case of Le Le's meal, we can analyze the health domain in terms of "movement development" and "living habits and abilities". The whole-analysis method considers the many factors involved in an event objectively and holistically, across breadth and depth, and gradually focuses the interpretation of the recorded material. Children's developmental threads (cognitive, social, emotional, physical, and so on) are intertwined: a child's development in one aspect of an activity is closely related to development in other domains. When analyzing children's behavior, teachers should examine the child's overall developmental level with a holistic view.
V. CONCLUSION
Observation of children's behavior embodies the core professional literacy of teachers. Recording observations of children's behavior is a way for teachers to understand children's developmental level, and behavior analysis and interpretation is the only way for teachers to "see" children. Improving the quality of childcare is the ultimate goal of observing children's behavior. Teachers need to be aware of the goals of observation records, prepare observation materials in advance, pay attention to details, seize on clues, and apply an objective developmental theory to interpret children's behavior and the level of development it represents. Only in this way can teachers direct their educational and teaching projects into children's zone of proximal development and achieve meaningful learning. | 2020-04-02T09:33:18.966Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "cbcacf3d30518c4d415017e5395f87c845197a1e",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.2991/assehr.k.200316.172",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "003e16c9cc9d06cb52111b9aa91e2e4cb85bfbbc",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
18411196 | pes2o/s2orc | v3-fos-license | Class-switched B cells display response to therapeutic B-cell depletion in rheumatoid arthritis
Introduction Reconstitution of peripheral blood (PB) B cells after therapeutic depletion with the chimeric anti-CD20 antibody rituximab (RTX) mimics lymphatic ontogeny. In this situation, the repletion kinetics and migratory properties of distinct developmental B-cell stages and their correlation to disease activity might facilitate our understanding of innate and adaptive B-cell functions in rheumatoid arthritis (RA). Methods Thirty-five 'RTX-naïve' RA patients with active arthritis were treated after failure of tumour necrosis factor blockade in an open-label study with two infusions of 1,000 mg RTX. Prednisone dose was tapered according to clinical improvement from a median of 10 mg at baseline to 5 mg at 9 and 12 months. Conventional disease-modifying antirheumatic drugs were kept stable. Subsets of CD19+ B cells were assessed by flow cytometry according to their IgD and CD27 surface expression. Their absolute number and relative frequency in PB were followed every 3 months and were determined in parallel in synovial tissue (n = 3) or synovial fluid (n = 3) in the case of florid arthritis. Results Six of 35 patients fulfilled the European League Against Rheumatism criteria for moderate clinical response, and 19 others for good clinical response. All PB B-cell fractions decreased significantly in number (P < 0.001) after the first infusion. Disease activity developed independently of the total B-cell number. B-cell repopulation was dominated in quantity by CD27-IgD+ 'naïve' B cells. The low number of CD27+IgD- class-switched memory B cells (MemB) in the blood, together with sustained reduction of rheumatoid factor serum concentrations, correlated with good clinical response. Class-switched MemB were found accumulated in flaring joints. Conclusions The present data support the hypothesis that control of adaptive immune processes involving germinal centre-derived, antigen, and T-cell-dependently matured B cells is essential for successful RTX treatment.
Introduction
B-cell depletion with the chimeric anti-human CD20 IgG1 antibody rituximab (RTX) represents a novel target-specific treatment option [1][2][3] for active rheumatoid arthritis (RA). RTX leads to almost total depletion of peripheral blood (PB) B cells for several months [1][2][3][4][5][6]. The subsequent clinical course follows the autoantibody kinetics more closely than the B-cell numbers in the blood [7]. Despite its specific mode of action on B cells, clinical response to RTX is not restricted to rheumatoid factor (RF)-positive or otherwise autoantibody-positive RA patients [2]. Important innate immune functions of B cells such as antigen presentation and cytokine production [8,9], but also B-cell-dependent adaptive autoimmune processes that were not represented by standard autoantibodies [10], are alternative explanations for this phenomenon.
Up to five repetitive B-cell depletion courses appear safe in RA [11,12], but the risk of secondary immunodeficiency with more repetitive RTX courses is still not ruled out. This uncertainty may cause restriction in re-treatment scheduling and requires at least ongoing surveillance [12][13][14][15]. There is a large variability in duration of response after RTX administration. Fixed short re-treatment intervals neglect the potential of saving immunosuppression and costs provided by this variability, whereas long intervals imply the risk of avoidable relapses and disease progression. Previous experimental studies indicated a rationale for repetitive RTX scheduling based on B-cell kinetics [5,6,16], but variable time lag between B-cell repopulation and clinical flare limited the immediate clinical application of B-cell repletion monitoring. Individual re-treatment intervals, therefore, are still recommended on the basis of the clinical course [17].
Which B-cell subset should be monitored? Long-lived plasma cells currently are believed to play a pivotal role in chronic autoimmunity [18]. They derive from short-lived plasma cells and undergo apoptosis unless they find survival niches of limited number in the bone marrow. Their progenitors, the CD19+ plasmablasts, have undergone class switch on their differentiation pathway to further develop to antibody-producing CD19- plasma cells. Plasmablasts draw a dynamic picture of ongoing autoimmune response in animal models [19]. They share CD27 positivity and IgD negativity with germinal centre (GC)-derived, affinity-matured, CD27+IgD- immunoglobulin (Ig) class-switched memory B cells (MemB). However, splenic long-lived plasma cells may also derive from extrafollicular maturation [20]. As long-lived plasma cells are primarily resistant to RTX due to a lack of CD20 expression, they currently can hardly be directly extinguished by any available therapeutic modality [18]. Plasma cells, in principle, are able to persist in tertiary immune organs, as the inflamed synovium may be under certain circumstances [9,18]. Their number indeed was reported to be unchanged in the synovium 4 weeks after RTX [21] but strongly reduced later on [22][23][24]. Plasma cell numbers are very low after RTX in the PB, with a transient peak early in the reconstitution. However, no correlation of plasma cell kinetics to time to relapse could be shown, which limits their usage for clinical monitoring [5,6].
Another candidate B-cell subset of relapsing autoimmunity might be CD27+IgD+ non-switched MemB, which according to their surface marker expression are reported to correspond to splenic CD27+IgD+ (IgM+) marginal zone B cells in rodents [25,26]. Cells of this developmental stage are able to undergo CD27-mediated co-stimulation but have not yet switched their Ig receptor isotype. They are not prone to the GC-related processes of antigen-dependent maturation but may undergo T-cell-independent maturation outside a lymphoid follicle. CD27+IgD+ B cells are centrally involved in the processes of innate host defense, but on the other hand, they also represent several features that argue for a role in autoimmunity [27,28].
Their number was associated with RA relapse in the regenerating B-cell compartment in previous studies [5,6]. Like switched MemB, they may also develop to plasmablasts [20], which are able to secrete RF.
In this study, we questioned whether the advantage of individual RTX scheduling could be achieved by combining serological and cytological monitoring strategies. We confirm the previously reported importance of RF kinetics [1,7]. In addition, we found that, by using a B-cell monitoring strategy in CD45+CD19+ B cells (Additional data file 1), sustained depletion of CD27+IgD- class-switched MemB from the blood was associated with good clinical response to RTX treatment. We also found that the same B-cell subset, but not the CD27+IgD+ non-switched MemB or the CD27-IgD+ 'naïve' B cells, was preferentially accumulated in actively inflamed joints.
Patients
Thirty-five patients with RA according to the American College of Rheumatology classification criteria [29] were included in this prospective observational study. All patients were 'RTX-naïve'. They had active disease according to a 28-joint disease activity score (DAS28) of greater than 3.2, which would qualify them for repetitive RTX treatment [17]. All patients had failed to at least one disease-modifying antirheumatic drug (DMARD) and had shown inappropriate response to at least one tumour necrosis factor (TNF)-blocking agent. Disease activity was reflected by a median of 6 swollen (interquartile range [IQR] 3 to 10) and a median of 5 tender (IQR 2 to 9) of 28 evaluated joints. Median DAS28 was 5.0 (IQR 4.3 to 5.9), median erythrocyte sedimentation rate (ESR) was 33 mm (IQR 27 to 46), and C-reactive protein (CRP) serum concentration was 11 mg/L (IQR 3 to 24). Other patient characteristics, including the number of previously used DMARDs and anti-TNF agents, are summarized in Table 1. Assessors for clinical parameters were blinded to the time-matched laboratory results. All patients gave their written informed consent to participate. The study was approved by the Cantonal Ethics Committee of Bern (ref. no. 254/07).
Treatment
B-cell depletion therapy was performed with two infusions of 1,000 mg RTX 14 days apart from each other, and both were co-administered with 100 mg prednisone in order to prevent allergic reactions. Low-dose methotrexate in stable weekly doses of between 10 and 25 mg and other conventional DMARDs were continued during the entire observation phase. Oral prednisone doses were maintained from baseline to month 3 with a median of 10 mg per day. Afterward, they could be adjusted to the clinical course, which resulted in median doses of 6 mg at month 3 and 5 mg at months 9 and 12. Corticosteroid doses were thus significantly lower (P < 0.05) in good responders than in moderate or non-responders at months 9 and 12.
Response
Clinical improvement was assessed every 3 months by DAS28 and graded as European League Against Rheumatism (EULAR) good, moderate, or non-response [30]. EULAR good response, which could have been achieved at any visit during the 12-month observation period, was used for group definition in retrospective B-cell and antibody analyses.
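For orientation, the EULAR grading used here can be expressed as a small decision rule. The sketch below (Python) encodes the commonly published EULAR DAS28 response matrix; it is an illustration based on the published criteria rather than code from this study, and the function name is ours.

def eular_response(das28_baseline: float, das28_now: float) -> str:
    # Commonly published EULAR matrix: the grade depends on the DAS28
    # improvement achieved and on the DAS28 level reached.
    improvement = das28_baseline - das28_now
    if improvement > 1.2 and das28_now <= 3.2:
        return "good"
    if improvement > 1.2:                       # attained DAS28 > 3.2
        return "moderate"
    if improvement > 0.6 and das28_now <= 5.1:  # modest improvement, non-high activity
        return "moderate"
    return "none"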
Sample preparation
Freshly isolated PB (9 mL) was anticoagulated with EDTA (ethylenediaminetetraacetic acid) and immediately used for flow cytometry. B-cell analyses were also performed in anticoagulated synovial fluids (n = 3) or in tissue homogenates from another three patients undergoing urgent joint replacement surgery of the knee (n = 1), synovectomy in treatment-resistant synovitis of the knee (n = 1), or wrist joint synovectomy (n = 1). The study protocol for invasive procedures was approved by the Cantonal Ethics Committee of Bern (254/07). All study participants gave their written informed consent to participate.
Synovial tissue was immediately prepared, as previously described [31], by injecting 1 mg/mL collagenase (Sigma-Aldrich, Munich, Germany) into the tissue samples, followed by incubation for 20 minutes at ambient temperature. Digests subsequently were minced and incubated for an additional 50 minutes at 37°C in collagenase 1%. The cell suspension was strained by 70-μm nylon filters (Falcon, now part of BD Biosciences, San Jose, CA, USA), washed twice in phosphate-buffered saline, and recovered in RPMI 1640 medium (Invitrogen, Karlsruhe, Germany) containing 10% fetal calf serum plus kanamycin at 37°C in 5% CO2 atmosphere overnight. Nonadherent synovial tissue cells were carefully obtained together with the supernatants, centrifuged, and thoroughly washed with phosphate-buffered saline before subsequent analyses.
Flow cytometry
Fixation of leukocytes and lysis of erythrocytes for flow cytometry were done for quantitative analyses in TruCOUNT™ tubes (Becton Dickinson, Basel, Switzerland). Cells were stained with BD Multitest™ reagent for CD3/CD16+CD56/CD45/CD19 markers as well as phycoerythrin-conjugated anti-CD27 (clone L128) and fluorescein isothiocyanate-conjugated anti-IgD (clone IA6-2). All antibodies were purchased from BD Biosciences. Data were acquired by flow cytometry using the BD FACSCalibur Flow Cytometry System and CellQuest software (BD Biosciences Immunocytometry Systems). Analyses were performed after gating on anti-CD45-stained lymphocytes. The number and frequency of CD19+ PB B cells and subsets (percentage of CD19+ B cells) were determined in a minimum of 20,000 events in the CD45 gate. Data collection was continued to 1 × 10^5 events in case there were fewer than 0.5% CD19+ cells. The number and frequency of CD27-IgD+ 'naïve', CD27+IgD+ non-switched MemB, and CD27+IgD- switched MemB were determined according to their IgD and CD27 surface expression. All meas-
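The quadrant logic of this gating scheme can be summarized in a few lines. The sketch below (Python) mirrors the IgD/CD27 classification described above, applied after gating on CD45+ lymphocytes and CD19+ B cells; it is illustrative only, since the actual analyses were done in CellQuest.

def classify_b_cell(cd27_pos: bool, igd_pos: bool) -> str:
    # IgD/CD27 quadrant assignment for a CD19+ B cell, as in the Methods.
    if cd27_pos and igd_pos:
        return "non-switched memory (CD27+IgD+)"
    if cd27_pos and not igd_pos:
        return "class-switched memory (CD27+IgD-)"
    if not cd27_pos and igd_pos:
        return "naive (CD27-IgD+)"
    return "double-negative (CD27-IgD-)"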
Statistics
Results are presented as median and 25% to 75% IQR for data in non-Gaussian distribution. The two-tailed Wilcoxon test was used for comparison of paired samples. The exact significance in the Mann-Whitney U test was used for comparison of two independent groups. Statistical analyses were performed with SPSS software version 15 (SPSS Inc., Chicago, IL, USA). Results with a P value of less than 0.05 were considered significant.
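As a present-day re-expression of these tests, the same comparisons map onto standard library calls. A minimal sketch assuming SciPy ≥ 1.7 (the study itself used SPSS); all values below are made up for illustration.

from scipy import stats

# Paired comparison, e.g. the same patients before vs. after RTX.
baseline_counts = [169, 77, 281, 120, 95]   # illustrative values only
month3_counts   = [3, 2, 7, 4, 5]
print(stats.wilcoxon(baseline_counts, month3_counts))

# Two independent groups, e.g. good vs. moderate/non-responders,
# using the exact Mann-Whitney U test as in the paper.
good_responders = [2, 1, 3, 2, 1]
other_patients  = [6, 4, 9, 5, 7]
print(stats.mannwhitneyu(good_responders, other_patients,
                         alternative="two-sided", method="exact"))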
Clinical response
Ten patients did not achieve a significant clinical improvement during the observation period. Six patients fulfilled the EULAR criteria for moderate clinical response, and another 19 for good clinical response. Best achieved clinical response, by definition, could have occurred at any time during the 12-month observation phase after RTX. Ten of our good-response patients fulfilled this EULAR definition for the first time after 3 months, another 7 patients after 6 months, and 2 patients only 9 months after RTX. Duration of good response was documented for a median of 3 months but in fact may have been substantially longer with regard to the 3-month visit intervals. The criterion of moderate response was documented continuously in these same 19 patients for a median of 9 months. Patients experiencing good response had significantly higher anti-CCP antibody titer (P = 0.046) at baseline but significantly lower CRP serum concentration (P = 0.010). They had significantly shorter disease duration (P = 0.008) and fewer DMARD treatment attempts (P = 0.010) but were comparable for ESR, RF, swollen and tender joint counts, corticosteroid dose, and DAS28 at baseline. Median CRP in the entire study population dropped significantly from 11 mg/L (IQR 3 to 24) at baseline to values below the detection limit of 3 mg/L, with an IQR of between less than 3 and 9 mg/L at month 3 (P = 0.019), less than 3 to 6 mg/L at month 6 (P < 0.001), and less than 3 to 7 mg/L at month 9 (P = 0.008). Good responders had significantly lower CRP serum concentrations than the other patients 6 months after RTX. ESR improved significantly from a median of 33 mm/hour (IQR 26 to 40) at baseline to a median of 11 mm/hour (IQR 7 to 25, P = 0.011) at month 3 and a median of 11 mm/hour (IQR 5 to 20, P = 0.006) at month 6. Good responders had significantly lower ESR than the other patients after 3, 6, and 12 months. After 12 months, median CRP concentrations as well as ESR were in the same range as baseline values.
Composition of peripheral blood B cells at baseline
Median total number of CD19+ B cells (Figure 1) in the entire cohort was 169/μL (IQR 77 to 281); this parameter did not significantly differ between good responders and patients with moderate or no response in the later course. A median of 69% of B cells (IQR 55% to 79%) at baseline were 'naïve' B cells. Their absolute number was significantly higher (P = 0.034) in subsequent non-responders or moderate responders (median 199, IQR 137 to 297) than in good responders (median 109, IQR 56 to 142). In contrast, the frequencies of non-switched MemB (median 8%, IQR 4% to 13%), of switched MemB (median 16%, IQR 12% to 22%), and of CD27-IgD- B cells (median 4%, IQR 3% to 7%), as well as their absolute numbers, were not significantly different between the two patient groups.
Early response to depletion
The first RTX infusion reduced the number (Figure 1) and frequency of PB CD19+ B cells to a median of 3/μL (IQR 2 to 7) and a median of 2% (IQR 1% to 4%) of CD45+ cells, respectively. The median of all B-cell fractions decreased significantly in number (P < 0.001), but increasing frequency of switched MemB and of CD27-IgD- B cells upon the first RTX infusion indicated relative resistance of these subsets. When a B-cell frequency of below 0.5% of lymphocytes [6] or a number of CD19+ cells below 5/μL was used as an arbitrary surrogate of complete depletion, these values were reached in 61.3% and 48.6% of the cases, respectively, 14 days after the first infusion. Neither the achievement of one of these limits upon first infusion nor the absolute number or frequency of any defined B-cell subset in the early depletion phase was indicative of subsequent clinical response.
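The two arbitrary depletion surrogates combine into a simple predicate. A sketch for clarity; the function and parameter names are ours.

def complete_depletion(cd19_pct_of_lymphocytes: float,
                       cd19_cells_per_ul: float) -> bool:
    # Surrogates of complete depletion used in the text: B-cell frequency
    # below 0.5% of lymphocytes, or fewer than 5 CD19+ cells per microlitre.
    return cd19_pct_of_lymphocytes < 0.5 or cd19_cells_per_ul < 5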
Peripheral blood reconstitution
The median number of B cells increased continuously from months 3 to 12. This PB B-cell repletion was dominated by 'naïve' B cells. The time point of repopulation tended to be earlier in moderate or non-responders than in good responders; however, this difference was statistically not significant. In contrast, the absolute number of switched MemB at month 6 (P = 0.049), month 9 (P = 0.045), and month 12 (P = 0.003) was significantly higher in patients with no or moderate time-matched clinical response to RTX than in good responders (Figure 1). This result was the same when correlating switched MemB with best achieved response at any time of the treatment cycle. In contrast, the number and frequency of non-switched MemB, of CD27-IgD- B cells, and of 'naïve' B cells were not correlated with the clinical response after RTX.
Comparison of peripheral blood and synovial B cells
The B-cell composition in six simultaneously collected samples from the PB and synovium during B-cell repletion is shown in Figure 2.
Rheumatoid factor and cyclic citrullinated peptide antibodies
Therapeutic intervention with RTX led to significant decreases in RF and CCP antibody serum concentrations from months 3 to 12 (P < 0.05) (Figure 3). In patients who were RF+ at baseline, the lowest RF concentration (median of 36% from baseline) was achieved after 6 months. Five of the 21 initially RF+ patients became RF- upon B-cell depletion, three of them lasting until the end of the observation whereas two others started to have positive RF tests again after 6 months. All patients with persistent or transient RF conversion were good clinical responders. A decrease of RF serum concentrations in comparison with baseline levels was significantly stronger in good responders than in the other patients. These results were similar for month 3 (P = 0.022), month 6 (P = 0.002), month 9 (P = 0.002), and after 12 months (P = 0.001). A significant reduction in median RF concentrations was long-lasting in good responders but was limited to a maximum of 6 months in moderate and non-responders. RF serum concentrations dropped more steeply than expected from the kinetics of slowly reduced IgG, IgA, and IgM total Ig concentrations.
In contrast to RF, CCP antibody concentrations decreased more continuously and in parallel to Ig levels. They reached their minimum of a median of 60% from baseline levels after 12 months. Good responders started with a tendency toward higher anti-CCP serum concentrations than the other patients but showed a significantly stronger reduction of CCP-directed autoantibodies after 9 months (P = 0.041) and 12 months (P = 0.027).
Immunoglobulin isotypes
Total IgG serum concentrations started to be significantly reduced (P < 0.05) already after the first infusion, whereas IgA and IgM serum concentrations were more stable. These isotypes decreased somewhat later, with significantly reduced levels from months 3 to 12 (P < 0.001) when compared with baseline. Three patients marginally fell below the lower normal limit for IgG (7 g/L) or IgA (0.7 g/L) serum concentrations during the observation phase. Two other patients developed IgM concentrations below the lower normal limit of 0.4 g/L. None of the patients with a drop of any Ig isotype below the lower normal limit developed clinical symptoms of immunodeficiency, but seven of these patients experienced good clinical response and one patient experienced moderate clinical response.
Figure 3. Antibody concentrations. Course of (a) rheumatoid factor (RF), (b) cyclic citrullinated peptide (CCP) antibodies, and (c) total immunoglobulin IgG, IgA, and IgM serum concentrations after therapeutic B-cell depletion in good responders (continuous line) and in non-responders or moderate responders (designated as other patients, dotted line). Asterisks indicate statistically significant differences (P < 0.05) between these two patient groups.
After 12 months, the IgG and IgA serum concentrations had decreased to medians of 86% and 79% of baseline, respectively, and IgM serum concentrations to between 68% and 75% of baseline. Median IgG, IgA, and IgM serum concentrations were somewhat lower in good responders at every follow-up visit, but at no time point were the relative decreases of these concentrations from baseline significantly different between good responders and the other patients.
Discussion
Convincing clinical success of therapeutic depletion brought the B cells back into the research focus of RA pathogenesis. The present data indicate that concentrations of one of their products, serum RF, as well as the repletion kinetics of switched MemB in the blood and their migration into joints are linked to inflammatory activity. In the following sections, we will discuss the impact of these possibly linked findings on our understanding of RA pathogenesis and their potential for scheduling re-treatment.
Identical B-cell clones in different joints and the blood of RA patients reflect the systemic autoimmune character of RA [32]. While PB B cells become rapidly depleted after RTX, improvement in RA symptoms is delayed. This disconnection in time might suggest similar B-cell persistence mechanisms in the RA synovium, as recently described for splenic marginal zone and GC B cells in Peyer's patches in a rodent anti-human CD20 depletion model [33]. Given these data, it was important to prove in synovial biopsy studies that CD20+ B cells could be depleted, though to varying degrees and at varying rates, in the inflamed synovial microenvironment [21][22][23][24]. Reports on the size and number of lymphoid aggregates were also somewhat contradictory [21,23]. Clinical improvement upon RTX was correlated with a decrease in the number of synovial B cells in one biopsy study [22] and with reduced plasma cell numbers in another biopsy study [23]. In summary and according to the recently formulated 'roadblock hypothesis' [34], histology data after RTX draw the picture of an ongoing process of renewing B cells in the synovium which can be interrupted by therapeutic intervention.
As a consequence of B-cell depletion, the synovial Ig production (including RF and CCP specificities) decreased in quantity. The effect on RF and anti-CCP idiotypes appeared less or even absent in aggregated lymphocellular infiltrates [24,35], which was the histological subtype with the highest autoantibody production [35]. So far, these data appear to be in accordance with high IgM anti-CCP serum antibodies and with RF titer, which together with high-grade synovial CD20-CD79a+ B-cell infiltrates were negative predictors for RTX response in another longitudinal analysis [22]. Negative data in cross-sectional analyses of the same serological and histological parameters call a close connection of these prognostic items into question [36]. Good responders among our patients were characterised by significantly higher numbers of 'naïve' B cells in PB and also by higher CCP serum antibody and lower CRP concentrations at baseline. Given that we searched for statistical significance of difference for many comparisons, a statistical error of multiple testing has to be considered. In addition, as the same items were previously examined in similar settings without providing support for our observations, evidence appears to be too weak for recommending the determination of 'naïve' B cells as a predictor of response to RTX.
We confirmed [1] in this study that serum RF decreases faster and to lower relative levels than expected from the corresponding IgM isotype kinetics. Thus, RF-producing B cells were more sensitive to RTX than B cells of other specificities. A more lasting reduction of RF titer in good responders when compared with less favourably responding patients furthermore indicates that the depletion of RF-expressing cells is more profound and might have direct therapeutic impact. In contrast, the kinetics of anti-CCP antibodies followed, with large inter-individual variation and despite their well-established diagnostic and prognostic roles in RA [37][38][39], just the corresponding isotype concentrations. This finding confirms previous studies [7,23]. RF-expressing B cells may act pleiotropically, but they provide a function with unique consequences for the involvement of multiple antigens as seen in RA [10]: Their receptors can complex Igs of different specificity, together with any bound autoantigen or foreign antigen. Processing of these complexes and antigen presentation to their T-cell counterparts may lead to affinity-matured, class-switched MemB for a variety of specificities [12]. It appears likely, following this line of argumentation, that the profound reductions of GC-derived memory and (in parallel) of RF-producing B cells 6, 9, and 12 months after RTX were therapeutically relevant and linked processes.
Murine models for studies on self-reactive B cells localised the failure of tolerance mechanisms in the bone marrow [40]. Although the secondary lymphoid organs examined in an animal model may differ importantly from lymphoid neogenesis observed in RA synovitis [9,33], GC-forming B cells appear essential for mutual B-cell and T-cell activation and for proinflammatory cytokine response in human synovitis [41]. T-cell-dependent affinity maturation of RF-producing B cells is anatomically linked to the GCs of secondary immune organs [42][43][44], but non-switched MemB, another source of RF, do not require lymphoid aggregates [9,19]. It appears notable, in this context and with regard to our data, to recall the higher specificity of IgA RF and its impact on the RA disease course in comparison with non-switched RF [45,46]. As close antigen and major histocompatibility complex (MHC)-restricted B cell-T cell interaction is responsible for shaping of the receptor repertoire by somatic hypermutation of the B-cell receptor, this process is obviously of critical importance for the induction of autoimmune B cells. It was shown in a recent publication that somatic hypermutation in RA is predominantly operative in CD19+IgD- class-switched B cells and that the frequency of such mutated B cells is substantially modified by RTX treatment [47]. Thus, therapeutic interruption even of abortive GCs appears promising.
PB B cells started to repopulate in all of our patients during the first 12 months after RTX. It is currently unknown whether the reoccurrence of somatically mutated plasmablasts in the PB in the early repletion phase is a recirculation phenomenon from their survival niches or the result of rapid de novo differentiation. In accordance with the literature [4][5][6], it was neither time of reoccurrence nor the numbers of total B cells or quantitatively dominant 'naïve' B cells in PB in our study that were correlated with best achieved or time-matched clinical response. The same was true for CD27+IgD+ and CD27-IgD- B cells, which thereby appeared to be irrelevant for clinical monitoring.
In contrast, robust data on the course of switched MemB in relation to disease indicate that their monitoring might find a place in clinical application as an alternative or addition to quantitative RF analyses [12]. These statistically significant findings are essentially in agreement with reports of a trend toward shorter time between B-cell repopulation and clinical relapse in patients reconstituting their PB B-cell compartment with a higher proportion of switched MemB [4,6]. Finally, accumulation of switched MemB in the synovium, the end organ of immune-mediated processes, underscores the relevance of this B-cell subset in RA. Taken together, these data indicate that synovial lymphoid structure formation depends on trafficking of circulating rather than on locally expanding B cells, thereby allowing B-cell monitoring after RTX not only in the synovium, but also in the blood.
When median DAS28 values increased continuously from 2.1 to 2.8 in good responders and from 3.3 to 4.5 in the other patients, the approximate cutoff values of switched MemB in the blood between the two groups of our study were two cells per microlitre after 6 months, four cells per microlitre after 9 months, and six cells per microlitre 12 months after RTX. In analyses of all available data irrespective of the time after RTX, the upper cutoff value for patients in DAS28 remission (< 2.6) was also two CD27+IgD- B cells per microlitre. Persistence of the CD27+IgD- B-cell subset above this threshold indicated resistance to RTX, while exceeding the threshold after good response was associated with disease relapse. We consider it unlikely that lower doses of concomitant steroids in good responders were responsible for this observation. Therefore, it would appear to be worthwhile to prospectively test the clinical outcome of repetitive B-cell depletion on the basis of the proposed monitoring procedure in comparison with other strategies.
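The proposed monitoring rule can be read off these cutoffs directly; the sketch below restates them in code form. It is a restatement of the observed thresholds, not a validated clinical algorithm, and all names are ours.

# Approximate switched-MemB cutoffs (cells/uL) separating good responders
# from the rest at each follow-up, as reported above.
SWITCHED_MEMB_CUTOFF = {6: 2, 9: 4, 12: 6}

def above_relapse_threshold(month: int, switched_memb_per_ul: float) -> bool:
    # Exceeding the time-matched cutoff (or the remission cutoff of
    # 2 cells/uL at other time points) was associated with disease relapse.
    return switched_memb_per_ul > SWITCHED_MEMB_CUTOFF.get(month, 2)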
Conclusions
Our data indicate that B-cell maturation in the PB after RTX does not just display lymphatic ontogeny [4,15] but provides clinically relevant data when focussing on switched MemB.
Clarifying whether the disease-related cell phenomena are sufficiently antecedent to a clinical flare to be useful in clinical practice requires ongoing research in a prospective setting.
Competing interests
Roche Pharmaceuticals Switzerland (Basel, Switzerland) supported this study with an unrestricted research grant of 30,000 Swiss Francs. Roche did not have any influence on the collection, evaluation, or interpretation of the data or on the preparation of this manuscript.
Authors' contributions
BM supervised data collection, performed statistical analyses, and wrote the manuscript and takes the collective responsibility for the integrity of the data and conclusions. CAD supervised and analysed the cellular and serologic tests. SE and EV provided the synovial samples. IV performed immunohistochemistry. MF contributed to data analysis and manuscript preparation. DA, H-RZ, and PMV contributed to patient recruitment and clinical data collection. All authors were actively involved in the drafting of the manuscript and in critical revision. All authors read and approved the final manuscript.
Additional files
The following Additional files are available online:
Additional data file 1
Schematic overview of B cell developmental stages (names in boxes) and corresponding surface markers (in boxes with dashed lines) that were used in this study. Immunoglobulin class switch of B cells is functionally linked with antigen-dependent, MHC-restricted affinity maturation, and anatomically related to germinal centre (GC) formation. CD27+IgD- class-switched B cells can be either post-germinal-centre memory B cells or plasmablasts, which are directed to further plasma cell development. We show in this study that the kinetics of class-switched B cells is associated with the course of RA disease activity. MHC-II: class 2 major histocompatibility complex, TCR: T cell receptor, CD27 and CD70: TNF-α family members and co-stimulatory molecules on B cells and T cells. See http://www.biomedcentral.com/content/supplementary/ar2686-S1.jpeg
Additional data file 2
Histological preparations from RA synovitis in the early B cell reconstitution phase after RTX. Haematoxylin-eosin staining and immunohistochemistry for synovial B cells during the early peripheral blood B cell repletion phase. Flow cytometric analyses for IgD and CD27 expression from the same sample are depicted as patient 3 in Figure 2A. The infiltrating CD20+ B cells form a few small lymphoid aggregates with large CD79a+CD20- plasma cells (arrows). This exemplary staining was performed in a synovial sample from a flaring knee joint using CD20 (clone L26) and CD79a (clone JCB117) antibodies from Dako, Glostrup, Denmark. Immunohistochemistry slides were obtained using a three-step streptavidin-biotin technique, with new fuchsin as the chromogen. | 2016-05-12T22:15:10.714Z | 2009-05-06T00:00:00.000 | {
"year": 2009,
"sha1": "81f3bc961bbb8e5ad12d9a0f32b936f137ebbbaf",
"oa_license": "CCBY",
"oa_url": "https://arthritis-research.biomedcentral.com/track/pdf/10.1186/ar2686",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "81f3bc961bbb8e5ad12d9a0f32b936f137ebbbaf",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15150389 | pes2o/s2orc | v3-fos-license | Biomedical Applications of Magnetically Functionalized Organic/Inorganic Hybrid Nanofibers
Nanofibers are one-dimensional nanomaterial in fiber form with diameter less than 1 µm and an aspect ratio (length/diameter) larger than 100:1. Among the different types of nanoparticle-loaded nanofiber systems, nanofibers loaded with magnetic nanoparticles have gained much attention from biomedical scientists due to a synergistic effect obtained from the unique properties of both the nanofibers and magnetic nanoparticles. These magnetic nanoparticle-encapsulated or -embedded nanofiber systems can be used not only for imaging purposes but also for therapy. In this review, we focused on recent advances in nanofibers loaded with magnetic nanoparticles, their biomedical applications, and future trends in the application of these nanofibers.
Nanofibers
Nanofibers are fibers with diameters less than 1000 nm [1]. Varying in length from tens of nanometers to a few microns, their fiber features create surface topographies relevant to various applications in the nano- and biotechnology fields. Nanofibers tailored from natural and synthetic polymers have gained much interest because they are easy to synthesize and because their structural, functional, and compositional properties are tunable [2][3][4][5]. They can be produced by interfacial polymerization, electrospinning (ES), and electrostatic spinning. Carbon nanofibers are graphitized fibers synthesized under catalytic conditions. Among the possible techniques used to prepare nanofibers, such as phase separation, template synthesis, self-assembly, and drawing, ES is one of the most efficient, simple, and versatile methods owing to its relatively simple and cost-effective setup [6][7][8].
In the ES process, polymer nanofibers are produced by applying a strong electric field between a grounded target and the polymer solution. The polymer solution is fed with the use of a syringe pump through a metallic needle (spinneret) at a constant and controllable rate. The collector plate acts as the counter electrode on which the fibers are collected as a non-woven mesh or membrane. The process conditions and properties of the polymer solution influence the diameter of polymer nanofibers, which range from 10 to 1000 nm. The key advantage of producing polymer nanofibers with extremely small diameters is their large surface-to-mass ratio, high porosity, and superior mechanical performance [4,9]. Moreover, the functionality of the polymer nanofibers can be affected by the polymer molecules located at the surface of the polymer nanofibers, thus enabling customization of the nanofiber properties by tailoring the nanofiber surface compositions and morphologies. This ability to customize has led to diverse applications of nanofibers in wound healing, biosensors, drug delivery systems, medical implants, tissue engineering, dental materials [10][11][12], filtration membranes, military protective clothing, and other industrial applications [7,13].
Spinning polymer blends to create composite nanofibers allows for further tunability of nanofibers that can fulfill specific industrial requirements in terms of their material properties, thereby increasing their potential applications. Most recently, investigations have targeted the inclusion of other nanoscale structures within the nanofibers to produce structures with added functionality. For example, silver nanoparticles have been included in synthetic polymer nanofibers to produce a highly antimicrobial material [14]. In another study, emulsion droplets were successfully included in nanofibers. Self-assembled structures such as liposomes, micelles, and micro-emulsions have gained much attention in recent years [15][16][17]. They have been used as carrier systems for the delivery of antimicrobial agents, drugs, flavors, dyes, antioxidants, enzymes, and other functional compounds [18][19][20]. A combination of nanofibers and self-assembled structures, such as micelles, can thus create a novel delivery system with superior properties and many potential applications.
Owing to the advantageous features of nanofibers and magnetic nanoparticles (MNPs), many researchers have incorporated MNPs into biodegradable nanofibers to produce paramagnetic nanofiber scaffolds. To prepare these composite nanofibers, one of the most commonly used techniques is the mixing of dry inorganic powder with a polymeric solution followed by ES, although the nanocomposites formed are not stable and tend to agglomerate. To overcome this problem, various surface treatments have been used, including silanization, polymer coating, and grafting. In addition to this type of surface coating, dispersion of Fe3O4 nanoparticles in the nanofibers has been attempted in both water and organic solvents, as well as in sodium citrate and oleic acid, although complete dispersion of Fe3O4 nanoparticles was not achieved due to these incompatible interfaces [21].
Hybrid Nanofiber System
The addition of inorganic components to the polymer system facilitates the preparation of nanofibers with specific functionalities. A recent patent described an approach for preparing hybrid nanofibers by adding nanoparticles into the polymers [22]. In this patent, nanoparticle dispersion was easily carried out, and a porous structure was also created using salt dissolution. Antibacterial nanofibers utilizing Ag as an antibacterial agent have been prepared using ES [23]. To prepare the hybrid nanofibers by ES, the chemical precursor of silver NPs, AgNO3, was mixed with cellulose acetate (CA) or polyacrylonitrile (PAN) solution. The ES process was then followed by photo-reduction to form silver NPs within the resulting nanofibers. Chemical precursors of other metals were also used to obtain other types of hybrid nanofibers. For example, the chemical precursor of Pd was used to make poly(acrylonitrile-co-acrylic acid) (PAN-co-PAA)/Pd by ES [24]. In a separate study, gold NPs (AuNPs) were directly mixed with polymers prior to the ES process and subsequently electrospun (E-spun) to obtain hybrid nanofibers of poly(vinyl pyrrolidone) (PVP)/Au [25]. Other than with these metals, nanoparticles have also been prepared with functional metal oxides [26]. Table 1 summarizes the different inorganic components used for the preparation of hybrid nanofibers. Yang et al. reported a new method for preparing aligned fibrous arrays of composite magnetic nanofibers by ES [31]. As shown in Figure 1, nanofibrous arrays using polylactic acid (PLA) fibers can be applied in scaffolds without any structural changes. Also, the fiber morphologies remain intact after loading the MNPs. Moreover, their functionality can be controlled by adding selected types of NPs. Recent studies have demonstrated the possibility of obtaining composite nanofibers by ES of ceramics and biopolymers. Hydroxyapatite (HA), a major component of bone, is a widely used bioceramic. Hybrid E-spun nanofibers containing HA as a bone regeneration implant material revealed high mechanical strength and good biocompatibility [28]. Scanning electron microscope (SEM) observations revealed that the incorporation of HA did not change the desired morphology; the final structure consisted of smooth, interconnected nanofibers with high volume. A nanofibrous PLA/HA composite prepared by ES had good mechanical strength with fibers on the nanometer scale. This composite is promising as a temporary substrate for bone tissue regeneration. Inorganic compound-loaded nanofibers have also been prepared by combining ES with the sol-gel process, using precursors of common oxides such as SiO2, TiO2, and Al2O3 [33].
Polymer nanofibers loaded with Au, Ag, Pt, or Pd nanoparticles can be produced by ES with the addition of metal salt solutions as precursors. The diameters of the nanoparticles were in the range of 5 to 15 nm. These nanofibers have also been reported to have highly effective catalytic properties.
Methods to Prepare MNPs (Magnetic Nanoparticles)
MNPs are prepared via basic inorganic chemistry methods. Specifically, MNPs are prepared with magnetite, maghemite or iron alloys as the core magnetic material. MNPs can be prepared either by a single-step or a multi-step procedure, each of which has its advantages and disadvantages. There is no universal technique available for MNP synthesis. Commonly used methods for MNP synthesis will be briefly discussed in the following section.
Precipitation
One simple chemical method available for the preparation of MNPs is the precipitation method. It was developed to use aqueous solutions of iron (II or III) ions. Precipitation of MNPs can be accomplished using one of two methods: wet precipitation or co-precipitation. The wet precipitation method was developed first for MNP preparation [34]. In the co-precipitation method, used for the preparation of iron oxide particles (Fe3O4), two stoichiometric solutions containing Fe2+ and Fe3+ ions are mixed with a base [35]. This co-precipitation method results in large nanoparticle sizes that are dependent on the pH of the solution. To synthesize MNPs successfully, the oxidation of the iron (II) precursor should be avoided because it leads to the conversion of Fe3O4 (magnetite) to Fe2O3 (maghemite), which might impair advantageous properties of Fe3O4 (magnetite) in its application as a contrast agent in magnetic resonance imaging (MRI). It has been shown that in the spinel structure of magnetite, cationic vacancies are in the octahedral positions, which result in a lower net spontaneous magnetization [36]. In particular, Basti et al. found that Fe3O4 magnetite provided stronger proton relaxivities in MRI than Fe2O3 maghemite [37]. As the process involves a large quantity of water, however, it is very difficult to scale up the process [38]. One widely used method to effectively prevent oxidation is by bubbling N2, which leads to a reduction of the particle sizes. However, it is not easy to perform both precipitation and the addition of protective coating materials to the magnetic particles because maintaining the pH is laborious.
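The 1:2 stoichiometry of the Fe2+/Fe3+ mixture follows from the standard overall co-precipitation reaction in alkaline medium (a textbook equation, not one quoted from the cited sources):

$$\mathrm{Fe^{2+} + 2\,Fe^{3+} + 8\,OH^{-} \longrightarrow Fe_{3}O_{4}\downarrow + 4\,H_{2}O}$$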
Reverse Micelle Formation
Micelle formation is a classic process in surfactant chemistry [39]. Normal micelles are usually synthesized in aqueous medium, whereas reverse micelles are formed in a mixture of a non-polar solvent and water. To produce iron oxide-based magnetic particles, the inorganic precursor of iron (III) chloride dissolved in aqueous medium is slowly added to the oily medium, followed by the addition of pH regulators [30,[40][41][42][43]. The advantage of the reverse micelle method is that it yields organic-coated MNPs with controlled particle size. It is also possible to obtain inorganic-coated MNPs using reverse micelles [29,[44][45][46][47][48]. The disadvantages of this method are that the remaining monomer hinders the coating of MNPs, that it is difficult to scale up the process due to the large amounts of organic solvent required, and that it is not easy to prepare particles in the range of 20 to 500 nm because particle size depends on the size of the micelles [49][50][51][52][53][54].
Thermal Decomposition
Thermal decomposition is a popular method used in industry to synthesize MNPs because it does not use organic solvents [55]. The use of this method has led to advances in the preparation of metallic nanocrystals and semiconductors. Ferric and ferrous fatty acid complexes are widely used precursors of iron oxide super-paramagnetic particles because they are cheap, less toxic, and easier to scale up for mass production [56]. However, the disadvantage of this method is that it is very difficult to control the particle size.
Liquid Phase Reduction
Strong reducing agents, such as NaBH4 and LiAlH4, are used to prepare MNPs through the reduction of magnetic or non-magnetic metal oxides to magnetic metal oxides. NaBH4 is the most commonly used reductant because it is soluble in both methanol and water [57][58][59][60]. The advantages of hybrid MNPs produced by this method are that they are very active, even under mild conditions, and can penetrate the polymer coating; however, they are difficult to handle due to their sensitivity to moisture.
Preparation of MNP-Functionalized Nanofibers
The development of MNP-functionalized nanofibers has attracted interest due to their potential use in scientific and industrial applications. The most widely used method for the preparation of MNP-loaded nanofibers is ES.
The first documented use of electro-hydrodynamics to modify the shape of a liquid meniscus under the influence of an electric field was reported by William Gilbert in the late 16th century [61]. In particular, he noticed that, when a suitably electrically charged piece of amber was brought near a droplet of water, it formed a cone shape and small water droplets were ejected from the tip of the cone. This was, in fact, the first recorded observation of electrospraying. The process of electrospinning (ES), developed by Anton Formhals in the 1930s and 1940s [62][63][64], can be viewed as a special case of electrospraying. Larsen et al. [65] were the first to combine electrospinning with sol-gel methods to design nanofibers from inorganic oxides and hybrid materials. ES is a very simple method to produce nanofibers with diameters ranging from 3 nm to 10 μm [66,67]. The technique mainly relies on the electrostatic repulsion of surface charges on the charged polymer solution and on other variables, such as the solution flow rate, solution concentration, applied voltage magnitude, and the distance between the needle and the collector [68,69]. As shown in Figure 2, the ES setup involves a high-voltage direct current (DC) supply, a syringe pump, and a grounded collector. The basic ES technique involves the application of voltage on the polymeric droplet, thereby charging the droplet, followed by Taylor cone formation of a charged fiber jet and accumulation on the grounded collector. Synthetic or natural polymers are widely used in making electrospun nanofibers [70], as well as combinations of synthetic and natural polymers, such as alginate/chitosan composite fibers [71]. Luong-Van et al. [72] prepared MNP-loaded poly(ε-caprolactone) (PCL) and MNP-functionalized poly(lactic-co-glycolide) (PLGA) nanofibers using ES. The size and morphology of magnetically functionalized E-spun nanofibers can be controlled by changing the polymer concentration. Singh et al. reported the development of magnetic nanofibrous scaffolds composed of PCL and MNPs for bone regeneration [73]. In this study, MNPs at 5 to 20 wt % were dispersed in PCL solutions and subsequently E-spun into nonwoven nanofibrous webs. The fiber diameter was shown to depend on the amount of MNPs added, which in turn changed the electrical conductivity and viscosity of the solution. Other studies [74][75][76][77] have also used synthetic polymers like poly(L-lactic acid) (PLLA) and polyurethane to make electrospun nanofibers. Furthermore, Fan et al. [78] prepared PAN/Fe2O3 nano-composite fibers by suspending the Fe2O3 nanoparticles in PAN/DMF solution, followed by ES. The addition of MNPs to the precursor solution for the preparation of magnetic nanofibers can be beneficial because the spatial distribution of MNPs can be confined within the range of 200-500 nm in a one-dimensional structure (without stacking of the nanoparticles in the radial direction) [79]; however, size deformation of the nanofibers may occur due to the change of viscosity resulting from the addition of the MNPs to the precursor solution. Multi-functional polystyrene-based nanofibers with embedded MNPs have also been obtained in a single-step ES process [80].
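Because so many process variables govern fiber diameter, it can help to treat an ES run as a structured record. The sketch below (Python) groups the variables named in this section; all field names and example values are ours, for illustration only.

from dataclasses import dataclass

@dataclass
class ElectrospinningRun:
    voltage_kv: float           # applied DC voltage
    flow_rate_ml_per_h: float   # syringe-pump feed rate
    tip_to_collector_cm: float  # needle-to-collector distance
    polymer_wt_pct: float       # polymer concentration in the solution
    mnp_wt_pct: float           # magnetic-nanoparticle loading

# Example in the spirit of the PCL/MNP study, which used MNP loadings
# of 5 to 20 wt %; the remaining values are placeholders.
run = ElectrospinningRun(voltage_kv=15.0, flow_rate_ml_per_h=1.0,
                         tip_to_collector_cm=15.0, polymer_wt_pct=10.0,
                         mnp_wt_pct=10.0)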
Biomedical Applications of MNP-Functionalized Nanofibers
Nanofibers functionalized with MNPs have gained attention because of their potential applications (Table 2), including the use in sensors, tissue regeneration scaffolds, and drug delivery systems [81,82]. Many kinds of magnetic nanofibers have been synthesized, although they have not been applied in many practical applications. In the following section, several applications of nanofibers loaded with MNPs are discussed.
Scaffold for Bone Regeneration
Bone regeneration by tissue engineering is very important for bone defects resulting from tumor resection, trauma, and skeletal abnormalities. Even though many kinds of scaffolds used for tissue engineering for bone repair have been investigated [93,94], the development of a successful scaffolding material is needed to satisfy clinical requirements.
Using the ES technique, Meng et al. prepared a novel nanofibrous scaffold composed of maghemite super-paramagnetic MNPs, hydroxyapatite, and PLA [32]. The nanofibrous pellet was implanted into the lumbar transverse defect of a white rabbit. The rabbits were raised in rabbit cages fixed with permanent magnets to provide a static magnetic field after surgery. The observed enhancement of tissue regeneration in the lumbar defect when the magnetic field was applied pointed to a novel strategy to improve bone tissue regeneration based on MNPs-functionalized nanofibers.
In another study of MNP-loaded PCL scaffolds, the resulting nanofiber webs were more hydrophilic and showed improved mechanical properties owing to the addition of MNPs to the PCL scaffold. The addition of MNPs gave the nanofibrous scaffolds magnetic properties typical of weakly ferromagnetic or super-paramagnetic materials, as well as increased hydrophilicity, accelerated scaffold degradation, and apatite-forming ability. When osteoblast cells were cultured on the MNP-loaded nanofibers versus pure PCL nanofibers, initial cell adhesion and penetration increased. Furthermore, rats implanted with MNP-loaded PCL nanofibers showed significantly better bone regeneration with minimal adverse reactions. Singh et al. [73] incorporated MNPs in PCL and studied adhesion, spreading, and penetration of mesenchymal stem cells (MSCs) (Figure 3), which revealed enhanced cell penetration depth with higher MNP content.
Another study [85] reported MNP-functionalized PLLA using trifluoroethanol (TFE) as a cosolvent. The composite nanofibers formed by ES showed paramagnetic properties with minimum cytotoxicity and enhanced cell attachment.
Hydroxyapatite (HA), used extensively for bone regeneration, has also been applied as a scaffold. In one study [84], MNPs were loaded into the pores of HA to make a magnetic biomimetic scaffold for bone repair. This magnetic HA scaffold had good cell adhesion, differentiation, and proliferation ability.
Cancer Therapy
Hyperthermia-based cancer therapy using MNPs has recently gained wide interest; however, the application of free MNPs has several limitations due to low solubility, poor cancer targeting, and leakage of MNPs from the tumor location. Therefore, a current strategy involves MNP-loaded electrospun nanofibers for localized hyperthermia-based tumor treatment.
In a recent study, 50-nm iron oxide nanoparticles (IONPs) were loaded into polystyrene (PS) electrospun nanofibers to allow for repeated heating by applying an alternating magnetic field (AMF) with minimum IONP leakage. IONP-loaded PS nanofibers were made by ES after dispersing IONPs in a PS solution containing a mixture of tetrahydrofuran and dimethylformamide (1:3 volume ratio), forming uniform nanoscale electrospun fibers. When an AMF was applied, most of the human SKOV-3 ovarian cancer cells attached to the IONP-loaded PS nanofiber mats were killed as a result of the cancer hyperthermia effect [88].
In another study [89], electrospun MNP-loaded chitosan nanofibers were prepared by two different methods: a) direct adsorption of MNPs into nanofibers by immersion of MNPs into a chitosan solution, and b) direct immersion of chitosan in an Fe2+/Fe3+ solution and co-precipitation of MNPs by ammonium hydroxide. There were few differences in morphology or in vitro hyperthermic effect on Caco-2 cells between the nanofibers made by the two methods, even though the MNP-loaded chitosan nanofibers were cross-linked with glutaraldehyde during ES. Similarly, in another study [90], chitosan nanofibers cross-linked using iminodiacetic acid (IDA) were prepared using the co-precipitation method to increase the loading of the MNPs into the nanofibers. In addition, Ganesh et al. loaded MNPs into thermoplastic poly(ethylene terephthalate) (PET) nanofibers [91]. MNP-loaded nanofibers have potential to be used for cancer hyperthermia therapy through the application of an AMF.
Tissue Engineering
Tissue engineering is the substitution of any human organ or tissue with artificial functional materials, combining biology, medicine, and engineering. Recently, Preslar et al. manipulated target cells in a scaffold using physical means such as magnetic force [92]. Nanofibers loaded with MNPs and used as scaffolds play an important role in wound healing [93], because the MNPs in the nanofiber scaffold help assemble the tissue and aid tissue formation through magnetic force. Using this technique, magnetic force-based tissue engineering and mechano-transduction methods can control cellular signaling, artificial blood vessel development, and bone tissue formation [85,94]. In addition, this method is an effective way to mimic signal transduction in vivo and convert extracellular mechanical stress into intracellular chemical cues [94]. Furthermore, MNP-loaded nanofiber scaffolds have been applied in cell sheet engineering for skin tissue formation through the creation of multi-layered keratinocytes by magnetic force [95].
Conclusions and Future Prospects
In this review, magnetically functionalized E-spun nanofibers have been discussed from the standpoint of biomedical applications, including scaffolds, regenerative medicine, and cancer therapy. These fibers combine inherent characteristics of standard E-spun nanofibers, such as high porosity, high surface-area-to-volume ratio, flexibility, and ease of fiber production, with unique and advantageous magnetic properties.
In the future, magnetic E-spun nanofibers will likely be used as functionalized scaffolds in tissue engineering. MNP-loaded nanofiber scaffolds support in vitro osteogenesis, are tissue compatible, and can regenerate bone in vivo owing to their increased hydrophilicity, accelerated degradation, apatite-forming ability, and mechanical properties enhanced by magnetic force, suggesting that they can serve as a new class of bone-regenerative materials. In addition, electrospun nanofiber mats with a conformal coating on their surface may be used as an ideal wound-dressing material because they are antibacterial, nontoxic, non-antigenic, permeable for gaseous exchange, resistant to shearing forces (due to their elasticity), and highly water-absorbent. However, the clinical use of highly engineered electrospun nanofiber mats tailored for wound healing requires further study. Furthermore, nanofiber systems may in the future offer an alternative approach to gastro-retentive drug delivery systems for enhanced bioavailability and controlled release of drugs, because they provide prolonged contact time with the gastric mucosa, controlled release of the drug, and good stability. | 2016-03-22T00:56:01.885Z | 2015-06-01T00:00:00.000 | {
"year": 2015,
"sha1": "3ee281eb81d54777a21aad18778b508bbd386333",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/16/6/13661/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3ee281eb81d54777a21aad18778b508bbd386333",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
Guest workers and development–security conflict: Managing labour migration at the Sino-Vietnamese border
This article investigates the increasing development–security conflict in China’s immigration management through the case of a policy trial regularizing Vietnamese labour migration in two Guangxi border cities. China’s border regions host low-income immigrant labourers from neighbouring nations. In the 2010s, China launched a series of policy initiatives to regulate temporary and irregular migrant flows. Based on fieldwork and policy research, this study analyses the development and early implementation of this trial, with a focus on state perspectives. It shows how state actors mobilize migrant temporariness and other policy tools within a negotiation process that aims to resolve tensions between developmental policy aims for transnational economic integration and a drive towards securitizing cross-border mobility. I conclude that state actors fail to reach a balance between the conflicting development and security concerns. I also argue that China’s current risk-averse policy environment makes the development–security policy conflict in its immigration management more difficult to resolve. My findings contribute to our understanding of contemporary Chinese policymaking, including immigration policymaking, as well as to the literature on the development–security nexus in temporary labour management schemes.
As Vietnamese workers gather at an informal roadside restaurant, Wang, a human resources official in this border city in south-west China, stops by for an impromptu inspection. Wang disapproves of the scene, claiming that it poses a public safety risk, but does not take action against the informal restaurant. He knows that the workers from Vietnam have to spend hours waiting for the approval of a monthly Chinese residence permit allowing them to legally work within the town and adjacent economic development zones. Monthly permits are a feature of the cross-border labour migrant regularization trial in this area. 1 The trial, launched in 2017, allows two Guangxi border cities to welcome migrants who previously were irregular migrant workers. These migrants alleviate the labour shortage in the area's large sugar cane processing sector and in manufacturing plants.
The tension between commercial and security interests, previously less prominent at the Guangxi border with Vietnam, has intensified. On the one hand, the regularization trial, highly anticipated by officials like Wang, 'fits with central government priorities' 2 such as deeper integration of China with Southeast Asian economies under the China-led Belt and Road Initiative. On the other hand, central authorities remain ambivalent about the entry of ordinary foreign labour into the Chinese labour markets at a time of growing state concern with irregular migration.
Foreign labour migrant management is a relatively new issue for the Chinese state, since domestic low-income labour has largely catered to China's developmental needs. Following the rapid increase of foreign migration after China's accession to the World Trade Organization in 2001, the state has focused on attracting highly educated professional talent considered beneficial to China's economic transition; China's immigration framework does not permit most forms of low-income labour migration. However, as China's working-age population shrinks, labour market demand for less-educated foreign nationals to fill niche markets and local labour shortages has emerged, from Japanese call centre workers and Filipino domestic workers to Southeast Asian agricultural workers.
China now faces a dilemma in the management of labour migration: how to increase control over incoming temporary labour migration, while maintaining a flexible, low-cost source of labour? 3 This study asks the question of how Chinese state actors resolve conflicting developmental and security concerns in their management of temporary labour migration. Specifically, to what extent do policy tools such as legal limits on duration and location of stay allow them to reconcile these tensions? The study investigates these questions through the case of special economic zones (SEZs) along Guangxi Province's border with Vietnam. China's emerging policy response speaks to the wider literature on temporary labour migration policy design and implementation.
There has been a distinct developmentalist bias in China's reform-era immigration regime, with the state paying relatively little attention to immigration security. This started to change following the growth of immigration and the well-documented politicization of African trader communities in Guangzhou, which led to more restrictive local immigration control. 4 The 2012 Exit and Entry Administration Law, China's main immigration management law, reflected growing interest in immigration control, as seen in the inclusion of sections on national security and irregular entry, residence, and employment. 5 The National Immigration Administration, China's first national-level immigration agency, was established in 2018, and the Administration has prioritized strengthening border control and centralizing the management of borders. 6 This increased central state interest in managing international mobility has reached Chinese border areas, which have experienced long-standing cross-border labour mobility, much of it short-distance and circular. Prior to the increase in cross-border labour migration in the south-west over the last decade, cross-border unregistered marriage migration was the primary target of immigration control. 7 The arrival of economic integration strategies such as the Belt and Road Initiative in the borderlands shows the tensions between these top-down development plans and local mobility practices, where an increase in investment accompanied by added control can interrupt existing cross-border social and economic ties. 8 This article situates the regularization trial in the context of China's ongoing state immigration management reforms. As with temporary labour programmes in other parts of the world, the Guangxi trial was developed in response to increasing security concerns around irregular migration. Like those schemes, the trial shows the tensions between commodifying labour and increasing limits on cross-border mobility. The dynamics have changed circular border mobility patterns, leading to hiring problems for employers and stricter bifurcation between regular and irregular labour flows. I show how these unintended outcomes of the trial are aggravated by national security authorities' use of short-term residence permits to signal and maintain control over a newly visible and controversial migration flow. The case of the Guangxi trial demonstrates that, in the context of political negotiation and conflicting policy goals, security-oriented actors' use of policy tools such as temporariness severely impedes developmental goals.
I argue that the development-security policy conflict is more difficult to resolve in China's risk-averse policy environment, which favours security-oriented immigration measures. My findings contribute to the literature on Chinese migration (including border migration) and on Chinese policy implementation in the Xi Jinping era, as well as to the literature on the development-security nexus in recent temporary labour migration programmes: while researchers have documented the negative effects of increased securitization on migrant rights and circular movement, limits on migrants' duration of stay are generally not seen as hurting the developmental aims of receiving countries.
In the following, I review relevant labour migration research before introducing the Guangxi case. I then use policy and interview data to analyse development-security dynamics during the planning of the regularization trial and its first years of implementation. Finally, I conclude with a discussion of the implications of these findings for the fields of temporary labour management and Chinese immigration border policy.
Development and security in state responses to labour migration
States around the world consider foreign labour migration management to be a balancing act between developmental and security concerns. While including foreign migrants in the lower tiers of the labour market is associated with social and political costs, economic incentives for these schemes remain strong due to factors such as demographic change and labour market segmentation, leading states to balance employer interests against opposing actors. Giving migrants temporary or 'time-delimited' migration status has been a major 'tool' for nation states to control labour migrant entry and settlement. 9 Early post-war temporary low-income labour migration or 'guest worker' programmes primarily focused on supporting businesses' access to low-cost foreign labour. These programmes are generally considered to have 'failed' at keeping migrants temporary, leading to unintended large-scale migrant settlement or increased irregular migration flows, with research documenting how policymakers failed to grasp the complex social nature of migrant behaviour. 10 To prevent such outcomes, a new generation of temporary labour migration programmes, starting from the 1990s, has generally been smaller in scale and scope, with more state involvement. These programmes combine the search for economic benefits with stricter conditions attached to residence and tend to have the dual aim of alleviating labour shortages and reducing irregular migration. 11 This new type of temporary labour scheme, also called 'circular' when it includes policies on rotation and repeated movement, has been presented by policymakers as an optimal or 'win-win' solution to the tension between development and security: such schemes seem to reconcile the interests of actors who want to control or limit migrant settlement with those of employers seeking flexible labour. Though research has mainly focused on how temporary labour migration programmes have made migration management in Western Europe and settler states such as Canada and Australia more restrictive, such programmes have also become widespread across Asia, with temporariness of contracts and stay, usually in the range of several years, as their key features. 12 However, research into these programmes has found that, as with earlier schemes, considerable gaps between intention and reality remain. Firstly, high expectations of control require temporary migration to be increasingly 'securitized'. 13 States can accomplish this by making use of non-state actors such as employers and brokers to further monitor migrant mobility, or by embedding temporary labour programmes in special legal regimes within economic zones, thereby adding another layer of migrant selection and further limiting the risk of unexpected sociopolitical impact. 14 Despite the considerable investment this requires, there is little evidence that these programmes reduce irregular movements, while limiting the duration of legal stay tends to decrease migrant circularity compared to that in areas of free cross-border movement. 15 More convincing is the large body of evidence documenting a trade-off between the level of restrictions and the protection of migrant rights, with workers in highly securitized programmes more vulnerable to exploitation. 16 Secondly, the security-development nexus is affected by the politics surrounding temporary labour migration programmes.
The tension between admitting foreign labour migrants and the aim of fully controlling their movement can mean that state actors responsible for temporary labour migration control tend to be confronted with 'often incompatible goals'. 17 Changing national security priorities, administrative rivalries or conflict, and public opinion can exacerbate such tensions, leading to the variety of policy designs and outcomes that have marked these schemes in recent decades. Depending on local political circumstances, states choose to restrict temporary labour migration to migrants from particular ethnic or cultural groups or from countries that are considered a lower security risk, to particular economic sectors, or states tighten oversight of migrant return following any controversy. 18 Compared to the documentation of the impact of restrictive policy tools on migrant rights and security outcomes, the ways in which relevant state actors use these tools in policy development and implementation, including during policy testing and adjustment, have been relatively understudied. Less attention has been given to the developmental impact of recent temporary labour migration policies, with programmes generally considered able to recruit enough migrant workers to fulfil economic aims. The case of the Guangxi regularization trial, as a new temporary labour migration programme in China that strengthens immigration control in a border region with previously relatively free circular migration, puts state actors' use of these tools and their effects centre stage.
Border mobility and foreign employment in China's south-west
China's rapid and uneven development has been fuelled by large-scale internal migration to coastal regions. As China's domestic labour force becomes older, more mobile and increasingly educated, labour-intensive agriculture and manufacturing sectors in the north-east and south-west border regions of the country face labour shortages. Guangxi, a province-level autonomous region in China's south-west with the third-lowest GDP per capita among Chinese provinces in 2019, is home to the fourth largest domestic outmigration population in China. Employers in the region increasingly rely on seasonal or longer-term labour migrants from bordering countries such as Vietnam. 19 The China-Vietnam border displays the permeability of many Asian borders, which divide people who often share deep cultural and socio-economic ties. 20 The border was heavily militarized in the decade following the 1979 Sino-Vietnamese border war, but border management was relaxed following the normalization of diplomatic relations in 1991, with numerous mountainous border crossings gradually reopening in the following years 'as land mines were removed'. 21 To regularize post-Cold War cross-border mobility in a period when it was difficult to obtain personal passports, administrative border zones were established nationwide. Local residents registered in these areas can apply for border resident passes that allow them to legally cross the border and stay in a neighbouring state's border regions for one to seven days at a time, depending on the locality.
These local exit-entry regulations have provided the Sino-Vietnamese border population with economic advantages during decades of rapid growth. In recent years, registered border crossings by residents living close to land borders made up about a third of all border crossings in and out of China. Economic activity is facilitated by dozens of border checkpoints opened specifically for Vietnamese and Chinese border resident pass holders. While foreign nationals working in other parts of China generally hold work visas linked to an employer and are required to have a university education and relevant work experience, in border areas foreign nationals usually use their border resident passes, which permit some types of economic activity but not long-term employment, or they work irregularly. Easy cross-border mobility and loosely enforced duty-free import quotas have brought about significant cross-border economic integration, with Guangxi regularly generating the highest cross-border trade value of any Chinese border region. In the 2000s, a boom in the Southeast Asian mahogany red wood trade attracted significant migration from other parts of China to the Guangxi borderlands.
The Guangxi-Vietnam border has a relatively small manufacturing sector but is strategically located between the Pearl River Delta, China's manufacturing powerhouse, and Southeast Asia. Since the early 2000s, China's south-western border zones have been included in several national economic strategies aiming to close the development gap between inland and coastal areas. Policies for regional economic integration, such as the Belt and Road Initiative, also include border development as a goal. SEZs provide the regulatory environment for investment from coastal regions to these areas. Despite frequent diplomatic tension between the two nations, policymakers consider the mostly stable and predictable China-Vietnam land border more suited to government-sponsored development plans than Myanmar's conflict-ridden border. 22 By framing policy requests within these central initiatives, local government actors can lobby for a special economic zone or a specific policy, a key feature of China's reform-era policy development. Two cities, Dongxing (pop. 160,000), located on the shores of the Gulf of Tonkin, and Pingxiang (pop. 120,000), which is connected to Vietnam by land and rail, were designated 'key development and opening-up experimental zones' (重点开发开放试验区) in 2012 and 2016, respectively (see Figure 1). 23 Pingxiang was granted further policy innovation privileges in cross-border investment and trade in 2019, when it became part of the Guangxi Free Trade Zone. The cross-border labour regularization policy was pioneered in these two cities.
Since 2010, the number of border area labour migrants working in non-seasonal jobs has sharply increased to accommodate growing demand for manufacturing labour. 24 Migrants increasingly come from areas further away from the border. While not much data are available, a survey completed in the Yunnan border city of Ruili, where cross-border dynamics are similar to those in Guangxi, found that only 28 per cent of a sample of cross-border labour migrants were from borderland areas. Whereas women from nearby areas previously dominated circular labour migration in Guangxi, this 'new pattern of migration' is more diverse. 25 In addition, increasing numbers of foreign labour migrants have migrated beyond border provinces to China's coastal provinces, where their irregular immigration status is more precarious but salaries are higher. One credible source estimates that there are 100,000 irregular Vietnamese labour migrants in China. 26 Meanwhile, local residents' outward labour migration has increased following an economic downturn in the border region due to tightened anti-smuggling law enforcement, combined with rapid improvement in infrastructure. This trend consolidated demand for cross-border migrants in the region's large agricultural sector and emerging manufacturing zones.
The Guangxi regularization trial's recent start makes it well-suited for studying the development and initial implementation of a pioneering policy. The relative absence of border security restrictions at the time of research allowed me, a foreign researcher, to conduct field research among local government actors in this region.
Methodology
This research uses a variety of qualitative data (45 interviews combined with policy analysis) to gain insight into the trial and its complex sociopolitical embedding. First, I conducted 25 interviews with local stakeholders (7 officials, 8 labour brokers, 6 employers, and 4 researchers), which took place in the experimental sites Dongxing and Pingxiang, the border city Chongzuo, and the Guangxi regional capital Nanning in May 2019 and from December 2019 to January 2020. A letter of introduction stating my status as a visiting PhD researcher at a Chinese institution helped me gain access to border city-level employment and border security officials. However, interview access to security officials was limited. I compensated for this limitation by interviewing two immigration policy researchers working within public security research institutions who were familiar with the regularization trial.
In addition to these interviews, I conducted semi-structured short interviews with 20 residents in the Pingxiang area, focusing on their perceptions of Vietnamese labour migration and the ongoing policy trial. For these interviews with residents, which helped me triangulate findings, I sought out people in different urban, semi-urban, and rural parts of the trial area. I also talked with Vietnamese migrants at government service centres and employment sites who spoke Mandarin. However, this analysis focuses on Chinese perspectives on the trial, rather than Vietnamese migrant experience or the make-up of migrant communities. 27 Shortly after my last visit, the policy trial was suspended due to COVID-19 border disruptions.
Finally, I analysed policy documents and official discourse on the trial in government and state media between 2015 and 2020. Official debate, when accessible, is a key source for gaining insight into the political process that plays out during Chinese policy experimentation. Shifting state discourse is also an important aspect of the securitization of immigration, making such discourse relevant to the study of immigration management.
Developing a temporary labour migration programme at the Guangxi border (2015-2017)
Over the last two decades, Guangxi authorities condoned irregular labour migration to improve regional economic development. As a result, reliance on Vietnamese migrants increased in labour-intensive sectors. My experience on the ground was that local populations and officials generally welcomed this new labour force, describing migrants as culturally similar, hardworking, and willing to work in undesired jobs. However, central authorities perceived the increase of non-seasonal labour migration into the border zones, and further into China, as a security risk. Local authorities in Guangxi responded by framing labour migration as a tool to achieve national development goals. They successfully lobbied national authorities to launch a policy trial regularizing these new flows of labour migrants.
A laissez-faire approach to Vietnamese migrants in the border area labour market
As local workers moved away in greater numbers, Vietnamese migrants became a key part of the labour force at the Guangxi border. Migrants mostly work in labour-intensive jobs that Guangxi locals are no longer willing or available to do; locals would only consider doing the same jobs for higher pay in the coastal areas. Border residents are more inclined to go into business as cross-border traders or retailers, economic activities that interviewees described as more desirable due to their relative independence. Locals associate agricultural and factory work with 'cheap' Vietnamese labour migrants willing to do exhausting work. Only one interviewee saw young Vietnamese employed in service jobs as competing with local workers. The cross-border migration flow shows how, even in a relatively underdeveloped part of China, 'social borders' around different types of labour solidify to create a demand for outside labour. 28 This segmented labour market solidified as labour recruitment networks expanded. Building on earlier waves of Vietnamese marriage migrants and business travellers, cross-border kinship networks formed that facilitated seasonal agricultural work and expanded into an intermediary market recruiting workers from neighbouring provinces and other parts of northern Vietnam for hundreds of Guangxi processing and manufacturing companies. In a typical year prior to the start of the regularization trial, about 10,000-15,000 Vietnamese labour migrants worked in the Pingxiang area, with the figure multiplying during the sugar cane harvest; compare this with 'about 20,000-30,000' employable locals. 29 Circular migration was considered the norm for both agricultural and other workers. While many migrants work in China for multiple years and local economic planners count on their labour supply, permanent settlement was not usually considered an end goal, except in the case of marriage.
Policymakers and members of the public cite cultural proximity with Vietnamese migrants as the main rationale for a lack of tension surrounding the labour trial. According to Wang, the official mentioned earlier, there would be more conflicts between locals and migrants if migrants did not share a similar 'Southeast Asian culture'. However, only part of the rural cross-border population can communicate with Vietnamese border residents in a similar dialect. Outside rural areas, daily interaction between migrants and locals is limited. In the last decade, increased demand for migrants in manufacturing plants has increased this divide. As in Yunnan, more workers now work and live at employment sites at the Guangxi border and speak little or no Mandarin. 30 Around 2010, Guangxi's laissez-faire approach to the increase in Vietnamese labour migration came to the attention of regional- and national-level public security authorities and attracted criticism for its 'soft' approach towards irregular migration. 31 Local authorities were held responsible for 'chaotic' labour recruitment, which led to unregulated fees and labour conflicts, and for the increase in Southeast Asian migrants taking up irregular residence in border areas and other parts of China. 32 Public security officials estimated that Guangxi 'led the nation' in irregular migration and that a majority of irregular Vietnamese labour migrants entering the country in Guangxi ended up in Guangdong, creating extra work for public security there. These complaints led to pressure on Guangxi border authorities to control irregular migration, with arrests of irregular labour migrants increasing an average of 20 per cent annually from 2010 onwards. 33 In response, some Guangxi officials started to consider regularization as a solution for controlled labour migration, inspired by Yunnan's Dehong Prefecture, where cross-border labour has been managed through local regulations since 2014. To maintain economic stability and cross-border labour flow without running into continuous conflict with higher-level authorities, they had to break with the previous 'non-policy'. 34
Guangxi's developmentalist framing of Vietnamese labour regularization
In Chinese policymaking, experimental policies are the outcome of negotiations between central policymakers and subnational actors, often in response to a regulatory failure. Experimental policies do not have a fixed timeline, and their impact on future policy varies case by case. 35 In the Guangxi cross-border labour trial, local officials framed their demands for expanding Vietnamese labour mobility in the context of China's national strategy for border development. In their requests, they offered a mix of economic and security-based rationales: SEZs would require growing labour supply; relatively cheap foreign labour could enhance the competitiveness of these traditionally 'left-behind' areas; and regularizing existing migrant labour would address growing border security concerns. Because China's existing immigration laws do not allow for foreign low-income labour, securing central-level approval for labour regularization would be a significant policy innovation.
National-level research delegations to the Dongxing SEZ, at the time the only national-level zone in the region, became aware of local-level interest in securing the regulation of migrant labour. A mini-trial of 10 employers in Dongxing provided 'firsthand experience' for a State Council Development Research Center team to evaluate. 36 This led to the inclusion of a single-line statement in the 2015 State Council strategy for border development allowing 'the employment of foreign nationals in accordance to regulations' in border region SEZs, with the Ministry of Human Resources and Social Security as the responsible government authority. 37 This national-level document was subsequently invoked at every step in the regularization trial's development.
The early phase of the regularization trial focused on its developmental potential. After cross-border labour regularization received central approval, local government actors started to openly discuss the key role that previously irregular Vietnamese workers were playing in areas of their economy, calling for speedy implementation because the 'labour dividend' accruing from cross-border migrants willing to do tiring work for salaries 20-30 per cent lower than the local average might run out in a decade. 38 Demand for such workers will continue to rise, one official with the department of commerce writes, to fulfil the development goals of the SEZs. 39 They also calculated that lower salaries and social insurance payments for foreign workers would allow employers to save RMB 1,454 and RMB 779 a month per worker, respectively, as compared to the cost of hiring a worker from China's eastern provinces or a Guangxi local.
In 2016, a bilateral cooperation mechanism between Guangxi and its four bordering Vietnamese provinces (Quảng Ninh, Lạng Sơn, Cao Bằng, and Hà Giang) became active at regional and city levels. A subsequent 2017 Guangxi regional work plan detailed the regularization trial, ending a period in which regional authorities had remained passive to local-level requests for policy support. The region's commercial authorities, also in charge of SEZ development, were made responsible for overseeing the policy.
The 2017 strategy strikes a balance between economic development and border security. It describes Vietnamese migrants as a 'beneficial complement' to Guangxi's local labour market, who should receive 'maximum convenience', while also requiring local authorities to exercise 'maximum control' over irregular mobility. The plan stipulates that workers are eligible for half-year residence permits, and it requires employers to police migrant employees. If successful, the regularization trial was slated to be scaled up to the entire border region by the end of 2018. 40 In this phase, officials who were interviewed recalled a sense of optimism and described the trial as a step forward in China's evolution to becoming an immigration destination. 41 '2017 was kind of a big year for us as the autonomous region started to make policies', the already-mentioned human resources official Wang told me. 42 The national-level experimental status of the SEZs made it possible to receive various policy benefits, among which the regularization trial was considered the most noteworthy. Following central approval, local and regional leaders 'highly prioritized' it. 43 However, the policy's momentum also meant local migration management would be subject to increased higher-level government surveillance. In what interviewees described as a shift towards a 'subtler' relationship with higher levels of government, local authorities balanced local commercial interests, such as the demand for flexible cross-border agricultural labour, with the expectations of superiors who decided the trial's future. 44
Implementing cross-border labour regularization (2017-2019)
Human resources, exit-entry, and special zone management authorities were the main actors implementing the regularization trial in the border cities. While a large amount of cross-border labour migration has been regularized since 2017, implementation varies within the trial area and irregular migration persists. I show that central public security authorities wanted local authorities to further strengthen control over migrant mobility, rejecting requests for policy relaxation. This emphasis on immigration control destabilizes the existing circular labour migration dynamic, making it harder for employers to hire migrants while paradoxically creating new irregular networks.
Post-trial mixed effects on cross-border labour migrant flows
In 2019, two years after the implementation of the regularization trial, its effects on the ground were mixed. State media and government reports enthusiastically cited examples of coastal businesses that relocated to Guangxi for its affordable Vietnamese labour, but economic development had not been revitalized by the SEZs' advantageous policies. Officials described economic development as 'alright', 'not great', or, at best, in a 'stable' state. 45 Implementation was difficult because national public security authorities refused to issue the half-year work permits that the 2017 plan had announced. Instead, migrants continued to cross the border and apply for a new residence permit on a monthly basis. The planned scale-up of the trial area in 2018 did not materialize, indicating that central authorities considered expansion premature.
For employers, the regularization trial made hiring Vietnamese employees 'legal but harder'. 46 Enforcement efforts focused on bigger employers, such as the sugar cane processing factories, and new companies moving to the SEZs. Workers had to leave their company for several days a month to renew their permit, leaving less time for work, while the monthly cost of renewal (RMB 120) was significant for low-wage workers. The turnover rate was high because each month migrants could choose whether or not to return to the company, or even whether to return to China at all. The strict mobility management of workers resulted in migrants frequently quitting within a month. Companies that had relocated to Guangxi because of the special zone incentives and cheaper labour costs had difficulty training and retaining Vietnamese employees. 47 Despite strengthened management over the Vietnamese working population ('we now know their identity and what they are doing', as one official put it 48 ), work permit enforcement varied throughout the trial zone. According to researchers' estimates, most Vietnamese workers in the Pingxiang City area now have work permits, and authorities claimed that the regularization trial has greatly reduced hiring difficulties. In 2018, 145,000 monthly permits were issued to workers at over 500 companies, although monthly numbers fluctuated considerably. Work permits were less common among Vietnamese working in agriculture and construction outside the Pingxiang urban area.
As a 30-year-old native of adjacent county Longzhou explained: 'We are not Pingxiang. They are a city, and . . . they have these policies. We just smuggle.' 49 Reflecting the leniency of Vietnamese authorities in issuing border resident passes, a migrant woman in Pingxiang interviewed and quoted in a state media report stated that she was from Hanoi, officially not part of the trial area. 50 Regularization rates in Dongxing were much lower, likely due to differing border pass regulations that allow border migrants to stay for three days at a time (versus one day in Pingxiang).
Another impediment to successful implementation was identifying eligible workers. There was confusion over whether workers in agriculture, the sector with the largest labour shortage, qualified for the trial. In May 2019, officials told me that farms with a legal representative could participate in the trial, but by December the trial applied to industrial and service workers only. Central authorities required the trial to be limited to industrial activities in line with economic upgrading goals for the area, and local officials claimed that there had never been Vietnamese workers engaged in agriculture in China. However, intermediaries explained that for these types of work, local employers continued to rely on irregular migrants, or used work permits registered at another type of company.
During the regularization trial, residents and intermediaries noted intensified border management in urban and rural trial areas. More employers were fined for hiring Vietnamese workers without permission, and unregistered migrants were detained, unlike in the past, when police issued warnings to unregistered workers before dropping them off at the border. A new border information system, part of a nationwide upgrade of border equipment, automatically detected overstaying on a border resident pass. 'If you're still working illegally and you get caught, you are put on a blacklist and can't enter China for five years,' explained one intermediary. 51 Controls at checkpoints policing the inland boundary of the border area were also tightened. While it became more important to meet the legal residential requirements of a border resident pass, a monthly work permit, or a passport visa, the enforcement of irregular employment regulations remained uneven. 52 Some migrants were positive about the regularization. A middle-aged migrant from Lạng Sơn who had worked in a wood processing plant in Pingxiang for about 10 years summarized her experience of the change, saying: 'I no longer need to be scared of the police.' 53 Previously, border crossings were often communally organized for safety, but now workers were able to go back home for a holiday or into town for the night. Overall, however, the changes in labour and border management increased the difficulty of border crossings, with a negative impact on migrant flows. As the risks of overstaying on a border resident pass increased, circular workers who previously crossed the border frequently now had to follow permit rules. Intermediaries noted that the most qualified Vietnamese workers had options beyond Guangxi, for instance switching to newly opened factories on the Vietnamese coast (often Chinese-owned). Others tried to stay under the radar altogether by going 'irregular all the way' and staying in China for longer periods, especially if they planned to seek work in other parts of China. The trial deepened an ongoing trend of bifurcation between regular and irregular border migration, with those unable to maintain regular status (previously mostly brides) limited in their mobility and rights while in China.
Securitizing the cross-border labour trial
In the first years of the trial's implementation, the developmental benefits of an increase in cross-border labour were limited. Local actors complained that border security concerns were outweighing economic goals. However, as the trial progressed, central authorities asked for further control measures over migrant mobility. Most notably, the National Immigration Administration had to be convinced that local authorities had sufficient control over irregular recruitment practices before extending the duration of residence permits. National immigration authorities were said to worry about increased regular migration in the border zones leading to more irregular migrants moving toward China's coastal regions: 'Once they are in, they will move throughout the country. Who will be responsible for that?' 54 National-level employment authorities also expressed concerns about guaranteeing minimal interference with local employment and migrant rights.
Addressing central authorities' concerns became a key priority for local government actors, leading to 'constant changes in the rules'. 55 In January 2019, the Pingxiang City government published a new plan to further co-opt labour intermediaries and employers (who had partially persisted in their previous roles in the informal migrant labour ecosystem) into the policy trial, by increasing their responsibility for migrant behaviour and movement in China. Intermediary companies could be given one of four statuses: (A) recommended, (B) regular, (C) warned, or (D) suspended. 56 It became common for both agents and firms to temporarily lose their hiring qualifications due to violations. SEZ authorities developed a smartphone app through which authorities, employers, and intermediaries would be able to track workers. The new plan also featured migrant rights, such as equal pay, more prominently. Because ordinary foreign workers currently have no way to participate in Chinese social insurance, employers continue to save on labour costs. A newly developed commercial insurance for cross-border workers covers compensation and treatment in case of injury for RMB 23 per month, a fraction of social insurance payments (for comparison, payments in Guangxi are equivalent to a quarter of salary costs).
Besides adjusting implementation, some local officials continued to lobby for policy relaxation. An article by two Dongxing officials in an influential Beijing-based party policy journal argued that the regularization trial offered broad lessons for China's approach to labour immigration. The authors criticized the national labour migration regulations as 'seriously outdated' in their focus on highly skilled immigrants and argued that the regulations restricted small businesses from hiring foreigners. 57 Pointing to Japan, Korea, and the EU, the officials called for an overhaul of national foreign employment regulations and simplified procedures for current border area cross-border labour trials.
By the end of 2019, Pingxiang's tightened management of the regularization trial started receiving recognition from regional and national authorities. Delegations from the State Council, National Immigration Administration, and the National Development and Reform Commission visited the trial area. During a December 2019 visit, the Commission praised Pingxiang's human resources department for developing commercial insurance for Vietnamese workers. 58 Guangxi's border management, previously criticized for being soft on irregular migrants, was endorsed by the Guangxi party leadership for its control of irregular migration. 59 In 2019, delegations from Yunnan and Inner Mongolia and an international delegation from Mongolia visited Pingxiang to learn about the trial. The border city of Jingxi was expected to be included in the trial, and several bigger cities along the border, such as Qinzhou and Beihai, also expressed their interest.
Border city officials were hopeful that permit restrictions would be relaxed, and that the regularization trial would be expanded and eventually turned into regular policy. However, security concerns remained. A regional-level official involved in the trial described the situation as one of 'security interests over economic interests' and said that these tensions were unlikely to be resolved quickly. 60 At the local level, central instructions to treat cross-border labour mobility as a security risk sat uneasily with local experience in these areas. The prioritization of border security over local development risked alienating locals and migrants who were used to decades of flexible cross-border mobility. Local economic officials were uncertain about the developmental benefits of the special economic zones and managed 'both upper-level requirements and the demands of the populace' through selective implementation. 61 However, it is only when central security concerns are met that policy space for temporary labour migration can be safeguarded.
Discussion
Though small in scale, by experimenting with temporary labour migration China has joined the ranks of countries that actively recruit foreign migrants for temporary employment in specific, less compensated parts of the labour market. The very existence of the regularization trial showcases central authorities' willingness to innovate in a sensitive policy area. Although China, often defined by its large population, is considered unlikely to relax restrictions on foreign labour migration nationwide any time soon, the trial is an official acknowledgment of foreigners' role in the lower segment of the labour market in parts of the country. However, in the first years of the trial, as different state actors negotiated its terms, they failed to resolve the development-security conflict, resulting in a partial, securitized implementation of the trial at the expense of developmental goals. Placing the Chinese case in a comparative context helps explain this outcome, while illuminating the limits of temporary labour migration policy tools such as migrant temporariness and legal exemption regimes.
Firstly, the Guangxi regularization trial shows how globally prevalent policy tools in managing the tension between developmental and security concerns are also part of the policy repertoire of Chinese state actors, who conservatively adapt them to Guangxi's border context. The trial's 2017 design features a doubly restrictive 'zoning' of the trial area, superimposing the legal exemption regime of the new SEZs on the existing exceptional regulatory context of the borderland area by allowing only Vietnamese border residents working in the special economic zones to participate in the trial. While social unrest has not been significant, the bilateral set-up allowing only Vietnamese nationals from border regions to apply for worker permits is an instance of limiting temporary labour to groups deemed to be a lower security risk. The trial's planned six-month permit length put it at the short end of common time-delimited work permits.
During implementation, the national immigration authorities continued to require monthly renewal of workers' residence permits, unwilling to extend their length of stay to six months. In doing so, the National Immigration Administration, whose mandate includes both development and security-related immigration affairs while remaining part of the public security apparatus, prioritized the goal of reducing irregular migration. Development-oriented state actors, especially at the local level, in turn resorted to security measures to secure the immigration agency's approval. The locally developed 2019 regulations strengthened management over private actors such as employers and intermediaries, while the trial was restricted to industries considered to be in line with economic upgrading plans, rather than those with the most urgent labour needs.
Secondly, confirming earlier findings, these policy restrictions had an impact on migration flows. In the pre-regularization phase, Vietnamese border residents and other migrants who overstayed were able to move back and forth either independently or with help from the irregular intermediary industry, maintaining a relatively high degree of spontaneous circularity. Requiring migrants to renew permits on a monthly basis, however, resulted in an extremely managed form of circular migration. Given international experience of how temporariness interferes with employers' need for labour force stability, it is unsurprising that, as an extreme case of securitized temporary labour migration, the high regularization threshold led to high migrant turnover, dissatisfied employers, and other unintended 'substitution effects', 62 including selective implementation, increased irregular migration, and redirected migration flows to other areas.
Taken together, the first years of the Guangxi trial show that state actors, in their efforts to address conflicting policy aims, advanced a security-oriented approach that negatively impacted developmental outcomes. In the context of the policy trial, control-oriented policy instruments became moves in an ongoing policy negotiation 'game' between the National Immigration Administration and other state actors. 63 Development-oriented actors further securitized the trial, accepting short-term developmental costs, with the aim of a more liberal long-term outcome once the border policy ecosystem was considered sufficiently secure. Reflecting the increase in central oversight of border area development and control within the SEZs, a return to the local state's previous role in facilitating irregular labour was no longer possible. Instead, development-oriented actors had to accept the uncertain long-term impact of extreme migrant temporariness on migration flows.
However, whether central security authorities will allow the length of work permits to be extended depends on uncertain factors in China's wider policymaking context. In terms of immigration issues, these include an increased concern about the security risk of irregular migration at the national level, and the progress of controversial institutional reforms around the military-to-civil transition of China's border guards. Overall, a generally risk-averse policy environment that has resulted in reduced policy innovation and increased centralization, a well-documented trend under the Xi Jinping administration, plays a role. 64 Risk-averse immigration management results in new resources and influence favouring security goals, while liberalizing aspects of the state's immigration agenda are repeatedly stalled or face limited implementation. The Guangxi labour trial, as a legally indeterminate experimental policy, illustrates this trend, highlighting the increased incentives to securitize rather than promote economic development.
Finally, the Guangxi labour trial provides further evidence that border area successes in achieving transnational economic integration invite increased central state scrutiny. In the case of Guangxi cross-border mobility, local state actors were willing for years to re-purpose the existing border migrant regime by tacitly including new flows of labour migrants, even those not from border zone areas. Relatively under-regulated in the past compared to China's north-western and north-eastern borders, the Guangxi border is now transitioning to a more standardized national border management, further shifting the power balance from local officials to central officials. This trend intensified when the COVID-19 pandemic broke out and managing irregular migration became a top national priority, speeding up the ongoing securitization of irregular border migration documented in this article. 65 As tensions between nation-building and local cross-border cultures at China's south-western borders are transformed by new economic, geopolitical and demographic realities, it is important to go beyond the 'border resident perspective' 66 dominating the Chinese border migration literature to study how these new trends impact border migration and its governance. The Guangxi regularization trial contributes to the global study of temporary labour migration by highlighting the risks of overly relying on securitizing policy measures during policy development. As China's immigration management system expands and modernizes, it increasingly displays a global tendency towards 'securitization and marketization [to go] hand in hand'. 67
ORCID iD: Tabitha Speelman, https://orcid.org/0000-0001-8690-7233
"year": 2022,
"sha1": "7046ce25226bbf145727b44bb564ca49c10340f5",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0920203X221098546",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "21843ca80aa7f5d57492d6cffb563f5337e6827f",
"s2fieldsofstudy": [
"Political Science",
"Sociology"
],
"extfieldsofstudy": []
} |
What are the benefits and harms of risk stratified screening as part of the NHS breast screening Programme? Study protocol for a multi-site non-randomised comparison of BC-predict versus usual screening (NCT04359420)
Background: In principle, risk-stratification as a routine part of the NHS Breast Screening Programme (NHSBSP) should produce a better balance of benefits and harms. The main benefit is the offer of NICE-approved more frequent screening and/or chemoprevention for women who are at increased risk but are unaware of this. We have developed BC-Predict, to be offered to women when they are invited to the NHSBSP; it collects information on risk factors (self-reported information on family history and hormone-related factors via questionnaire; mammographic density; and, in a sub-sample, single nucleotide polymorphisms). BC-Predict produces risk feedback letters, inviting women at high risk (≥8% 10-year) or moderate risk (≥5 to <8% 10-year) to discuss prevention and early detection options at Family History, Risk and Prevention Clinics. Despite the promise of systems such as BC-Predict, there are still too many uncertainties for a fully-powered definitive trial to be appropriate or ethical. The present research aims to identify these key uncertainties regarding the feasibility of integrating BC-Predict into the NHSBSP. Key objectives of the present research are to quantify important potential benefits and harms, and to identify key drivers of the relative cost-effectiveness of embedding BC-Predict into the NHSBSP. Methods: A non-randomised, fully counterbalanced study design will be used, including approximately equal numbers of women offered NHSBSP (n = 18,700) and BC-Predict (n = 18,700) from selected screening sites (n = 7). In the initial 8-month period, women eligible for the NHSBSP will be offered BC-Predict in four screening sites, while three screening sites will offer women usual NHSBSP. In the following 8 months, the study sites offering usual NHSBSP switch to BC-Predict and vice versa. Data on key potential benefits, including uptake of risk consultations, chemoprevention and additional screening, will be obtained for both groups. Key potential harms, such as increased anxiety, will be assessed via self-report questionnaires, with an embedded qualitative process analysis. A decision-analytic model-based cost-effectiveness analysis will identify the key uncertainties underpinning the relative cost-effectiveness of embedding BC-Predict into the NHSBSP. Discussion: We will assess the feasibility of integrating BC-Predict into the NHSBSP, and identify the main uncertainties for a definitive evaluation of the clinical and cost-effectiveness of BC-Predict. Trial registration: Retrospectively registered with clinicaltrials.gov (NCT04359420).
Background
Breast cancer is the most common cancer in the UK and a leading cause of death in women [1]. Each year, approximately 55,000 women are diagnosed with breast cancer, of whom approximately 11,400 will die from the disease [1]. Although deaths from breast cancer have been decreasing in many Western countries, the incidence of breast cancer continues to increase [2][3][4]. To identify breast cancer at an earlier and more treatable stage, nearly two million women are screened in the National Health Service Breast Screening Programme (NHSBSP) in England every year [5]. The NHSBSP currently invites women aged 50 to 70 years (though some breast screening units are trialling screening from ages 47 to 73 years) for three-yearly mammograms. The NHSBSP also undertakes screening of very high-risk women with high-penetrance mutations in genes such as BRCA1, BRCA2 and TP53. These women are offered annual magnetic resonance imaging screening between the ages of 30 and 50 years and annual mammography between 40 and 70 years.
In 2013, the National Institute for Health and Care Excellence (NICE) recommended that women at high risk of breast cancer who are not high-penetrance gene carriers (lifetime risk ≥30%, 10-year risk ≥8%) should be offered annual breast screening between the ages of 40 and 59 years, and that those at moderate risk (lifetime risk 17-29%, 10-year risk 3-7.9% at age 40 years) should be offered annual mammography from 40 to 49 years [6] and considered for annual screening aged 50 to 59 years. NICE guidance also recommends that women at high risk of breast cancer are offered chemoprevention with tamoxifen, anastrozole or raloxifene (considered for those at moderate risk) and advice on weight control and physical activity [6]. So far, it is estimated that only about 1 in 6 women who are at high risk as defined by NICE (≥8% ten-year risk of breast cancer) have been actively identified through attending Family History, Risk and Prevention (FHRP) Clinics [7,8].
Risk stratification in the NHSBSP could identify many of the 5 in 6 women who are at high risk but are not aware of this, as well as a larger number of women at moderate risk. It is possible to accurately estimate a woman's individual risk of developing breast cancer using information on breast density derived from mammography and self-report questions assessing family history and factors affecting hormone levels, e.g. using the Tyrer-Cuzick algorithm [9]. A previous study (PROCAS) provided 10-year risk estimates to over 54,000 women in the NHSBSP in Manchester, England [10]. This study was the first time that personalised breast cancer risk estimates were calculated for large numbers of women from the general breast screening population. The PROCAS study found that at least 3% of women are at high risk (≥8% 10-year risk) when all risk factors including mammographic density are assessed, and a further 10% are at moderate risk (5-7.9% 10-year risk) [7,8]. Given that only 0.5% of the population have identified themselves as high risk, this means that there are approximately an additional 450,000 women in England (aged 30 to 70 years) at high risk whom NICE guidance indicates should be offered chemoprevention and annual mammography.
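To make these risk bands concrete, the following minimal Python sketch assigns a 10-year risk estimate to the bands described above (high ≥8%, moderate 5 to <8%). The function name, band labels and example values are illustrative only; they are not part of BC-Predict or of the Tyrer-Cuzick algorithm itself.

    def classify_10_year_risk(risk_percent):
        """Map a 10-year breast cancer risk estimate (in %) to the bands
        used in PROCAS/BC-Predict, as described in the text above."""
        if risk_percent >= 8.0:
            return "high"      # offer annual screening; discuss chemoprevention
        if risk_percent >= 5.0:
            return "moderate"  # consider annual screening and chemoprevention
        return "population"    # usual three-yearly NHSBSP screening

    # Illustrative use: PROCAS estimates suggest roughly 3% of screened women
    # fall in the high band and a further 10% in the moderate band.
    for risk in (2.1, 6.3, 9.5):
        print(risk, "->", classify_10_year_risk(risk))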
The introduction of risk stratification in the NHSBSP could allow the potential benefits of more frequent screening and/or chemoprevention to be realised on a population basis, and potentially allow women at lower risk to be recommended less frequent screening. In principle, a risk-stratified NHSBSP should result in a better balance of benefits, harms and NHS costs, and there is some emerging early evidence to support this premise [11]. The benefits might be fewer breast cancers due to chemoprevention, and reduced breast cancer mortality arising from the NHSBSP detecting more breast cancers at an earlier and more treatable stage. There might also be grounds for reducing screening for women at lower risk, who would be less likely to develop high-grade tumours [12]. Reducing screening in women at lower risk would produce fewer harms of screening in this group, such as fewer false positive test results [13].
The consequences of introducing risk stratified screening in the NHSBSP are unclear. In the PROCAS study, communication of risk estimates happened 3 to 5 years after women provided their questionnaire data and consent [10], so that study provides limited information about the consequences of receiving risk estimates: the main purpose of that study was to validate risk prediction algorithms rather than as a new screening service model [8]. It is likely that, if aware of their risks, a sizeable proportion of women at high/moderate-risk would opt for chemoprevention with anastrozole/raloxifene/tamoxifen [7,8,10,14,15], as well as extra mammography in high-risk women [8]. The overall net effect of chemoprevention and additional screening is likely to be beneficial from a reduction in breast cancer incidence and mortality. By contrast, there are also several possible harms that could be brought about by the receipt of risk estimates. Communicating personal risk information to women could induce undue anxiety and worry. Although the best available evidence suggests that this is unlikely, this evidence has limitations such as a long time-lag between women agreeing to risk assessment and receiving risk results [16].
In addition, the mere offer of risk stratified screening may have potential adverse effects. It is possible that by offering risk stratified screening as part of the NHSBSP, women are put off from attending screening and thereby receiving its benefits. Evidence from PROCAS [10] suggests this is unlikely. Furthermore, as with all screening programmes within the NHS it is important that patients are provided with the necessary information in order to possess the knowledge to make an informed personal decision about whether to attend screening, and any treatment options that follow from screening [17]. There is currently no clear evidence to indicate whether risk-stratified screening could result in more informed decisions or not [18].
A final important group of possible drawbacks of implementing risk stratification are the potential costs, both personal and financial, of implementing the communication of risk information on such a scale, including increased NHS staff workload and additional healthcare resources. Evidence is therefore required about the key drivers of the relative cost-effectiveness of communicating breast cancer risk estimates compared with current NHSBSP practice, understanding the key uncertainties in the current evidence base and the potential value of future research [19]. Overall, such evidence is needed to establish whether risk-stratified screening will induce harms and, if so, how they can be mitigated so that they do not outweigh benefits, allowing more effective use of healthcare resources.
We have developed an automated system (BC-Predict) for offering an assessment of breast cancer risk to women when they receive their NHSBSP invitation, and for generating letters to feed back this risk to women and relevant healthcare professionals. A development phase involved working with healthcare professionals to ensure that the care pathways were workable and that informatics procedures functioned as intended. The patient information materials were co-produced with women who would be eligible for BC-Predict, to promote good understanding and informed choices, and also to minimise harms such as unnecessary worry.
In BC-Predict, risk estimation can be offered in real time to women invited for breast screening via an online web system that allows consent and self-report measures to be provided. Risk assessment is based on self-report questions and breast density estimates automatically derived from mammography, and can also incorporate information from currently known breast cancer Single Nucleotide Polymorphisms (SNPs), derived from DNA contained in saliva samples. Women who receive a clear mammogram result are then sent a letter providing their 10-year breast cancer risk within 6 to 8 weeks after their mammogram. Thus all women will know their risks. Those women at moderate (≥5% but <8% 10-year risk) or high (≥8% 10-year risk) risk are encouraged to attend a consultation at a FHRP Clinic, to discuss the offer of more frequent screening and chemoprevention.
Although developmental work has shown BC-Predict to function as intended, it would not be appropriate to implement a system such as BC-Predict outside of a research setting, given the uncertainties around potential benefits, possible harms and cost-effectiveness [20]. It would not even be proportionate or ethical to conduct the required large-scale definitive evaluation of clinical and cost-effectiveness, as this would require the participation of hundreds of thousands of women to have sufficient power to detect its effect on breast cancer incidence and stage. Therefore, in line with the MRC Framework for Developing and Evaluating Complex Interventions [21], the present research has the goal of identifying and resolving key uncertainties regarding the feasibility of integrating BC-Predict into the NHSBSP, and of assessing the feasibility of a definitive study to assess whether the intervention translates into measurable effects on breast cancer incidence and stage, and is a cost-effective use of NHS resources. The present research will therefore quantify key drivers of the relative cost-effectiveness of communicating breast cancer risk estimates compared with current NHSBSP practice, understanding the key uncertainties in the current evidence base and the potential value of future research.
A particular concern during the development phase was that women from low socioeconomic and minority ethnic backgrounds are less likely to attend for screening [22][23][24]. Commonly cited reasons include language barriers, cultural incongruences and lack of understanding and knowledge about screening [22,25,26]. It is not presently known whether the introduction of risk-stratified screening would exacerbate these issues further or lead to increased non-attendance. In developing BC-Predict, interviews with a cohort of British-Pakistani women from low socioeconomic backgrounds found that views toward risk-stratified screening are favourable. However, as with the present screening programme, language barriers could still prevent access and reduce women's ability to make informed decisions [27]. Given this, in the present study we will assess whether women from low socioeconomic status backgrounds are less likely to take up the offer of risk-stratified screening.
The overall aim of the present research will be to establish whether providing women eligible for the NHSBSP with personalised breast cancer risk estimation (BC-Predict) is feasible, by (a) measuring important potential harms and benefits of BC-Predict, (b) identifying the key drivers of the relative cost-effectiveness of embedding BC-Predict into the NHSBSP, and (c) attempting to understand the key issues affecting implementation of BC-Predict as part of the NHSBSP. This overall aim will be met by evaluating the BC-Predict system in a 16-month study running within the Greater Manchester, East Cheshire and East Lancashire NHS breast screening programmes, with three overarching objectives: quantifying important potential benefits and harms; identifying the key drivers of relative cost-effectiveness; and understanding the key issues affecting implementation.
Study design
A non-randomised fully counterbalanced study design will be used, to include equal numbers of participants from all sites who will be offered NHSBSP and BC-Predict. Specifically, in the initial 8-month time period, four screening sites will offer women eligible for breast screening BC-Predict. Three screening sites will offer women usual care NHSBSP. In the following 8-month time period the study sites switch to offer the other intervention (NHSBSP rather than BC-Predict; and vice versa). This 'counter-balanced' design will allow estimates of effect to be obtained from both within-sample and between-sample analyses.
Setting
Women will be recruited from seven sites within three NHS breast screening programmes: three sites within the Greater Manchester programme (Withington Community Hospital, Oldham Integrated Care Centre and the Trafford mobile screening van only), and two sites each based in the East Cheshire (Macclesfield District General Hospital and Stockport mobile breast screening van locations) and East Lancashire (Burnley General Hospital and East Lancashire mobile breast screening van locations) programmes. Women invited to screening in East Cheshire and Withington/Trafford in the first 8 months of the study will be offered BC-Predict, and women in East Lancashire and Oldham will be offered screening as usual. After 8 months, BC-Predict will be offered to women in East Lancashire and Oldham, with women in East Cheshire and Withington/Trafford offered screening as usual.
Participants
Recruitment is over a 16-month period and sites will each be open to recruitment to BC-Predict for a period of 8 months. Two groups of women will be invited to participate in the study (a) women invited for first time screening ("prevalent screens"), and (b) women invited during the screening round within which they reach 60 years ("incident screens" i.e. women aged 57 to 63 years). Posters advertising the study will be displayed in each of the participating screening sites to increase awareness of the study.
We will include women who are invited for usual care (NHSBSP) at each site to compare with women offered BC-Predict; however, NHSBSP women will not be consented to the study as controls, as their personal information will not be accessed. Instead, core outcome measures will be obtained in aggregated form. This will provide a comparison with uptake of these services in the BC-Predict arm. Posters at study sites will inform women being offered NHSBSP that they can request that their data are not included in any analysis.
Inclusion criteria are that the participant: (a) is born biologically female; (b) is invited for a first breast screening appointment (any age), or is aged 57 to 63 years (the latter only at East Cheshire and East Lancashire NHSBSP); and (c) is able to provide informed consent and complete a risk assessment questionnaire. Exclusion criteria are that the participant: (a) is born male; (b) has previously had breast cancer; (c) has had bilateral mastectomy; or (d) has previously participated in the PROCAS study [10].
Procedure
Women being offered BC-Predict will be sent an invitation letter one to two working days after their breast screening invitation letter is sent. The BC-Predict invitation letter will be sent along with the participant information sheet and instructions directing prospective participants to the online risk assessment platform. Each invitation letter will include details of the participant's "Date of first offered appointment". This is the first breast screening appointment date that was offered to the participant. This date is of relevance because participants will be able to join the study either before the date of their first offered appointment or up to six weeks after. After this time it will no longer be possible for them to log in to the BC-Predict risk assessment platform. Prospective participants will be directed to telephone the study helpline if they have any questions, or if they require any further information prior to deciding whether or not to take part. The timeline from the participant perspective is shown in Fig. 1. An overview of data flows is shown in Fig. 2.
Once participants have consented to the study online, they will be directed to the BC-Predict risk assessment questionnaire. Participants will be able to complete part of the questionnaire, save it, and return to it at a later date, as long as they do this within their six-week recruitment window. Assessment of the online questionnaire during the pilot phase estimated that most women would be able to complete it within 30 min. If a prospective participant does not have access to the internet, a paper version of the questionnaire can be posted out to be completed along with a paper version of the consent form. The data recorded on the questionnaire will then be entered manually into the online risk assessment platform by a member of the study team, and the standard process will be followed from this point.
Once a clear mammogram result has been provided, a risk feedback letter is generated based on the answers participants give in their questionnaire and on mammographic breast density (calculated from uploaded raw data by Volpara systems). The percentage density is entered into an online version of Tyrer-Cuzick v8 that includes an algorithm adjusting density for age, BMI and menopausal status into an odds ratio known as the density residual [14]. The risk feedback letter will inform women that they are at "high" (≥8% 10-year risk), "moderate" (≥5% but <8% 10-year risk), "average" (≥2% but <5% 10-year risk), or "below average" risk (<2% 10-year risk). Each letter will explain how the risk estimates were derived, and the implications of these. Each group of women will receive this letter in the post, along with a leaflet providing additional detail on breast cancer risk factors, signs and symptoms of breast cancer, and how risk might be managed. Those women who complete the BC-Predict risk assessment questionnaire over the telephone will also be sent copies of the questionnaire, so that they can check for data entry errors, and of the electronic consent form for their records. All BC-Predict participants will have been invited for breast screening, but a proportion may choose not to attend their breast screening appointment; attendance at the breast screening mammogram is not a compulsory part of the study. The Participant Information Sheet explains that participants' mammographic breast density will be included in their risk assessment, provided they attend their mammogram within six weeks of their first offered appointment. It is also explained that including mammographic density increases the accuracy of the risk assessment. Any participant who declines a mammogram or has a mammogram after this time will not have these data included in their risk assessment, which is explained in their risk feedback letter.
To assess self-reported harms and benefits, and to inform an economic analysis, a randomly selected sub-sample of n = 2108 women (n = 1054 each from usual care NHSBSP and BC-Predict) will be asked to complete questionnaires assessing psychological benefits and harms of BC-Predict at baseline, 3 months and 6 months. For women in both groups, the request to complete the questionnaire will be sent shortly after their mammography invitation but before their first offered mammogram appointment, asking for their help in evaluating a new approach to providing NHS breast screening. They will be given instructions to complete an online consent form and questionnaire using their unique study identification number on SmartSurvey (https://www.smartsurvey.co.uk/). The same women will be asked to complete the questionnaire three and six months after their first offered mammogram appointment. Women in both experimental groups will only receive follow-up questionnaires once they receive a clear mammogram result.
The risk assessment and feedback will take account of each participant's journey through the NHSBSP. The study team will periodically check screening outcomes for participants. There are a number of initial screening outcomes: (a) clear mammogram, where the woman will be invited for routine breast screening in three years (routine recall); (b) mammogram taken is technically inadequate, so a repeat mammogram is required (technical recall); (c) suspicious mammogram, where further assessment is required (recall for assessment). For all scenarios the GP will be informed of the participant's involvement in the study and provided with their risk feedback.
Participants who are confirmed as having a routine recall screening outcome will receive their risk feedback after this, approximately 6 weeks after their mammogram. Participants who are invited for a technical recall/recall for assessment appointment, and attend this appointment within 6 months of their first scheduled technical recall/recall for assessment appointment, and who are subsequently confirmed as not having breast cancer, will receive a risk feedback letter following confirmation of an absence of breast cancer. Participants who are invited for a technical recall or recall for assessment appointment but do not attend within 6 months of the first scheduled appointment (i.e. those for whom there is no screening outcome within 6 months of the initial screening outcome) will receive their risk feedback 6 months after joining BC-Predict. Participants who do not attend a breast screening appointment within six weeks of their first offered breast screening appointment will receive their risk feedback after this six-week period (i.e. 7 to 8 weeks after their first offered breast screening appointment).
Participants who are diagnosed with breast cancer will not receive a standard risk feedback letter. Participants will be sent a letter 1 year after diagnosis which will offer them feedback from the study. If they opt to receive this, they will be sent a personalised letter explaining their breast cancer risk factors.
Measures
Two main types of measures will be used: core outcome measures and self-reported measures.
Core outcomes
The following nine core outcomes will be compared at 6 months post completion of recruitment for those offered BC-Predict and those offered usual care (NHSBSP):
1. Screening attendance at first offered screening episode.
2. Screening attendance within 180 days of episode opening.
3. Number of technical recalls.
4. Number of recalls for assessment.
5. Number of routine recalls.
6. Number of breast cancer diagnoses (and type/grade).
7. Subsequent consultation in FHRP clinics (and mode: telephone or face-to-face).
8. Subsequent enrolment for more frequent screening.
9. Subsequent prescription of chemoprevention. Data will be collected on each of the following aspects of this: (a) participant agrees/disagrees in clinic to take chemoprevention; (b) chemoprevention not appropriate; (c) chemoprevention appropriate but prescription not filled; (d) chemoprevention appropriate and prescription filled.
Data for the nine core outcomes will be collected for each consented BC-Predict participant by the research staff for the 6 months following participants' mammography appointment. For these participants, information will be available directly from NHSBSP and FHRP clinic records. For those in the usual care arm of the study, anonymised data will be provided by NHSBSP and FHRP services, to provide overall numbers for each of the core outcomes. We will prospectively record any refinements to procedures, to allow examination of how these impact on uptake of services.
We will also assess uptake of BC-Predict, and examine variation by study site. Where changes to recruitment procedures are made, we will keep notes of this, and examine the effects of these changes on uptake of BC-Predict, to inform how risk stratified screening should be rolled out. We will also examine variations in uptake of services by Index of Multiple Deprivation deciles derived from postcode of women [28] invited for NHSBSP, to assess any potential exacerbation of health inequalities brought about by BC-Predict.
Self-reported outcomes
The self-reported measures of potential harms and benefits of BC-Predict to be completed by a sub-sample of participants are shown in Table 1.
Analysis plan and power calculations Core outcomes
In total, approximately n = 18,700 women will be offered BC-Predict and n = 18,700 will be offered usual care NHSBSP over the total 16-month period. Based on the recruitment rate in PROCAS, n = 18,700 women being offered BC-Predict should result in 8000 women taking it up. The attendance rate to usual NHSBSP in Greater Manchester is 69% [5, 10].
Core outcomes will be compared for cohorts of women who are invited to BC-Predict and those who are invited to usual care (NHSBSP). Thus we will have in excess of 8000 participants in the BC-Predict and NHSBSP groups for both comparisons: (a) within-site and (b) between sites over the same time period. The primary outcomes are binary. Logistic regression will be the primary statistical analysis method. We shall assess heterogeneity effects using interaction tests in the logistic regression, to examine differences in outcomes by time, location or screening type (prevalent vs. incident). Even in the presence of geographic or temporal heterogeneity of the effect, or of carryover effects continuing beyond the crossover period, we will still have sufficient data for a valid and fully powered comparison.
For core outcomes 1 and 2, we are interested in equivalence, in that we anticipate that invitation to BC-Predict will not substantially affect screening attendance. With 18,700 women in each group we will have in excess of 90% power to establish equivalence, defined as a 95% CI on the difference which does not exceed ±5% on the attendance rate at first offered appointment, if the latter is around 50% [37]. Similarly, we will have more than 90% power for the same comparison for eventual attendance within 180 days if the latter is around 70%.
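As a rough illustration (not part of the protocol), the following Python sketch checks this equivalence power under a normal approximation, treating equivalence as the whole 95% CI on the attendance difference falling inside ±5%; the attendance rates and test formulation are assumptions made for the example.

```python
import math
from scipy.stats import norm

def tost_equivalence_power(n_per_group: int, p: float, margin: float,
                           alpha: float = 0.05) -> float:
    """Approximate power to show equivalence of two proportions (true
    difference zero): probability that the 95% CI on the difference lies
    entirely inside (-margin, +margin)."""
    se = math.sqrt(2 * p * (1 - p) / n_per_group)   # SE of the difference
    z = norm.ppf(1 - alpha / 2)                     # 1.96 for a 95% CI
    return max(0.0, 2 * norm.cdf(margin / se - z) - 1)

print(tost_equivalence_power(18_700, p=0.50, margin=0.05))  # ~1.0, i.e. >90%
print(tost_equivalence_power(18_700, p=0.70, margin=0.05))  # ~1.0, i.e. >90%
```

With groups this large the standard error of the difference is only about half a percentage point, so a ±5% margin is met comfortably.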
Arguably, the most difficult core outcome to collect is outcome 9, the proportion taking up chemoprevention. On the basis of PROCAS results we would anticipate that 1169 of the 8000 who consent to BC-Predict would have sufficient risk to be considered for chemoprevention and that 10% of these would take it up [8,38,39]. Thus 117 of the 8000 women (1.5%) receiving the intervention might be expected to be prescribed chemoprevention. It is anticipated that very few of the 18,700 sent the standard screening invitation would be prescribed chemoprevention, but even if as many as 0.9% did so, we would have 90% power to detect this as significant at the 5% level with two-sided testing, and 80% power if 10% took up chemoprevention. The Greater Manchester Medicines Management group has agreed a shared care protocol stating that the initial prescription of tamoxifen and anastrozole should be made by a FHRP specialist. As such, data from even those in the control arm should be available from prescriptions made in the FHRP clinics.
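A back-of-envelope check of this comparison, assuming a two-sided two-proportion z-test with the unequal group sizes above (the protocol's exact power method is not stated), can be sketched with statsmodels:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_bcpredict = 117 / 8000   # ~1.5% prescribed chemoprevention in BC-Predict
p_usual = 0.009            # pessimistic 0.9% in the usual-care cohort

h = proportion_effectsize(p_bcpredict, p_usual)   # Cohen's h (arcsine scale)
power = NormalIndPower().power(effect_size=h, nobs1=8000,
                               ratio=18_700 / 8000, alpha=0.05,
                               alternative='two-sided')
print(f"h = {h:.3f}, power ~ {power:.2f}")   # comfortably above the quoted 90%
```

Different choices of denominators and test will move this figure somewhat, which is consistent with reading the protocol's 90% as a conservative statement.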
Self-reported outcomes
Analyses will focus on comparisons between the responses of the BC-Predict and NHSBSP groups at 6 months follow-up, controlling for baseline responses and baseline patient characteristics. We will use ANCOVA, first with baseline responses to the same questionnaires as covariates, and secondly treating both baseline and 6-month responses as related endpoints, using hierarchical linear models. Out of the available self-reported outcomes, we have selected the primary outcome to be anxiety (State Trait Anxiety Inventory) at 6 months, but we will also examine effects on all measures included in Table 1, as well as effects at 3 months. We will use the variables concerning knowledge and attitudes to screening, as well as screening attendance, to assess the extent to which decisions to attend screening are informed, in line with a standard approach to assessing this [17]. The measures of health status (EQ-5D-5L) and capability (ICECAP-A) will be converted to preference weights using published algorithms [40] and population tariffs [41], as appropriate.
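For concreteness, an ANCOVA of this kind can be expressed as an ordinary least squares model with the baseline score as a covariate; the sketch below uses hypothetical variable names and placeholder data, not study data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder rows: one per questionnaire respondent (names are illustrative)
df = pd.DataFrame({
    "anxiety_6m":   [34, 38, 41, 36, 39, 35, 37, 40],  # STAI short-form, 20-80 scale
    "anxiety_base": [33, 40, 42, 35, 37, 36, 38, 39],
    "group": ["BC-Predict", "NHSBSP"] * 4,
})

# ANCOVA: 6-month anxiety regressed on baseline anxiety and group;
# the coefficient on group estimates the baseline-adjusted difference
model = smf.ols("anxiety_6m ~ anxiety_base + C(group)", data=df).fit()
print(model.params)
```

In the study itself, further baseline patient characteristics would enter the model as additional covariates.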
In addition to providing information about potential harms of BC-Predict, secondary analyses will examine whether women who are randomised to receive questionnaires differ in terms of uptake of screening or BC-Predict. This will inform about the likelihood of biases being introduced by comparisons of questionnaire responses in a possible subsequent definitive trial.
The sample size calculation is based on the six-item short-form of the state scale of the State Trait Anxiety Inventory [29], which measures general anxiety currently experienced on a scale of 20 to 80. Previous research in England with women invited to breast cancer screening found a mean state anxiety score of 37 [42]. A score of 49 has been found in patients with a diagnosis of anxiety disorder [43].
Assuming a two-tailed independent samples t-test, then n = 1054 (n = 527 women per experimental group) will be required to have 90% power (with α = 0.05) to detect a small standardised difference of d = 0.2. This equates to a difference between adjacent response categories (e.g. "not at all" and "somewhat") on 2.5 of the 20 items on the full form of the scale. We anticipate that asking 1054 women per group will result in responses from n = 527 women per group being obtained at both baseline and 6 months, assuming a 70% response rate on both rounds.
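The quoted numbers can be approximately reproduced with a standard power calculation for an independent-samples t-test; the retention arithmetic below assumes, as the protocol does, roughly 70% response at each of the two rounds (0.7² ≈ 0.49).

```python
from statsmodels.stats.power import TTestIndPower

# Completers needed per group for d = 0.2, 90% power, two-sided alpha = 0.05
n_completers = TTestIndPower().solve_power(effect_size=0.2, power=0.90,
                                           alpha=0.05,
                                           alternative='two-sided')
print(round(n_completers))          # ~526-527 per group, matching the protocol
print(round(n_completers / 0.49))   # ~1074 invited per group, close to the 1054 asked
```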
Economic analysis
An early economic analysis [44] will aim to identify indicative estimates of the incremental costs and consequences, and the key drivers of the cost-effectiveness, of a risk-stratified NHSBSP compared with the usual NHSBSP. A decision-analytic model-based cost-effectiveness analysis will capture the incremental NHS costs and consequences for a cohort of women eligible for NHSBSP in the UK over a life-time horizon. A decision-analytic model (a decision-tree combined with a published model) [11] will be structured to represent the care pathways of current NHSBSP practice (no risk feedback) and the proposed BC-Predict intervention in a sample of women eligible for the NHSBSP. The cost of the risk-stratified NHSBSP will be identified using a micro-costing study [45] and will take account of the cost of the addition of SNPs to the risk estimation algorithm. The decision-tree will recognise the uptake of appropriate healthcare services (General Practice contact; FHRP Clinic referral; and the proportion of women starting chemopreventive medication, e.g. anastrozole/tamoxifen/raloxifene). A published model [9] will be used to understand the lifetime impact on NHS costs and patient consequences of using different screening intervals based on risk prediction, or usual NHSBSP. Using an economic model allows data assimilation from various sources (BC-Predict; systematic reviews; structured expert elicitation methods [46]) in a structured framework [47]. The model base-case analysis will focus on changes in health status (using EQ-5D-5L) but will explore the impact on capability (ICECAP-A) in a scenario analysis. These data will be obtained from the self-reported outcomes (health status (EQ-5D-5L) [34]; capability (ICECAP-A) [35]) collected in the prospective study (see Table 1) and supplemented with published data to allow estimation of the impact over a life-time horizon. The EQ-5D-5L [34] and ICECAP-A [35] have published preference weights that will allow calculation of quality-adjusted life years for health and capability with and without the intervention. Parameter uncertainty in the decision-tree component of the model will be quantified using probabilistic sensitivity analysis for the base-case analysis and the scenario (capability) analysis. These two outcomes (health and capability) will then be used in two distinct value of information analyses: Expected Value of Perfect Information (EVPI) and Expected Value of Partial Perfect Information (EVPPI). The EVPI represents the maximum amount that should be spent on future research to gain perfect information to eliminate the possibility of a wrong (funding) decision. Further steps are then necessary to understand the key parameters driving the uncertainty. This involves estimating the EVPPI, which tells a decision maker which parameters are contributing to the uncertainty in the model and helps to guide what type of additional evidence is most valuable.

Table 1 Self-reported measures to be assessed, at each of the three timepoints

Baseline | 3 months | 6 months
State Anxiety [29] | State Anxiety [29] | State Anxiety [29]
Cancer Worry [30] | Cancer Worry [30] | Cancer Worry [30]
Risk perceptions [31] | Risk perceptions [31] | Risk perceptions [31]
Attitudes to screening [32] | - | Attitudes to screening [32]
Knowledge [33] | - | Knowledge [33]
Intention (future screening) [32] | Intention (future screening) [32] | Intention (future screening) [32]
Health status (EQ-5D-5L) [34] | Health status (EQ-5D-5L) [34] | Health status (EQ-5D-5L) [34]
Capability (ICECAP-A) [35] | Capability (ICECAP-A) [35] | Capability (ICECAP-A) [35]
- | Satisfaction with information [36] | Satisfaction with information [36]

*Informed choices regarding screening will be estimated from attitudes to screening at baseline, knowledge and screening attendance, using a standard approach [17]
**Women invited to BC-Predict will receive the above. Women invited to NHSBSP will receive the above minus the satisfaction with information questionnaire
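To make the value-of-information step concrete, a minimal Monte Carlo sketch of EVPI for a two-option decision (BC-Predict vs. usual NHSBSP) is shown below; all distributions, the willingness-to-pay threshold and the effect sizes are placeholders, not outputs of the study model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 100_000
wtp = 20_000  # assumed willingness to pay per QALY (GBP)

# Placeholder probabilistic-sensitivity-analysis draws, BC-Predict vs usual care
inc_qaly = rng.normal(0.010, 0.008, n_sim)   # incremental QALYs per woman
inc_cost = rng.normal(150.0, 60.0, n_sim)    # incremental NHS cost per woman (GBP)

# Incremental net monetary benefit of BC-Predict under each parameter draw
inb = wtp * inc_qaly - inc_cost

# EVPI per woman: expected value of choosing the best option for every draw,
# minus the value of the single option that is best on average
evpi = np.mean(np.maximum(inb, 0.0)) - max(np.mean(inb), 0.0)
print(f"EVPI ~ GBP {evpi:.2f} per woman")
```

EVPPI follows the same logic but conditions on subsets of parameters, which is what identifies where further evidence would be most valuable.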
Three sub-studies
The present research also includes sub-studies. Although integral to the overall research, they are described here, to facilitate clear presentation of their aims and methods.
Sub-study one: incorporation of SNP information into BC-Predict risk estimates
Objectives
To determine uptake and acceptability of a DNA based risk estimate as part of routine NHSBSP appointments, and to quantify the higher proportions of women at high/moderate and lower risk obtained by adding SNP information.
Background
A subset of women will have the option to provide a sample of saliva from which DNA can be extracted. DNA will be extracted with standard techniques and currently known SNPs associated with breast cancer typed. The results of this testing will be incorporated into the BC-Predict risk algorithm. Adding a genetic SNP score from a saliva sample to the other risk factors not only potentially increases the accuracy of risk estimation, but also increases the discrimination of risk estimation, so that more women are identified as being at higher or lower risk, and fewer are identified as being at population-average risk. It thereby increases the proportion of women identified at high risk who can benefit from being offered NICE-approved additional screening and drug prevention from 4 to 6%.
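One common way to fold a SNP score into an existing estimate is to express the polygenic score as a relative risk centred on the population average and multiply on the odds scale; the sketch below illustrates this with invented log odds ratios and allele counts, and is not the algorithm used by BC-Predict.

```python
import numpy as np

# Hypothetical per-SNP log odds ratios and one woman's risk-allele counts (0/1/2)
log_or = np.array([0.08, 0.05, -0.03, 0.11])
alleles = np.array([1, 2, 0, 1])
mean_alleles = np.array([0.9, 1.1, 0.4, 0.7])   # assumed population means

# Polygenic score as a relative risk centred on the population average
prs_rr = np.exp(np.dot(log_or, alleles - mean_alleles))

ten_year_risk = 0.04                      # e.g. a Tyrer-Cuzick estimate without SNPs
odds = ten_year_risk / (1 - ten_year_risk)
adjusted = odds * prs_rr / (1 + odds * prs_rr)  # multiply on the odds scale
print(f"PRS relative risk = {prs_rr:.2f}, adjusted 10-year risk = {adjusted:.3f}")
```

Because the score is centred, women with average genotypes keep roughly their original estimate while the tails move outward, which is the source of the extra discrimination described above.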
Methods
In total, it is expected that 1000 women will provide a DNA sample and receive a personalised breast cancer risk estimate incorporating their Polygenic Risk Score. All women invited for screening at Withington Community Hospital and Oldham Integrated Care Centre will be potentially eligible for the SNP sub-study; however, this will only be offered to women on a pragmatic basis, depending on whether a member of staff is on site to assist with taking consent. A separate paper consent form will be completed by the participant in addition to the online consent form for the main study. Women giving their consent will be provided with an Oragene kit in which to place their saliva sample. They will be guided on site by a member of staff as to how to complete the sample.
Methods: data analysis
The proportion of the 1000 women providing saliva DNA who are classified as NICE-actionable high and moderate risk, as well as below average risk, will be compared with their classification without a SNP Polygenic Risk Score. Chi-square statistics will compare the difference between risk categories with and without the addition of the SNP Polygenic Risk Score. These data will also be used in the proposed economic analysis.
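A sketch of such a comparison, using invented counts purely to show the mechanics (the real analysis would use the observed classifications, and might also consider a paired test, since the same women are classified twice):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical classifications of the same 1000 women, without and with the PRS
# columns: below average, average, moderate, high
without_prs = [130, 740, 100, 30]
with_prs    = [170, 650, 120, 60]   # a PRS typically moves women away from the middle

chi2, p, dof, _ = chi2_contingency(np.array([without_prs, with_prs]))
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4g}")
```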
Sub-study two: understanding acceptability and implementation of BC-Predict
Objectives
The main objectives are to explore service users' views on acceptability of BC-Predict (interviews) and to assess the perceived impact of BC-Predict on the NHSBSP, FHRP Clinics and General Practice (focus groups).
Background
In addition to the quantitative measures of impact of BC-Predict, we will also carry out qualitative work as part of a process evaluation to understand the key issues behind successful implementation of the BC-Predict system [48]. The qualitative work will comprise one-to-one interviews with NHSBSP service users to explore acceptability of BC-Predict amongst women at varying levels of risk, where there is currently a dearth of evidence [49]. It will also employ focus groups with healthcare professionals to investigate the implementation, delivery and impact of BC-Predict on the current NHSBSP. This qualitative work will give insight into capacity issues, indication of the training and support required to deliver BC-Predict on a larger scale, as well as communication challenges and pathways for both service users and healthcare professionals. This will enable us to build an evidence base to inform practice and policy should BC-Predict be rolled out to the wider NHSBSP.
Methods
Patient interviews: design, sample, recruitment and data collection A purposive sample of below average, average, moderate and high-risk women who have received BC-Predict will be invited to participate in a semi-structured interview. Below average and average risk women will be invited for interview 1 month after receiving their risk feedback letter. Moderate-risk and high-risk women will be invited for interview 6 months after receiving their risk feedback letter. This gives women in the moderate and high-risk groups the chance to explore extra screening options or medications prior to the interview. The BC-Predict online platform allows easy identification of women in each risk group in each location. We will aim to recruit up to 40 women to these interviews (up to 10 women per group), with variation across the study sites to which women were invited. In addition, questionnaire responses will guide sampling to allow variation in uptake of chemoprevention. Data will be collected by semi-structured interview either face-to-face or over the phone, audio recorded and transcribed verbatim. The decision to stop recruitment will be based on whether the data collected are sufficient to answer the research questions and aims [50]; the depth of the data will therefore be used as an indicator to cease recruitment. The decision to end recruitment will also be based on the active exploration of negative cases, as well as on whether any new content is being discussed in the final interviews of each risk group. All interviews will cover core issues including acceptability of BC-Predict and lifestyle modifications. Other issues covered will be those most relevant to the risk estimate communicated, e.g. uptake of chemoprevention (e.g. GP advice) in higher-risk women, and reassurance in below-average risk women. We will be sensitive to naturally occurring variation, e.g. women recruited differently due to SNP collection, across different study sites, or from diverse ethnic backgrounds.
Healthcare professional focus groups: design, sample, recruitment and data collection General Practice, Radiology and FHRP Clinic staff will be invited to participate in focus groups 2 months after BC-Predict has stopped being provided in each location. The groups will examine how well prepared they and associated staff were for implementing BC-Predict, along with views on the acceptability of BC-Predict and how its implementation could be facilitated when widely implemented. We will run a multidisciplinary focus group in each location (total sample ≈ 36). Focus groups will be audio recorded and transcribed verbatim. If a participant is unable to attend the focus group but would like to take part, they will be given the option to be interviewed face-to-face or over the phone.
The analysis of these data will also be used to generate a list of additional resources required and a quantitative estimate of the impact on resources such as staff time. The groups will also aim to estimate the approximate cost of providing the BC-Predict intervention. These estimates will inform the economic analyses.
Methods: data analysis
For both interviews and focus groups, data will be analysed using a manifest-level approach to thematic analysis, as the themes are likely to be predominantly deductive. Thematic analysis seeks and reports the patterns inherent within the data collected. It is a common qualitative analysis method that results in a rich, complex, yet accessible account of the data [51]. Themes will be coded at the manifest (or explicit) level [52]. We will do so taking an essentialist approach, meaning that we aim to report the experiences, meanings and reality of the participants [53].
Coding will be conducted systematically and iteratively. Negative cases will be sought to test the emerging coding framework. Regular coding meetings will be held to refine the coding structure. Data will be coded by independent researchers to check reliability, and to ensure that the fit between data and analysis is maximised. Percentage agreement on presence will be calculated. Coding will continue until the team are satisfied that codes and themes adequately describe and capture the data. Data will be stored and organised within NVivo software.
Sub-study three: assessing feasibility of increasing screening interval for women at low risk
Objectives
To evaluate the impact of providing materials for women at low risk explaining that less frequent screening may provide a better balance of benefits and harms for them.
Background
Women at lower risk of breast cancer are likely to receive less benefit from the NHSBSP but are more likely to experience overdiagnosis and treatment of cancers that would not cause them harm if left untreated. In this sub-study, we have chosen a risk threshold of 1.5% or below over 10 years in newly screened women, as this is the average risk level for a 40-year-old woman, who currently would not be screened for a further 10 years. (Note that in the rest of the PROCAS study, a threshold of 2% or below was used to indicate below-average risk [10], so we use the term "low" to distinguish this distinct threshold in the present research.) PROCAS indicated that 13.5% of women screened have a 10-year breast cancer risk of less than 1.5% when assessed by Tyrer-Cuzick and mammographic density [10]. This group of women are at a lower risk of developing breast cancer, and the tumours they develop are much more likely to be early stage and slow-growing [10,12]. Existing data suggest not only that a risk-stratified NHSBSP may be potentially cost-effective [11], but also that it may be more cost-effective to delay screening in low-risk women by optimising the screening interval [54].
Nearly 90% of the population have indicated that "screening is almost always a good idea" [55], and many women would feel aggrieved if they felt they were being denied a service inequitably. Relatedly, attendance at screening provides reassurance and peace of mind [56], so a lack of screening may result in increased worry about breast cancer. However, this view may be partly due to a lack of general awareness of issues such as overdiagnosis [57]. Many national screening figures believe that less frequent screening for women at low risk may be an important component of risk-stratified screening. Further, our ongoing developmental work suggests that this idea is acceptable to many women, including women who have received a below-average risk estimate.
Methods: design, sample, recruitment and data collection
In the final 4 months of the 16-month period of implementation of risk estimation, we will extend the offer of risk provision to include information about how, for women at low risk of breast cancer, delaying further NHSBSP screening for a period of 5 years may provide a better balance of benefits and harms for them. This information will be presented to all women in East Lancashire and Oldham as part of the invitation process, and repeated for women at low risk (<1.5% over 10 years) as part of their risk feedback letter and accompanying leaflet. Every woman identified as being at low risk will be asked to complete the online questionnaire assessing harms and benefits of BC-Predict at 6 months, in contrast to the main BC-Predict study, where only a sub-sample will be asked to complete this questionnaire.
Methods: data analysis
The primary outcome for this sub-study will be intentions to take up screening in 3 years, assessed in women 6 months after being told that they are at low risk and receiving the recommendation to delay attending screening. Our main focus will be on estimating the proportion of women intending to take up screening, but we will also formally compare this proportion with that of women who received "below average risk" breast cancer risk estimates (<2%) but no screening recommendations, recruited over the previous 12 months of BC-Predict. Intention to attend screening is a consistent predictor of subsequent screening attendance (r = +0.42 in a systematic review that identified k = 19 such tests) [58].
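The planned comparison is a simple two-proportion test; a sketch with invented counts (the real denominators depend on uptake) is:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical numbers of women intending to attend screening in 3 years
intend = [410, 450]   # [low-risk group advised to consider delay, below-average group]
asked = [500, 500]

stat, p = proportions_ztest(intend, asked)
print(f"z = {stat:.2f}, p = {p:.4f}; "
      f"{intend[0] / asked[0]:.0%} vs {intend[1] / asked[1]:.0%}")
```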
Qualitative process analysis
A qualitative process analysis will be conducted in line with MRC guidance [48]. A sample of low-risk women (up to n = 12) who have received low-risk estimates will be interviewed 1 month after receiving the feedback letter. They will be sampled purposively to provide variation across the three screening sites (Oldham Integrated Care Centre, Burnley General Hospital and the East Lancashire mobile breast screening van). Interviews will focus on the extent to which the possibility of receiving an estimate of low risk was considered before consenting to risk estimation, the acceptability of the information communicated (particularly the recommendation to delay screening), and any deliberations about delaying screening. Data will be analysed using a manifest, inductive thematic analysis, and will involve comparison of interviews with those of women told they are at below average risk but given no particular recommendation.
A decision analytic model-based economic analysis
A decision-analytic model-based economic analysis will be used to understand the potential relative cost-effectiveness of using a modified screening interval for women identified as being at low risk of breast cancer as part of a stratified NHSBSP, compared with the current NHSBSP. This analysis will build on an early economic analysis we previously conducted [11]. The model will be populated with data from the published literature and the current study to understand the relative costs and benefits of the modified screening interval for low-risk women, assuming the perspective of the NHS, and the impact on QALYs over a lifetime horizon for the defined population of women eligible for the NHSBSP. Extensive sensitivity analysis will be used to understand the key drivers of relative cost-effectiveness when implementing a modified screening interval for low-risk women.
Discussion
The present research aims to provide evidence on the feasibility of risk-stratified screening, by providing information about likely effects, both positive and negative, and about which of these effects are likely to drive cost-effectiveness. To avoid participant burden, some additional potential benefits and harms of screening were not examined, and these merit consideration in future research.
One key issue that the present research does not cover relates to the possible benefit that risk estimation may prompt women to consider changes in their health-related behaviours to reduce cancer risk. An estimated 20-30% of breast cancer cases are thought to be attributable to excess weight, weight gain, lack of physical activity (PA) and high alcohol intake [59][60][61]. In general, communicating personalised risk in the absence of supportive programmes has small effects on increasing healthy lifestyle behaviours, and these effects are not maintained [62,63]. Nevertheless, studies that have used personalised risk communication to bring about changes in health-related behaviours have to date not used additional strategies to optimise behaviour change for which there is good evidence [63]. Further, if even small effects on these behaviours are achieved by communicating personalised risk information, then large population-level reductions in these unhealthy behaviours should follow.
The BC-Predict feedback materials include information on which behaviours are likely to reduce breast cancer risk, but the programme does not include any attempts at promoting health-related behaviour change. Women at higher breast cancer risk will have a greater proportional risk reduction through following healthy lifestyle recommendations [64,65]. There is evidence that these women may also be more motivated to initially engage with evidence-based behaviour change programmes, to maintain engagement, and thereby to produce more behaviour change [66]. By contrast, it is possible that the provision of low-risk results to women may produce false reassurance. This could result in women at low risk being less inclined to engage in behaviours likely to promote health, although the wider evidence suggests that this is not likely [67].
The present research will provide key information on the feasibility of implementing risk-stratified screening into routine breast cancer screening. It complements two large ongoing trials: the WISDOM trial in the USA [68] and the MyPeBS trial in several European countries [69], which are designed to show that risk-stratified screening is non-inferior to routine breast cancer screening in terms of the number of late-stage cancers detected. The present research does not focus on effectiveness, but will instead provide information about the likely harms and benefits of risk-stratified screening, and will identify the key uncertainties likely to inform effectiveness and cost-effectiveness. It also has a more pragmatic focus than these two large ongoing trials, in considering the likely effects on the healthcare system of implementing risk stratification as part of the routine NHSBSP, including an explicit quantitative and qualitative process analysis of the effects of this implementation.
Abbreviations
EVPI: Expected value of perfect information; EVPPI: Expected value of partial perfect information; FHRP: Family history, risk and prevention; NHSBSP: National health service breast screening programme; NICE: National institute for health and care excellence; PROCAS: Predicting risk of cancer at screening study; QALY: Quality-adjusted life year; SNP: Single nucleotide polymorphism

Authors' contributions
The manuscript was drafted by DPF, based on previous documents prepared by DPF, DGE, LG, and VW. DPF is responsible as guarantor for the overall content. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. The author(s) read and approved the final manuscript.
Funding
This study is being run as part of the PROCAS-2 Programme Grant, and is funded by the National Institute for Health Research (Ref: RP-PG-1214-20016). It was supported by the NIHR Manchester Biomedical Research Centre (IS-BRC-1215-20007), and Genesis Breast Cancer Prevention (GA15-003). The views expressed are those of the authors and not necessarily those of the NIHR or the Department of Health and Social Care. Sub-study one is funded by Prevent Breast Cancer (GA18-001). Sub-study three is funded as part of a Breast Cancer Now project grant (2018RP005). These funding sources had no role in the design of this study and will not have any role during its execution, analyses, interpretation of the data, or decision to submit results.
Availability of data and materials
Not applicable, as protocol paper.
Ethics approval and consent to participate
NHS ethical approval for the study described in the manuscript was granted by Harrow Research Ethics Committee (ref 18/LO/0649; IRAS project ID 239199). All participants in BC-Predict complete written consent (usually online). All data on participants in the comparison (usual NHSBSP) condition will be elicited in aggregate form, so individual consent will not be obtained.
Consent for publication
Not applicable, as protocol paper.
Emission Controls Using Different Temperatures of Combustion Air
The effort of many manufacturers of heat sources is to achieve maximum efficiency in transforming the energy chemically bound in the fuel into heat. It is therefore necessary to make the combustion process more efficient and to minimize the formation of emissions during combustion. This paper presents an analysis of the effect of combustion air temperature on the heat output and emission parameters of burning biomass. In the second part of the paper, the impact of different types of dendromass on the formation of emissions in a small heat source is evaluated. The measured results show that regulating the temperature of the combustion air affects the concentration of emissions from the combustion of biomass.
Introduction
The main intention of the European Union is to exploit the potential of energy savings and renewable sources. In Slovakia, the most promising renewable energy source appears to be biomass, and its use is of growing importance. The most common form of biomass is wood, either in pieces or as wood waste. During the combustion of renewable fuels, pollutants are released into the atmosphere and have a negative impact on human health. The most closely monitored pollutants are particulate matter, carbon monoxide, nitrogen oxides, and sulphur dioxide [1,2].
Emissions released during combustion consist mainly of gaseous and particulate pollutants. The aim is to reduce the concentrations of these substances to acceptable levels, since such emissions account for a significant proportion of air pollution [3].
Solid particles are entrained with the flue gas stream from the combustion chamber of the boiler. Particulate matter (PM) consists of soot, inorganic matter (ash), and organic matter (nonvolatile combustibles); these components are carried into the flue gas as ash, unburned nonvolatile matter, and combustible soot.
Particulate matter formation during fuel combustion depends on many factors, including flame temperature, composition and concentration of combustion reactants, and residence time within the reaction zone [4]. Although PM formation from combustion is not fully understood, it is suspected that the process involves both nucleation and condensation mechanisms [5].
The size of particles formed during combustion depends on the time spent in the formation and oxidation zones. The size of a biomass exhaust particle can span a range from less than 0.01 µm to greater than 100 µm. However, the majority of biomass combustion aerosol is typically smaller than 1 µm in diameter [6].
Today, the greatest attention is paid to particles with a size (aerodynamic diameter) of less than 10 µm (PM10), which may penetrate into the respiratory tract. Particles of this fraction are divided into two groups that differ in size, formation mechanism, composition, and behaviour in the atmosphere.
The first group is made up of particles smaller than 2.5 µm (fine respirable fraction, PM2.5), arising from chemical reactions, nucleation, condensation of gaseous emissions on the surface of particles, or coagulation of the finest particles.
The second group comprises particles in the size range from 2.5 to 10 µm (coarse fraction, PM2.5-10).
The finest particles, with a diameter below 2.5 µm (PM2.5), are considered to cause the greatest harm to human health: they deposit deep in the lungs and block the reproduction of cells [7][8][9]. Various types of wood have different compositions and properties, such as calorific value and ash melting temperature, which greatly affect the production of PM.
In this work, experimental measurements were carried out focusing on the formation of PM during the combustion of different types of dendromass in a small heat source. The effect of various temperatures of the primary combustion air on the emission parameters is also evaluated.
Measurement of Emission Parameters
Methods for measuring emissions of pollutants can in principle be divided into the measurement of particulate matter and the measurement of gaseous substances. The methods and measurement principles are based on the properties of the emissions in the flowing medium. One of the methods for measuring particulate matter is presented below.
Gravimetric Method. The gravimetric method is a manual single method in which the flue gas is sampled by a probe. It is based on determining average concentrations by sampling at multiple points of the measurement cross-section, followed by gravimetric assessment of the samples. Solid contaminants are usually separated by an external filter.
Representative sampling is performed with a sampling probe of suitable shape at the correct speed under isokinetic conditions [10].
The concentration of particulate matter in the flue gas is converted to standard conditions and can be determined for wet or dry flue gas. The measured volume of the sample taken at the volume gas meter should be converted to standard conditions, that is, a pressure of 101325 Pa and a temperature of 273.15 K (0 °C). Therefore, the temperature and pressure of the measured sample are measured upstream of the gas meter.
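As a small illustration (not taken from the paper), this conversion follows directly from the ideal gas law; the helper below converts a sampled volume to the standard conditions quoted above.

```python
def volume_at_stp(v_measured_m3: float, t_kelvin: float, p_pa: float) -> float:
    """Convert a sampled gas volume to standard conditions (273.15 K,
    101325 Pa) via the ideal gas law: V_N = V * (p / p_N) * (T_N / T)."""
    return v_measured_m3 * (p_pa / 101_325.0) * (273.15 / t_kelvin)

# Example: 0.50 m3 sampled at 25 degrees C and 98 kPa before the gas meter
print(f"{volume_at_stp(0.50, 273.15 + 25.0, 98_000.0):.4f} m3")  # ~0.4430 m3
```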
Cumulative sampling can provide the average concentration over the cross-section but not the concentration profile. The flow velocity, or the flow of the sample gas, is measured to ensure isokinetic conditions, for example, by an aperture (orifice) track, and the total collected amount of gas is measured by a gas meter [11,12].
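For illustration only, the isokinetic condition fixes the required sample flow once the flue gas velocity and the nozzle diameter are known; the figures below are invented.

```python
import math

def isokinetic_sample_flow(duct_velocity_ms: float, nozzle_d_m: float) -> float:
    """Volumetric sample flow (m3/s) so that gas enters the nozzle at the
    same velocity as the surrounding flue gas (isokinetic condition)."""
    nozzle_area = math.pi * (nozzle_d_m / 2.0) ** 2
    return duct_velocity_ms * nozzle_area

# Example: 8 m/s flue gas and a 10 mm nozzle -> required pump flow
q = isokinetic_sample_flow(8.0, 0.010)
print(f"{q * 1000 * 60:.2f} L/min")   # ~37.70 L/min
```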
In the gravimetric method, representative samples are taken directly from the flowing gas with a probe of appropriate shape [13].
To meet the increasing requirements for fine particulate determination, a multistage impactor probe was used in these experiments. The impactor separation system filters and separates solid emissions in a three-stage impactor. The construction of the device allows parallel separation of the PM10 and PM2.5 fractions (Figure 1).
The advantage of the gravimetric method is its simplicity and the relatively low sampler costs.
Experimental Measurement
A fireplace rated at 6 kW, designed for burning piece wood, was used as the heat source. The bottom of the combustion chamber is fitted with a grate and a container into which the ash falls. Access to the combustion chamber is through a door glazed with highly heat-resistant glass.
Cooling/Heating of Combustion Air.
The temperature of the inlet combustion air was changed on the primary combustion air supply. Heat exchangers are connected to the primary air supply pipe for heating/cooling the combustion air; in this way, the temperature of the incoming primary combustion air is heated or cooled to the desired level. The minimum supply air temperature was -5 °C, and it was gradually increased up to 40 °C. The temperature step between measurements was 5 °C and was regulated by the heat exchanger, which is located behind the fan in the duct. Temperature control of the heat exchanger was ensured by a Julabo F40 circulation thermostat.
The scheme of the experimental stand for heating/cooling the supply air is shown in Figure 2. In order to evaluate the quality of the combustion process, the flue gas composition was measured by an analyzer.
Dendromass. During the experiment, different types of wood were also tested. Each measurement lasted 1 hour, during which about 1.5 kg of fuel was burned. The types of wood used for the experimental measurements are listed in Table 1.
Position of Secondary Air Inlet
The experimental heat source has the following air inlets:
(i) primary (frontal): air flows through the grate and ashtray towards the fuel;
(ii) secondary (back): burns residual combustible gases that would otherwise escape through the chimney, which increases efficiency and thus lowers fuel consumption;
(iii) tertiary (top): used for blowing off the windshield, preventing clogging, and also contributing to an improved combustion process and reduced emissions.
The fireplace is designed for burning piece wood (see Figure 3).
In this task, the different positions of secondary air inlet were investigated. The aim was to evaluate whenever the location of air inlet has influence on the formation of particulate matter.
Results and Discussion
During the measurements, the concentrations of the following emissions in the flue gas were recorded: CO, CO2, NO, and particulate matter. The temperature of the primary combustion air supplied to the fireplace was varied by changing the set temperature on the refrigerated circulator. Different temperatures of the primary combustion air affect the formation of gaseous emissions and particulate matter. Figure 4 shows the measured carbon dioxide concentrations as a function of the set primary combustion air temperature.
The highest average CO2 concentration was recorded at an inlet air temperature of 35 °C, while the lowest average value, 3.20%, was registered at a supply air temperature of 15 °C. Carbon dioxide formation tends to increase with increasing primary combustion air temperature. Figure 5 shows the results of the carbon monoxide measurements.
The highest average CO value, 7193 mg·m−3, was recorded at an inlet air temperature of 10 °C, while the lowest average value, 5051 mg·m−3, was reached at a supply air temperature of 30 °C. The results indicate that carbon monoxide formation tends to decrease with increasing primary combustion air temperature.
Figure 6 shows the dependence of NO formation on the temperature of the primary combustion air supplied to the experimental heat source.
The highest average NO value (111.65 mg·m−3) was reached at 10 °C, and the lowest average value (80.16 mg·m−3) was measured at 20 °C. NO production tends to decrease with increasing primary combustion air temperature.
The PM concentration results as a function of the primary combustion air temperature are shown in Figures 7 and 8.
The maximum PM concentration measured while varying the combustion air temperature was 202 mg·m−3. The minimum PM concentration was generated at a combustion air temperature of 35 °C.
Different Types of Dendromass.
The second part of the work deals with the effect of different dendromass types on the formation of solid particles. Emission generation is largely influenced by the type of fuel burned in the heat source. Every fuel has different properties and a different chemical composition, which ultimately affect the combustion process, the amount of emissions actually produced, and the ash content. The same combustion conditions were maintained throughout the experimental measurements, that is, a uniform supply of primary, secondary, and tertiary air, the same chimney draught (12 Pa), and a maximum fuel charge of 1.5 kg.
Particulate measurements were conducted on all types of wood for 30 minutes. During this time, PM was captured on the filters for each sample. The filters were subsequently dried and weighed, and the particulate matter concentrations were determined from the difference in filter weight before and after the measurement. The highest amounts of particulate matter were observed in the measurements of white birch with bark and beech (Figure 9). It can also be concluded that, in terms of PM, it is advantageous to supply the combustion air through the second row of inlets.
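The concentration calculation described above reduces to the filter mass gain divided by the standardized sample volume. A minimal sketch, with filter masses and sample volume chosen only for illustration (they land near the reported maximum of 202 mg·m−3):

```python
def pm_concentration_mg_m3(filter_mass_before_g: float,
                           filter_mass_after_g: float,
                           sampled_volume_std_m3: float) -> float:
    """Gravimetric PM concentration: filter mass gain over the sampled
    gas volume converted to standard conditions."""
    mass_gain_mg = (filter_mass_after_g - filter_mass_before_g) * 1000.0
    return mass_gain_mg / sampled_volume_std_m3

# e.g. a filter gaining 21.4 mg over 0.106 m^3 of standardized sample gas
print(pm_concentration_mg_m3(1.2034, 1.2248, 0.106))  # ~201.9 mg/m^3
```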
Conclusion
The aim of this work was to demonstrate the impact of the primary combustion air temperature on emission parameters.
The presented emission results as a function of primary combustion air temperature do not point to a single most suitable temperature setting: each type of emission reached its lowest value at a different primary combustion air temperature.
The experimental measurements of solid emissions make clear that, in terms of the lowest PM values, it is preferable to supply the primary combustion air to the combustion process at a temperature of 35 °C.
It can be argued that the production of carbon monoxide (CO) decreases with increasing temperature at the expense of a higher production of carbon dioxide (CO2). However, the formation of CO is influenced by several factors, so its varying concentration during the measurements cannot be attributed solely to the changing combustion air temperature.
In this research work, the impact of different types of dendromass on the formation of particulate matter during the combustion process was analyzed. The measurement results indicate that the type of fuel has a considerable influence on the combustion process and on the formation of particulate matter. This phenomenon is largely driven by the different properties and chemical compositions of the various types of dendromass.
In the case of birch without bark, the lowest PM values were measured, suggesting that the bark of firewood contributes significantly to the formation of solid particles.
The measured results show that the type of firewood affects the emission parameters of the heat source.
Computer modelling is becoming more powerful and better developed, and is therefore gaining in popularity. It is emerging as an attractive tool to assist the combustion engineer in areas such as new process design, plant scale-up, retrofitting, and pollutant control. The numerical simulation of particulate matter formation will therefore be addressed in future research.
Conflict of Interests
There is no conflict of interests regarding the publication of this paper.
"year": 2014,
"sha1": "ec429f30530844f9b75915f4c5ecbc7e8dcfa5f0",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/tswj/2014/487549.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e4f29ddeddb6d5e02971665365f786f0c81d46a0",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Correlation between particle size/domain structure and magnetic properties of highly crystalline Fe3O4 nanoparticles
Highly crystalline single-domain magnetite Fe3O4 nanoparticles (NPs) are important, not only for fundamental understanding of magnetic behaviour, but also for their considerable potential applications in biomedicine and industry. Fe3O4 NPs with sizes of 10–300 nm were systematically investigated to reveal the fundamental relationship between the crystal domain structure and the magnetic properties. The examined Fe3O4 NPs were prepared under well-controlled crystal growth conditions using a large-scale liquid precipitation method. The crystallite size of cube-like NPs estimated from X-ray diffraction pattern increased linearly as the particle size (estimated by transmission electron microscopy) increased from 10 to 64.7 nm, which indicates that the NPs have a single-domain structure. This was further confirmed by the uniform lattice fringes. The critical size of approximately 76 nm was obtained by correlating particle size with both crystallite size and magnetic coercivity; this was reported for the first time in this study. The coercivity of cube-like Fe3O4 NPs increased to a maximum of 190 Oe at the critical size, which suggests strong exchange interactions during spin alignment. Compared with cube-like NPs, sphere-like NPs have lower magnetic coercivity and remanence values, which is caused by the different orientations of their polycrystalline structure.
of approximately 120 nm. 19 Although the effects of size and shape on the behaviour of magnetic particles have been known for more than half a century, 10 the quantitative effects on Zn0.4Fe2.6O4 NPs remained undiscovered in the size range of 20-140 nm until 2012, 20 when the critical size was found to be approximately 60 nm, and the saturation magnetisation (Ms) was found to be lower for spherical particles than for cubic ones.
Owing to the experimental difficulty in controlling particle sizes over a wide size range, 21 a systematic investigation of the magnetic domain structures of the most commonly used Fe3O4 NPs is still lacking, although such an investigation is needed to meet the currently increasing requirements for various applications. The current study therefore investigates the size dependence of the magnetic properties of Fe3O4 NPs with sizes of 10-300 nm. The study includes cube- and sphere-like NPs that were produced under well-controlled crystal growth conditions on a large scale. The critical size of highly crystalline cube-like Fe3O4 NPs was examined through the correlation between the particle size measured by transmission electron microscopy (TEM) and the crystallite size estimated by X-ray diffraction (XRD). The value was further confirmed by observing the lattice fringes and examining the dependence of Hc on the particle size. The high Hc value obtained in this study is discussed in detail in terms of the spin interactions in a single-domain structure.
Methods
Two types of highly crystalline Fe3O4 NPs, i.e., cube-like and sphere-like ones, were synthesised on a large scale under precise control of the Fe2+ concentration, pH, temperature, and aeration rate. The cube-like Fe3O4 NPs were prepared by a two-stage oxidation reaction, which is described in patent No. US 5843610A (Toda Kogyo Co., Ltd., Japan). 22 In contrast, the sphere-like Fe3O4 NPs were prepared by a one-stage oxidation reaction, which is described in patent No. US 4992191A (Toda Kogyo Co., Ltd., Japan). 23 All data generated or analyzed during this study are included in this article and its Supplementary Information files.
The morphologies of the prepared NPs were analysed using field-emission scanning electron microscopy (FE-SEM; Hitachi S-5000, Tokyo, Japan) and TEM (JEM-2010, 200 kV, JEOL Ltd., Tokyo, Japan). The crystallite size and chemical composition of the prepared NP samples were examined by XRD (RINT2000, Rigaku Denki Co. Ltd., Tokyo, Japan), using Cu Kα radiation over a 2θ scanning range of 10-80°. Their magnetic performance was assessed using a superconducting quantum interference device (SQUID, Quantum Design, Tokyo, Japan) operated at 300 K. The prepared cube-like NPs with particle sizes (dp) of 9.6, 19.6, 24.4, 31.9, 45.3, 64.7, 130, 243, and 287 nm were named C1-C9, respectively; the sphere-like NPs with dp of 93.3 and 121 nm were named S1 and S2, respectively. The XRD patterns also demonstrate the high crystallinity of these particles. Impurity peaks and transition phases were not observed, which indicates that the particles prepared by the liquid precipitation method were pure, both chemically and in their crystalline phase. The crystallite sizes (dc) of all the Fe3O4 NPs were estimated by the Scherrer formula using the highest-intensity XRD peak, 24 namely [311], and compared with the dp obtained from the TEM analysis, as listed in Supplementary Table S1.
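The Scherrer estimate referred to above is d = Kλ/(β cos θ), with β the peak full width at half maximum (FWHM) in radians. A minimal sketch for the Fe3O4 [311] reflection (near 2θ ≈ 35.5° for Cu Kα radiation); the shape factor K = 0.9 and the FWHM value are assumptions for illustration:

```python
import math

def scherrer_size_nm(two_theta_deg: float, fwhm_deg: float,
                     wavelength_nm: float = 0.15406, k: float = 0.9) -> float:
    """Crystallite size from XRD peak broadening: d = K * lambda / (beta * cos(theta)).
    beta is the FWHM in radians; Cu K-alpha1 wavelength by default."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))

# Fe3O4 [311] reflection near 2theta = 35.5 deg; the FWHM here is illustrative
print(f"{scherrer_size_nm(35.5, 0.25):.1f} nm")  # ~33 nm
```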
Results and Discussion
The relationship between dp and dc is shown in Fig. 3. The straight line indicates a single-crystalline particle, whereas the particles corresponding to points that fall below this line are polycrystalline. Notably, dc and the corresponding dp have almost the same value for particles with diameters from 10 to around 80 nm, which indicates a single-domain structure. Particles with diameters larger than 80 nm have a constant dc, which indicates a multi-domain structure. The critical size of these cube-like NPs was calculated as 78 ± 9 nm from the relationship between dp and dc. The critical size is usually obtained from the change in magnetic properties, such as the relationship between Hc and dp, and this is the first time it has been obtained from the relationship between dp and dc for Fe3O4 NPs. The same tendency was observed by Lee et al. 19 in 2015 for multi-granule Fe3O4 NPs, which have a smaller dc than our NPs. The HRTEM images shown in Supplementary Fig. S2 confirm that NPs with diameters of up to 64.7 nm have a single-crystalline structure, as shown by the single direction of the lattice fringes. This result is consistent with those obtained for particles produced by a colloidal chemical synthetic route. 7,14
The sphere-like NPs show different crystal properties. For these NPs, dc is much smaller than dp, and also much smaller than that of the cube-like NPs with a similar dp, as shown in Supplementary Table S1 and Fig. 3. Supplementary Fig. S4 shows a detailed comparison of the cube-like C7 and sphere-like S2 NPs, which have a similar average dp. The dark-field TEM image and electron diffraction pattern show that the C7 NP is polycrystalline with a single orientation. However, a polycrystalline structure containing different orientations was observed in the S2 NPs. This was further confirmed from the HRTEM images by the existence of different directions of the lattice fringes in a single S2 NP (Supplementary Fig. S4(g-i)). Sphere-like NPs commonly consist of agglomerates of variously sized cubic NPs. 5,14,25 The different morphological structure arises from the different preparation processes. 11
The hysteresis loops for these particles, shown in Fig. 4(a), show the ferrimagnetic nature of the Fe3O4 NPs. The cube-like Fe3O4 NPs possess high Ms values, which are affected by dp as shown in Fig. 4(b). The Ms value, obtained by applying the law of approach to saturation, 26 increases with increasing dp for all samples, including the sphere-like NPs. This trend is consistent with those found in other reports on Fe3O4 NPs with diameters below 100 nm. 5,16,27,28 The Ms value increased from 54.7 emu/g (9.6-nm NPs) to 84.7 emu/g (287-nm NPs), which is close to the theoretically estimated Ms for bulk Fe3O4 (92 emu/g). The Hc and remanent magnetisation (Mr), which are also affected by dp, are shown in Figs 4(c) and 5. These values increase from around 0 for the 9.6-nm NPs, which are known to be superparamagnetic, 26,29,30 to maximum values of around 190 Oe (Hc) and 13 emu/g (Mr) at a dp of around 80 nm, and then decrease continuously with further increases in dp. The trends in these two parameters are consistent with previous theoretical estimations, 10,19,31 and they have similar characteristics to those obtained for Zn0.4Fe2.6O4 NPs. 20 The initial increase in Hc with domain size corresponds to the sixth power of the domain size. The high Hc value may be caused by the strong spin interactions in highly crystalline Fe3O4 NPs during spin alignment, which has previously been observed in soft magnetic NPs. 32
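The law of approach to saturation used above for Ms is commonly written at high fields as M = Ms(1 − a/H − b/H²); the exact variant and field range used in the original analysis (ref. 26) may differ. A hedged sketch with synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

def law_of_approach(h, ms, a, b):
    """High-field law of approach to saturation: M = Ms * (1 - a/H - b/H^2)."""
    return ms * (1.0 - a / h - b / h**2)

# Synthetic high-field branch of a hysteresis loop (H in Oe, M in emu/g)
h = np.linspace(5_000, 50_000, 40)
m = law_of_approach(h, 84.7, 120.0, 1.0e6)
m += np.random.default_rng(0).normal(0.0, 0.05, h.size)  # small measurement noise

(ms_fit, a_fit, b_fit), _ = curve_fit(law_of_approach, h, m, p0=(80.0, 100.0, 5.0e5))
print(f"Ms ~ {ms_fit:.1f} emu/g")
```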
By fitting the measured Mr and Hc values using a log-normal distribution function, shown by the dark blue solid line, the critical sizes for the maximum Mr and Hc values were determined to be 77 ± 2 nm and 75 ± 3 nm, respectively. On average, therefore, the critical size for the transition is about 76 ± 4 nm. This value is consistent with the critical size of 76 nm estimated theoretically for the transition from single- to multi-domain behaviour. 12 Furthermore, the critical size of about 76 nm is almost the same as the transition size obtained from the relationship between dc and dp, as shown in Fig. 3. Single-domain cube-like Fe3O4 NPs, such as those with a size of 64.7 nm, can be applied effectively as starting materials for many real products, especially for new rare-earth-free magnets with high magnetic moments, by transformation into α″-Fe16N2 NPs and subsequent dispersion and assembly under a magnetic field. [33][34][35][36][37][38]
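A sketch of the critical-size extraction described above: fit Hc(dp) with a log-normal-shaped peak whose maximum sits at the critical size. The functional form and the Hc data points below are assumptions for illustration (loosely following the reported trend), not the study's actual fit or data:

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_peak(d, amp, d_crit, sigma):
    """Log-normal-shaped peak; the maximum of this form sits at d = d_crit."""
    return amp * np.exp(-(np.log(d) - np.log(d_crit))**2 / (2.0 * sigma**2))

# Illustrative coercivity vs particle size (nm, Oe), peaking near ~76 nm
d_p = np.array([9.6, 19.6, 24.4, 31.9, 45.3, 64.7, 93.3, 130.0, 243.0, 287.0])
h_c = np.array([2.0, 25.0, 45.0, 80.0, 130.0, 175.0, 185.0, 150.0, 90.0, 70.0])

(amp, d_crit, sigma), _ = curve_fit(lognormal_peak, d_p, h_c, p0=(190.0, 80.0, 0.8))
print(f"critical size ~ {d_crit:.0f} nm, peak Hc ~ {amp:.0f} Oe")
```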
The crystalline properties of the NPs affect their magnetic properties, especially Hc. The different crystalline properties of the cube- and sphere-like NPs result in their different magnetic performance, as shown in Figs 4(c) and 5. The measured Mr and Hc values for the sphere-like NPs (S1 and S2) are both lower than the corresponding fitted values for the cube-like NPs, which may be caused by their small composite crystallite size. A comparison of the hysteresis loops of the two NPs with similar dp (C7 and S2) in Supplementary Fig. S5 shows the difference between the two samples, although their Ms values are similar (79.4 and 79.7 emu/g, as listed in Supplementary Table S1). The multiple orientations of the crystallites in the polycrystalline sphere-like NPs, which lead to multiple orientations of their easy axes, are considered to be the reason for their lower Mr and Hc values compared with those of the cube-like NPs. This is consistent with a previous study on the particle-size and shape dependence of Fe3O4 NPs. 5 However, further theoretical explanation and experimental investigation are still required.
Conclusions
The magnetic properties, including the Ms, Mr, and Hc, of Fe3O4 NPs are highly influenced by the particle size and domain structure. The Ms increases with increasing particle size, regardless of the crystal structure and particle shape. After exceeding the superparamagnetic limit, the Hc and Mr values increase with increasing particle size up to maximum values of about 190 Oe and 13 emu/g, respectively, at the critical size of 76 nm. Above this critical size, the Hc and Mr values decrease with further increases in the particle size, and the cube-like Fe3O4 NPs change from a single- to a multi-domain structure. The multiple orientations of the crystallites within the multi-domain-structured NPs lead to the decrease in the Hc value. These findings suggest that considerable attention should be given to the particle size and crystalline properties of Fe3O4 NPs, which have potential biomedical and industrial applications. These applications require that magnetic particles are sized appropriately to achieve a good balance between effective surface area and satisfactory magnetic performance.
"year": 2017,
"sha1": "05e14149a8ceb29a773c2f40dce87d3353c7f4e6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1038/s41598-017-09897-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7ea6f7ecb8987fca6ce0b561956029da127e0b30",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
Ckmt1 is Dispensable for Mitochondrial Bioenergetics Within White/Beige Adipose Tissue
Abstract Within brown adipose tissue (BAT), the brain isoform of creatine kinase (CKB) has been proposed to regulate the regeneration of ADP and phosphocreatine in a futile creatine cycle (FCC) that stimulates energy expenditure. However, the presence of FCC, and the specific creatine kinase isoforms regulating this theoretical model within white adipose tissue (WAT), remains to be fully elucidated. In the present study, creatine did not stimulate respiration in cultured adipocytes, isolated mitochondria or mouse permeabilized WAT. Additionally, while creatine kinase ubiquitous-type, mitochondrial (CKMT1) mRNA and protein were detected in human WAT, shRNA-mediated reductions in Ckmt1 did not decrease submaximal respiration in cultured adipocytes, and ablation of CKMT1 in mice did not alter energy expenditure, mitochondrial responses to pharmacological β3-adrenergic activation (CL 316, 243) or exacerbate the detrimental metabolic effects of consuming a high-fat diet. Taken together, these findings solidify CKMT1 as dispensable in the regulation of energy expenditure, and unlike in BAT, they do not support the presence of FCC within WAT.
Introduction
Identifying mechanisms to decrease mitochondrial coupling has gained considerable interest as a therapeutic approach to promote energy expenditure and treat obesity. [1][2][3][4][5][6] While BAT abundance is limited in humans, 7,8 adipocytes within white adipose tissue (WAT) can develop a BAT-like phenotype in response to various stimuli, [9][10][11][12] and this phenomenon, termed "browning" or "beiging", is characterized by the induction of multilocular adipocytes, robust mitochondrial biogenesis, increased expression of thermogenic genes (eg, Ucp1, Cidea, and Pgc1α), and augmented mitochondrial respiration. 13,14 Browning of WAT and the presence of functional beige adipocytes have been documented in the inguinal WAT of rodents and the supraclavicular region of humans, [15][16][17][18] and transplantation of these cells attenuated obesity and associated hyperglycemia in mice. 19 Considering that in obese populations WAT depots can comprise up to 60%-70% of body mass, 20 promoting energy expenditure within WAT represents a promising and readily available "inducible" target in humans.
Independent of UCP1, creatine-dependent adenosine diphosphate (ADP) recycling in adipocytes has been proposed to stimulate energy expenditure. 21,22 First described in beige adipocyte mitochondria, the creatine kinase-mediated production of phosphocreatine (PCr) drives the liberation of a molar surplus of ADP to stimulate mitochondrial respiration in ADP-limited conditions. 21 In this model, ADP is transported into the mitochondrial matrix through the adenine-nucleotide transporter (ANT) to be utilized by adenosine triphosphate (ATP) synthase as a substrate for ATP production, 21 while a unidirectional phosphatase enzyme regenerates creatine and inorganic phosphate at the expense of PCr. This continuous futile recycling of creatine (FCC) and ADP has been proposed to stimulate energy expenditure. 21,22 In support of this model, preventing creatine transport by gene deletion of the creatine transporter, Solute Carrier Family 6 Member 8 (encoded by Slc6a8), or deleting the rate-limiting enzyme for creatine synthesis (glycine amidinotransferase, GATM), reduced creatine levels in adipocytes and predisposed animals to obesity. 23,24 Despite evidence in rodents suggesting that creatine is functionally involved in thermogenic pathways in adipose tissue, creatine monohydrate supplementation had no effect on BAT activation or energy expenditure in a human population known to have reduced creatine levels, 25 challenging the therapeutic potential of creatine-mediated energy expenditure within BAT.
The regeneration of PCr, regulated by creatine kinase, is critical for the proposed futile cycling of ADP. Four creatine kinase isoenzymes have been identified in mammals: two cytosolic forms (creatine kinase, muscle-type [CKM] and creatine kinase, brain-type [CKB]) and two mitochondrial forms (creatine kinase ubiquitous-type, mitochondrial [CKMT1] and creatine kinase sarcomeric-type, mitochondrial [CKMT2]), which exhibit differential tissue expression and are encoded by different genes. 26,27 Despite being classified as a cytosolic protein, CKB has recently been suggested to be targeted to the mitochondria to regulate the FCC in brown adipocytes, and adipocyte-specific CKB-/- mice have impaired energy expenditure and are susceptible to obesity. 28 However, it remains unclear whether CKB is a key protein regulating mitochondrial bioenergetics within WAT, as within WAT CKB is not detected on mitochondrial membranes, and siRNA-mediated reductions in CKB only reduced cytosolic creatine kinase activity. 29 Additionally, reductions in CKB within WAT increase ADP-limited respiration and PCr concentrations, 29 responses opposite to the proposed CKB-mediated ADP recycling within the mitochondrial intermembrane space identified in BAT. 28 Apart from CKB, the mitochondrial isoforms CKMT1 and CKMT2 have been proposed to be important for energy expenditure, as both proteins have been detected in human WAT 29 and BAT, 30,31 and knockdown of Ckmt1 21 and Ckmt2 29 in cultured human white adipocytes reduced basal mitochondrial respiration 21 and mitochondrial creatine kinase activity. 29 Additionally, in mice, consumption of a high-fat diet (HFD) increased Ckmt1 and Ckmt2 expression in visceral and subcutaneous WAT, respectively, suggesting possible adaptations to promote creatine-mediated energy expenditure during chronic energy overload within WAT. 24 As a result, in the present study it was hypothesized that Ckmt1 represents a key creatine kinase mediating the FCC within WAT. To interrogate this possibility, we evaluated the ability of creatine to support WAT mitochondrial respiration, determined which creatine-related proteins are expressed in human visceral adipose tissue, and, given the presence of Ckmt1, determined whether genetically decreasing Ckmt1 affected mitochondrial bioenergetics, prevented β3-agonist-mediated changes in energy expenditure, or exacerbated HFD-induced WAT hypertrophy and whole-body glucose intolerance in mice.
Animals
When explicitly stated, experiments utilized male Sprague Dawley rats that were bred on site at the University of Guelph.
Ckmt1 null (KO) and wild-type (WT) mice were bred on site at the University of Guelph from creatine kinase ubiquitous-type, mitochondrial (CKMT1; Ckmt1 tm2Bew) heterozygous mice. The colony was previously established from cryopreserved embryos, which were generously provided by Dr. Be Wieringa from Dr. Craig Lygate's repository and generated on a C57BL/6N background at the Toronto Centre for Phenogenomics, as previously described. 32 All animals were group-housed in a temperature- and humidity-regulated room on a 12:12-h light-dark cycle and were randomized to receive either (i) intraperitoneal (IP) injections of CL 316,243 (CL; 0.2 mg/kg body mass; Sigma, C5976) or an equal volume of sterile saline (SAL) for 4 consecutive days (n = 6-10/group), or (ii) a sucrose-matched low-fat control diet (LFD; 10% kcal from lard; Research Diets D12450J) or a high-fat diet (HFD; 60% kcal from lard; Research Diets D12492) for either 8 wk (animals housed at 24 °C) or 5 wk (animals housed at 30 °C). A shorter feeding intervention was selected for animals housed at thermoneutrality to increase the likelihood of detecting subtle differences in phenotype afforded by the ablation of Ckmt1. In all experiments, prior to tissue collection, an intraperitoneal injection of sodium pentobarbital (60 mg/kg; MTC Pharmaceuticals, Cambridge, ON, Canada) was used to anesthetize animals. All experiments were approved by the Animal Care Committee at the University of Guelph and met the guidelines of the Canadian Council on Animal Care.
Mitochondrial Isolation and Respiration
Mitochondria were isolated as previously reported, with minor modifications. In brief, tissue was weighed [pooled WAT (iWAT and gWAT, ∼6 g) or red gastrocnemius (RG, ∼200 mg)], minced with sharp scissors, and homogenized using a Teflon pestle (750 rpm). Thereafter, the homogenate was centrifuged (800 g; 10 min at 4 °C), the supernatant was centrifuged (9400 g; 10 min at 4 °C), and, finally, mitochondria were recovered by resuspending the pellet following a final spin (12 000 g; 10 min at 4 °C). To remove excess lipid from WAT, the homogenate was filtered through cheesecloth prior to the first centrifugation step. [33][34][35][36][37][38] For WAT experiments, only MiR05 buffer was utilized. Mitochondrial respiration was measured using high-resolution respirometry (Oroboros Oxygraph-2K, Innsbruck, Austria). Experiments were carried out at 37 °C with constant stirring (750 rpm), and substrate concentrations are listed in the figure legends. Respiration data were normalized to mitochondrial protein content or tissue weight.
Creatine Metabolism-Related Proteins in Human WAT
We have previously determined the transcriptome of visceral WAT (VFAT) progenitor cells (n = 5) and the adipocyte proteome in human upper-body subcutaneous WAT (ASAT), gluteal-femoral WAT (GFAT), and VFAT. 39,40 We utilized these data sets to determine the possible presence of proteins specifically linked to creatine metabolism.
Murine white subcutaneous (9W) preadipocytes were cultured until confluence and differentiated with Dulbecco's modified Eagle's medium (DMEM; Sigma-Aldrich) containing 20 nM insulin, 1 nM triiodothyronine, 0.5 mM isobutyl methylxanthine, 1 μM dexamethasone, 1 μg/mL rosiglitazone, and 0.125 mM indomethacin, according to a previously published protocol. 41 Differentiated 9W adipocytes were transduced with lentiviral particles (MOI 1) from the pLKO.puroshGFP or pLKO.puroCkmt1 a/b constructs on the last day of differentiation (day 8) for 24 h. The lentiviral particles were then removed, and the adipocytes were kept for another 3 d before oxygen consumption or gene expression measurements.
Seahorse Assay
Adipocytes (3×10^4) were seeded in a 24-well plate coated with 0.1% gelatin (Sigma-Aldrich) one day before the experiment, and the oxygen consumption rate (OCR) was measured with an XFe96 Seahorse Extracellular Flux Analyzer (Agilent) using a Cell Mito Stress Test kit. For measurements of basal OCR, cells were incubated in medium supplemented with 1 mM pyruvate, 2 mM glutamine, and 10 mM glucose. Mitochondrial respiration was assessed by the addition (final concentrations) of 0.01 mM creatine, 1 μM oligomycin, 1 μM FCCP (uncoupler), and 1 μM rotenone + antimycin A (complex I and complex III inhibitors, respectively). A BCA protein assay was performed to estimate protein concentration and normalize absolute OCR data.
qPCR
Gene expression analysis was performed by qPCR as previously described. 41 In brief, RNA was extracted using TRIzol (Thermo Scientific), and cDNA was synthesized using a High Capacity cDNA Reverse Transcription Kit (Applied Biosystems) following the manufacturer's instructions. qPCR was conducted using QuantiNova SYBR Green/ROX qPCR Master Mix (Qiagen). Primer sequences will be made available upon request. Gene expression was normalized to GAPDH.
Indirect Calorimetry
Mice were individually housed at 24 °C in metabolic cages within the Oxymax Comprehensive Lab Animal Monitoring System (CLAMS; Columbus Instruments, Columbus, OH, USA). Diet and water were provided ad libitum, and the same light-dark cycle was maintained. Total carbohydrate and fat oxidation and energy expenditure were calculated from VO2 and VCO2 as previously described. 32,43
Intraperitoneal Glucose and Insulin Tolerance Tests
Animals were fasted for 4 h prior to receiving an intraperitoneal injection of glucose (2 g/kg body mass) or insulin (1 U/kg body mass, NovoRapid). Blood glucose was measured through the tail vein using a hand-held glucometer (FreeStyle Lite, Abbott Laboratories, Saint-Laurent, QC, Canada), and the area under the curve (AUC) was calculated after subtracting baseline blood glucose.
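The VO2/VCO2-based calculations cited above (refs. 32, 43) are not spelled out in the text; a common choice is the abbreviated Weir equation for energy expenditure and Frayn-type equations for non-protein substrate oxidation, and the glucose AUC is a baseline-subtracted trapezoid integral. The sketch below assumes those standard forms and uses illustrative mouse-scale values:

```python
import numpy as np

def energy_expenditure_kcal_min(vo2_l_min: float, vco2_l_min: float) -> float:
    """Abbreviated Weir equation (urinary nitrogen ignored)."""
    return 3.941 * vo2_l_min + 1.106 * vco2_l_min

def substrate_oxidation_g_min(vo2_l_min: float, vco2_l_min: float):
    """Frayn-type non-protein estimates of whole-body oxidation rates."""
    cho = 4.55 * vco2_l_min - 3.21 * vo2_l_min   # carbohydrate, g/min
    fat = 1.67 * (vo2_l_min - vco2_l_min)        # fat, g/min
    return cho, fat

def gtt_auc(time_min, glucose_mm):
    """Baseline-subtracted area under a glucose tolerance curve (trapezoid rule)."""
    glucose = np.asarray(glucose_mm, dtype=float)
    return np.trapz(glucose - glucose[0], time_min)

vo2, vco2 = 0.0025, 0.0020  # L/min, plausible magnitudes for a mouse
print(f"RER {vco2 / vo2:.2f}, EE ~ {energy_expenditure_kcal_min(vo2, vco2) * 1440:.1f} kcal/day")
print("CHO, fat (g/min):", substrate_oxidation_g_min(vo2, vco2))
print("AUC:", gtt_auc([0, 15, 30, 60, 90, 120], [8.0, 22.5, 19.1, 14.8, 11.6, 9.9]))
```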
Histology
Histological assessment was carried out on iWAT, gWAT, and BAT samples that were fixed for 24 h in 10% neutral buffered formalin and transferred to 70% ethanol for storage at 4 °C. Samples were embedded and stained on glass slides with hematoxylin and eosin (H&E) at the University of Guelph Animal Health Laboratory. Slides were imaged with Cell Sense software (Olympus, Tokyo, Japan) at 40× magnification, and the adipocyte cross-sectional area (CSA) in iWAT and gWAT was assessed using an Olympus FSX 100 light microscope (Tokyo, Japan). An average of 4 fields per animal was captured and analyzed using ImageJ software (National Institutes of Health) to quantify CSA.
Statistical Assessment
All statistical analyses were carried out using Prism 9 (GraphPad Software, Inc., La Jolla, CA, USA). The apparent Km for ADP was determined via Michaelis-Menten kinetics and is defined as the concentration of ADP that achieves half-maximal mitochondrial respiration (Vmax). Unpaired t-tests were utilized to compare the apparent Km and the % change in Km with creatine from ADP titrations in RG, adjusted by the Benjamini-Hochberg FDR. All other data were compared using either one-way (human data) or two-way ANOVAs, followed by Fisher LSD post hoc analyses where appropriate. Data are expressed as mean ± standard error of the mean (SEM), and statistical significance was set at P < .05.
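For the Michaelis-Menten fit described above, respiration is modeled as V = Vmax·[ADP]/(Km + [ADP]) and the apparent Km is extracted by nonlinear regression. A minimal sketch with synthetic titration data (the units and values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(adp_um, v_max, k_m):
    """Respiration as a function of [ADP]: V = Vmax * [ADP] / (Km + [ADP])."""
    return v_max * adp_um / (k_m + adp_um)

# Synthetic ADP titration (uM) and respiration (pmol O2/s/mg)
adp = np.array([25, 50, 100, 250, 500, 1000, 2000, 4000], dtype=float)
jo2 = np.array([2.1, 4.0, 7.1, 13.3, 18.8, 23.7, 27.2, 29.4])

(v_max, k_m), _ = curve_fit(michaelis_menten, adp, jo2, p0=(30.0, 300.0))
print(f"Vmax ~ {v_max:.1f} pmol O2/s/mg, apparent Km ~ {k_m:.0f} uM ADP")
```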
Assessing the FCC
As creatine cycling has been suggested to promote ADP recycling in mitochondria isolated from iWAT in diverse situations, 21,23,24 we first investigated the stimulatory effect of creatine on in vitro mitochondrial bioenergetics. To achieve this, we adapted a well-established technique for isolating mitochondria from skeletal muscle (Figure 1A). 32,44,45 Due to the relatively high WAT requirement, rats were utilized to eliminate the need to pool multiple mice for a single mitochondrial isolation. The P/O ratios from isolated WAT mitochondria were comparable to those from skeletal muscle RG and to values reported in the literature, 46,47 validating our experimental model (Figure 1A). Next, we examined whether the addition of creatine would increase submaximal ADP-supported respiration and/or decrease P/O ratios within WAT, both indicative of creatine-driven recycling of ADP. Creatine had no significant effects on submaximal ADP respiration, maximal complex I/II respiration, or P/O ratios in mitochondria isolated from WAT (Figure 1B inset and 1C). While these data suggest creatine does not increase oxygen utilization or promote recycling within WAT mitochondria, for the proposed FCC model to be functional, creatine kinase would need to be retained in the intermembrane space (Figure 1D). While CKMT1 was detectable in isolated mitochondria (Figure 1C inset), it was not concentrated like other mitochondrial proteins (Figure 1C inset), possibly limiting the in vitro stimulatory effects of creatine. We therefore tested the ability of creatine to stimulate respiration in permeabilized WAT, a model that retains all endogenously expressed creatine-related proteins. Since the original experiments detecting FCC were performed in mice, 21,22 we also wanted to ensure species differences were not confounding our interpretation. Similar to rat isolated mitochondria (Figure 1B and C), creatine did not stimulate submaximal ADP-supported respiration in either inguinal or gonadal WAT from mice (iWAT, gWAT) (Figure 1E and F), but creatine did stimulate submaximal respiration in permeabilized skeletal muscle fibres (Supplementary Figure S1). While these experiments suggest creatine does not stimulate ADP recycling in unstimulated WAT, the original experiments were conducted following cold exposure and CL 316,243 (CL) administration in mice, 21,22 raising the possibility that β3-adrenergic-mediated gene transcription could be required for the detection of FCC. We therefore assessed possible creatine-mediated futile cycling in WAT from mice after CL stimulation. CL administration increased the abundance of OXPHOS proteins and UCP1 in isolated mitochondria, responses indicative of β3-adrenergic-mediated beiging (Figure 1G inset). In contrast to unstimulated tissue (Figure 1B), presumably as a result of increased UCP1 (Figure 1G inset), P/O ratios following CL administration could not be determined because of an apparent uncoupling (Figure 1G). As a result, we determined the total oxygen utilization over 5 min in the presence and absence of creatine. Even with this approach, creatine did not result in greater oxygen utilization when stimulated with submaximal ADP concentrations (Figure 1H), suggesting β3-adrenergic signaling does not induce FCC in WAT. While we could not detect creatine-stimulated ADP recycling in isolated mitochondria following CL administration (Figure 1H), the increased content of UCP1 may have prevented the detection of creatine-mediated ADP recycling (Figure 1I). It is difficult to model the in vivo activity of UCP1 in an in vitro isolated mitochondrial preparation, as the biological concentrations of purine nucleotides (guanosine diphosphate [GDP] and ADP) and inorganic phosphate are all removed in this preparation. 48 Additionally, isolated mitochondria do not retain their native architecture/size, 49 and swelling of mitochondria may impact membrane fluidity and rates of fatty acid flip-flop (considered rate-limiting for H+-mediated uncoupling at a neutral pH), or increase the pH of the intermembrane space (ie, increased volume to dilute H+), which places a larger emphasis on direct H+ transport by UCP1. 50,51 Given these methodological considerations, we utilized a permeabilized tissue approach to assess mitochondrial respiration following CL (Figure 1J). CL markedly increased leak respiration (Figure 1K) and oxidative phosphorylation (Supplementary Figure S2) ∼5-fold in iWAT. Guanosine diphosphate attenuated respiration to a greater extent following CL (Figure 1L), which presumably resulted from increased UCP1 content. However, the relative inhibition exerted by GDP was similar following CL (Figure 1L), suggesting that, unlike in isolated mitochondria, H+ conductance was retained in this in vitro model following CL administration. Importantly, in the presence of GDP-mediated inhibition of UCP1, creatine did not stimulate submaximal ADP-stimulated respiration following CL (Figure 1M and N). The present data highlight β3-adrenergic signaling as an inducer of uncoupled/non-oxidative-phosphorylation-dependent respiration through the induction of UCP1, and challenge the necessity of ADP recycling for energy expenditure when beiging occurs.
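For reference, the P/O (ADP/O) ratio discussed in this section is conventionally computed as the amount of ADP phosphorylated per atom of oxygen consumed during the ADP-stimulated burst; the numbers below are illustrative, not from this study:

```python
def adp_o_ratio(adp_added_nmol: float, o2_consumed_nmol: float) -> float:
    """P/O (ADP/O) ratio: ADP phosphorylated per atom of oxygen consumed.
    Each O2 molecule supplies two oxygen atoms."""
    return adp_added_nmol / (2.0 * o2_consumed_nmol)

# e.g. a state 3 burst in which 250 nmol ADP consumes ~52 nmol O2
print(f"P/O ~ {adp_o_ratio(250.0, 52.0):.2f}")  # ~2.4, typical of NADH-linked substrates
```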
Determining the Expression of Creatine-linked Genes in Human WAT
While we could not detect creatine-stimulated ADP recycling in either isolated mitochondria or permeabilized WAT, it remains possible that the FCC is biologically relevant for energy expenditure. We therefore examined existing mRNA data that we previously published on human WAT progenitor cells separated based on the abundance of CD34 protein (Figure 2A). 39 We utilized this methodology as human WAT progenitor cells with abundant CD34 (CD34hi) are linked with greater cross-sectional area and triglyceride esterification, while the absence of CD34 (CD34−) is associated with beige characteristics, including higher expression of PPARGC1A (which encodes PGC1α) and UCP1. 39 We therefore hypothesized that FCC-related genes would be more abundantly expressed in CD34− progenitor cells. In the present study, while we detected genes involved in creatine synthesis (SLC6A8, GATM, and GAMT: Figure 2B) and creatine kinase genes (CKMT1, CKMT2, and CKM: Figure 2C), these genes were all detectable in both CD34− and CD34hi progenitor cells (Figure 2C). We also detected three alkaline phosphatases (CNTNAP1, CNTNAP2, and ALPL; Figure 2D), although none was abundantly expressed in CD34− progenitor cells. Since CD34− cells have been linked to beige/energy-dissipating characteristics, 39 and in general CD34− cells express an abundance of genes encoding creatine-mediated proteins (Figure 2B and C), we re-examined our previously published proteomic data to determine the possible abundance of proteins involved in creatine metabolism (Figure 2E). While we identified 4220 proteins in human WAT, proteins involved in creatine metabolism, including SLC6A8, GATM, and GAMT, and possible alkaline phosphatases such as CNTNAP were not reliably detected (Figure 2F). In contrast, the alkaline phosphatase ALPL was detected in all WAT samples, and of the possible creatine kinase isoforms, only CKMT1 and CKB were detected in all samples (Figure 2F). However, the abundance of ALPL and the creatine kinase proteins displayed divergent hierarchical patterns: while ALPL was less abundant in VAT, both CK proteins were most abundant in visceral WAT compared with subcutaneous WAT (Figure 2F). Altogether, while human WAT expresses several key creatine-related proteins, there appears to be an absence of coordination among these genes, limiting the possibility of creatine futile cycling, as alkaline phosphatases are preferentially expressed in progenitor cells associated with TAG storage (CD34hi) and in subcutaneous WAT depots, while CK genes were conversely expressed in progenitor cells associated with a beige phenotype (CD34−) and in VAT. CKMT1 was the only CK isoform detected in both the CD34− progenitor cell transcriptome and the human WAT proteome.
CKMT1 Does Not Influence Fuel Utilization and Energy Expenditure During Acute β3-Adrenergic Stimulation with CL 316,243
Since both mRNA and proteomic approaches demonstrated CKMT1 expression in human WAT, we next aimed to determine whether CKMT1 directly regulates mitochondrial bioenergetics. In contrast to our hypothesis, in differentiated 9W white adipocytes, shRNA-mediated reductions (∼80%) in Ckmt1 (Figure 3A) increased submaximal respiration (Figure 3B and C, main effect), while the provision of creatine did not stimulate submaximal respiration in either control (GFP) or Ckmt1 knockdown cells (Figure 3B and D). We next tested the possibility that ablating Ckmt1 affects the biological response to various cellular stresses. To achieve this, we utilized a whole-body CKMT1 knockout (KO) mouse. Genotyping confirmed the absence of exon 3, which has been shown to render CKMT1 inactive, 52 within the WAT depots of KO animals (Figure 3E). Additionally, the ability of creatine to stimulate submaximal respiration was attenuated in the skeletal muscle of KO mice, further verifying the absence of CKMT1 32 (Supplementary Figure S1A and B). We next determined the role of CKMT1 in mediating β3-adrenergic-induced energy expenditure. The acute induction of adaptive nonshivering thermogenesis through the administration of the β3-adrenergic agonist CL increased VO2, VCO2, fatty acid oxidation, energy expenditure, and heat production, and decreased carbohydrate oxidation and the respiratory exchange ratio (RER) (Figure 3F-M). Ablation of Ckmt1 did not affect any of these metabolic parameters, suggesting CKMT1 does not influence whole-body energy expenditure with β3-adrenergic stimulation. Whole-body energy expenditure may not adequately reflect WAT metabolism; we therefore next assessed the influence of CKMT1 on WAT browning following 4 d of CL administration. CL decreased adipocyte cross-sectional area (gonadal WAT only), increased the number of multilocular lipid droplets (both inguinal and gonadal WAT), and increased mitochondrial proteins in WT mice, morphological and molecular changes that are indicative of a beige-like phenotype (Figure 3N-W). However, the absence of CKMT1 did not affect these responses. Additionally, CKB content was not inducible with CL treatment in either iWAT or gWAT (Figure 3Q, R, V, and W). In support of these findings, distinct structural changes within interscapular BAT were observed following CL treatment, characterized by a striking visual reduction in lipid droplet size, which was not affected by genotype (Supplementary Figure S3). Hence, while CL administration had prominent effects on adipose tissue, overall our data demonstrate that Ckmt1 ablation does not influence CL-induced browning within the iWAT and gWAT depots.
We next aimed to functionally assess the possibility that increasing CKMT1 protein through beiging with CL 316,243 could affect in vitro respiration, using a permeabilized tissue approach. While CL markedly increased respiration ∼5-fold in both iWAT and gWAT regardless of genotype (main effect of CL: Figure 4A and B), ADP was capable of stimulating respiration similarly following CL administration (Figure 4A and B). As a result, CL did not reduce the respiratory control ratio (respiration in the presence/absence of ADP: data not shown), further suggesting retained mitochondrial coupling in a permeabilized tissue preparation, despite the increase in UCP1 protein (Figure 3). Nevertheless, under conditions of submaximal ADP, creatine failed to drive respiration in either depot, and in fact decreased respiration in iWAT (Figure 4C and D, inset is the change with creatine). ADP sensitivity was unchanged by CL in iWAT (Figure 4E), while treatment in gWAT increased the apparent ADP Km, indicative of a decrease in ADP sensitivity (Figure 4F). In both depots, genotype (Figure 4E and F) and creatine (data not shown) had no effect on ADP sensitivity. Combined, these functional data indicate that CL robustly increased mitochondrial respiration within iWAT and gWAT, and these responses were not altered by creatine or by ablation of Ckmt1.
Ablation of Ckmt1 Does Not Exacerbate HFD-Induced Glucose Intolerance and Weight Gain, or Alter Resting Metabolism
Given the proposed implication of CKMT1 in energy expenditure and the previous observation that an HFD increased Ckmt1 gene expression within WAT, 24 we subsequently examined the possible contribution of CKMT1 to HFD-induced obesity and insulin resistance. In female mice, HFD feeding predictably induced glucose intolerance (Figure 5A and B) and increased body mass (∼10 g, Figure 5C), weekly caloric intake (∼40%, Figure 5D), rates of fatty acid oxidation, and absolute energy expenditure during the dark cycle (Figure 5E-J). The increase in energy expenditure following HFD was not attributable to changes in movement activity (Figure 5K) and was mitigated when normalized to body weight, suggesting the absence of changes in bioenergetics (Figure 5L). The HFD also increased the cross-sectional area of adipocytes (Figure 6A-E), but ablating Ckmt1 did not alter any of these morphological or functional responses, suggesting Ckmt1 does not influence the phenotypic response to HFD consumption. Moreover, HFD feeding decreased submaximal ADP-supported respiration, maximal complex I/II-supported respiration (Figure 6F and I), and the apparent Km for ADP (Figure 6H and K) in wild-type mice, and ablating Ckmt1 had no effects on mitochondrial respiration that were distinguishable from wild-type mice. Notably, as in our original experiments (Figure 1E and F), creatine did not drive mitochondrial respiration in the presence of submaximal ADP in either group or WAT depot (Figure 6G, H, J, and K). While these data indicate that the ablation of Ckmt1 does not affect the susceptibility to HFD-mediated glucose intolerance, given the protection females are afforded against HFD-induced obesity, 53,54 we wanted to confirm that our findings translate across the sexes by repeating the metabolic experimental protocols in male mice. Aligning with our data in female mice, ablating Ckmt1 did not alter HFD-induced glucose intolerance or indices of whole-body metabolism and energy expenditure (Supplementary Figure S4A-I). Combined, these data demonstrate that, regardless of sex, ablation of Ckmt1 does not exacerbate mitochondrial dysfunction or changes in whole-body energy homeostasis in mice fed an HFD.
At Thermoneutrality, CKMT1 Does Not Influence HFD-Induced Glucose Intolerance, Insulin Resistance, or iWAT and gWAT Respiration
Given previous findings suggesting the presence of the FCC in WAT 21,23,24 and our failure to detect FCC in mice, we were concerned that housing mice below thermoneutrality was masking the presence of FCC. While housing mice at ambient temperatures below their thermoneutral zone is common practice, it has been shown to mask the obesogenic effects of an HFD in UCP1 KO mice. 55 We reasoned that, in a similar manner, housing temperature may have contributed to the metabolic uniformity observed between Ckmt1 WT and KO animals housed at room temperature. We therefore conducted additional experiments in which mice were fed an HFD and housed at 30 °C. Housing temperature did not alter the biological response to an HFD: at thermoneutrality, regardless of genotype, HFD-fed animals became glucose intolerant (Figure 7A and B) and insulin resistant (Figure 7C and D). Additionally, in permeabilized iWAT and gWAT, HFD consumption reduced mitochondrial respiration (Figure 7E and H) and the apparent Km of ADP (Figure 7G and J) similarly in WT and Ckmt1 KO mice (main effect of HFD). Moreover, in the presence of submaximal ADP, stimulation of respiration could not be detected with the addition of creatine in either genotype (Figure 7F and I). These data demonstrate that ablation of Ckmt1 does not accelerate the progression of diet-induced obesity, even in mice housed at thermoneutrality.
Discussion
We aimed to determine the presence of creatine-mediated respiration in WAT in response to diverse metabolic stimuli, and to establish the necessity of CKMT1 as a candidate enzyme regulating the FCC within WAT. However, despite the incorporation of various in vitro models (ie, differentiated adipocytes, isolated mitochondria, and permeabilized tissue) and cellular stresses (ie, β3-adrenergic signaling, HFD, sex, and housing temperature), and in contrast to our hypothesis, we could not verify the presence of creatine-mediated substrate cycling as a primary determinant of mitochondrial respiration within WAT. Additionally, molecular approaches that reduced Ckmt1 in cultured adipocytes and mice did not impair mitochondrial bioenergetics, solidifying Ckmt1 as dispensable for regulating energy expenditure within WAT.
Figure 5. High-fat diet-induced changes in glucose tolerance, weight gain, and resting whole-body metabolism are consistent between Ckmt1 WT and KO mice. Glucose tolerance test (A), area under the curve (B), body mass (C: inset, change in body mass), weekly food intake (D), VO2 (E), VCO2 (F), carbohydrate oxidation (G), lipid oxidation (H), RER (I), energy expenditure (J), activity (K), and energy expenditure normalized to body weight (L) in female Ckmt1 WT and KO mice after 8 wk of LFD or HFD feeding. AUC, area under the curve; VO2, rate of oxygen consumption; VCO2, rate of carbon dioxide production; RER, respiratory exchange ratio. Data expressed as mean ± SEM.
While it was hypothesized that CKMT1 is an important regulator of the FCC within WAT, ablating Ckmt1 did not affect body weight, energy expenditure, glucose tolerance, or insulin sensitivity in HFD-fed animals, and neither sex nor housing temperature altered these responses. Furthermore, adipocyte hypertrophy and both basal and maximal respiration in iWAT and gWAT followed the anticipated changes in response to HFD feeding; however, these indices were unaffected by genotype. These findings are in contrast to other gene deletion models (eg, UCP1, TNAP, CrT, and CKB), which exhibit pathological changes in metabolic parameters associated with obesity and insulin resistance, 23,28,55,56 suggesting CKMT1 is not a primary contributor to creatine-driven nonshivering thermogenesis as it pertains to obesity. While creatine has previously been reported to stimulate respiration within mitochondria isolated from BAT and beige/WAT, 21,23,24,28,56 we could not verify this response in mitochondria isolated from either rats or mice, as creatine did not impact P/O ratios or submaximal respiration. The physiological intervention used to induce beige adipose tissue could be a contributing factor in the discrepant findings, as while others have primarily used cold exposure 21 to beige WAT, we employed pharmacological β3-AR activation with CL 316,243. A recent report indicates that cold and β3-ARs activate distinct populations of beige adipocytes within WAT, 57,58 and a preprint report shows that cold exposure stimulates FCC-linked genes to a greater degree than CL administration. 59 However, while the magnitude of the response may differ between CL and cold exposure, browning of WAT through cold exposure is thought to be mediated primarily by β3-ARs, 60,61 and creatine has been shown to stimulate respiration in isolated mitochondria derived from WAT after CL administration in younger mice. 21,22 Also, in the present study, CL was effective at eliciting canonical characteristics of adipocyte browning, including multilocularity of lipid droplets, a marked increase in UCP1 (within iWAT), and markers of mitochondrial biogenesis and respiratory capacity within several WAT depots. Alternatively, a difference between the present methodology and previous reports is the age of the mice, as young animals (5-7 wk of age) were utilized in the previous experiments delineating creatine-mediated futile cycling, 21 compared with older mice (18-24 wk) in the present study. Since adipose tissue browning regresses with ageing, 62 this may have contributed to the inability to detect an effect of creatine in the present study. However, if this point is accurate, it raises questions about the translatability of the FCC to adult humans, especially since creatine supplementation does not affect energy expenditure in humans. 25 While the FCC has been implicated within iWAT, 21 the discrepancy between the present study and those previously reported from a single group identifying the presence of FCC may also relate to the tissue studied and the method of assessing mitochondrial respiration. While originally delineated in WAT/beige adipose tissue, 21 the FCC has been particularly emphasized in BAT. 21,28,56 While it remains possible that FCC exists within BAT, the necessity of BAT for FCC is difficult to reconcile with the observations that SLC6A8 and GATM deletion impair WAT bioenergetics 23,24 or with the original finding that creatine stimulated in vitro respiration in iWAT following cold exposure. Alternatively, since creatine transport is sodium dependent, genetic models that affect creatine metabolism may indirectly influence energy expenditure through sodium-potassium ATPase (Na/K-ATPase) activity, as opposed to a direct stimulation of creatine kinase-mediated ADP recycling.
Another important consideration is the methodology used to interrogate mitochondrial bioenergetics, as previous work has exclusively utilized isolated mitochondria in UCP1 null/inhibited preparations. 21,229][50][51] As a result of these knowledge gaps, while we originally examined mitochondrial respiration in the presence of GDP-mediated inhibition of UCP1, we have primarily utilized a permeabilized tissue preparation without purposefully removing/inhibiting UCP1 function.While our permeabilized adipose tissue preparation maintains in vivo cellular structure, in contrast to isolated mitochondria, a caveat of a permeabilized preparation is that endogenous substrates may persist within the tissue, and while this may retain mitochondrial coupling/minimize UCP1 H + conductance, it is conceivable that retained endogenous creatine concentrations already saturated the respiratory system, preventing the ability to further stimulate respiration with the provision of excessive exogenous creatine.However, this possible limitation does not extend to the in vivo observation that ablating Ckmt1 does not influence wholebody responses to diverse metabolic stressors or that creatine exerts no stimulatory effects on respiration in isolated mitochondria.While CKB has been suggested to be important for cold-induced creatine-mediated ADP recycling, and CKB was identified in the proteome of human WAT, we could not detect changes in CKB following β 3 -adrenergic activation, challenging the contribution of this CK isoform to regulating energy expenditure within WAT.Additionally, while CKB has been proposed to stimulate respiration through the production of PCr and ADP, reductions in CKB appear to increase, not decrease, PCr concentrations, and stimulate respiration in human adipocytes, 29 suggesting CKB is not required for mitochondrial respiration.This finding is further suggested by a recent report not validating the presence of CKB on mitochondrial membranes. 29Although we could not detect CKMT2 protein in human WAT, CKMT2 has previously been reported in the WAT of humas, 29 and is abundantly expressed in human WAT CD34 -progenitor cells (present study), suggesting this mitochondrial creatine kinase isoform may be biologically relevant to beiging/energy expenditure.Additionally, ANT-1 and ANT-2 are poorly expressed in CD34 -progenitor cells 39 creating the possibility of a greater reliance on creatine kinase-mediated ADP recycling to optimize ADP transport and mitochondrial oxidative phosphorylation.While CKMT2 has been detected on mitochondrial membranes within human WAT, 29 siRNA-mediated knockdown of CKMT2 only modestly affected mitochondrial creatine kinase activity, 29 suggesting this is not the only isoform on mitochondrial membranes within WAT.It is also unknown where CKMT1 and 2 are located on mitochondrial membranes, information that is required to further delineate the importance of these enzymes in the regulation of metabolism, as ANT-1 and ANT-2 reside in different locations 63 and recycling of ADP in close proximity to ANT isoforms may interact with OXPHOS proteins to functionally coordinate mitochondrial electron transport flux differently. 
This information is particularly pertinent, as unlike in skeletal muscle ([ADP] < [creatine]), 65 within WAT total [ADP] is estimated to be greater than total [creatine]; 66,67 and while the concentrations of ADP/ATP and Cr/PCr within the intermembrane space, as well as the location of CK relative to ANT isoforms, remain unknown within WAT, the relative abundance of ADP and Cr would not predict enzymatic flux towards the production of ADP (Cr + ATP ↔ PCr + ADP + H+). Together, these theoretical observations, in concert with the present data, indicate that creatine is unlikely to stimulate respiration within WAT.
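As a generic mass-action illustration of the flux argument above (not a result of the present study), the creatine kinase reaction and its mass-action ratio can be written as

```latex
\mathrm{Cr} + \mathrm{ATP} \rightleftharpoons \mathrm{PCr} + \mathrm{ADP} + \mathrm{H}^{+},
\qquad
\Gamma = \frac{[\mathrm{PCr}]\,[\mathrm{ADP}]\,[\mathrm{H}^{+}]}{[\mathrm{Cr}]\,[\mathrm{ATP}]}
```

Net flux toward PCr + ADP requires Γ < K_eq, so a high total [ADP] relative to total [creatine], as estimated for WAT, inflates the numerator and pushes Γ toward equilibrium; this is one way to see why the relative abundance of ADP and Cr would not favor net ADP production by this reaction.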
In the present study, WAT/beige adipose tissue was examined because of its high abundance in obese individuals and its potential to be exploited for therapeutic gain. A strength of the present study is the direct assessment of WAT mitochondrial respiration ex vivo; however, regardless of WAT/beige depot, creatine did not stimulate mitochondrial respiration in a variety of models. Additionally, despite our supposition that CKMT1 would be principally involved as a creatine kinase in the FCC, our data suggest that CKMT1 is not a primary contributor to this molecular pathway in beige adipose tissue (iWAT), at least in the context of pharmacological β3-AR stimulation and obesity. While it remains possible that FCC exists in BAT, the absence of a creatine-mediated response in WAT/beige (present data) and in BAT following creatine supplementation, 25 and the increase in body weight associated with creatine supplementation, 68,69 challenge the therapeutic potential of modulating FCC to promote energy expenditure. Future work should nevertheless seek to clarify the proteins involved in creatine metabolism and their biological significance within adipose tissue depots, particularly CKB and PCr, which have been linked to WAT inflammation 29 and sustained activation of the inflammasome. 70 Altogether, the present data provide evidence that (1) creatine does not drive mitochondrial respiration in differentiated beige adipocytes, isolated mitochondria, or permeabilized adipose preparations, (2) despite the detection of Ckmt1 in human adipocyte progenitor cells and CKMT1 protein in human WAT depots, (3) the expression of proposed FCC-related proteins was not coordinated in human WAT, (4) shRNA-mediated reductions in Ckmt1 do not decrease mitochondrial respiration, and genetic ablation of Ckmt1 in mice does not alter (5) adrenergic-induced energy expenditure, (6) adipose tissue respiration, or (7) the biological responses to high-fat feeding. While creatine has been suggested to promote ADP recycling, the present study suggests creatine and Ckmt1 are not primary regulators of mitochondrial bioenergetics within WAT.

Area under the curve (B), VO2 (C), VCO2 (D), carbohydrate oxidation (E), lipid oxidation (F), RER (G), energy expenditure (H), and activity (I) in male Ckmt1 WT and KO mice after 8 wk of LFD or HFD feeding. AUC, area under the curve; VO2, rate of oxygen consumption; VCO2, rate of carbon dioxide production; RER, respiratory exchange ratio. Data expressed as mean ± SEM. | 2022-07-21T15:03:02.311Z | 2022-07-19T00:00:00.000 | {
Figure 1.
Figure 1. Creatine does not stimulate mitochondrial respiration in isolated white adipose tissue mitochondria with or without β3-adrenergic stimulation with CL 316,243. Representative traces of isolated RG as a positive control (A) and white adipose tissue [WAT, (B)] mitochondria from rats, whereby the addition of creatine in WAT did not alter submaximal or maximal complex I/II respiration or P/O ratios (C), and representative images of western blots comparing protein content from RG homogenate and isolated mitochondria from WAT (inset of C). Schematic depicting the proposed futile creatine cycle (FCC) in the mitochondria (D), and creatine titrations in the presence of submaximal ADP in iWAT (E) and gWAT (F). Representative trace of respiration in the presence and absence of creatine in isolated mitochondria from WAT of mice following β3-adrenergic-induced signaling, depicting an inability to determine coupling ratios (G) or a creatine-mediated stimulation in oxygen consumption. Representative western blots in isolated mitochondria following saline (SAL) or CL 316,243 (CL) injections showing the upregulation of UCP1 (G inset). Real-time respiratory trace depicting oxygen utilization over 5 min in the presence of submaximal ADP following CL administration (H; inset is total oxygen use over 5 min). Schematic depicting CL-induced UCP1 and the absence of a requirement for FCC-mediated ADP recycling for electron transport chain flux (I). Real-time respiratory trace depicting the protocol in permeabilized iWAT (J). Quantified leak respiration in the presence and absence of GDP (K), GDP-mediated inhibition (L), respiration in the presence and absence of creatine (M), and the ability of creatine to stimulate respiration (N) in iWAT following CL administration. ANT, adenine nucleotide transporter; CI-V, complexes I-V of the electron transport chain; CKMT1, creatine kinase ubiquitous-type, mitochondrial; COXIV, cytochrome c oxidase subunit IV; Cr, creatine; GDP, guanosine diphosphate; IMM, inner mitochondrial membrane; NAD, nicotinamide dinucleotide; OMM, outer mitochondrial membrane; PM, pyruvate + malate; PMD, PM + ADP; PMDG, PMD + glutamate; PMDGS, PMDG + succinate; UCP1, uncoupling protein-1. Data expressed as mean ± SEM. * is different (P < .05) from saline.
Figure 3.
Figure 3. Decreasing Ckmt1 in adipocytes (shRNA) and in mice (KO) does not affect mitochondrial bioenergetics. mRNA expression following shRNA of Ckmt1 (A), adipocyte respiration in control (GFP) and shRNA-mediated Ckmt1 knockdown cells in the presence and absence of creatine (B, C), and the creatine-mediated stimulation of respiration (D). Representative images of Ckmt1 WT and KO genotyping (E) in various tissues. VO2 (F), VCO2 (G), carbohydrate oxidation (H), lipid oxidation (I), RER (J), energy expenditure (K), heat production (L), and activity (M) in WT and KO animals 3 h pre- and post-CL 316,243 (0.2 mg·kg−1) injection. Representative images of iWAT and gWAT stained with H&E from SAL- and CL-treated mice imaged at x40 magnification (iWAT M, gWAT R), cross-sectional adipocyte area (iWAT N, gWAT S), frequency distribution of adipocytes by cross-sectional area (iWAT O, gWAT T), and western blot analysis showing relative mitochondrial protein content, with representative images (iWAT P and Q, gWAT U and V), from male Ckmt1 WT and KO animals. Arrows indicate the presence of multilocular lipid droplets and scale bars are 43 μm. VO2, rate of oxygen consumption; VCO2, rate of carbon dioxide production; RER, respiratory exchange ratio. Ponceau staining was used as a loading control. Data expressed as mean ± SEM.
Figure 4.
Figure 4. CL 316,243 treatment increases mitochondrial respiration similarly between Ckmt1 WT and KO animals in iWAT and gWAT, and creatine does not enhance this change. Submaximal ADP- and maximal complex I/II-supported respiration in iWAT (top, A) and gWAT (bottom, B), difference in submaximal ADP-supported respiration (C and D), and ADP respiratory kinetics with the calculated apparent Km (E and F) in the presence and absence of creatine in male Ckmt1 WT and KO animals. PM, pyruvate + malate; PMD, PM + ADP; PMDG, PMD + glutamate; PMDGS, PMDG + succinate. Data expressed as mean ± SEM.
Figure 6.
Figure 6. High-fat diet-induced adipocyte hypertrophy and reductions in mitochondrial respiration are consistent between Ckmt1 KO and WT mice in iWAT and gWAT depots. Representative images of iWAT (left) and gWAT (right) stained with hematoxylin and eosin (H&E) imaged at x40 magnification (A), cross-sectional adipocyte area (B and D), and frequency distribution of adipocytes by cross-sectional area (C and E), from female Ckmt1 WT and KO animals fed LFD or HFD for 8 wk. Additionally, submaximal ADP- and maximal complex I/II-supported respiration in iWAT (top, F) and gWAT (bottom, I), difference in submaximal ADP-supported respiration (G and J), and ADP respiratory kinetics with the calculated apparent Km (H and K) in the presence and absence of creatine. PM, pyruvate + malate; PMD, PM + ADP; PMDG, PMD + glutamate; PMDGS, PMDG + succinate. Scale bars are 43 μm. Data expressed as mean ± SEM.
Figure 7.
Figure 7. At thermoneutrality, Ckmt1 KO animals display similar glucose tolerance, insulin sensitivity, and mitochondrial function on HFD compared to WT animals. GTT (A), glucose AUC (B), ITT (C), insulin AUC (D), submaximal ADP- and maximal complex I/II-supported respiration in iWAT (top, E) and gWAT (bottom, H), difference in submaximal ADP-supported respiration (F and I), and ADP respiratory kinetics with the calculated apparent Km (G and J) in the presence and absence of creatine in female Ckmt1 WT and KO mice after 5 wk of LFD or HFD feeding. PM, pyruvate + malate; PMD, PM + ADP; PMDG, PMD + glutamate; PMDGS, PMDG + succinate. Data expressed as mean ± SEM.
"year": 2022,
"sha1": "dc3fac4f8ac69cb15b3179ecfd7a64298e227533",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/function/advance-article-pdf/doi/10.1093/function/zqac037/45025846/zqac037.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a13ab9c369c0c7e3a849d2f491c3ceee08cbbd80",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
253024801 | pes2o/s2orc | v3-fos-license | Quantitative hematoma heterogeneity associated with hematoma growth in patients with early intracerebral hemorrhage
Background Early hematoma growth is associated with poor functional outcomes in patients with intracerebral hemorrhage (ICH). We aimed to explore whether quantitative hematoma heterogeneity in non-contrast computed tomography (NCCT) can predict early hematoma growth. Methods We used data from the Risk Stratification and Minimally Invasive Surgery in Acute Intracerebral Hemorrhage (Risa-MIS-ICH) trial. Our study included patients with ICH with a time to baseline NCCT <12 h and a follow-up CT duration <72 h. To get a Hounsfield unit histogram and the coefficient of variation (CV) of Hounsfield units (HUs), the hematoma was segmented by software using the auto-segmentation function. Quantitative hematoma heterogeneity is represented by the CV of hematoma HUs. Multivariate logistic regression was utilized to determine hematoma growth parameters. The discriminant score predictive value was assessed using the area under the ROC curve (AUC). The best cutoff was determined using ROC curves. Hematoma growth was defined as a follow-up CT hematoma volume increase of >6 mL or a hematoma volume increase of >33% compared with the baseline NCCT. Results A total of 158 patients were enrolled in the study, of which 31 (19.6%) had hematoma growth. The multivariate logistic regression analysis revealed that time to initial baseline CT (P = 0.040, odds ratio [OR]: 0.824, 95% confidence interval [CI]: 0.686–0.991), "heterogeneous" in the density category (P = 0.027, odds ratio [OR]: 5.950, 95% confidence interval [CI]: 1.228–28.828), and CV of hematoma HUs (P = 0.018, OR: 1.301, 95% CI: 1.047–1.617) were independent predictors of hematoma growth. By evaluating the receiver operating characteristic curve, the CV of hematoma HUs (AUC = 0.750) has a superior predictive value for hematoma growth compared with heterogeneous density (AUC = 0.638). The CV of hematoma HUs had an 18% cutoff, with a specificity of 81.9% and a sensitivity of 58.1%. Conclusion The CV of hematoma HUs can serve as a quantitative hematoma heterogeneity index that independently predicts hematoma growth in patients with early ICH.
Introduction

Spontaneous intracerebral hemorrhage is difficult to treat and continues to be a significant cause of morbidity and mortality globally (1,2). Only one in every five survivors is self-sufficient after 6 months, with a 30-day mortality rate ranging from 30 to 40% (3,4). Hematoma growth is associated with increased mortality and poor prognosis following intracerebral hemorrhage (5,6). Early detection of hematoma growth can enable more aggressive treatment techniques to be implemented (7,8). Although the computed tomography angiography (CTA) spot sign is a well-established predictor of hematoma growth, it is not frequently performed in many centers, particularly centers in areas with limited medical treatment (9, 10). Consequently, non-contrast computed tomography (NCCT) markers have garnered considerable interest. Originally, the density and shape of a hematoma were utilized to predict hematoma growth (11). Later, other studies established the utility of NCCT in predicting hematoma growth (12-16). However, NCCT markers have several drawbacks. Numerous NCCT markers describe similar characteristics; however, there is no agreement on the appropriate image acquisition procedure, assessment, terminology, or diagnostic criteria (17). Therefore, it is essential to explore a quantitative index that can be used to anticipate the growth of hematomas based on information obtained using NCCT. The shape and density of hematomas are significantly represented by various NCCT markers. We used CT density measurement technology to quantify hematoma heterogeneity. The purpose of this study was to determine the correlation between quantitative heterogeneity and early hematoma growth.
Study design and population
We used data from the Risk Stratification and Minimally Invasive Surgery in Acute Intracerebral Hemorrhage (Risa-MIS-ICH) trial, which was a prospective multicenter cohort study. This study was registered in ClinicalTrials.gov (No. NCT03862729). The present study utilized retrospective data from this database, from January 2015 to October 2021.
Patients with the time to baseline NCCT less than 12 h and time to follow-up CT less than 72 h were included. The exclusion criteria were as follows: (i) CT at baseline was not NCCT; (ii) surgical intervention was performed before follow-up CT; and (iii) CT image quality was not optimum.
Definition of variables
Heterogeneous density of ICH was graded on a 5-point visual analog scale along an incremental continuum. The density category of a hematoma was described as "heterogeneous" when there were at least three hypodense lesions within the dense hematoma, and "homogeneous" when there were fewer than three hypodense lesions within the dense hematoma, as assessed on an axial section showing the maximum cross-sectional area of the hematoma (11,17). The "swirl sign" was defined as an area of low or equal attenuation (compared with the attenuation of the brain parenchyma) within a high-attenuation brain hemorrhage. Areas of low or equal attenuation could vary in shape and could be circular, striated, or irregular; they could also lie at the edge of the hematoma (12). The "black hole sign" was defined as a relatively low-attenuation area (black hole) encased within a high-attenuation hematoma. The black hole could be round, oval, or rod-shaped, but not connected to adjacent brain tissue. The relatively low-attenuation region should have identifiable boundaries, with a difference of at least 28 HUs between the two density areas (14,17). The "blend sign" of a hematoma was defined as a relatively low-attenuation region within the hematoma mixed with an adjacent high-attenuation region. A clear border between the low-attenuation region and the adjacent high-attenuation region should be easily identifiable by the naked eye, with a difference of at least 18 HUs between the two density regions of the hematoma. The two denser zones should be easily distinguishable by direct visual inspection of the scan without image zooming (13).

FIGURE 1
Hematomas were identified using the semi-automatic edge detection tool included in the neuro-navigation workstation software (left). The auto-segmentation function was used to process the region of interest (ROI), which constituted the entire hematoma, to obtain the Hounsfield unit histogram and density-related parameters (right). Two example cases with (A) and without (B) hematoma growth are shown. Hounsfield units (HUs) are generally dispersed in patients with hematoma growth, giving a higher coefficient of variation (CV), but are concentrated in patients without hematoma growth, giving a lower CV.

"Deep ICH" was described as ICH involving the thalamus, basal ganglia, internal capsule, or deep periventricular white matter, whereas "lobar ICH" was classified as ICH originating at the cortex and the cortical-subcortical junction (18,19). "Hematoma growth" was defined as an absolute hematoma volume growth of more than 6 mL or a relative growth of more than 33% of the baseline volume from baseline CT to follow-up CT within 72 h (20-22).
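As a concrete restatement of the growth definition above, the following sketch encodes the criterion in Python; the function and argument names are illustrative, not taken from the study's software.

```python
def hematoma_growth(baseline_ml: float, followup_ml: float) -> bool:
    """Growth as defined in this study: an absolute increase of more than
    6 mL, or a relative increase of more than 33% of the baseline volume,
    from baseline CT to follow-up CT within 72 h."""
    absolute_increase = followup_ml - baseline_ml
    relative_increase = absolute_increase / baseline_ml
    return absolute_increase > 6.0 or relative_increase > 0.33

# Example: 14 mL -> 19 mL qualifies (absolute 5 mL, relative increase ~36%).
assert hematoma_growth(14.0, 19.0)
```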
Imaging analysis
Initial and follow-up CT scans were performed using standard clinical techniques. For further processing and evaluation, all image data were archived in the Digital Imaging and Communications in Medicine (DICOM) format. To determine the volume and density of the hematoma, two independent researchers (Mingpei Zhao and Wei Huang) examined the baseline NCCT markers of the 158 patients using workstation software (iPlan 3.0, Brainlab, Feldkirchen, Germany). The researchers were unaware of the patients' clinical history and follow-up CT findings. Hematomas were detected layer by layer on the axial section using a semi-automatic edge detection method. The region of interest (ROI) included the entire hematoma and was processed using the auto-segmentation function to obtain the histogram of HUs, the mean HU, and the coefficient of variation (CV) of HUs (Figure 1). Detailed processing is shown in the Supplementary Image. The follow-up CT images were processed in the same manner, and two stroke neurologists (Liang-Hong Yu and Fu-Xin Lin) independently reviewed all measurement results. The CV of hematoma HUs represented the heterogeneity of hematomas.
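The CV itself is simple arithmetic once the hematoma has been segmented; segmentation in this study was performed with the iPlan workstation, so the sketch below only illustrates the final calculation on hypothetical arrays.

```python
import numpy as np

def cv_of_hematoma_hu(ct_hu: np.ndarray, roi_mask: np.ndarray) -> float:
    """Coefficient of variation of the HU values inside the segmented
    hematoma ROI: standard deviation / mean, expressed in percent."""
    values = ct_hu[roi_mask.astype(bool)]
    return float(values.std() / values.mean() * 100.0)

# Toy check: a wider HU distribution yields a larger CV.
rng = np.random.default_rng(1)
mask = np.ones(5000, dtype=bool)
print(cv_of_hematoma_hu(rng.normal(60.0, 6.0, 5000), mask))   # ~10%
print(cv_of_hematoma_hu(rng.normal(60.0, 14.0, 5000), mask))  # ~23%
```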
Statistical analysis
Categorical variables were described as percentages, and the chi-square test or Fisher's exact test was used to determine distribution differences across groups. Continuous variables with a normal distribution were presented as means and standard deviations and compared using a two-tailed Student's t-test. Skewed data were presented as medians (25th-75th quartile) and compared using the Mann-Whitney U test. We utilized univariate analysis to identify potentially relevant determinants of hematoma growth. We then used multivariate logistic regression to determine the independent determinants of hematoma growth. In the multivariate analysis, factors with P < 0.05 in the univariate analysis and those known to be associated with hematoma growth as confounders were included. The optimal cutoff was determined using receiver operating characteristic (ROC) curve analysis, and the predictive value of the discriminant score was determined using the area under the ROC curve (AUC). SPSS version 26.0 (SPSS Inc., Chicago, Illinois, USA) and R version 4.1.0 ("R" Foundation for Statistical Computing, Vienna, Austria) were used for analysis. Two-tailed P-values were reported, and P < 0.05 was considered statistically significant.
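For orientation, the multivariate step described above can be reproduced with standard tools. The sketch below uses Python's statsmodels on a simulated stand-in table, since the Risa-MIS-ICH variable names and data are not given here, and obtains odds ratios and 95% CIs by exponentiating the logit coefficients.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in for the patient table; column names are illustrative only.
rng = np.random.default_rng(0)
n = 158
df = pd.DataFrame({
    "time_to_baseline_ct_h": rng.uniform(0.5, 12.0, n),
    "heterogeneous_density": rng.integers(0, 2, n).astype(float),
    "cv_hu_percent": rng.normal(16.0, 5.0, n),
})
lin = -4.5 - 0.2 * df["time_to_baseline_ct_h"] + 0.26 * df["cv_hu_percent"]
df["hematoma_growth"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(int)

X = sm.add_constant(df[["time_to_baseline_ct_h",
                        "heterogeneous_density",
                        "cv_hu_percent"]])
fit = sm.Logit(df["hematoma_growth"], X).fit(disp=0)
print(np.exp(fit.params))      # odds ratios per unit of each predictor
print(np.exp(fit.conf_int()))  # 95% confidence intervals for the ORs
```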
Patient characteristics
This study enrolled a total of 158 patients with ICH (Figure 2). There was no significant difference in patient demographics between included and excluded patients (Supplementary Table 1). Patients with early hematoma growth had a shorter time to baseline CT (P = 0.101), a smaller mean HU of hematoma (P < 0.001), and a larger CV of hematoma HUs (P < 0.001), and were more likely to have diabetes mellitus (P = 0.045), a black hole sign (P = 0.045), and heterogeneous density (P < 0.001) than those without early hematoma growth. Age, sex, hypertension, oral anticoagulants, oral antiplatelet drugs, admission systolic blood pressure (SBP), baseline Glasgow Coma Scale (GCS) score, deep ICH, relevant laboratory indicators, swirl sign, baseline ICH volume, and the standard deviation of hematoma HUs did not differ significantly between patients with and without hematoma growth.
Analysis of risk factors for hematoma growth
Diabetes mellitus, heterogeneous density, the black hole sign, the blend sign, mean HUs, and the CV of hematoma HUs were all linked with hematoma growth in univariate logistic regression (Table 2). Factors that were significant in the univariate logistic analysis were retained for the multivariate logistic model. The multivariate analysis revealed that time to baseline CT, heterogeneous density, and the CV of hematoma HUs were significant predictors of hematoma growth (Table 3).
ROC analysis determines the critical value of the CV of hematoma HUs
In comparison to heterogeneous density (area under the curve = 0.638), the CV of hematoma HUs (area under the curve = 0.750) has a significantly stronger predictive value (Figure 3). The optimal cutoff value of the CV of hematoma HUs for predicting hematoma growth was 18%, with a specificity of 81.9% and a sensitivity of 58.1%.
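A common way to obtain such a cutoff, assuming Youden's J as the optimality criterion (the text does not state which criterion was used), is sketched below with scikit-learn on simulated stand-in data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Toy stand-ins: y = growth label (1/0), cv = CV of hematoma HUs in %.
y = rng.integers(0, 2, 158)
cv = rng.normal(14.0, 4.0, 158) + 6.0 * y

auc = roc_auc_score(y, cv)
fpr, tpr, thresholds = roc_curve(y, cv)
j = tpr - fpr                        # Youden's J at every candidate cutoff
i = int(np.argmax(j))
print(f"AUC = {auc:.3f}; cutoff = {thresholds[i]:.1f}% "
      f"(sensitivity = {tpr[i]:.3f}, specificity = {1 - fpr[i]:.3f})")
```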
Discussion
Our study revealed that heterogeneous density of hematoma was a significant predictor of hematoma growth. The quantitative heterogeneity of hematomas as characterized by the CV of hematoma HUs was more predictive of hematoma growth than the traditional qualitative heterogeneity score. Diabetes mellitus, the black hole sign, a shorter time to baseline CT, and a smaller mean HU of hematoma were also found to be related to hematoma growth.
The reason for heterogeneity on NCCT is unclear. We postulate that it could be a sign of persistent bleeding following the rupture of a brain vessel (23). In the early stages of intracerebral hemorrhage, a hematoma is a heterogeneous mass composed of different blood cells, platelet thrombus, and protein-rich plasma with a relatively high density (24). Due to thrombus contraction and deposition of cell components, low-attenuation plasma is extruded, resulting in a rise in hematoma density (8). Hematoma growth may proceed as a cascade, with increasing evidence supporting the notion of secondary shear hemorrhage from several ruptured vessels surrounding the first hematoma (8,22). Fresh blood coexists with a subacute blood clot in this model: the mature area of early bleeding forms the high-attenuation area of the hematoma, while the immature area of late hemorrhage forms the low-attenuation area, resulting in hematoma heterogeneity (25). The presence of active contrast extravasation within a hematoma is referred to as a CTA spot sign, and it is frequently used to forecast hematoma growth (2,26). The frequency of CTA spot signs was found to be inversely related to the time from the beginning of cerebral bleeding, and the positive predictive value of spot signs for substantial hematoma expansion declined as CTA time increased (27). Patients with hematoma growth had a shorter time to baseline CT in our study and in many other studies (13,24). We speculate that heterogeneity in hematomas is analogous to the CTA spot sign and may signify early persistent bleeding.

FIGURE 3
ROC curve analysis between the CV of hematoma HUs and early hematoma growth: AUC was 0.750, and the cutoff point was 18% (solid line). ROC curve analysis between "heterogeneous" in the density category and early hematoma growth: AUC was 0.638 (dotted line). AUC, area under the curve; ROC curve, receiver operating characteristic curve; HU, Hounsfield unit; CV, coefficient of variation.
In comparison to the CTA spot sign, NCCT is easier to obtain. As a result, NCCT markers have been routinely employed in clinical practice to predict hematoma growth. NCCT markers are classified into two groups based on shape and density (17). The swirl sign, black hole sign, density heterogeneity scale, hypodensities, and blend sign all indicate hematoma density heterogeneity directly or indirectly (8). These markers have been demonstrated to be predictive of hematoma growth (11-14, 22). However, the current scoring methods lack standardization (17), and such qualitative markers cannot quantify the degree of hematoma heterogeneity (14), which limits their clinical application. In our study, the entire hematoma was considered the region of interest, and the quantitative hematoma heterogeneity index was obtained through automated segmentation tools. Unlike these NCCT markers, the CV of hematoma HUs is objective and quantifiable.
The clinically relevant findings of our study are as follows: First, we identified a quantifiable, objective predictor of hematoma growth, in contrast to other NCCT markers, which are subjective. We believe that our predictor may guide the stratification of hematoma risk. Second, our findings have translational potential for clinical applications. For example, our technology makes it possible to create relevant software that can assimilate a large amount of data and establish predictive models via machine learning, thereby allowing automatic recognition and segmentation of hematomas. Clinicians would be able to import the imaging data to extract critical hematoma characteristics such as hematoma volume and the CV of hematoma HUs; the method can also be incorporated into the imaging workstation as a useful tool for the radiologist. Further large-scale randomized controlled trials are necessary to validate our findings and to inform policy guidelines to implement this promising idea of open-source, freely available software.
This study has certain limitations. First, this is a retrospective analysis with a small sample size; therefore, our findings require further confirmation using the complete data from the Risa-MIS-ICH prospective trial. Second, we included patients with a time to baseline CT of up to 12 h, as opposed to a shorter interval, which may have resulted in missed cases of possible hematoma growth. Third, with an increase in sample size, the optimal cutoff of the CV of hematoma HUs for predicting hematoma growth may change. Finally, the segmentation and processing of hematomas require specialized software, which may be difficult to obtain in some hospitals and institutions.
In conclusion, our study established that the heterogeneity of hematomas may be a predictor of early hematoma growth in patients with ICH. Moreover, the quantitative hematoma heterogeneity index utilized in this study has a significantly greater predictive value than the conventionally used heterogeneous density markers on NCCT. At the next stage, the Risa-MIS-ICH project will validate these findings using prospective, multicenter, large-sample data.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by the Ethics Committee of Fujian Medical University's First Affiliated Hospital (Ethical Approval Number: MRCTA, ECFAH of FMU [2018] 082-1). Written informed consent from the patients/participants or patients/participants' legal guardian/next of kin was not required to participate in this study in accordance with the national legislation and the institutional requirements.
Author contributions
MZ, WH, SH, and FL: acquisition of data and critical revision of the manuscript for intellectual content. QH, YZ, ZG, LC, and GY: study supervision. RC, WF, DW, and YL: study concept and design. SH and SW: guidance on statistics. DK and LY: analysis and interpretation of data and study supervision. All authors have reviewed the final version of the manuscript. | 2022-10-21T14:12:54.618Z | 2022-10-21T00:00:00.000 | {
"year": 2022,
"sha1": "e1bd7e3b33322f87efff5c0818400a80c5036d78",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "e1bd7e3b33322f87efff5c0818400a80c5036d78",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15052893 | pes2o/s2orc | v3-fos-license | Immune dysfunction in diabetes-prone BB rats. Interleukin 2 production and other mitogen-induced responses are suppressed by activated macrophages.
Spleen cells of diabetes-prone BB Wistar rats were found to generate excessively low proliferative responses and interleukin 2 (IL-2) levels in response to T-dependent mitogens. This abnormality was not due solely to abnormal T cell numbers since: (a) addition of BB spleen cells or BB splenic macrophages to normal major histocompatibility complex (MHC)-matched Wistar Furth (WF) spleen cells resulted in severe suppression of concanavalin A (Con A)-, phytohemagglutinin (PHA)-, and pokeweed mitogen (PWM)-mediated proliferation, and of IL-2 production; (b) macrophage depletion from BB spleen cells, but not B cell or T cell depletion, removed completely the suppressive effects of BB cells on WF cells; (c) macrophage depletion greatly enhanced the response of BB lymphocytes to T-dependent mitogens. Although suppressor macrophages could also be found in the spleen of WF control rats, they were present in much smaller numbers than in the spleen of BB rats. The suppressive effect of BB macrophages was partially reduced by addition of the prostaglandin synthetase inhibitor indomethacin to cultures. Furthermore, indomethacin (but not catalase or PMA) considerably augmented IL-2 secretion of Con A-stimulated BB spleen cells, but had little effect on WF spleen cells. In contrast, prostaglandins E1 and E2 (PGE1 and PGE2) suppressed IL-2 production. While IL-2 secretion was severely depressed in BB rats, unstimulated and lipopolysaccharide (LPS)-stimulated IL-1 secretion by splenic macrophages was normal. BB macrophages did not inactivate IL-2. Low IL-2 production and macrophage-mediated suppression were features of all BB rats tested.
In this study we document deficient production of IL-2 by BB rat spleen cells in response to T cell mitogens. We present evidence that this defect is not due to an inability of macrophages to secrete IL-1, or to an inability of T cells to produce IL-2. Instead, the defect in IL-2 production is largely a consequence of macrophage suppression. Thus, the deficient production of IL-2 by BB spleen cells in response to Con A is greatly improved by partial depletion of macrophages. BB spleen cells and BB splenic macrophages strongly suppress the response of Wistar Furth (WF; MHC-matched normal control rat) spleen cells to Con A. The prostaglandin synthetase inhibitor indomethacin partially reverses the suppressive effect of BB macrophages, while the enzyme catalase and the tumor promoter phorbol myristate acetate have no effect. Furthermore, BB macrophages do not degrade IL-2. The possible significance of these findings to the disease complex of BB rats is discussed.
Material and Methods
Rats. Male and female BB rats were supplied by Dr. P. Thibert, Department of Animal Resources, Department of Health and Welfare, Ottawa, Canada. These rats express the RT1u MHC haplotype. The genetic, endocrine, and histologic characteristics of these rats have been described elsewhere (1,2,4-6). The WF rats (RT1u) were bred and maintained in the McGill Cancer Centre animal colony. Rats used in this study ranged from 35 to 115 d of age.
Preparation of Cell Populations and Culture Conditions. Splenectomies, accompanied by pancreatic biopsies, were performed under ether anesthesia and aseptic conditions. After surgery the rats were kept alive and observed for development of diabetes mellitus by daily testing for glucosuria. This surgical procedure does not change the incidence of diabetes in BB rats (unpublished observation).
In some experiments B cells were depleted by panning twice on anti-rat immunoglobulin (Ig)-coated plates as described by Wysocki and Sato (15). Spleen cells were incubated on these plates at 4°C at a concentration of 10⁷ cells/plate for 90 min. Recovered unbound cells consisted of <5% B cells as determined by immunofluorescence.
T cells were depleted by treating spleen cells with biotin-conjugated W3/13 (Sera-lab, MAS 010C, a pan anti-T cell monoclonal antibody), followed by panning twice on avidin-coated plates as described by Basch et al. (16). <5% of nonadherent cells were W3/13+ as determined by immunofluorescence. T-depleted populations failed to respond to concanavalin A (Con A).
Macrophages were depleted by passing spleen cells on Sephadex G-10 columns as described by Ly and Mishell (17). BB rat spleen cells consisted of 5-8% macrophages as determined by uptake of fluorescent microbeads (Polysciences Inc., Warrington, PA). Following passage on Sephadex G-10, < 1% of the cells could be identified as macrophages by this method.
In some experiments macrophages were enriched on the basis of cell density (18) and adherence in a two-step procedure. First, in order to deplete T cells, spleen cells were fractionated on a discontinuous Percoll (Pharmacia, Uppsala, Sweden) density gradient consisting of three densities: 1.085, 1.052, and 1.030 g/ml. Spleen cells were layered on top of the gradient (at 4°C) and spun at 400 g for 30 min.
The T cell-depleted fraction 1 was further enriched for macrophages on the basis of cell adherence. Fraction 1 cells were added in various numbers to the wells of 96-well plates and incubated at 37°C for 4 h. Nonadherent cells were removed by vigorous flushing with medium using a Pasteur pipette (three washes). Macrophage numbers were estimated by the number of adherent cells per high power field (hpf, 40× objective). In some experiments indomethacin (Sigma I-7378; 10 µg/ml), prostaglandins E1 and E2 (PGE1 and PGE2, respectively; Sigma P-5515 and P-5640), catalase (Sigma C-100), or phorbol myristate acetate (PMA, Sigma P8139) were added to the cultures.
Mitogen Assays. Stimulation of spleen cells with the T cell-dependent mitogens Con A, PHA, and PWM was assessed by [3H]thymidine incorporation, with results expressed as Δcpm (cpm in the presence of mitogen minus cpm with medium alone).
Interleukin 2 (IL-2) Assay. The IL-2 assay was performed as described by Gillis et al.
Incubation of IL-2 with Macrophages. Con A supernatant (CAS) was used as a source of IL-2. CAS was added to graded numbers of macrophages (prepared as described above) in 96-well plates and incubated at 37°C for 24 h in the presence or absence of 20 mg/ml α-methyl-D-mannoside (MM, Sigma M-6882). Supernatants were then recovered, passed through 0.2-µm filters, and stored at −20°C until used.
Interleukin 1 (IL-1) Assay. IL-1 was measured by the mouse thymocyte proliferation assay, as described by Mizel (20). Briefly, supernatants derived from spleen cells cultured with or without 25 µg/ml of LPS (E. coli 026:B6, Difco Laboratories) for 48 h were added at various dilutions to 10⁶ BALB/c mouse thymocytes/250 µl. [3H]Thymidine uptake by the thymocytes was determined after an 18-h incubation with 1 µCi [3H]thymidine/well on day 3 of culture.
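The computational details of the Gillis assay are not reproduced in this text. One common convention is to express IL-2 activity relative to a laboratory standard assayed in parallel, via the dilution giving half-maximal proliferation of the indicator line; the sketch below illustrates that convention only, with made-up titration numbers.

```python
import numpy as np

def dilution_at_half_max(dilution_factors, cpm):
    """Log-linear interpolation of the dilution factor at which [3H]thymidine
    incorporation by the indicator cells falls to 50% of its maximum."""
    cpm = np.asarray(cpm, dtype=float)
    log_d = np.log10(np.asarray(dilution_factors, dtype=float))
    half_max = cpm.max() / 2.0
    order = np.argsort(cpm)  # np.interp needs ascending x-values
    return 10.0 ** np.interp(half_max, cpm[order], log_d[order])

def il2_units_per_ml(sample_d50, standard_d50, standard_units=1.0):
    """Activity relative to a standard preparation assayed in parallel: a
    sample that must be diluted further to reach half-maximal proliferation
    contains proportionally more units."""
    return standard_units * sample_d50 / standard_d50

# Example with made-up titration data (dilution factor, cpm):
d = [2, 4, 8, 16, 32, 64]
cpm = [41000, 39000, 30000, 17000, 8000, 3000]
print(dilution_at_half_max(d, cpm))  # ~13-fold dilution
```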
Results
Response of BB Spleen Cells to T-dependent Mitogens. The proliferative responses
of BB rat spleen cells to Con A, PHA, and PWM were very low when compared with MHC-matched normal control WF rats (Fig. 1). In fact, addition of these mitogens to BB spleen cells yielded counts lower than with medium alone in several BB rats (hence a negative Δcpm).
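The Δcpm values and suppression percentages reported below follow the usual conventions for mitogen assays; the helper functions here are illustrative restatements of those conventions rather than the authors' code.

```python
def delta_cpm(cpm_with_mitogen: float, cpm_medium_alone: float) -> float:
    """Mitogen-induced [3H]thymidine incorporation above background.
    Negative values arise when mitogen-stimulated counts fall below
    medium alone, as observed for several BB rats."""
    return cpm_with_mitogen - cpm_medium_alone

def percent_suppression(control_response: float, test_response: float) -> float:
    """Suppression of a control (e.g., WF) response by added cells or agents."""
    return (1.0 - test_response / control_response) * 100.0

print(percent_suppression(100_000.0, 8_000.0))  # 92.0 (% suppression)
```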
IL-2 Production by BB Spleen Cells. The mean IL-2 levels of 24-h Con A
supernatants of BB spleen cells (5 × 10⁵ cells/well) were approximately eightfold lower than those of WF controls (Fig. 2) (p < 0.001 by the Student's t-test). Only 2 out of 10 BB rats tested had IL-2 levels above 10 U/ml, while WF IL-2 levels ranged from 64 to 144 U/ml. However, there was no apparent correlation between the clinical status of BB rats, or the subsequent development of IDDM, and IL-2 levels. Low IL-2 production appears to be a feature of all BB rats.
Suppressive Effects of BB Spleen Cells on WF Spleen Cells. The addition of BB spleen cells to WF spleen cells resulted in a marked suppression of proliferation in response to T cell-dependent mitogens (Figs. 3 and 4), while addition of similar numbers of WF spleen cells instead of BB cells resulted in an increase in
Δcpm, rather than suppression (data not shown). The suppressive effect was equally apparent on IL-2 secretion in response to Con A (Table I). The level of suppression varied among BB rats but was always present. Less than 1 BB spleen cell per 10 WF spleen cells was sufficient to generate suppression of both proliferation and IL-2 production (Fig. 5 and Table I).
Cell Population Responsible for Suppression. T cell depletion or B cell depletion had no effect on the level of suppression exerted by BB spleen cells. On the other hand, macrophage depletion by passage of spleen cells on Sephadex G-10 completely abrogated the suppressive effects of BB cells (Fig. 4). In fact, following macrophage depletion, enhancement of responses rather than suppression was observed. This effect was seen in both diabetic and nondiabetic BB rats. Furthermore, macrophage depletion greatly increased proliferation and IL-2 production of BB cells in response to Con A (Table II). It should be added that macrophages were not completely depleted by passage on Sephadex G-10 (see Materials and Methods). That macrophages were indeed the suppressor cell population (rather than some other adherent cell population) was further demonstrated by adding WF spleen cells to various numbers of BB splenic macrophages. Such macrophages were isolated on the basis of both cell density (on a Percoll gradient) and adherence and contained <5% contaminating T cells (see Materials and Methods). BB macrophages in small numbers (150-250 adherent cells per 10 high power fields) suppressed WF proliferative responses to Con A by >90%, and IL-2 production by >80% (Table III). Adherent cells derived from whole BB spleen cell populations (omitting the density gradient step) had a closely similar effect, ruling out an effect of Percoll in this assay (data not shown). WF-derived splenic macrophages did not suppress in this assay at the cell doses shown in Table III. However, much larger numbers of WF splenic macrophages (800-1,200 macrophages per 10 high power fields) could suppress WF responses to Con A by ~50% (not shown). Thus even normal spleens contain a population of suppressor macrophages which, however, are clearly more numerous in BB spleens than in normal spleens.
Effect of Indomethacin and Catalase on the Suppressive Effect of BB Spleen Cells.
Indomethacin was effective in reducing partial suppression of proliferation (50% or less) by BB cells of WF cells, but had less effect in situations of complete suppression (Fig. 5). Comparatively, indomethacin was more effective in reducing severe suppression of IL-2 production than in reducing severe suppression of Con A-induced proliferation (Table I). Indomethacin also considerably augmented IL-2 secretion of Con A-stimulated BB spleen cells (Table II) in high cell density cultures (5 × 10⁵ cells/well), while having little or no effect on WF responses. On the other hand, addition of the enzyme catalase to cultures in amounts as high as 15,000 U/ml had no effect on BB spleen cell responses to Con A. In addition, catalase did not alter the effect of indomethacin (data not shown). It thus seems unlikely that the suppressive effect of BB macrophages is mediated by hydrogen peroxide generation. Prostaglandins are known to suppress lymphocyte responses (21-24). As can be seen from Fig. 6, both PGE1 and PGE2 can suppress Con A-induced proliferation and IL-2 production in rat cells. Physiological concentrations of PGE1 and PGE2 (1-10 ng/ml) caused 40-50% suppression of Con A-driven responses. PGE1 and PGE2 in doses ranging from 1 to 1,000 ng/ml did not affect the response of CTLL-2 cells to IL-2 in the IL-2 assay, and thus did not interfere with IL-2 determinations in prostaglandin-containing supernatants (data not shown).
BB Macrophages Do Not Inactivate IL-2. No loss of IL-2 activity was found when
IL-2 (from Con A supernatants) was incubated for 24 h with macrophages in numbers up to 800-1,200 macrophages per 10 hpf (Table IV). Similar results were obtained in the presence or absence of the Con A inhibitor α-methyl-D-mannoside, ruling out IL-2 production by possible residual T cells. Since such macrophage numbers had profound suppressive effects on IL-2 production, these results rule out macrophage-mediated IL-2 inactivation as a suppressive mechanism.
IL-1 Production by BB Spleen Cells. The secretion of IL-1, as determined by the thymocyte proliferation assay, was as high in BB rats as in WF rats (Table V). This occurred despite the fact that the WF control rats tested produced levels of IL-2 severalfold higher than the BB rats. The addition of indomethacin to spleen cells often resulted in apparent increases in IL-1. However, it should be remembered that indomethacin will reduce the prostaglandin concentration of such supernatants. Prostaglandins interfere with thymocyte proliferation (20), since this process depends on an IL-1 → thymocyte → IL-2 sequence of events, and prostaglandins E1 and E2 inhibit IL-2 secretion. The addition to Con A-stimulated BB spleen cells of PMA in amounts ranging from 1 to 100 ng/ml did not improve proliferation or IL-2 production (data not shown). Since PMA is thought to act directly on T cells and to reduce the need for IL-1 in T cell activation (25), it seems unlikely that low IL-2 production in BB rats is secondary to an inability of T cells to respond to IL-1.
Discussion
The BB rat represents one of the few animal models available for study that spontaneously develop IDDM (26). The autoimmune nature of the disease is supported by the findings of insulitis (1), the presence of anti-islet cell antibodies in the sera of the animals (11), the possibility of preventing the disease by immunosuppressive treatment (27), the strict association of the disease with the MHC RT1u haplotype (2), and T cell lymphopenia (6). Studies showing a lower incidence of disease in neonatally thymectomized BB rats (28), and in rats treated in vivo with anti-lymphocyte antibodies (27), suggest an important role for T cells.
Paradoxically, BB rats are immune deficient, have low T cell numbers in blood and lymphoid organs (8), and respond poorly to T cell mitogens (7). In this study we document low production of IL-2 in BB rats following stimulation of BB spleen cells with Con A. Such a deficiency could be explained on the basis of T cell depletion, or changes in the Th/Ts ratio in BB rats. We find (unpublished observation), as have others (7,8), that BB spleen cells consist of approximately half as many T cells (percentage of total cells) as non-diabetes-prone controls, with a decrease in Th/Ts ratios. The findings in this study clearly indicate that alterations in T cell numbers can account for only part of the decrease in IL-2 production. Addition of spleen cells from BB rats (both diabetic and nondiabetic) to WF spleen cells strongly suppresses the proliferative responses to Con A, PWM, and PHA, as well as IL-2 production. This indicates that an active suppressive process is at work. Depletion of T cells or B cells from BB spleen cells had little effect on this suppressive process. On the other hand, macrophage depletion by passage through a Sephadex G-10 column completely abrogated the suppressive effect of BB spleen cells on the response of WF spleen cells to Con A. Furthermore, macrophage depletion greatly increased both proliferation and IL-2 production by BB cells in response to Con A. BB splenic macrophages (isolated on the basis of both cell density on Percoll gradients and adherence) strongly suppressed WF spleen cell responses to Con A. Small numbers of BB macrophages were suppressive in this assay. On the other hand, up to 10-fold greater numbers of WF macrophages were necessary to obtain an equivalent degree of suppression. These results indicate that suppressor macrophages can be found even in normal spleens; however, this cell population is greatly increased in BB rats.
Although macrophage-mediated suppression is a well-known phenomenon, the mechanisms of suppression remain poorly understood. Metzger et al. (21) suggest that this phenomenon is mediated by activated macrophages. These authors found that following activation with thioglycollate or C. parvum, peritoneal macrophages acquired the ability to suppress the response of lymphocytes to mitogens. This response could be partially reversed by a prostaglandin synthetase inhibitor (indomethacin), or catalase. They postulated that macrophage activation results in the production of prostaglandins and H2O2, both of which suppress lymphocyte responses. On the other hand, some authors believe that soluble protein mediators play an important role in macrophage-mediated suppression (22). We find that indomethacin partially reverses the suppressive effects of BB spleen cells, while catalase has no effect. Indomethacin also improves IL-2 production by unfractionated Con A-stimulated BB spleen cells. Furthermore, we find that the addition of PGE1 or PGE2 at physiological doses (1-10 ng/ml) to WF spleen cells suppresses Con A-induced proliferation and IL-2 production by ~50%. Clearly, PGE1 and PGE2 can strongly suppress T cell responses; however, other mediators are probably also involved in producing the marked suppressive effect (>90%) of BB macrophages. Interestingly, we find that BB macrophages have a strong cytostatic effect on tumor cells, 2 and can inhibit insulin secretion by a rat insulinoma cell line. This provides further evidence that BB rat spleens contain an unusually high number of activated macrophages.
Other possible mechanisms to explain macrophage suppression in BB rats have been considered in this study. BB splenic macrophages secrete normal levels of IL-1, and do not inactivate IL-2. We tested the effect of PMA in Con A-stimulated cultures, since this substance has a direct effect on T cells and reduces or abolishes the need for IL-1 in T cell activation (25). However, PMA did not improve IL-2 production by Con A-stimulated BB spleen cells. It thus seems unlikely that low IL-2 secretion is due to an inability of BB T cells to respond to IL-1. Interestingly, PMA improves IL-2 secretion in some lupus-prone mouse strains (29).
The finding of excessive macrophage-mediated suppression is not unique to BB rats. "Suppressor" macrophages have been described in rheumatoid arthritis (30) and Hodgkin's disease (31), as well as in human (32,33) and murine (34) systemic lupus erythematosus (SLE, a systemic autoimmune disease). In fact, Gershwin et al. (34) demonstrated that the acquired Th deficiency of older SLE-prone (NZB×NZW)F1 mice is secondary to the presence of splenic macrophage suppressor cells, and is unrelated to the premature degeneration or involution of the thymus or thymic epithelial elements in these mice. These findings are similar to our findings in autoimmune BB rats.
The association of low IL-2 production, and other T cell deficiencies, with autoimmune diseases has been difficult to explain. In fact, it is not clear that low IL-2 production contributes to the polyclonal B cell activation found in SLE (43,44), or to the occurrence of IDDM in BB rats. However, by crossing BB rats with other rat strains (e.g., Buffalo) we found that only animals with deficient T cell function developed IDDM (6). All diabetic offspring of such crosses tested were found to be low producers of IL-2 (unpublished observation). It is clear that although deficient T cell function by itself does not cause IDDM in rats, this feature is nevertheless invariably associated with the disease. Expression of the disease only occurs if the RT1u haplotype and other, as yet poorly characterized, genetic factors are also present (2,6).
The mechanism of macrophage activation in the BB rat has not been determined in this study. In experimental systems the signals that can activate macrophages are numerous and include microbial antigens, mitogens, and many chemicals, as well as lymphokines (22, 45-47). Studies are in progress to evaluate the possible presence of immune complexes or increased levels of interferon in the serum of BB rats. Conceivably, antigen-specific Ia-restricted T cell-macrophage interactions to a putative islet cell antigen could lead to both T cell and macrophage activation with secretion of monokines (e.g., IL-1) and lymphokines (e.g., IL-2 and γ-interferon), as well as a whole battery of vasoactive, chemotactic, and cytotoxic products by macrophages. This model is supported by the prominent presence of both macrophages and T cells in islets of Langerhans during the acute phase of the disease.
Finally, the finding that indomethacin can increase in vitro IL-2 secretion in BB rats suggests that prostaglandin synthetase inhibitors may have useful immunopotentiating effects in diseases characterized by low IL-2 production.
Summary
Spleen cells of diabetes-prone BB Wistar rats were found to generate excessively low proliferative responses and interleukin 2 (IL-2) levels in response to T-dependent mitogens. This abnormality was not due solely to abnormal T cell numbers since: (a) addition of BB spleen cells or BB splenic macrophages to normal major histocompatibility complex (MHC)-matched Wistar Furth (WF) spleen cells resulted in severe suppression of concanavalin A (Con A)-, phytohemagglutinin (PHA)-, and pokeweed mitogen (PWM)-mediated proliferation, and of IL-2 production; (b) macrophage depletion from BB spleen cells, but not B cell or T cell depletion, removed completely the suppressive effects of BB cells on WF cells; (c) macrophage depletion greatly enhanced the response of BB lymphocytes to T-dependent mitogens. Although suppressor macrophages could also be found in the spleen of WF control rats, they were present in much smaller numbers than in the spleen of BB rats. The suppressive effect of BB macrophages was partially reduced by addition of the prostaglandin synthetase inhibitor indomethacin to cultures. Furthermore, indomethacin (but not catalase or PMA) considerably augmented IL-2 secretion of Con A-stimulated BB spleen cells, but had little effect on WF spleen cells. In contrast, prostaglandins E1 and E2 (PGE1 and PGE2) suppressed IL-2 production. While IL-2 secretion was severely depressed in BB rats, unstimulated and lipopolysaccharide (LPS)-stimulated IL-1 secretion by splenic macrophages was normal. BB macrophages did not inactivate IL-2. Low IL-2 production and macrophage-mediated suppression were features of all BB rats tested.
Received for publication 29 August 1983 and in revised form 28 October 1983. | 2014-10-01T00:00:00.000Z | 1984-02-01T00:00:00.000 | {
"year": 1984,
"sha1": "0162adeb6a7ade003dd9946a2a3779ac6eb4d00b",
"oa_license": "CCBYNCSA",
"oa_url": "http://jem.rupress.org/content/159/2/463.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "a25c73247a729e0163ce4c5d831ecd316251ad94",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
18838996 | pes2o/s2orc | v3-fos-license | Physics of Large-x Nuclear Suppression
We discuss a common feature of all known reactions on nuclear targets: a significant suppression at large x. A simple interpretation of this effect is based on energy conservation restrictions in initial-state parton rescatterings. Using the light-cone dipole approach, this mechanism is shown to control a variety of processes on nuclear targets: high-p_T particle production at different rapidities, as well as direct and virtual (Drell-Yan) photon production. We demonstrate the universality and wide applicability of this mechanism, which allows one to describe large-x effects also at SPS and FNAL energies, which are too low for the onset of coherence effects or shadowing.
Introduction
Recent measurements of high-p_T hadrons produced in the beam fragmentation region in d + Au collisions at RHIC [1,2,3] allow one to reach the smallest values of Bjorken x in the nucleus and thus the maximal coherence effects leading eventually to nuclear suppression. The observed suppression is usually interpreted within models based on the color glass condensate (CGC). However, such an interpretation lacks global applicability. For example, a suppression similar to that at RHIC was also measured in p + Pb collisions at SPS, where no effects of coherence are possible.
The rise of the suppression with Feynman x_F for hadrons produced in p + Pb collisions at SPS [4], or for the Drell-Yan (DY) pairs at FNAL [5], has a pattern similar to that seen at RHIC. All these examples, and other reactions treated in [6], favor the same large-x mechanism, independent of the energy and type of reaction.
Such a common mechanism was proposed in [6,7], where large-x_F nuclear suppression was shown to be caused by energy conservation in multiple parton rescatterings. This mechanism is a leading-twist effect giving rise to the breakdown of QCD factorization, and it also exhibits x_F-scaling [7].
Another consequence of this treatment discussed in this paper is the manifestation of nuclear effects also at midrapidities, i.e., at large x_T = 2p_T/√s. We expect a suppression pattern similar to that at large x_F, with the nucleus-to-nucleon ratio below one. Similarly to the x_F-scaling at forward rapidities, x_T-scaling of this effect is predicted.
In this paper we further exploit the proposed model [6,7] to analyze and quantify the nuclear suppression at large x for a variety of processes occurring in p(d) + A and A + B collisions.
Sudakov suppression, production cross section
In any hard reaction in the limit x → 1, gluon radiation is forbidden by energy conservation. Then the probability to have a large rapidity gap (LRG) ∆y = −ln S(x) between the leading parton and the rest of the system is given by the Sudakov suppression factor S(x) = 1 − x [6].
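To make these relations concrete, the short script below evaluates S(x) = 1 − x, the corresponding gap ∆y = −ln S(x), and the extra suppression accumulated over repeated rescatterings; the AGK weight factors discussed next are deliberately omitted from this toy illustration.

```python
import numpy as np

def sudakov(x: float) -> float:
    """Survival probability of a large rapidity gap, S(x) = 1 - x."""
    return 1.0 - x

def rapidity_gap(x: float) -> float:
    """Gap size Delta-y = -ln S(x)."""
    return -np.log(sudakov(x))

# Each additional rescattering contributes one more factor of S(x).
for x in (0.2, 0.5, 0.8, 0.95):
    print(f"x = {x:.2f}: S = {sudakov(x):.2f}, "
          f"S^3 = {sudakov(x) ** 3:.4f}, gap = {rapidity_gap(x):.2f}")
```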
Suppression at x → 1 can thus be formulated in terms of multiple interactions of projectile partons with the nucleus. Each of the multiple interactions produces an extra suppression factor S(x), and the corresponding weight factors are given by the AGK cutting rules [8]. Then, in terms of the nuclear thickness function T_A(b) and the effective cross section σ_eff [6], the cross sections of a hard reaction on a nuclear target A at impact parameter b and on a nucleon N are related via Eq. (1) [9]. Employing factorization, the hadron production cross section in d + A (p + p) collisions is given by Eq. (2), where η is pseudorapidity; the ingredients of Eq. (2) are calculated in the light-cone (LC) dipole approach [6,9,10]. For the parton distribution functions we use the parametrization from [11]. Fragmentation functions were taken from [12].
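Equations (1) and (2) are not reproduced in this text. For orientation only, a leading-order factorized convolution has the generic structure below; the precise expression, kinematic factors, and nuclear modifications used in [6,9,10] may differ.

```latex
\frac{d\sigma^{h}}{d\eta\, d^{2}p_{T}}
  = \sum_{i,j,k} f_{i/d}(x_{1},Q^{2}) \otimes f_{j/A}(x_{2},Q^{2})
    \otimes \frac{d\hat{\sigma}^{ij \to kX}}{d\hat{t}}
    \otimes D_{h/k}(z,Q^{2})
```

Here ⊗ denotes convolutions over the parton momentum fractions x_1, x_2 and the fragmentation variable z.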
As first shown in [6,7], the effective projectile quark (gluon) distribution correlates with the target, and the corresponding quark (gluon) distribution in the nucleus is given by Eq. (3). For the quark part, the normalization factor C in Eq. (3) is fixed by the Gottfried sum rule.
Hadron production at forward rapidities
In 2004 the BRAHMS Collaboration [1] observed a significant nuclear suppression of h− production at η = 3.2. Much stronger nuclear effects were found later by the STAR Collaboration [3] for π0 production at η = 4.0. All these data are consistent with the model calculations [6,7]. The strong rise of the suppression with η reflects the much smaller survival probability S(x) of a LRG at larger x.
Since parton energy loss is proportional to the initial energy, the energy conservation restrictions in multiple parton rescatterings should also lead to x-scaling of the nuclear effects. A similarity of the suppression at different energies and pseudorapidities was demonstrated in [6,7].
Hadron production at midrapidities
Another manifestation of the energy conservation in multiple parton rescatterings occurs at midrapidities. Here the corresponding values of p_T should be high enough to keep x_T = 2p_T/√s at the same level as x_F at forward rapidities; for instance, at √s = 200 GeV a hadron with p_T = 8 GeV/c corresponds to x_T = 0.08. This is supported by data from the PHENIX Collaboration [13] showing evidence for suppression at large p_T ≳ 8 GeV/c (see Fig. 1). If the effects of energy conservation are not included, the p_T dependence of R_d+Au, described by the dashed lines, exhibits only a small suppression at large p_T given by the isotopic effects (see Fig. 1). After inclusion of energy sharing in parton rescatterings we predict R_d+Au < 1 at large p_T, as presented by the solid lines. More precise data are needed for a clear manifestation of the breakdown of QCD factorization.
Direct photon production in Au+Au collisions at RHIC
Direct photons in Au + Au collisions are also suppressed at large p_T, as was demonstrated by the PHENIX Collaboration [14]. Model predictions for the ratio R_Au+Au as a function of p_T are compared with data in Fig. 2. Expressions for the production cross sections have been adopted from [15,16]. If the energy conservation in parton rescatterings is not taken into account, the model calculations depicted by the dashed line give a value R_Au+Au → 0.8, in accord with the onset of isotopic effects. Inclusion of the energy conservation leads to strong nuclear effects at large p_T, as demonstrated by the thick and thin solid lines. (Fig. 1 caption: data from PHENIX [13]; the dashed line represents calculations without energy conservation; thick and thin solid lines represent calculations in the limit of long (LCL) and short (SCL) coherence length, respectively.)
Nuclear suppression at SPS and FNAL energies
The left panel of Fig. 3 clearly shows that pions from p + Pb collisions at SPS energy exhibit the same suppression pattern as in the RHIC kinematic range. The model predictions employ the dipole formalism for the calculation of nuclear broadening, using the standard convolution expression based on QCD factorization [10]. Initial-state multiple interactions leading to the breakdown of QCD factorization are included as described in Sect. 2. One can see a reasonable agreement of our calculations with NA49 data [4]. The DY reaction is also known to be considerably suppressed at large x_F (x_1) [17] (see the right panel of Fig. 3). Using the same mechanism as discussed in Sect. 2, one can explain the strong suppression at large x_1. The differential cross section for photon radiation in a quark-nucleus collision is calculated [18] using the LC Green function formalism [15]. The right panel of Fig. 3 demonstrates a good agreement of our calculations with E772 data [5].
Summary
A unified approach to large-x nuclear suppression, based on energy conservation restrictions in multiple parton rescatterings, was presented. QCD factorization fails at the kinematic limit x → 1. The universal suppression driven by the Sudakov factor S(x) brings in the x-scaling of nuclear effects. The same formalism explains well the available data from RHIC on the suppression of high-p_T hadrons and photons at different rapidities. This common mechanism also explains the suppression at the lower SPS and FNAL energies, where no coherence effects are possible. | 2009-09-14T15:31:32.000Z | 2009-07-23T00:00:00.000 | {
"year": 2009,
"sha1": "11a8795037a283aad3bac096b9f8bea47c4094bf",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0907.4062",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "11a8795037a283aad3bac096b9f8bea47c4094bf",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
94339782 | pes2o/s2orc | v3-fos-license | Fabrication of TiO2–SiO2 glasses containing Ca-α-SiAlON:Eu2+ phosphor using the sol–gel process
We prepared titanosilicate glasses by the sol–gel method and studied their ability to disperse Ca-α-SiAlON:Eu2+ phosphor, which is a typical yellow phosphor applicable to white light-emitting diodes (LEDs). xTiO2·(100 − x)SiO2 glasses dispersed with SiAlON powders were obtained by sintering at 900°C with x ranging from 10 to 30; the glasses changed to black after sintering at 1000°C. Diffuse reflectance spectra suggested that the colorization was caused by the formation of Ti3+ ions in the samples. The local structures of Ti and Eu were measured by X-ray absorption fine structure spectroscopy. Ti K-edge spectra showed that the local structure of Ti was 5-coordinated in the glass where x = 10 and 6-coordinated in the glass where x = 30. Eu LIII-edge spectra indicated that both Eu2+ and Eu3+ were present and that the ratio of Eu2+ to Eu3+ depended on the heating temperature. A reducing atmosphere might be responsible for the colorization of the glasses sintered at 1000°C. The quantum efficiencies (QE) of the glasses sintered at 900°C were estimated, and that of the glass with x = 10 was the highest among the samples, higher than that of the SiAlON powders. Measurements of the refractive index and the XRD patterns suggest that the improvement in QE is caused by the decrease in light scattering at the interface between the phosphor and the glasses, which arises from differences in refractive indices and from the formation of crystals.
Introduction
Nitrides and oxynitrides doped with rare-earth ions have been studied for use as luminescent materials because of their non-toxicity, thermal stability, and interesting luminescent properties. In particular, they are applicable to light-emitting diodes (LEDs), in which phosphor powders embedded in an organic resin are irradiated by a blue or ultraviolet LED as a light source. 1) However, it has recently been reported that, with an increase in the power of the source LED, the resulting heat generation causes the resin to deteriorate and the LED lifetime to shorten. Thus, in terms of thermal stability, glass 2)–5) or glass–ceramic 6),7) matrices are more suitable than organic resins as packaging materials for dispersing phosphors.
Among glasses and glass–ceramics, our group has reported that borate and tellurite glasses are good candidates for dispersing Ca-α-SiAlON doped with Eu2+ ions (Ca-α-SiAlON:Eu2+), a phosphor that emits yellow light under blue light irradiation 8) and can be applied in pseudo-white LEDs. 4) The composites were prepared in two steps: first, glass was formed by melting a mixture of oxide powders; then, the glass was crushed into cullets, mixed with the phosphor, and remelted at low temperature in order to minimize the deterioration of the phosphor upon heating. In this process, a few glasses exhibited homogeneously dispersed phosphors without deterioration, but the phosphors reacted easily with most of the prepared glasses during the remelting.
On the other hand, glasses can also be obtained by the sol–gel method, in which metal alkoxides are hydrolyzed and condensed in a solution. 9) In the case of the sol–gel method, a phosphor can be added to the sol during the reaction, which is generally carried out at approximately room temperature. Recently, silica glass dispersed with SiAlON was successfully obtained by controlling the drying process in the sol–gel method. 5) The chromaticity of the silica glass could be controlled by the SiAlON concentration and glass thickness, and white light was most closely achieved when the glass was irradiated by light with a wavelength of 450 nm. The sol–gel method thus shows the potential to produce glass with dispersed phosphors without deterioration. However, the efficiency of the silica glasses was lower than those of the borate and tellurite glasses. There are two possible reasons for this. One is the difference in refractive indices between the phosphor and the glasses. The refractive index of silica glass is about 1.45, which is lower than those of borate and tellurite glasses. The refractive indices of the phosphors range from 1.855 to 1.897, depending on the composition, and the difference in refractive indices between the silica glass and the phosphors is the largest of all the glasses; the efficiency of the silica glass dispersed with phosphors was therefore lower than those of the borate and tellurite glasses because of the significant light scattering at the interfaces. Another reason is the presence of pores in the silica glass. Generally, many pores are generated during the sol–gel process, and some of those pores might remain, depending on the sintering temperature. The pores also might cause light scattering at the interface. Thus, it is important to increase the refractive index and reduce the number of pores. 10) To increase the refractive index, it is useful to add oxides such as TiO2 11) and SnO2 12) to the silica glasses. Among them, TiO2 has a particularly high refractive index, and titanosilicate glasses could therefore serve effectively as a matrix for dispersing SiAlON.
In this study, TiO2 was added to silica glass to increase the refractive index of the glass matrix, and titanosilicate glasses dispersed with SiAlON were fabricated at different sintering temperatures. The effects of the TiO2 addition and the sintering temperature on the optical properties were investigated using X-ray absorption fine structure spectroscopy (XAFS) and diffuse reflectance spectra.
Experimental procedure
Titanosilicate sols were prepared using tetramethoxysilane (TMOS, Junsei Chemical Co., Ltd.), titanium isopropoxide [Ti(OPr)4, Kojundo Chemical Laboratory Co., Ltd.], and titanium chloride (TiCl4, Wako Pure Chemical Industries, Ltd.). 11) The molar ratio of Si(OCH3)4:Ti(OPr)4:TiCl4 was set to (100 − x):0.7x:0.3x to obtain xTiO2·(100 − x)SiO2 (mol %) glasses. The source materials were stirred in a capped PFA bottle, and methanol, HCl solution (pH = 1), and propylene carbonate (PC, Kanto Chemical Co., Inc.) were added. The molar ratio of (Si+Ti):methanol:HCl:PC was set to 1:2.8:5:3. After stirring for 1 h, Ca-α-SiAlON:Eu2+, which was prepared by gas-pressure sintering, 13) was added to the sol. The Ca-α-SiAlON:Eu2+ phosphor is described in detail in the paper of Xie et al. 8) The phosphor particles were angular, with an average size of about 10 μm. 4) The concentration of SiAlON was set to 5 mass % relative to the TiO2–SiO2 glass. The sol was rotated at 60 rpm and 30°C on a mix rotor (VMRC-5, AS ONE Corp.) until it gelated. The wet gels were dried in the same bottles, which were covered with aluminum foil, from 25 to 120°C at a rate of 1.3°C·h−1 in an electric oven, resulting in dried gels. The dried gels were sintered by heating from room temperature to 400°C at a rate of 4.2°C·h−1 and from 400 to 900 or 1000°C at a rate of 8.3°C·h−1, and were maintained at 900 or 1000°C for 1 h in a furnace, followed by natural cooling.
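For bookkeeping, the stated molar ratios translate into batch amounts as in the following sketch (a hypothetical helper, not part of the original procedure; the batch size is arbitrary):

def batch_moles(x, total_si_ti_mol=0.1):
    # Mole amounts for an xTiO2-(100 - x)SiO2 sol-gel batch.
    # x: mol% TiO2 (10-30 in this study); total_si_ti_mol: total Si + Ti moles.
    frac = total_si_ti_mol / 100.0
    return {
        "TMOS (Si(OCH3)4)":    (100 - x) * frac,
        "Ti(OPr)4":            0.7 * x * frac,
        "TiCl4":               0.3 * x * frac,
        # additives scale with total (Si + Ti) moles, ratio 1 : 2.8 : 5 : 3
        "methanol":            2.8 * total_si_ti_mol,
        "HCl solution (pH 1)": 5.0 * total_si_ti_mol,
        "propylene carbonate": 3.0 * total_si_ti_mol,
    }

for name, mol in batch_moles(x=10).items():
    print(f"{name:22s} {mol:.4f} mol")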
The crystallinity was investigated by X-ray diffraction (XRD, RINT2200, Rigaku Co., Ltd.). The samples were crushed, and the resultant powders were measured at room temperature, using Cu Kα radiation (40 kV, 40 mA), in the angular range 10–70° at a scanning rate of 1°/min. The photoluminescence (PL) spectra of the bulk samples were measured with a multichannel photodetector (MCPD-7000, Otsuka Electronics Co., Ltd.) to estimate their quantum efficiency (QE). The sample surfaces were irradiated with light at wavelengths from 220 to 800 nm as a pump source, and the PL spectra were collected using an integrating sphere. The QE for excitation at 450 nm was calculated from the number of absorbed photons, N_abs, and the number of emitted photons, N_em, as QE = N_em/N_abs. The refractive index of the glass without SiAlON was measured for the D line (589 nm) using an Abbe refractometer (NAR-1T, ATAGO Co., Ltd.).
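The QE definition above can be illustrated with a short sketch; the photon-number weighting by wavelength, the band limits, and the array names are assumptions of this illustration, and an actual measurement relies on the integrating-sphere reference spectra:

import numpy as np

def quantum_efficiency(wl, ref_counts, sample_counts,
                       ex_band=(440.0, 460.0), em_band=(500.0, 780.0)):
    # QE = N_em / N_abs from spectra on a common wavelength grid (nm).
    # ref_counts: excitation spectrum without the sample in the sphere;
    # sample_counts: spectrum with the sample. Relative photon numbers are
    # obtained by multiplying the measured intensity by the wavelength.
    photons_ref = ref_counts * wl
    photons_smp = sample_counts * wl
    ex = (wl >= ex_band[0]) & (wl <= ex_band[1])
    em = (wl >= em_band[0]) & (wl <= em_band[1])
    n_abs = np.trapz(photons_ref[ex] - photons_smp[ex], wl[ex])
    n_em = np.trapz(photons_smp[em] - photons_ref[em], wl[em])
    return n_em / n_abs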
The XAFS spectra [X-ray absorption near-edge structure (XANES) and extended X-ray absorption fine structure (EXAFS)] of the samples were obtained at the BL-12C facility of the Photon Factory at the High Energy Accelerator Research Organization, Tsukuba, Japan. The X-ray radiation from the 2.5 GeV electron storage ring was monochromated using a Si(111) double-crystal monochromator. The Ti K-edge absorption spectra of all samples and the Eu LIII-edge absorption spectra of the reference samples were measured in transmission mode at room temperature. The Eu LIII-edge absorption spectra of our samples and the SiAlON powder were collected in fluorescence mode because the concentration of Eu was insufficient for measurement in transmission mode. Several kinds of Ti4+-containing oxides were selected as reference samples, including anatase, rutile, and FeTiO3; EuCl2 and Eu2O3 served as references for Eu2+ and Eu3+, respectively. These powders, the prepared dried gels, and the sintered samples were diluted by mixing with BN (Kojundo Chemical Laboratory Co., Ltd.) and pelletized by pressing for the measurements.
The Ti K-edge spectra were measured in the energy range 4459.5–6064.65 eV. The background was subtracted by fitting the spectral region below the absorption edge (pre-edge), from 4902.4 to 4952.4 eV. The spectra were then normalized for atomic absorption based on the average absorption coefficient of the spectral post-edge region from 5132.4 to 5482.4 eV. The Eu LIII-edge spectra were measured in the narrower energy range 6850–7050.5 eV because the fluorescence mode needs more time to collect spectra than the transmission mode. The background was subtracted by fitting the spectral pre-edge region from 6880 to 6945 eV.
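The background-subtraction and normalization steps just described amount to a linear pre-edge fit and a post-edge rescaling; a minimal sketch, assuming the spectrum is given as energy/absorbance arrays (the fitting windows follow the Ti K-edge values in the text):

import numpy as np

def normalize_xafs(energy, mu, pre=(4902.4, 4952.4), post=(5132.4, 5482.4)):
    # Subtract a linear pre-edge background fitted in the pre-edge window,
    # then normalize to the average absorption in the post-edge window.
    pre_mask = (energy >= pre[0]) & (energy <= pre[1])
    slope, intercept = np.polyfit(energy[pre_mask], mu[pre_mask], 1)
    mu_sub = mu - (slope * energy + intercept)
    post_mask = (energy >= post[0]) & (energy <= post[1])
    edge_step = mu_sub[post_mask].mean()
    return mu_sub / edge_step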
Results and discussion
We attempted to prepare titanosilicate gels, xTiO2·(100 − x)SiO2 (mol %), without phosphors in preliminary experiments. Dried gels were obtained in the range from x = 10 to 50, but gelation did not occur when x was 60 mol % or more. Among the resulting samples, the x = 40 and 50 dried gels broke into many small pieces. Thus, the dried gels of x = 10, 20, and 30 were sintered at 1000°C. Glasses were obtained, although there were cracks in the bulk samples. Images of the sintered samples of x = 10 and 30 are shown in Fig. 1. The x = 10 sintered sample is transparent without colorization, although cracks partially formed at the top. The x = 30 sintered sample is transparent with a brownish color and is broken into small pieces. In the following sections, we focus on the samples of x = 10 and 30 for dispersing SiAlON. Figure 2 shows images of the x = 10 and 30 dried gels and of the same samples sintered at 900 and 1000°C. The x = 10 dried gel, denoted 1D, was successfully obtained without cracks, and it shrank isotropically without cracking upon sintering at 900°C (denoted 1S9). The color of the 1S9 sample was yellow, similar to 1D. However, the x = 10 sample sintered at 1000°C, denoted 1S10, was broken into small pieces and its color changed to black, although the phosphors still emitted slightly yellow light under UV irradiation. The x = 30 dried gel, denoted 3D, was cracked into a few pieces, but the gel was yellow, which indicates that the phosphor was homogeneously dispersed in the gel. After sintering at 900°C, the x = 30 sample, denoted 3S9, was again broken into many pieces, but the sample color was yellow, the same as 1S9, without deterioration. The color of the x = 30 sample sintered at 1000°C, denoted 3S10, also changed to black while retaining the fluorescence. The diffuse reflectance spectra of the samples and the SiAlON powders are shown in Fig. 3. In samples 1S9 and 3S9, the reflectance has a shoulder in the region from 350 to 500 nm. The reflectance of the SiAlON powder shows a peak at around 400 nm; thus, the shoulder of 1S9 and 3S9 may be caused by the absorption of SiAlON. The reflectance of 1S10 and 3S10 is lower than that of 1S9 and 3S9. The edge of the reflectance of 1S10 and 3S10 shifts to longer wavelengths, and the reflectance decreases with increasing sintering temperature over the whole range. In 3S10, the reflectance decreases gradually in the range from 400 to 800 nm as the wavelength increases. The absorbance of Ti3+ is known to appear in the range between 400 and 800 nm, 15),16) and the colorization of the samples sintered at 1000°C might therefore be related to the formation of Ti3+.
The crystallinity of the sintered samples was investigated by XRD. Figure 4 shows the XRD patterns of the sintered samples. All patterns show a broad peak in the range from 10 to 30°, the so-called halo peak. This peak decreases with an increase in the amount of TiO2. The halo peak indicates that the samples are primarily amorphous, and thus these sintered samples are characterized as glasses. In Fig. 4(a), there are several sharp peaks, which are assigned to α-SiAlON (JCPDS #033-0261) and are marked by open circles. In Figs. 4(b)–4(d), there are two kinds of peaks, which are assigned to α-SiAlON and to anatase (JCPDS #070-7348); the anatase peaks are marked by closed circles. The intensity of the anatase peaks increases with increasing sintering temperature from 900 to 1000°C and with increasing concentration of TiO2. Figure 5(a) shows the XANES Ti K-edge spectra in the pre-edge region. The intensity and energy of the pre-edge features are known to depend on the coordination structure of Ti. 17) The heights and energies of the highest pre-edge peaks are summarized in Table 1.
The edge energy of Ti2O3 is significantly lower than those of the other samples, and its pre-edge peak is not clear. This suggests that most of the Ti in the samples is Ti4+ and that little Ti3+ is contained. From Table 1, 1S9, 1S10, and 1NS10 (a non-SiAlON-doped x = 10 glass sintered at 1000°C) are composed of [TiO5] units, whereas 1D and 3S9 are composed of [TiO6] units. Figure 5(b) shows the spectra of 1S9, 1S10, and 1NS10 together with those of [TiO5] reference powders: Sr2TiSi2O8, K2Ti4O9, and Na2Ti3O7. It is apparent from the figure that the spectra of the samples are similar to each other. In particular, the spectrum of 1S10 is the same as that of 1NS10, which means that the local structure of Ti did not depend on whether SiAlON was doped or not. The peak intensities of the samples at around 4967 eV lie between those of Sr2TiSi2O8 and those of K2Ti4O9 or Na2Ti3O7. This indicates that the Ti in the x = 10 glasses is 5-coordinated. The coordination number of Ti in a gel glass heated at 400°C has been reported to be 5, with [TiO5] pyramids formed. 18) The spectra show that similar [TiO5] structures formed during the sintering and remained in the glasses after calcination at 1000°C. The peak height of 1S10 is somewhat higher than that of 1S9. In Fig. 5(c), the spectra of 1D, 3S9, and a few [TiO6] reference samples are shown. The spectrum of 1D is similar to that of FeTiO3. These results for the x = 10 samples indicate that the TiO6 units changed to TiO5 units upon calcination and that the TiO5 units increased with increasing calcination temperature. The spectrum of 3S9 is similar to those of anatase and rutile. The x = 30 glass contained more anatase than the x = 10 glass, indicating that the pre-edge spectrum was affected by the presence of anatase. Figure 6 shows the XANES Eu LIII-edge spectra of the x = 10 samples, the SiAlON powder, and the reference samples EuCl2 and Eu2O3. The height was normalized in a narrow region because the spectra assigned to the Eu LII-edge and LI-edge absorptions appear at higher energies. A comparison of the absolute heights is thus difficult; however, the relative ratio of the two peaks at around 6974 and 6982 eV can be compared. The peaks are at the same positions as those of EuCl2 and Eu2O3, respectively, and are assigned to Eu2+ and Eu3+ ions. From Fig. 6, the SiAlON powder contained both Eu2+ and Eu3+ ions, and this ratio is known to affect the PL properties. 19) The ratio of Eu2+ to Eu3+ in our samples is higher than that of the SiAlON powder, whereas that of 1D is the same as that of 1S9 but lower than that of 1S10. This means that the Eu3+ ions in the SiAlON powders were partially reduced by the sol–gel process and that the reduction proceeded further upon sintering at 1000°C. α-SiAlON has a cage structure, 20) and the Eu ions are incorporated in the α-SiAlON lattice. 8) The source materials contained TiCl4, and the Cl− ions caused the Eu3+ ions to be partially reduced during the sol–gel reaction; the Eu ions hardly changed when subjected to temperatures from 120 to 900°C, although the coordination number of Ti changed from 6 to 5. The Eu3+ was additionally changed to Eu2+, while the Ti coordination number did not change, as the sintering temperature increased from 900 to 1000°C. Figure 7 shows the PL and PLE spectra of the SiAlON and glass powders. The PLE and PL spectra were normalized by the highest intensity of the PLE. The broad PL spectrum between 500 and 780 nm, which peaks at around 580 nm, is caused by SiAlON luminescence. 8)
The PL intensity of the glasses sintered at 1000°C is smaller than that of the glasses sintered at 900°C; in particular, 3S10 showed the lowest PL intensity. The change in the Eu2+ to Eu3+ ratio shown in Fig. 6 must contribute to the deterioration of the PL. This suggests that SiAlON deteriorated at the 1000°C sintering temperature and that the deterioration increased with increasing TiO2 concentration. The PLE spectrum of the SiAlON powder is broad, and the PLE band was narrowed both by the sintering itself and by the increase in sintering temperature. The PLE band narrowing might be related to the absorption of the glasses.
The QE of the glasses sintered at 900°C is plotted in Fig. 8. The QE value of the SiAlON powders was measured from the surface of the powders packed in a box by the same method as for the bulk samples and is shown as a dotted line. The QE value of silica glass dispersed with 5 mass % SiAlON 5) was measured and plotted at x = 0 mol % as a reference. The QE values of the titanosilicate glasses are larger than those of the source powders and the silica glass, indicating that the QE was improved by packing in the glasses. One reason for this may be the difference in refractive indices between the phosphors and the packaging materials: a large difference in refractive index causes significant light scattering at the interfaces between the phosphor powders and the packaging material, and the refractive indices of the titanosilicate glasses are higher than those of air or silica glass. The measured refractive index of the x = 10 glass was 1.482, which is lower than the value of 1.53 calculated with Appen's equation, 21) possibly because the glass contained some bubbles and the TiO2 sources might have partially evaporated; nevertheless, it is higher than that of silica glass, 1.45. The refractive index of SiAlON ranges from 1.855 to 1.897, depending on the composition. 22) Thus, the QE of the titanosilicate glasses increased with the addition of TiO2. However, the QE of the x = 30 glass, whose refractive index must be higher than that of the x = 10 glass although it could not be measured, was smaller than that of the x = 10 glass. Anatase crystals formed in this glass, which collapsed into small pieces and showed colorization, as shown in Fig. 1; the crystals or the many cracks in the samples might cause light scattering, and the color of the packaging material may also have affected the optical properties.
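For orientation, the additive (Appen-type) estimate quoted above can be reproduced with a simple mole-fraction average; the partial refraction factors used here (about 1.458 for SiO2 and 2.2 for TiO2) are assumed illustrative values, not the coefficients of ref. 21:

def appen_index(x_tio2, n_sio2=1.458, n_tio2=2.2):
    # Crude additive estimate for xTiO2-(100 - x)SiO2 glasses;
    # the partial factors are assumed illustrative values.
    x = x_tio2 / 100.0
    return x * n_tio2 + (1.0 - x) * n_sio2

print(appen_index(10))  # ~1.53, close to the calculated value in the text
print(appen_index(30))  # suggests a considerably higher index for x = 30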
Conclusions
We investigated the ability of titanosilicate glass prepared via the sol–gel method to disperse Ca-α-SiAlON:Eu2+ phosphor, aiming at white-LED applications. Silica gels and glasses containing 10 and 30 mol % TiO2 were obtained. SiAlON was well dispersed in the glasses sintered at 900°C without phosphor deterioration. However, the glasses turned black upon sintering at 1000°C. On the basis of the diffuse reflectance spectra, this change in color is thought to originate from the formation of Ti3+. From the XANES spectra at the Ti K-edge, it was found that 5-coordinated Ti formed on sintering in the 10 mol % TiO2 glass, whereas 6-coordinated Ti formed in the 30 mol % TiO2 glass. The Eu LIII-edge spectra showed that the ratio of Eu2+ to Eu3+ increased through the sol–gel process and upon sintering at 1000°C. This suggests that the gels might have been sintered in a reducing atmosphere, resulting in the partial reduction of Ti and the black coloration of the glasses. The quantum efficiency of the titanosilicate glass was higher than those of both the original powder and the silica glass because of the decrease in the difference between the refractive indices of the phosphors and the glasses. Control of the refractive index of packaging materials such as glasses is thus shown to be important for obtaining improved optical properties. | 2019-04-04T13:06:54.661Z | 2013-04-01T00:00:00.000 | {
"year": 2013,
"sha1": "2360eb2330820ce322d43bf083b60d093816c5b5",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/jcersj2/121/1412/121_JCSJ-GR12226/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "0f80cd0f0e04897bcc875d7af0c3eb35c171fe04",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
209371455 | pes2o/s2orc | v3-fos-license | Pan-European groundwater to atmosphere terrestrial systems climatology from a physically consistent simulation
Applying the Terrestrial Systems Modeling Platform, TSMP, this study provides the first simulated long-term (1996–2018), high-resolution (~12.5 km) terrestrial system climatology over Europe, which comprises variables from the groundwater across the land surface to the top of the atmosphere (G2A). The data set offers an unprecedented opportunity to test hypotheses related to short- and long-range feedback processes in space and time between the different interacting compartments of the terrestrial system. The physical consistency of the simulated states and fluxes in the terrestrial system constitutes the uniqueness of the data set: while most regional climate models (RCMs) have a tendency to simplify the soil moisture and groundwater representation, TSMP explicitly simulates fully 3D soil and groundwater dynamics, closing the terrestrial water cycle from G2A. As anthropogenic impacts are excluded, the dataset may serve as a near-natural reference for global change simulations including human water use and climate change. The data set is available as netCDF files for the pan-European EURO-CORDEX domain.
Background & Summary
One of the main impacts of climate change highlighted by the 5th IPCC assessment report (AR5) is "the amplification of temperature extremes by changes in soil moisture" 1-3, via a positive feedback mechanism that intensifies and increases the frequency of heat waves given the projected increase in summer drying conditions. The associated processes of the terrestrial water and energy cycle result from the interactions between the subsurface, the land surface and the atmosphere. These processes are essential to reproduce, predict and project climatic extreme events in simulations 4,5. Because water transport and runoff have historically been treated in a simplified way in most land surface models (LSMs), combined with free-drainage lower boundary conditions in the subsurface, soil moisture states and fluxes and the interactions between groundwater and soil moisture are biased, with multiple impacts especially in areas with shallower groundwater, e.g., on the land-atmosphere coupling and the reproduction of extremes such as heat waves 6. Recent LSM improvements for regional climate models (RCMs) 7,8 can lead to physically consistent interactions between the groundwater, the vadose zone, and the land surface. Yet, many RCMs and global climate models (GCMs) contributing to CMIP5 3 or CORDEX 9 still simplify these interactions. The AR5 acknowledges that the spread in regional climate projections over Europe is still substantial, due to large uncertainties related to natural heterogeneity and chaotic processes, but also due to inherent model structural deficiencies in fully representing two-way, non-linear feedbacks across the terrestrial system.
The dynamic feedbacks between the interacting compartments of the terrestrial system have been studied previously, and these studies corroborate the added value of coupling the respective compartment models to improve simulations 7 and forecasts. Research has focused on the interactions of the soil moisture state and the atmosphere 10. In addition, the sensitivity of land surface fluxes to the depth of the groundwater table has been demonstrated 11, particularly for a critical water table depth range that depends on soil heterogeneity and land use type. Similar effects of water table depths on land surface fluxes have been found 5 using an idealized simulation set-up to infer the effects of topography, land cover, atmospheric forcing and subsurface heterogeneity. A study 12 on the feedback between groundwater table depth and energy fluxes under changing climate conditions showed that such interactions depend on the prevailing hydrological conditions (energy limited versus moisture limited).
Unlike previous approaches, in which groundwater dynamics usually do not interact with the atmosphere, terrestrial models such as the Terrestrial Systems Modeling Platform (TSMP) can provide a fully coupled representation of the terrestrial water and energy cycles. The impact of the representation of groundwater in regional climate simulations has been demonstrated in a number of studies 13,14, from the catchment to the continental scale. Previous TSMP simulations over Europe concentrated on the 2003 heat wave, showing a significant impact of groundwater states and the related land-atmosphere feedbacks 15, and demonstrating far-reaching impacts of human water use beyond the local scale through atmospheric moisture transport 16. However, no physically consistent climatology of the coupled terrestrial hydrologic and energy cycles from the groundwater into the atmosphere is currently available.
In this study, TSMP is run for the European CORDEX domain 17 as a first step to establish a terrestrial systems climatology for the past decades, with a focus on a physically consistent representation of variably saturated groundwater and overland flow coupled with land surface and atmospheric processes. The dataset we present features daily simulation results since January 1989, but as some grid cells only reach groundwater equilibrium in 1995, the period applicable for analysis consists of 22 water years, from September 1996 to August 2018, of all essential variables describing the terrestrial water and energy cycles (Online-only Table 1). The TSMP-G2A data set is a valuable, innovative data set for analyzing and understanding the mechanisms and interactions of water and energy in the terrestrial system, including extreme events such as heat waves and droughts.
Data generation method - the Terrestrial Systems Modeling Platform (TSMP). The Terrestrial System Modeling Platform, TSMP 13,18 version 1.1, is a scale-consistent, fully coupled regional Earth system model comprising the numerical weather prediction model COSMO version 5.01 19, the land surface model CLM version 3.5 20, and the 3D variably saturated subsurface and surface flow model ParFlow 21. The COnsortium for Small Scale MOdelling (COSMO) model system is used by several meteorological services for operational numerical weather prediction (NWP) 19 and for climate change research as the COSMO Climate Limited-area Modelling system (CCLM) 25. COSMO is a non-hydrostatic limited-area atmospheric model based on the primitive thermo-hydrodynamic Euler equations without scale-dependent approximations, describing fully compressible flow in a moist atmosphere 19. In COSMO, both adiabatic transport processes and diabatic processes, such as radiation, turbulence, cloud formation and precipitation, are included. The prognostic atmospheric variables in this study encompass pressure, horizontal and vertical wind components, temperature, water vapour, cloud water, cloud ice, rain, snow and the turbulent kinetic energy. At land grid points, additional diagnostic variables, such as the 2 m air temperature and humidity and the 10 m wind, are provided.
In TSMP, the lower boundary information for COSMO is provided by the widely used Community Land Model (CLM). This boundary condition is composed of the surface albedo, upward longwave radiation, sensible heat flux, latent heat flux, water vapor flux, and zonal and meridional surface stresses required by the atmospheric model 20. These variables are determined by diverse eco-hydrological processes simulated by CLM, such as root water uptake and transpiration by plants. In turn, CLM receives the short- and long-wave radiation, near-surface temperature, barometric pressure, specific humidity, wind speeds, and precipitation from COSMO at each grid point.
In TSMP, the surface water and groundwater flow are calculated by ParFlow. In the coupling, CLM provides the sources and sinks of soil moisture to ParFlow; these are the precipitation throughfall and the depth-differentiated (root) water uptake from evapotranspiration. In turn, in order to calculate the land surface water and energy balances, CLM receives from ParFlow the spatially distributed soil moisture and soil matric potential, which are calculated by ParFlow based on the Richards equation and the appropriate initial and boundary conditions in a continuum approach 21,23. Surface runoff is calculated by a kinematic wave equation in ParFlow 26. This leads to a dynamic coupling of land surface processes and 3D variably saturated groundwater flow, accounting for 3D heterogeneity in soil and hydrogeologic hydraulic properties.
The component models are coupled using OASIS3-MCT 13 , following a Multiple Program Multiple Data (MPMD) paradigm in an efficient parallel approach for massively parallel supercomputer environments 18 . Hence all simulations of this study are based on TSMP in fully coupled mode including ParFlow, CLM and COSMO.
Model setup. The model is set up over the European continent (Fig. 1), using a rotated latitude-longitude model grid with a horizontal resolution of 0.11° (12.5 km; termed the "EUR-11" grid) from the COordinated Regional Downscaling EXperiment (CORDEX) project 17,27,28, to ensure consistency in comparison with the ensemble of CORDEX RCM experiments (Table 1).
In this setup, CLM has 10 soil layers with a total depth of 3 m. These layers correspond to the 10 top layers of ParFlow, which has 5 additional layers with thickness increasing towards the bottom of the model domain, reaching a total depth of 57 m to represent most of the active aquifers in Europe 29. Deeper multi-story confined aquifer systems and very deep basin flow at time scales of hundreds to thousands of years are not accounted for in the model. Along the coastlines, the boundary condition is defined as a constant hydraulic pressure with a hydrostatic profile based on a shallow water table 0.05 m below the land surface. The topography in ParFlow is represented by D4 slopes calculated from the USGS GTOPO30 digital elevation data set, and the terrain-following grid transform 30 with variable vertical discretization is used in order to improve the simulations for large topographic gradients and coarse lateral resolutions. The time step for ParFlow and CLM is 15 minutes, while COSMO runs with a 60 second time step. The coupling frequency between the component models is 15 minutes, using averaged values from COSMO.
The hydraulic conductivity parameters for ParFlow are estimated 31 based on the soil texture taken from the Food and Agriculture Organization (FAO) database 32. Fifteen different soil types condition the permeability: e.g., grid cells with a soil dominated by clay have a vertical permeability of 0.062 m/hr, while for those composed mainly of sand a vertical permeability of 0.27 m/hr is defined. The horizontal permeability values are scaled by a factor of 1000, following the scaling of hydraulic parameters with grid resolution that results from the loss of information on terrain curvature as a consequence of spatial aggregation 33. The land cover data from the Moderate Resolution Imaging Spectroradiometer (MODIS 34) are used to define the plant functional types (PFT) for CLM. The leaf area index, the stem area index, and the monthly bottom and top heights of each PFT are calculated based on the global CLM surface data set 20. The COSMO model configuration resembles the settings of the CCLM community (https://www.clm-community.eu/).
In order to initialize the model with regard to land surface and subsurface hydrologic and energy states, a dynamic hydrologic equilibrium with the atmosphere must be obtained via a spinup of the model system (Fig. 2, top right boxes). In the spinup, the groundwater-land surface subsystem was simulated using ParFlow-CLM with a 1979–1989 climatologic atmospheric forcing derived from the ERA-Interim 35 reanalysis. This forcing consists of an annual time series of 6-hourly time steps at each grid point, averaged over 11 years (1979–1989). The reanalysis data were retrieved from the European Centre for Medium-Range Weather Forecasts (ECMWF) MARS archive. The ERA-Interim variables specific humidity, air temperature, 10 m wind speed, precipitation, long- and short-wave radiation, as well as the geopotential height at 0.7° lateral resolution at the lowest ERA-Interim model level, were resampled to the EUR-11 grid of TSMP using the COSMO "int2lm" pre-processing software (Fig. 2, top left boxes). A stable dynamic equilibrium with regard to, e.g., soil moisture and groundwater states was achieved after running the ParFlow-CLM model system in a closed loop for 20 cycles, one cycle being a one-year simulation (January to December) driven by the above-mentioned data set. After 20 cycles, the surface and subsurface model states converge; these then constitute the initial surface and subsurface conditions for the fully coupled simulation starting from 1989-01-01. The model initialisation in 1989 makes the simulations compatible with the experiment protocol of the EURO-CORDEX RCM ensemble experiments. The ERA-Interim reanalysis data are also used for the COSMO model atmospheric initial and lateral boundary conditions for the EUR-11 domain throughout the fully coupled model simulations. In order to update the lateral boundaries more frequently than the available 00 and 12 UTC analyses, additional 3, 6, 9, 15, 18 and 21 UTC forecasts from ERA-Interim were estimated by linear interpolation via the MARS retrieval system of ECMWF to inform the model every 3 h along the boundaries. TSMP is run transient from January 1989 to August 2018, with monthly restarts and no re-initializations of any compartment. Nudging is not used, in order to let feedback processes evolve freely.
The simulation workflow (Fig. 2) commences with the extraction of initial conditions (IC) and boundary conditions (BC) from ECMWF, followed by the pre-processing of the COSMO inputs with int2lm, as a driver for the spin-up runs and as forcing data for the main TSMP simulations. The model spin-up of 20 cycles (years) is performed once before the actual climate simulations with TSMP (run TSMP in Fig. 2) can be launched. Concurrently with the model runs, TSMP outputs are continuously post-processed, analysed, visualized and stored at the Jülich Supercomputing Centre, following a data-centric simulation and processing paradigm in which data movement is kept to a minimum.
The raw outputs from the three component models are archived at 3 h frequency in monthly netCDF files. In addition to data format conversion (binary ParFlow files to netCDF) and the reduction of the number of files by merging the data into monthly files, post-processing also includes temporal aggregation, calculating daily, monthly and seasonal averages, primarily using the Climate Data Operators (CDO, available at http://www.mpimet.mpg.de/cdo). Boundary relaxation zones are removed from each side of the domain. In order to efficiently exchange data and as a means of data provenance tracking, the final outputs are made as compliant as possible with the CORDEX Archive Design, which in turn is derived from the CMIP specifications. In the process of "CMORization", data are stored using a predefined Data Reference Syntax (DRS) for the paths and filenames, with defined metadata per variable as well as global attributes describing the experiment. This ensures re-usability and interoperability.
Data records
The data set is available as netCDF V3 files without compression and is stored in a persistent data repository at the Jülich Supercomputing Centre 36, as well as at PANGAEA 37. The spatial resolution and grid specification correspond to the EURO-CORDEX EUR-11 domain, according to the CORDEX data protocol specification (Version 3.1, 3 March 2014, http://is-enes-data.github.io/cordex_archive_specifications.pdf), with 424 × 412 grid elements on the rotated 0.11° grid. We provide time series of daily means, aggregated into yearly files. The file names are structured according to the Data Reference Syntax as defined by the EURO-CORDEX archive design: <variable>_<spatial resolution>_<boundary conditions dataset>_<period>_<run identification>_<institute>_<model>_<data version>_<time step>_<initial time step>_<final time step>.
For example, the file clw_EUR-11_ECMWF-ERAINT_evaluation_r1i1p1_FZJ-IBG3-TSMP11_v1_day_20070101-20071231.nc contains the vertically integrated cloud ice variable, at the EUR-11 resolution (12.5 km). The run used ECMWF ERA-Interim data, ensemble member r1i1p1, as boundary conditions during the evaluation period, performed at the Research Centre Jülich (FZJ) at the IBG-3 Institute, using TSMP version 1.1. It corresponds to the first data-set version (v1), and the file contains daily values between 2007-01-01 and 2007-12-31. Each self-describing file also contains the definition of the geographical coordinate system of the grid (latitudes, longitudes and rotated pole).
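Since the DRS fields are underscore-separated (with institute/model and the date range fused by hyphens in practice), the file names can be parsed programmatically; a sketch with illustrative field names:

def parse_drs(filename):
    # Split a TSMP-G2A file name into its Data Reference Syntax fields.
    # Institute/model and the date range are single hyphen-joined tokens
    # in this data set, so a plain underscore split yields nine fields.
    stem = filename.rsplit(".", 1)[0]
    keys = ["variable", "resolution", "forcing", "period", "run",
            "institute_model", "version", "frequency", "dates"]
    meta = dict(zip(keys, stem.split("_")))
    meta["start"], meta["end"] = meta["dates"].split("-")
    return meta

print(parse_drs("clw_EUR-11_ECMWF-ERAINT_evaluation_r1i1p1_"
                "FZJ-IBG3-TSMP11_v1_day_20070101-20071231.nc"))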
Limitations. Because the simulation was run transient after initialisation on 1989-01-01, without re-initializations, the model solution is expected to diverge from the forcing data as well as from observations at the event scale. However, the regional anomalies compared to reference observational datasets show that the model captures the system dynamics and the succession of dry/wet years, as well as heat waves and cold spells (Figs. 3 and 4). Furthermore, the 12.5 km resolution is not high enough to explicitly resolve convection and the development of convective precipitation 38,39, or the hydrology of smaller headwater catchments, including flash-flood-prone watersheds.
Technical Validation
The current experiment was designed to produce a near-natural climatology of the physical states of the terrestrial system, without the influence of, e.g., human water use. Accordingly, no real-world measurements are available for this type of system state for comparison and validation. Nevertheless, Figs. 3 and 4 show that the model reproduces the succession of warm/cold and wet/dry seasons on the regional scale for the PRUDENCE analysis regions 40 (boxes in Fig. 1), compared to one of the commonly used reference datasets for temperature and precipitation, the 0.25° E-OBS dataset. The total column water storage simulated by TSMP was assessed by comparison with the observations of GRACE 42 (Fig. 5). The total column water storage S_{i,j} (L) from the land surface to the bottom of the aquifer constitutes an integrated measure of water resources, calculated as 43

S_{i,j} = \sum_{k=1}^{nz} sat_{i,j,k} \, por_{i,j,k} \, dz_k ,

where sat_{i,j,k} is the relative saturation (−) and por_{i,j,k} the porosity (−) for a pixel with indices i, j, k in the lateral and vertical directions, respectively, dz_k is the vertical extent of a grid cell (L), and nz is the number of grid cells in the vertical direction. Monthly anomalies were calculated for each pixel and then aggregated within each PRUDENCE region. Figure 5 shows that the column water storage simulated by TSMP is in good agreement with the GRACE data in most PRUDENCE regions, but there are discrepancies in the Alpine region (AL) as well as in Scandinavia (SC). The model's capability of reproducing water storage and water table depth (WTD) has been discussed in previous studies focusing on the European heat wave of 2003 15,16. The overall WTDs simulated with TSMP are comparable to the WTD composite of an observation-based global gridded model of WTD 44, with large-scale patterns following the terrain, representing shallow WTDs along the coastlines and in arid valleys, as well as inundated wetlands in lowland regions, e.g., the Netherlands 15.
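In array form, the storage sum is a weighted reduction over the vertical axis; a numpy sketch (array names and shapes are illustrative; nz = 15 layers in this setup):

import numpy as np

def column_storage(sat, por, dz):
    # S[i, j] = sum_k sat[i, j, k] * por[i, j, k] * dz[k], in units of length.
    # sat and por have shape (ny, nx, nz); dz has shape (nz,) and broadcasts
    # along the last (vertical) axis.
    return np.sum(sat * por * dz, axis=-1)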
Usage Notes
As with most climate data in netCDF, a quick visualization can easily be achieved with any netCDF viewer such as ncview (http://meteora.ucsd.edu/~pierce/ncview_home_page.html); the metadata of a file are best viewed using the ncdump command. A list of software tools and libraries for using netCDF is available from the developers of the netCDF format at UCAR (https://www.unidata.ucar.edu/software/netcdf/software.html). The CDO software is a collection of operators for the standard processing of climate model data and can be used directly with this dataset, taking into account the spatial reference of the data as well as the temporal information. Its application is straightforward and allows a wide range of calculations, from space-time aggregation to sophisticated climate index calculations. For more personalized analysis and visualization, we also recommend using Python with specific libraries such as Pandas or xarray. Code created specifically for post-processing TSMP is also available with the TSMP release version.
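As a concrete starting point, a short Python/xarray sketch (the file name is the example given above; the variable name "clw" and the rotated-grid dimension names "rlat"/"rlon" are assumptions about the file contents):

import xarray as xr

# open one yearly file of daily means on the rotated EUR-11 grid
ds = xr.open_dataset("clw_EUR-11_ECMWF-ERAINT_evaluation_r1i1p1_"
                     "FZJ-IBG3-TSMP11_v1_day_20070101-20071231.nc")

clw = ds["clw"]                                    # daily means for 2007
monthly = clw.resample(time="1MS").mean()          # monthly aggregation
domain_mean = monthly.mean(dim=("rlat", "rlon"))   # rotated-grid dims assumed
print(domain_mean.values)

The CDO one-liner "cdo monmean in.nc out.nc" performs the same monthly aggregation directly on the file.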
Code availability
Stable release versions of TSMP are provided through a git development repository available at the model's website (https://www.terrsysmp.org). The release version includes extensive instructions for installing the system, including sample reference test cases for typical application examples, as well as a suite of pre-processing and post-processing tools. TSMP is essentially released without its component models, i.e., the release contains the build system, all configuration files (such as namelists) for the sample cases, the component model code patches and all coupler-related modifications. The user must download the component models from their respective separate repositories: all ParFlow releases are available via GitHub (https://github.com/parflow/parflow), and the official CLM website (http://www.cgd.ucar.edu/tss/clm/distribution/clm3.5/index.html) offers all links to the CLM 3.5 distribution.
Fig. 4 (caption): Seasonal anomalies of the simulated (EVAL) precipitation (mm) compared to the E-OBS v19 dataset and ERA-Interim over each PRUDENCE region; as in Fig. 3, but for precipitation instead of temperature.
Fig. 5 (caption): Storage anomalies calculated for the entire soil column, including soil water storage (ss) and aquifer storage (as), shown as (ss + as) and together with surface water storage (ss + as + sws), over the period covered by the GRACE mascon data set (2003–2011), for each PRUDENCE region (Fig. 1). | 2019-12-16T15:04:41.855Z | 2019-07-15T00:00:00.000 | {
"year": 2019,
"sha1": "3f2a7b31631fd302339953807769999ad75041f5",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41597-019-0328-7.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3f2a7b31631fd302339953807769999ad75041f5",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science",
"Medicine"
]
} |
238533414 | pes2o/s2orc | v3-fos-license | Extracorporeal Cardiac Shock Waves Therapy Improves the Function of Endothelial Progenitor Cells After Hypoxia Injury via Activating PI3K/Akt/eNOS Signal Pathway
Background: Extracorporeal cardiac shock waves (ECSW) have great potential in the treatment of coronary heart disease. Endothelial progenitor cells (EPCs) are a class of pluripotent progenitor cells derived from bone marrow or peripheral blood, which have the capacity to migrate to ischemic myocardium and differentiate into mature endothelial cells and which play an important role in neovascularization and endothelial repair. In this study, we investigated whether ECSW therapy can improve EPC dysfunction and apoptosis induced by hypoxia and explored the underlying mechanisms. Methods: EPCs were separated from ApoE gene knockout rat bone marrow and identified using flow cytometry and fluorescence staining. EPCs were used to produce in vitro hypoxia-injury models, which were then divided into six groups: Control, Hypoxia, Hypoxia + ECSW, Hypoxia + LY294002 + ECSW, Hypoxia + MK-2206 + ECSW, and Hypoxia + L-NAME + ECSW. EPCs from the Control, Hypoxia, and Hypoxia + ECSW groups were used in mRNA sequencing reactions. mRNA and protein expression levels were analyzed using qRT-PCR and western blot analysis, respectively. Proliferation, apoptosis, adhesion, migration, and angiogenesis were measured using CCK-8 assays, flow cytometry, gelatin-coated plates, transwell assays, and tube formation assays, respectively. Nitric oxide (NO) levels were measured using an NO assay kit. Results: Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis showed that the differentially expressed genes were enriched in cancer signaling, PI3K-Akt signaling, and Rap1 signaling pathways. We selected differentially expressed genes in the PI3K-Akt signaling pathway and verified them in a series of experiments. The results showed that ECSW therapy (500 shots at 0.09 mJ/mm2) significantly improved the proliferation, adhesion, migration, and tube formation abilities of EPCs following hypoxic injury, accompanied by upregulation of p-PI3K, p-Akt, p-eNOS, and Bcl-2 protein and of NO, PI3K, and Akt mRNA expression, and downregulation of Bax and Caspase3 protein expression. All these effects of ECSW were eliminated by inhibitors specific to PI3K (LY294002), Akt (MK-2206), and eNOS (L-NAME). Conclusion: ECSW exerted a strong reparative effect on EPCs suffering hypoxia injury, inhibiting cell apoptosis and promoting angiogenesis mainly through activation of the PI3K/Akt/eNOS signaling pathway, which provides new evidence for ECSW therapy in CHD.
INTRODUCTION
Coronary heart disease (CHD) is the leading cause of death in adults worldwide (1, 2). Existing drug therapy, percutaneous coronary intervention (PCI), and coronary artery bypass grafting (CABG) have greatly improved the symptoms and prognosis of most patients with CHD. However, there is still a need to prevent myocardial ischemia and improve the quality of life of patients who cannot tolerate surgery or who continue to experience angina pectoris after receiving optimal drug or surgical treatment (3, 4). The recently developed extracorporeal cardiac shock wave (ECSW) therapy is a new non-invasive treatment for CHD. Its safety and efficacy have been demonstrated in animal models and in clinical trials (5–8). Current research suggests that, through tissue cavitation, ECSW produce a series of biochemical effects, including shear stress on cell membranes (9), increased endothelial nitric oxide synthase (eNOS) and nitric oxide (NO) synthesis, and upregulation of vascular endothelial growth factor (VEGF), which attenuate cell apoptosis and inflammatory responses and induce angiogenesis (10–12). Another potential cellular mechanism may involve ECSW inducing endothelial progenitor cells (EPCs) to home to ischemic sites, exerting pro-angiogenic, anti-oxidative-stress, anti-inflammatory, and anti-fibrotic effects (13–15). However, the exact mechanism through which ECSW stimulate angiogenesis and improve myocardial function is still unknown.
EPCs are a class of pluripotent progenitor cells derived from bone marrow or peripheral blood. Studies have shown that EPCs can migrate from the bone marrow to the ischemic myocardium, where they differentiate into mature endothelial cells and participate in the repair of the vascular endothelium and in neovascularization at the injured site (16–18). Therefore, EPCs are ideal candidates for angiogenesis. Recent studies have found that the number and function of EPCs are impaired in patients with ischemic heart diseases such as coronary heart disease, indicating that CHD not only directly weakens the function of the vascular endothelium but also retards the endothelial repair process mediated by EPCs (19, 20). Consequently, ECSW treatment to improve EPC function in patients with CHD may be an effective new strategy for preventing and treating ischemic heart disease.
In the past, it has been found that the combination of ECSW with intracoronary administration of autologous bone marrow-derived EPCs showed convincing results in improving left ventricular ejection fraction in patients with chronic heart failure (21). Other evidence has shown that ECSW therapy can increase the recruitment and homing of endogenous EPCs derived from autologous bone marrow to the damaged ischemic myocardium, promote angiogenesis, and improve myocardial ischemia (22). These results suggest that ECSW have great potential for activating EPCs in vivo, but the mechanism by which ECSW improve EPC function is still poorly understood. In this study, we focused on the effects of ECSW on EPCs in a hypoxic-ischemic microenvironment, and then used bioinformatics analyses and pathway validation to investigate the underlying mechanism, in order to gain further understanding of ECSW therapy for CHD.
Isolation and Culturing of EPCs
All procedures and protocols involving animals were approved by the Institutional Animal Care and Use Committee of the First Affiliated Hospital of Kunming Medical University (Yunnan, China) and performed in accordance with the Guide for the Care and Use of Laboratory Animals (Animal Ethics No. Kmmu2021109). The main pathological basis of coronary heart disease (CHD) is the development of atherosclerosis. It has been found that ApoE gene knockout rats spontaneously develop hyperlipidemia and atherosclerosis on a normal diet, similar to the human atherosclerotic pathological process (23). On this basis, EPCs were isolated from ApoE gene knockout rat bone marrow to more closely model the atherosclerotic pathology of CHD. EPCs were isolated from the bone marrow of 4-week-old ApoE gene knockout rats via density-gradient centrifugation using Ficoll separating solution (Solarbio, Beijing, China). The EPCs were resuspended in Endothelial Cell Growth Medium (EGM-2 MV; Lonza, Walkersville, Maryland, USA) containing 10% fetal bovine serum and cultivated at 37°C and 5% CO2. After 3 days of culturing, non-adherent cells were removed by washing with phosphate-buffered saline (PBS), and fresh medium was added every 3-4 days. The 2nd-6th passage cells were detached using trypsin and collected for further experiments.
Identification of EPCs
Following 14 days of culturing, the cells were incubated with 20 µg/ml Dil-acetylated low-density lipoprotein (Dil-Ac-LDL; Maokang Biotechnology, Shanghai, China) at 37°C and 5% CO2 for 4 h and then fixed at room temperature for 20 min using 4% paraformaldehyde, before being incubated with 10 µg/ml FITC-labeled Ulex europaeus agglutinin I (FITC-UEA-I; Maokang Biotechnology, Shanghai, China) at room temperature for 1 h. After staining and mounting with DAPI (Solarbio Biotechnology, Beijing, China), the slides were observed under a fluorescence microscope. Dual-stained cells (positive for both Dil-Ac-LDL and FITC-UEA-I) were identified as EPCs. Furthermore, the EPC marker profile was investigated using flow cytometry. EPCs were collected, digested, and fixed with 4% paraformaldehyde at room temperature for 20 min, then centrifuged at 1,000 rpm for 5 min and washed once with PBS. The cells were blocked at room temperature for 15 min using 5% BSA (Solarbio Biotechnology, Beijing, China), centrifuged at 1,000 rpm for 5 min, and incubated overnight at 4°C with 1:100 dilutions of either rabbit anti-CD34 antibody (Bioss Biotechnology, Beijing, China), rabbit anti-CD133 antibody (Bioss Biotechnology, Beijing, China), rabbit anti-CD31 antibody (Bioss Biotechnology, Beijing, China), or rabbit anti-VEGFR2 antibody (Bioss Biotechnology, Beijing, China) to allow hybridization. The cells were then centrifuged at 1,000 rpm for 5 min before being washed once with PBS and incubated with FITC-labeled goat anti-rabbit IgG antibody (Bioss Biotechnology, Beijing, China) at room temperature for 2 h. This was followed by centrifugation at 1,000 rpm for 5 min and washing with PBS. Finally, a cell suspension was prepared in PBS. Flow cytometry was used to detect the percentage of stained EPCs. The corresponding isotype controls were used as negative controls.
RNA-seq and Bioinformatics Analysis
For each cell sample, the preparation of tagged mRNA sequencing libraries, sequencing, and data analysis were performed by LC Sciences (China). EPCs were cultured and divided into 3 experimental groups (n = 3): (1) control, (2) hypoxia, and (3) hypoxia + ECSW. Total RNA was extracted using TRIzol reagent (Invitrogen, CA, USA). The total RNA quantity and purity were analyzed using a Bioanalyzer 2100 and the RNA 6000 Nano LabChip Kit (Agilent, CA, USA), with RIN numbers >7.0. Following purification, the mRNA was fragmented into small pieces using divalent cations. The cleaved RNA fragments were then reverse-transcribed to create the final cDNA library, following the mRNA-seq sample preparation kit (Illumina, San Diego, USA) protocol. The average insert size for the paired-end libraries was 300 ± 50 bp. Paired-end sequencing was performed on an Illumina HiSeq 4000 at LC Sciences, USA, following the vendor's protocol. Prior to assembly, low-quality reads (reads containing sequencing adaptors, reads containing sequencing primers, and nucleotides with quality scores lower than 20) were removed, leaving clean, paired-end reads.
Sample reads were aligned to the reference genome using the HISAT package, which initially removes a portion of the reads based on the quality information accompanying each read and then maps the remainder to the reference genome. HISAT allows multiple alignments per read (up to 20 by default) and a maximum of two mismatches when mapping reads to the reference. HISAT generates a database of potential splice junctions and confirms these by comparing the previously unmapped reads against the database of putative junctions. The mapped reads from each sample were assembled using StringTie, and all sample transcriptomes were then merged with Perl scripts to reconstruct a comprehensive transcriptome. After the final transcriptome was generated, StringTie and edgeR were used to estimate the expression levels of all transcripts, with StringTie quantifying mRNA expression as FPKM. Differentially expressed mRNAs and genes were selected based on |log2(fold change)| > 1 and p < 0.05.
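As a concrete illustration of this cutoff, the short Python sketch below filters a differential-expression results table. The file name and column headers ("log2FC", "pvalue") are assumptions for illustration only; the actual edgeR/StringTie output may use different labels.

```python
# Minimal sketch of the |log2(fold change)| > 1 and p < 0.05 selection.
import pandas as pd

def select_degs(results: pd.DataFrame) -> pd.DataFrame:
    """Keep transcripts passing the fold-change and p-value cutoffs."""
    mask = (results["log2FC"].abs() > 1) & (results["pvalue"] < 0.05)
    return results[mask]

degs = select_degs(pd.read_csv("edger_results.csv"))  # hypothetical file name
print(f"{len(degs)} differentially expressed transcripts")
```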
We performed GO enrichment analysis to predict the functions and mechanisms of the differentially expressed mRNAs and genes; significant GO terms were defined as those with P < 0.05. In addition, Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis was conducted using the KEGG Orthology-Based Annotation System (KOBAS) v3.0 software to predict the signaling pathways in which the mRNAs may participate. Signaling pathways with P < 0.05 were considered significant and included in the analysis.
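Enrichment p-values of this kind are commonly derived from a one-sided hypergeometric test (KOBAS supports related statistics, so this is one plausible reading, not a statement of the tool's exact internals). The sketch below shows the underlying calculation; the gene counts are placeholders, not values from this study.

```python
from scipy.stats import hypergeom

N = 20000  # annotated background genes (assumed)
K = 150    # background genes in one GO term / KEGG pathway (assumed)
n = 800    # differentially expressed genes tested (assumed)
k = 18     # DE genes falling in the term (assumed)

# P(X >= k) when drawing n genes from a pool of N that contains K members
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p = {p_value:.3g}")  # the term is significant if p < 0.05
```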
Proliferation Assays
Cell proliferation was evaluated using Cell Counting Kit-8 (CCK-8; Dojindo Co, Japan) assays, following the manufacturer's instructions. EPCs (1 × 10^3 per well) from each group were seeded into two 96-well plates. Ten microliters of CCK-8 solution was added to each well of the first plate, which was incubated for 2 h at 37 °C and 5% CO2, and the absorbance at 450 nm (OD450) was measured using an enzyme labeling apparatus (Molecular Devices, USA). After 24 h, the same procedure was applied to the second plate: 10 µl of CCK-8 solution per well, 2 h of incubation at 37 °C and 5% CO2, and measurement of OD450. The differences between the 24 h and 0 h absorbance values were used to determine the proliferation ability of EPCs in each group.
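The proliferation readout therefore reduces to a simple per-well subtraction, sketched below with invented OD450 values rather than measured data.

```python
import numpy as np

od_0h = np.array([0.31, 0.29, 0.33])    # replicate wells at baseline (placeholders)
od_24h = np.array([0.82, 0.85, 0.79])   # the same group's wells after 24 h (placeholders)

delta = od_24h - od_0h                  # per-well proliferation signal
print(f"mean delta OD450 = {delta.mean():.3f} ± {delta.std(ddof=1):.3f}")
```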
Apoptosis Assays
EPC apoptosis was detected using the Annexin V, FITC Apoptosis Detection Kit (Dojindo Co, Japan). EPCs from each group were digested, collected, washed twice with PBS, and resuspended in 500 µl of 1× Annexin V Binding Solution. One hundred microliters of the cell suspension was transferred to new tubes, combined with 5 µl of Annexin V-FITC and PI solution, and incubated for 15 min at room temperature in the dark. Then, 400 µl of 1× Annexin V Binding Solution was added, and the mixture was analyzed using flow cytometry within 1 h.
Adhesion Assay
2 × 10^4 cells from each EPC group were seeded into each well of a 6-well plate coated with 1% gelatin and incubated for 1 h at 37 °C and 5% CO2. Non-adherent cells were aspirated, and the wells were washed three times with PBS. Five random fields per group were examined, and the adherent cells in each field were counted under the microscope. The mean number of adherent cells was used to determine the adhesion ability of EPCs in each group.
Transwell Assay
EPCs in each group were collected, resuspended in EBM-2 medium, and 2 × 10^4 cells were added to each well in the upper chambers of transwell plates (Corning, New York, USA). EGM-2 MV medium containing 10% fetal bovine serum was added to the lower chambers to promote EPC migration. The transwell plate was incubated for 24 h at 37 °C and 5% CO2. The transwell chamber was then carefully removed, and the medium and the cells that had not passed through the membrane were washed out with PBS. The remaining cells were fixed with 4% paraformaldehyde for 20 min. The transwell chamber was stained with 1% crystal violet solution (Solarbio, Beijing, China) for 20 min and washed with PBS, and the cells that had not passed through the membrane were removed with a cotton swab. The side opposite to the seeded surface was photographed under an upright microscope.
Tube Formation Assay
The EPC tube formation assay was evaluated using Corning Matrigel Basement Membrane Matrix (Corning, New York, USA). The Matrigel matrix was melted overnight at 4 °C, added to a pre-cooled 96-well plate (50 µl/well), and incubated at 37 °C for 1 h to allow coagulation. EPCs in each group were seeded into the wells (2 × 10^4 cells/well) on top of the solidified Matrigel matrix. One hundred microliters of EGM-2 MV medium with 10% fetal bovine serum was added, and the plates were incubated for 8 h. Tube formation was quantified by counting sprouting microcapillary-like structures with lengths at least four times their width under an inverted microscope.
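The counting rule can be stated compactly: a sprouting structure is scored as a tube only if its length is at least four times its width. The sketch below applies that rule to invented segment measurements; in practice these values would come from image analysis of the micrographs.

```python
# Placeholder (length, width) measurements in µm, not real data.
segments = [(120.0, 18.0), (95.0, 30.0), (210.0, 40.0), (60.0, 22.0)]

tubes = [s for s in segments if s[0] >= 4 * s[1]]  # length >= 4 x width
print(f"{len(tubes)} of {len(segments)} structures counted as tubes")
```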
Quantitative Real-Time Polymerase Chain Reaction
The levels of PI3K and Akt in EPCs from each group were determined using real-time reverse transcription polymerase chain reaction. Total RNA from each group was extracted using TRIzol (Thermo Fisher Scientific, Waltham, Massachusetts, USA), purified, and converted into cDNA using the TaKaRa PrimeScript RT Reagent Kit with gDNA Eraser (TAKARA, Tokyo, Japan). An Applied Biosystems QuantStudio 6 Flex real-time fluorescence PCR instrument (Life Technologies) and the TaKaRa TB Green Premix Ex Taq II Kit (TAKARA, Tokyo, Japan) were used for amplification. Relative quantities of PI3K and Akt were normalized to that of the housekeeping gene GAPDH, and the relative changes in the expression of target genes were evaluated using the 2^(−ΔΔCt) method. The primers used for PI3K, Akt, and GAPDH are as follows:
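The 2^(−ΔΔCt) calculation itself is straightforward; the sketch below shows it with invented Ct values, using GAPDH as the normalizer as described above.

```python
def rel_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    d_ct_sample = ct_target - ct_gapdh              # delta Ct, treated sample
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl   # delta Ct, control
    return 2.0 ** -(d_ct_sample - d_ct_control)     # fold change vs. control

fold = rel_expression(ct_target=24.1, ct_gapdh=18.0,
                      ct_target_ctrl=25.3, ct_gapdh_ctrl=18.2)  # placeholder Ct values
print(f"relative expression = {fold:.2f}")  # about 2-fold with these numbers
```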
Measurement of Nitric Oxide Levels
Nitric oxide (NO) levels in the cells were measured using the NO assay kit (Solarbio, Beijing, China) according to the manufacturer's instructions. Cells in each group were digested and collected, 800 µl of extract was added, and the cells were disrupted by ultrasonication in an ice bath (power 300 W, 3 s pulses with 7 s intervals, total time 3 min). The lysates were centrifuged at 4 °C for 15 min at 12,000 rpm; the precipitate was discarded and the supernatant retained for measurement. One hundred microliters of the supernatant was transferred to new tubes, 20 µl of Reagent 1 was added, and the solution was vortexed to mix and placed in a 37 °C water bath for 60 min. Then 20 µl of Reagent 2 was added, vortexed to mix, and allowed to react for 5 min at room temperature. The mixture was centrifuged at 3,500 rpm for 10 min, and 100 µl of the supernatant was transferred to a 96-well plate, mixed with 100 µl of chromogenic solution, and incubated for 10 min at room temperature. Absorbance was measured with an enzyme labeling apparatus (Molecular Devices, USA) at 550 nm (OD550).
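Colorimetric NO kits of this type typically convert OD550 readings to concentration via a nitrite standard curve. The sketch below shows a linear standard-curve fit as one plausible implementation; the kit's exact calculation may differ, and all values are invented.

```python
import numpy as np

std_conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # µmol/L standards (assumed)
std_od = np.array([0.05, 0.12, 0.20, 0.36, 0.69])   # their measured OD550 (placeholders)

slope, intercept = np.polyfit(std_od, std_conc, 1)  # fit concentration as a function of OD
sample_od = np.array([0.28, 0.15, 0.22])            # placeholder group readings
print(np.round(slope * sample_od + intercept, 2), "µmol/L")
```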
Statistical Analysis
All results are expressed as means ± SD of at least three repeated experiments and were analyzed using GraphPad Prism 8.0 software (GraphPad Software, Inc., USA). One-way ANOVA followed by Tukey's post-hoc test was used for multiple-group comparisons, and the unpaired Student's t-test was used for two-group comparisons. P < 0.05 was regarded as statistically significant.
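For readers who prefer a scriptable equivalent of this workflow, the sketch below reproduces the same tests in Python; the replicate values are illustrative, not the study's data, and the paper's analysis was done in GraphPad Prism.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([1.00, 0.95, 1.05])   # placeholder replicates
hypoxia = np.array([0.52, 0.48, 0.55])
ecsw = np.array([0.81, 0.78, 0.85])

# One-way ANOVA across the three groups, then Tukey's post-hoc test
f_stat, p_anova = stats.f_oneway(control, hypoxia, ecsw)
values = np.concatenate([control, hypoxia, ecsw])
groups = ["control"] * 3 + ["hypoxia"] * 3 + ["hypoxia+ECSW"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))

# Unpaired Student's t-test for a two-group comparison
t_stat, p_t = stats.ttest_ind(control, hypoxia)
print(f"ANOVA p = {p_anova:.3g}, t-test p = {p_t:.3g}")
```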
Characterization of EPCs
Cells isolated from 4-week-old ApoE gene knockout rat bone marrow were cultured in EGM-2 MV medium containing 10% FBS. On day 3, non-adherent cells were removed, and adherent cells with oval or short spindle shapes were observed. On day 7, some cells changed their appearance and formed colonies. On day 14, the cells showed typical colony growth and exhibited a cobblestone-like morphology. On day 21, the cells exhibited a fusiform shape, with a morphology similar to that of endothelial cells (Figure 1A).
EPCs are characterized by their ability to take up Ac-LDL and bind UEA-I. In our experiment, after incubation with Dil-Ac-LDL and FITC-UEA-I, the cells stained positive for both markers under fluorescence microscopy (Figure 1B). Furthermore, the surface antigens of these cells were investigated using flow cytometry. Because it is difficult to identify EPCs by staining for a single surface marker, staining with a combination of surface markers (CD31, CD34, CD133, and VEGFR2) was used. Flow cytometry showed that most of the cells were positive for CD31, CD34, CD133, and VEGFR2 (Figure 1C). These basic characterizations indicate that EPCs were successfully isolated from ApoE gene knockout rat bone marrow.
Gene Ontology Analysis and KEGG Pathway Annotation of Differentially Expressed Genes
The differentially expressed mRNAs and the potential signaling pathways involved were analyzed using bioinformatics methods. Gene ontology (GO) analysis revealed that, compared with the control group, important GO terms in the hypoxia group were significantly enriched in the positive regulation of extracellular space, response to drug, and plasma membrane; the 20 most enriched GO categories are shown in Figure 2A. Compared with the hypoxia group, important GO terms in the hypoxia + ECSW group were significantly enriched in the positive regulation of extracellular space, cell surface, and integral component of plasma membrane; the 20 most enriched GO categories are shown in Figure 2B. To distinguish the biological pathways that became active in EPCs of the hypoxia and hypoxia + ECSW groups, we subjected the differentially expressed mRNAs to term enrichment analysis against the KEGG annotation to identify their possible targets. The results showed that several important pathways were enriched, including cancer signaling, PI3K-Akt signaling, and Rap1 signaling pathways. As shown in Figures 2C,D, the 20 most enriched pathways are presented for EPCs from both the hypoxia and hypoxia + ECSW groups.
ECSW Inhibit Apoptosis and Promote the Proliferation of EPCs Following Hypoxic Injury by Activating the PI3K/Akt/eNOS Signaling Pathway
The cell apoptosis assay using Annexin V-FITC and PI double staining (Figures 3A,B) showed that the percentage of apoptotic EPCs was significantly increased by hypoxic challenge (P < 0.05). After ECSW treatment, the apoptosis index was significantly lower than that in the hypoxia group (P < 0.05). The proliferation of EPCs in each group was detected using the CCK-8 assay. As shown in Figure 3C, hypoxia induced a significant decrease in cell proliferation compared with control cells (P < 0.05). ECSW treatment resulted in a significant increase in cell proliferation compared with the hypoxia group (P < 0.05). These results revealed that ECSW protected EPCs against hypoxic injury. However, pretreatment with the PI3K inhibitor, LY294002, the Akt inhibitor, MK2206, and the eNOS inhibitor, L-NAME, significantly attenuated the effects of ECSW in EPCs exposed to hypoxic conditions (P < 0.05).
ECSW Promote Adhesive, Migratory and Tube Formation Capacities of EPCs Following Hypoxic Injury by Activating the PI3K/Akt/eNOS Signaling Pathway
Cells adhering to the 1% gelatin-coated 6-well plates were quantified using microscopy; hypoxic challenge induced a significant decrease in the number of adherent EPCs compared with control cells (P < 0.05, Figures 4A,B). This impaired adhesion of EPCs exposed to hypoxia was improved following treatment with ECSW (P < 0.05, Figures 4A,B). In addition, pretreatment with the PI3K inhibitor, LY294002, the Akt inhibitor, MK2206, and the eNOS inhibitor, L-NAME, inhibited this effect of ECSW in EPCs exposed to hypoxia (P < 0.05, Figures 4A,B).
The in vitro migratory ability of EPCs was assessed as the ability of EPCs to invade the lower side of the transwell chamber. As shown in Figures 4C,D, the number of successfully migrated cells decreased when exposed to hypoxic conditions compared with control cells (P < 0.05). ECSW treatment had a beneficial effect on the migratory ability of EPCs exposed to hypoxic conditions (P < 0.05). However, pretreatment with the PI3K inhibitor, LY294002, the Akt inhibitor, MK2206, and the eNOS inhibitor, L-NAME, inhibited this effect of ECSW in EPCs exposed to hypoxia (P < 0.05).
EPCs tube formation was detected using a Matrigel assay, and angiogenesis was expressed based on tube length. As shown in Figures 4E,F, the angiogenic ability of EPCs significantly decreased when exposed to hypoxia compared with control cells (P < 0.05). The capillary-like vascular tube network became denser following ECSW treatment (P < 0.05). In addition, pretreatment with the PI3K inhibitor, LY294002, the Akt inhibitor, MK2206, and the eNOS inhibitor, L-NAME, inhibited this effect of ECSW in EPCs exposed to hypoxia (P < 0.05).
ECSW Activate the EPC PI3K/Akt/eNOS Signaling Pathway Following Hypoxic Injury
To assess the mechanism underlying the hypoxia-induced injury of EPCs and the protective effect of ECSW on EPCs, PI3K/Akt/eNOS signaling was examined via western blotting and RT-PCR. As shown in Figures 5A-D, the expression of p-PI3K, p-Akt, and p-eNOS in EPCs decreased after exposure to hypoxic conditions for 24 h (P < 0.05), whereas the expression of total PI3K, Akt, and eNOS proteins showed no significant changes. ECSW treatment increased the expression of p-PI3K, p-Akt, and p-eNOS (P < 0.05), while the levels of total PI3K, Akt, and eNOS proteins again showed no significant changes in the EPCs exposed to hypoxia. However, pretreatment with the PI3K inhibitor, LY294002, and the Akt inhibitor, MK-2206, inhibited this effect of ECSW in EPCs exposed to hypoxia (P < 0.05) (Figures 5G,H). As shown in Figures 5E,F, the mRNA levels of PI3K and Akt were downregulated in the EPCs exposed to hypoxia compared with control cells (P < 0.05). ECSW treatment upregulated the mRNA levels of PI3K and Akt in EPCs exposed to hypoxia (P < 0.05). However, pretreatment with the PI3K inhibitor, LY294002, inhibited this effect of ECSW in EPCs exposed to hypoxia (P < 0.05). These data demonstrate that the hypoxia-induced injury of EPCs and the protective effect of ECSW on hypoxia-injured EPCs may be attributed to the regulation of PI3K/Akt/eNOS signaling.
ECSW Promote the Expression of Bcl-2, Increase NO Production, and Inhibit the Expression of Bax and Caspase-3 in EPCs After Hypoxic Injury by Activating the PI3K/Akt/eNOS Signaling Pathway
Western blot assays were used to assess the expression of the downstream signaling molecules Bcl-2, Bax, and Caspase-3 in the different groups. The results (Figures 6A-D) showed that the expression of Bcl-2 protein decreased (P < 0.05), while the expression of Bax and Caspase-3 protein increased (P < 0.05), after exposing EPCs to hypoxic conditions for 24 h. After ECSW treatment, the expression of Bcl-2 protein significantly increased (P < 0.05) and the expression of Bax and Caspase-3 protein markedly decreased in EPCs exposed to hypoxic conditions (P < 0.05). In addition, pretreatment with the PI3K inhibitor, LY294002, the Akt inhibitor, MK2206, and the eNOS inhibitor, L-NAME, inhibited the beneficial effects of ECSW in EPCs exposed to hypoxia (P < 0.05). As shown in Figure 6E, NO production was lower in the EPCs exposed to hypoxia compared with control cells (P < 0.05), and NO production in EPCs exposed to hypoxia improved after ECSW treatment (P < 0.05). In addition, pretreatment with the PI3K inhibitor, LY294002, the Akt inhibitor, MK2206, and the eNOS inhibitor, L-NAME, decreased NO production in EPCs exposed to hypoxia after ECSW treatment (P < 0.05).
DISCUSSION
The main pathological basis of CHD is the development of atherosclerosis, and the initial sign of atherosclerosis is damage to vascular endothelial cells (24,25). In most cases, myocardial infarction occurs due to disruption of a vulnerable atherosclerotic plaque or erosion of the coronary artery endothelium. Following myocardial ischemia, an increasingly insufficient oxygen and energy supply results in microvascular dysfunction, metabolic disorder, and even cell death (26,27). This is accompanied by a series of parallel changes, including an abnormal vascular wall tension balance associated with impaired nitric oxide synthesis and increased levels of angiotensin and endothelin, which inhibit angiogenesis and vascular endothelial repair (28,29). Importantly, previous studies have found that the number of circulating EPCs is markedly reduced in patients with CHD, with attenuated neovascularization function (30,31), consistent with related research on EPCs (32). Furthermore, lower levels and dysfunction of circulating EPCs have been shown to be closely associated with poor prognosis in patients with CHD (33,34) and in animal models (35,36).
The repair of damaged endothelium is of great value in the prevention and treatment of cardiovascular diseases. After sensing damage to the endothelial layer of arteries triggered by hypoxic injury, EPCs derived from the bone marrow or peripheral blood are recruited and home to sites of ischemia, where they differentiate into mature endothelial cells to maintain the integrity of the vascular endothelium (37, 38). Therefore, EPCs, as a group of stem cells with angiogenic potential, play an essential role in endothelial repair, and overall increases in EPC number and function have been proposed as an effective therapeutic strategy for CHD. In this study, we focused on strategies for strengthening the functions of EPCs and provide basic evidence supporting extracorporeal cardiac shock wave (ECSW) therapy in CHD.
As a non-invasive physical stimulus, extracorporeal shock wave therapy has been widely used in clinical fields such as osteoarthritis (39), chronic pancreatitis (40), and renal calculus (41). In recent years, ECSW has also been found to promote angiogenesis, which not only provides a new idea for the treatment of CHD but also offers a new option for adjuvant therapy. Clinical studies have shown that ECSW treatment can improve the clinical symptoms and quality-of-life parameters of patients with CHD (8), but the exact mechanism of ECSW treatment has not yet been clarified. Previous studies have suggested that the main mechanism of ECSW therapy may be to induce high expression of VEGF, eNOS, and other angiogenesis-related factors, resulting in enhanced mobilization of EPCs from autologous bone marrow to the damaged ischemic myocardium in vivo (21,22,42). However, the detailed mechanism by which ECSW acts on EPCs to promote angiogenesis remains unclear.
In our previous exploratory experiment, ECSW was found to satisfactorily improve the function of EPCs by activating the PI3K/Akt and MEK/ERK signaling pathways in vitro (43). However, that study was limited by several experimental factors, such as the modeling conditions and the depth of the pathway analysis. In particular, we did not consider the effect of the hypoxic-ischemic microenvironment on EPCs in CHD: ECSW was applied only to EPCs under normoxic conditions, which did not provide direct evidence for the application of ECSW in CHD. Therefore, in the present study, we first exposed EPCs to hypoxic conditions (1% O2, 95% N2, 5% CO2) and starved them in EBM-2 medium with 1% fetal bovine serum for 24 h to simulate hypoxic-ischemic injury. In addition, the EPCs were isolated from ApoE gene knockout rat bone marrow to more closely model the atherosclerotic pathology of CHD, an improvement over the previous study protocol.
The results of this study showed that hypoxia induced dysfunction and apoptosis of EPCs, in line with previous studies (44,45). Notably, ECSW markedly promoted the function of EPCs, especially angiogenesis, and inhibited the apoptosis of EPCs after hypoxic-ischemic injury. To elucidate the precise molecular mechanism of ECSW, we screened significant signaling pathways through bioinformatics analysis. EPCs from the control, hypoxia, and hypoxia + ECSW groups were used for mRNA sequencing. KEGG enrichment analysis showed that differentially expressed genes were enriched in cancer signaling, PI3K-Akt signaling, Rap1 signaling, and other pathways (P < 0.01). Unlike in our former experiment, the MEK/ERK signaling pathway was not statistically significant under these conditions. We speculate that this is probably due to the change in modeling conditions; another possible explanation might be weak activation of this pathway during ECSW-mediated rescue of damaged EPCs, which needs further investigation. Guided by the bioinformatics results, we selected the differentially expressed genes related to the PI3K-Akt signaling pathway and verified them in subsequent experiments. Specifically, the PI3K inhibitor LY294002, the Akt inhibitor MK-2206, and the eNOS inhibitor L-NAME were used to block this signaling pathway during ECSW treatment, providing a more complete mechanistic study.
Here, we found that hypoxia-induced EPC dysfunction and apoptosis were accompanied by downregulation of p-PI3K, p-Akt, and p-eNOS expression and decreased production of NO in EPCs, while ECSW treatment improved the function of EPCs after hypoxic injury with increased NO, p-PI3K, p-Akt, and p-eNOS levels. Moreover, the pathway inhibitors impeded downstream signaling in parallel with elimination of the ECSW effect on EPC function: the ECSW-mediated increases in these proteins were reversed by the inhibitors, accompanied by an inhibitory effect on EPC function, including migratory, proliferative, adhesive, and tube formation capacities. These results indicate that inhibition of the PI3K/Akt/eNOS signaling pathway in EPCs may be a pathological mechanism for the reduction of endogenous vascular repair in CHD, whereas ECSW promote EPC function after hypoxic injury by activating the PI3K/Akt/eNOS signaling pathway.
PI3K/Akt is a classical signaling pathway that plays an important role in cell proliferation, migration, apoptosis, angiogenesis, and other biological processes. Several studies have shown that the PI3K/Akt signaling pathway plays an important role in mobilizing and improving the function of EPCs, mainly by inducing downstream eNOS phosphorylation and nitric oxide production (46,47), consistent with our findings. eNOS is a potential regulator of EPC function: it catalyzes the production of NO from L-arginine, participates in regulating vascular homeostasis and arterial tone, and promotes angiogenesis in response to tissue ischemia (48,49). Thus, eNOS and NO appear to be pivotal indicators of EPC function. We found that ECSW enhanced the expression of eNOS and promoted the production of NO in damaged EPCs, highlighting a promising target for angiogenesis therapy in CHD.
The Bcl-2 protein family is a group of important apoptotic regulatory factors that controls the mitochondrial apoptosis pathway, with Bcl-2 itself playing an anti-apoptotic role (50,51). Bax protein is found in the cytoplasm, has a structure similar to that of Bcl-2, and contributes to cell apoptosis (52). Under the stimulation of a series of apoptotic signals, Caspase-9 and Caspase-3 are activated to promote apoptosis. Bcl-2, Bax, and Caspase-3, as downstream molecules, are involved in the regulation of cell apoptosis through the PI3K/Akt signaling pathway (53,54). In this study, we observed that ECSW upregulated the expression of Bcl-2 protein and inhibited the expression of Bax and cleaved Caspase-3 protein in EPCs after hypoxia. Together, these findings suggest that ECSW may ameliorate hypoxia-induced EPC apoptosis by regulating the PI3K/Akt signaling pathway and its downstream molecules, providing strong evidence for ECSW therapy in rescuing impaired EPCs.
Several limitations of the present study should be considered. Firstly, given that ECSW has been shown to bring effective benefits in CHD mostly by mobilizing EPCs in human or animal studies, we provided in-depth research on the function and molecular mechanism of ECSW in EPCs; however, this study was limited to in vitro experiments, and future investigations in animal models and clinical trials are needed. Secondly, based on the bioinformatics data, multiple signaling pathways are possibly related to the effects of ECSW. We preliminarily verified the PI3K/Akt/eNOS signaling pathway and obtained satisfactory outcomes, but the contribution of other pathways should not be neglected and remains to be further explored. Thirdly, we used pathway inhibitors in this study; PI3K/Akt/eNOS gene knockout models are needed for further verification.
CONCLUSIONS
In conclusion, the findings of the present study provide valuable information. We found that hypoxic-ischemic injury downregulated the PI3K/Akt/eNOS signaling pathway in EPCs, accompanied by EPC dysfunction and apoptosis. ECSW can improve the function of EPCs, including migratory, proliferative, adhesive, and tube formation capacities, and reduce the apoptosis of EPCs by activating the PI3K/Akt/eNOS signaling pathway. After blocking this signaling pathway, the beneficial effects of ECSW on post-hypoxia EPCs were inhibited. Therefore, this work may provide new evidence for ECSW therapy in CHD through a potential mechanism acting on EPCs.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: NCBI SRA BioProject, accession no: PRJNA752783.
ETHICS STATEMENT
The animal study was reviewed and approved by The Institutional Animal Care and Use Committee (IACUC) of the Institutional Ethics Committee at the First Affiliated Hospital of Kunming Medical University.
AUTHOR CONTRIBUTIONS
HongyC designed and supervised the study. TG and HongbC supervised the study and critically revised the draft. XC and YM performed the statistical analyses and drafted the manuscript. MW, DY, ZH, and YS performed the experiments. All authors contributed to the article and approved the submitted version. | 2021-10-11T13:10:56.976Z | 2021-10-11T00:00:00.000 | {
"year": 2021,
"sha1": "a56e1d870affac0925d1b60d4d5b82c5048c4394",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcvm.2021.747497/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6719e42842bbc0fc3f7bb029d90d936d228e636e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247042849 | pes2o/s2orc | v3-fos-license | Evidence of Impact from a National Digital Entrepreneurship Apprentice Program in Malaysia
Impact Digital Entrepreneurship Apprentice Program (IDEA@KPT) at the Ministry of Higher Education Malaysia 2021 is a comprehensive nationwide six-month program conducted by Universiti Teknologi Malaysia. Forty-three teams consisted of 43 academic supervisors, 129 students from institutes of higher learning, and 43 micro and small enterprise (MSE) owners. The program is aimed at developing students capable of maneuvering in the digital business world. Students underwent online business and digital training, followed by an apprenticeship with formal reporting. This article presents the impact of IDEA@KPT activities by analyzing the 43 case studies produced in the program. A pre-codification scheme that concentrates on the study goals was used for data collection. Before the program, all teams were informed of the required report components to ensure uniformity of the reports. Evidence of significant gains and impact on the MSEs' businesses was drawn from these components. Beyond the components, analytics hindsight, visual appeal, persuasion ability, perceptions of paid ads, posting timing, and synergies beyond the digital world were gathered, providing richer information and insights that increase business value. Such lessons are beneficial to all parties, as all businesses are now expected to utilize digital platforms.
INTRODUCTION
The unemployment rate among graduating students from institutes of higher learning in Malaysia is worrying. According to data published by PENJANA (Pelan Jana Semula Ekonomi Negara) in June 2020 (BERNAMA, 2020), unemployment in Malaysia was expected to increase to 5.5%, or more than 860,000 jobless, in 2020. Furthermore, the Ministry of Higher Education (MOHE) added that 75,000 graduates from institutes of higher learning would be unemployed due to the Covid-19 pandemic (Amir, 2020).
On the other hand, there has been a significant increase in Internet usage and online spending by consumers. The digital news outlet Media Baharu (2020) reported that in the first six months of 2020 alone, 72,274 small and medium enterprises (SMEs) recorded an increase in digital engagement, exceeding the government's target of 50,000 SMEs.
In addition, e-commerce activities increased by 39.3% in May 2020 compared with May 2019.
Recognizing the opportunity in digital business and the potential to address the unemployment issue, the Malaysian Ministry of Higher Education developed a program called IDEA@KPT (UTM, 2021). IDEA@KPT, or the Impact Digital Entrepreneurship Apprentice Program at the Ministry of Higher Education Malaysia 2021, is a comprehensive six-month program. Forty-three teams comprise 43 academic supervisors, 129 students from institutes of higher learning, and 43 micro and small enterprises (MSEs) from all over Malaysia. The program aims to develop students' ability to maneuver in the digital business world by becoming apprentices to business owners. This, in turn, increases their intention to become entrepreneurs equipped with practical digital marketing knowledge and skills.
The IDEA@KPT program utilizes the apprenticeship principle because of its well-known strengths. The primary purpose of an apprenticeship is to transfer knowledge or skills from one party to another, with the ultimate goal of a mutually beneficial relationship (Ryan & Tom, 2018) between the parties involved. This becomes particularly demanding when digital skills are taken into account, because knowledge traditionally flows from an older, more experienced person to a younger, less experienced one. Here, one must be open to knowledge flowing in both directions, which can benefit both sides: the digital proficiency of the younger generation can provide new input and ideas to organizations or businesses owned by more senior people, while business knowledge is transferred to the younger generation, reflected in the higher percentage of youth entering apprenticeship programs (Nicholas, 2014). In this program, both the apprentices (students) and the business owners (micro and small enterprise owners) develop a symbiotic relationship. Specifically, in IDEA@KPT, students reported a substantial increase in MSE performance even during the Covid-19 pandemic, along with growth in the students' own business knowledge.
This article aims to present the impact of IDEA@KPT by extracting the essence of the 43 case studies provided to the author. The case studies hold significant insights and lessons learned, particularly on the challenges, solutions, and impact recorded throughout the six-month program.
IDEA@KPT is a three-phase program that runs over six months. After the third phase, all participants attend the IDEA@KPT Summit, to be held in 2022 in the presence of the honorable Malaysian Higher Education Minister. The purpose of the summit is to allow the participants to showcase their findings and present their digital posters for the audience to learn from and grasp their experience. At this stage, the winning teams are also awarded prizes and recognition for a job well done. As with many programs, IDEA@KPT faced several challenges. The first was related to the effects of the Covid-19 pandemic, specifically regarding the virtual and physical modes of the apprenticeship. The main challenge for a long-duration program is to maintain its control. Students must attend 24 hours of Business Strategy, Business Model, and Digital Marketing training within a month. After successful completion, they are required to be attached to a micro or small enterprise for a total of 40 hours spread over 2 months. The 40-hour apprenticeship was designed as a physical session; however, three changes occurred in this arrangement, and the program had to remain flexible to stay successful despite all the changes.
The first version was designed as a 40-hour physical face-to-face apprenticeship, in which students were required to be physically present at the MSE premises/factory/site. However, due to the Movement Control Order (MCO) that was still in place and not uniformly implemented throughout Malaysia, the apprenticeship was shifted to 60% physical and 40% virtual; that is, 24 hours of physical attendance and the remaining 16 hours virtual. This setup proved unhelpful, as operating in mixed mode is even more challenging. Finally, on the advice of several parties, the IDEA@KPT 40-hour apprenticeship was made entirely virtual.
Studies have shown that an entirely virtual apprenticeship, internship, or industrial training saves time and expenditure, builds working experience while the student remains at home, and teaches communication in a working style newly accepted during the pandemic and post-pandemic period (Chloe, 2021). However, there are drawbacks. The first lies in learning to understand the communication styles and expressions of the business owners and of the students among themselves. The second is the lack of experience of the business's environment: the students will not feel the fast pace and hectic atmosphere, which could affect their disposition toward becoming entrepreneurs themselves.
The second challenge was optimizing the experience of a virtual apprenticeship. Pretti, Etmanski, and Durston (2020) stated that students working remotely or virtually should also experience productivity, meaningful work, and socialization. Productivity plays an essential role in the success of the IDEA@KPT program: the 40 effective hours are logged through the students' reporting and verified by their academic supervisors. Information systems modules were developed to support this feature along with several other reporting tools (https://ideakpt.my/book-keeping-and-logbook-system/). This reporting is done individually to encourage students' self-direction in planning their days, setting their schedules, and completing their tasks.
Meaningful work is measured by how the assigned tasks affect the business owner's perspective. Although the ultimate purpose of digital marketing for a business is to maximize profit, the IDEA@KPT journey is short for the students, so they focus more on the indicators measured as digital impact. This is commonly shown by reporting insights from tools such as Facebook Likes, Reach, and Engagements.
The third main challenge was gathering firsthand information from business owners. Socialization involves formal and informal communication between the parties, here the students and the business owners. There may be minimal opportunities to observe and interact with others in the work setting, which is a big challenge for both parties. A known consequence is a bottleneck in the information provided by the businessperson. When there are opportunities to experience the work setting and environment, students can usually extract insights using their own senses; in such a limited setting, however, students depend entirely on the business owners or staff to provide the necessary information about the running of the business. Secondary information gathered through the Internet, social media, and other available sources can help, but it is often neither adequate nor up to date. Primary information remains essential because crucial, up-to-date strategic details come only from the business owner. With such dependencies, owners are often overwhelmed by the students' questions and inquiries, and many MSEs could not provide information to the students in a timely manner, as reflected in delays in responding to ongoing requests from the apprentices.
METHODS
This paper aims to present the impact of IDEA@KPT activities by analyzing the 43 case studies produced in the program. The case studies hold significant insights and lessons learned, particularly on the type of business, the challenges, the solutions, and the impact recorded throughout the six-month program.
All the submitted cases were thoroughly studied and arranged based on their impact or implications to determine each case's contribution and to gather insights for improving future programs (Leedy and Ormrod, 2001). This is in line with the research cycle and problem-solving cycle suggested by McKay & Marshall (2001).
A pre-codification scheme that concentrates on the study goals was adopted (Wasana, Miskon, and Fielt, 2011). This involves capturing definitions and objectives: the definitions are intended to establish a shared understanding of the phenomena (Wasana, Miskon, and Fielt, 2011), while the objectives convey the focus of the activities (Miskon et al., 2010). In this study, all 43 cases followed a pre-determined codification scheme. Before the program, a list of required components was provided to all teams to ensure uniformity of the reports. Nevertheless, several cases were written with additional information and insights, increasing their richness and value.
The standard components were provided as a guideline to focus mainly on i) the background of the business, ii) the challenges and problems faced, particularly from the aspect of digital marketing, iii) the solution proposed and implemented, iv) the impact, and v) conclusions and recommendations. These written reports are therefore accepted as formal documents that reflect the writers' situations.
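As a rough illustration of how such a pre-codification scheme can be checked mechanically, the Python sketch below tallies which of the five required components a report text mentions. The component keywords and the sample text are invented for illustration; the actual coding of the case studies was done by the researchers, not by keyword matching.

```python
COMPONENTS = ["background", "challenges", "solution", "impact",
              "conclusion and recommendations"]

def component_coverage(report_text: str) -> dict:
    """Flag which required components appear in a report (by keyword)."""
    text = report_text.lower()
    return {c: (c in text) for c in COMPONENTS}

sample = "Background of the business... challenges... solution... impact..."
missing = [c for c, found in component_coverage(sample).items() if not found]
print("missing components:", missing or "none")
```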
Finally, the author focused on the conclusions and recommendations to extract vital information, mainly related to the unique situation faced by each team. This is compiled as insights and lessons learned at the end of the Results and Discussion section.
RESULTS AND DISCUSSION
The program's expectations were communicated to all participants before the program commenced. These are digital marketing indicators such as likes, reach, and engagements (Tiago & Veríssimo, 2014). These indicators are critical and are accepted as the source of insights for most digital marketing programs and initiatives. The ultimate measure has always been the sales and profit brought to the business; however, a new measurement of return on investment (ROI) is often proposed almost as quickly as a new social media platform appears (Fisher, 2009). Reporting sales and expenses is challenging here because the figures belong not to the apprentices but to the businesses they are attached to. Although sales were reported through the IDEA@KPT program, the numbers were disclosed only when the owners trusted the apprentices with them. In addition, several other pieces of evidence can practically be captured, such as the changes in the posts on the businesses' social media.
Demographic Information
The IDEA@KPT program consists of 43 academic supervisors from 17 public institutes of higher learning, 43 micro and small enterprise business owners, and 129 students as apprentices. Of the 129 students involved in the six-month program, 33 were aged 20 and below, 86 were aged 21-24, and 10 were aged 25 and above. More than 63% of the apprentices were female. Figure 2 presents both the age distribution and the gender of the apprentices.

Figure 2. Age distribution and gender of apprentices
The business categories of the 43 MSEs are shown in Table 2. The largest category was food, followed by clothes/fashion/beauty at 27.9%. The rest include construction, car sales and services, and the other categories as listed.

Evidence of Impact from all 43 Cases

Table 3 shows the business type, problems, and impacts recorded for every business. Overall, Table 3 shows impact on the MSE businesses from the perspectives of social media analytics, increased sales, better branding, and even the marketing structure of the organization.
Several insights and lessons learned can be extracted from the 43 cases beyond the items reported in Table 3. First is the need to understand the analytics provided automatically by the platforms used; these are often excellent inputs for entrepreneurs to gain insights and make sound business decisions (Johnson, E. 2021). A minimal sketch of such platform metrics appears at the end of this list of insights.
Second, the visual appeal of postings (teaser, soft-sell, and hard-sell) contributed significantly to customer engagement. The flow of words and the blend of colors are commonly the best ways to draw visitors to the platform and carry strong persuasive power (Youjae, Y, 1990).
Third, soft-sell copywriting also increases customers' trust in the online seller. In soft-selling, the author focuses on delivering knowledge of the products/services, which strengthens viewers' positive perception and inclines them to believe that the author is an expert in the creation of those products and services. Thus, the qualities demanded of copywriters are a good command of language and imagination (Dandeswar, B., Preeti, Y., Utpal, B., 2015).
Fourth, there was a misperception of organic posts versus paid ads. Organic posts are limited by low penetration of targeted customers, their time, and their interests compared with paid ads. However, paid ads must be configured to function as the digital marketer intends; failing to do so wastes the investment and loses the targeted prospects.
Fifth, posting should follow a particular schedule, as social media users' purchasing behavior patterns have been shown to be time-related. A general rule is to post around 1-2 pm, and the best time to obtain likes and engagements is approximately 5 pm.
Sixth, digital marketing initiatives are not limited to online activities alone. There are ample opportunities for strategic collaboration with the community and institutes of higher learning, as offline programs also contribute to community engagement and better trust among all players. This benefits the MSE owners, the student apprentices, and the academic supervisors.
Finally, the most critical element for the success of IDEA@KPT, or any digital marketing apprenticeship program, is trust among all the participants. The business seeks a change that makes it highly visible, liked, engaged with, and profitable online. Following the adjustments made by the apprentices to the marketing elements, the MSE owners must be prepared to take over the role and become digitally transformed themselves. This is critical to sustaining the business, one of the most important aspects of any enterprise, as higher success can then be realized, particularly during and after the Covid-19 pandemic.
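To make the first insight concrete, the sketch below computes two of the basic metrics discussed above: an engagement rate and a simple ad ROI. The counts and amounts are placeholders, and the rate definitions are common conventions rather than platform-defined formulas.

```python
def engagement_rate(engagements: int, reach: int) -> float:
    """Engagements divided by reach, one common convention."""
    return engagements / reach if reach else 0.0

def ad_roi(revenue: float, ad_spend: float) -> float:
    """Simple return on ad investment: (revenue - spend) / spend."""
    return (revenue - ad_spend) / ad_spend

print(f"engagement rate = {engagement_rate(420, 9300):.1%}")  # placeholder counts
print(f"ROI = {ad_roi(revenue=1500.0, ad_spend=400.0):.0%}")  # placeholder amounts
```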
CONCLUSION
Given the very dynamic nature of conducting such programs, particularly during the Covid-19 pandemic, flexibility between the organizers/secretariat and the participants is crucial. Furthermore, the support provided by the administration throughout the six-month program must be top-notch to sustain motivation throughout the period.
The most challenging lesson learned was the development of trust among the three parties: the MSE owners, the student apprentices, and the academic supervisors. In the future, establishing trust between these parties will be an area of investigation, as there are several aspects of trust that may demand further effort in such a time-limited relationship.
"year": 2022,
"sha1": "09c7bf54febb10ba9c98789621d373ecba6e2fec",
"oa_license": "CCBYSA",
"oa_url": "https://ojs.literacyinstitute.org/index.php/ijias/article/download/408/157",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a7f4f0d48d6a4b9f34db7cc832b008bda483a9ae",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
269292392 | pes2o/s2orc | v3-fos-license | Multiresolution molecular dynamics simulations reveal the interplay between conformational variability and functional interactions in membrane‐bound cytochrome P450 2B4
Abstract Cytochrome P450 2B4 (CYP 2B4) is one of the best‐characterized CYPs and serves as a key model system for understanding the mechanisms of microsomal class II CYPs, which metabolize most known drugs. The highly flexible nature of CYP 2B4 is apparent from crystal structures that show the active site with either a wide open or a closed heme binding cavity. Here, we investigated the conformational ensemble of the full‐length CYP 2B4 in a phospholipid bilayer, using multiresolution molecular dynamics (MD) simulations. Coarse‐grained MD simulations revealed two predominant orientations of CYP 2B4's globular domain with respect to the bilayer. Their refinement by atomistic resolution MD showed adaptation of the enzyme's interaction with the lipid bilayer, leading to open configurations that facilitate ligand access to the heme binding cavity. CAVER analysis of enzyme tunnels, AquaDuct analysis of water routes, and Random Acceleration Molecular Dynamics simulations of ligand dissociation support the conformation‐dependent passage of molecules between the active site and the protein surroundings. Furthermore, simulation of the re‐entry of the inhibitor bifonazole into the open conformation of CYP 2B4 resulted in binding at a transient hydrophobic pocket within the active site cavity that may play a role in substrate binding or allosteric regulation. Together, these results show how the open conformation of CYP 2B4 facilitates the binding of substrates from and release of products to the membrane, whereas the closed conformation prolongs the residence time of substrates or inhibitors and selectively allows the passage of smaller reactants via the solvent and water channels.
KEYWORDS
coarse-grained simulation, cytochrome P450 2B4, enzyme catalysis, ligand tunnels, membrane protein, molecular dynamics simulation, protein dynamics, protein-ligand interactions, transient binding pocket
| INTRODUCTION
Cytochrome P450 (CYP) enzymes form a superfamily of heme-containing monooxygenases that play a crucial role in the metabolism of a wide array of both endogenous and exogenous compounds (Coon, 2005). Rabbit CYP 2B4, a member of the CYP 2B subfamily, has been extensively studied as a model for mammalian xenobiotic-metabolizing enzymes since its initial isolation from phenobarbital-induced rabbit liver microsomes (Coon et al., 1973; Jansson et al., 1995). Its prominence in research stems from its utility in elucidating CYP interactions with lipids and redox protein partners, as well as in providing insights into mechanistic aspects of mammalian CYP enzyme catalysis. A hallmark of CYP 2B4 is its high conformational plasticity, which enables the enzyme to accommodate a diverse array of substrates with varying sizes, shapes, and chemical properties (Manikandan & Nagini, 2017; McDonnell & Dang, 2013). The human homolog of CYP 2B4 is CYP 2B6, which shares a sequence identity of 78.7% (Seal et al., 2023).
Microsomal P450s, including CYP 2B4, exhibit a conserved domain architecture characterized by a globular heme-binding domain tethered to the membrane via an N-terminal anchor. The membrane tethering facilitates the enzyme's interaction with its lipid environment and its redox protein partners, CYP reductase (CPR) and cyt b5, which is essential for electron transfer during the catalytic cycle (Zhang et al., 2015). The membrane association also influences the enzyme's accessibility to substrates, which may come from the membrane or from the cytosol (Williams et al., 2000).
CYP 2B4 displays the prototypical CYP topology, composed of a single polypeptide chain that arranges into 12 principal α-helices (labeled A-L), interspersed with additional helical segments (B′, F′, and G′), and four β-sheets (numbered 1-4) (see Figure 1 for a depiction of CYP 2B4 with the secondary structures that are most important for this study). These elements form a globular domain that is tethered to an N-terminal transmembrane (TM) helix by a flexible linker (Manikandan & Nagini, 2017; McDonnell & Dang, 2013). The ligand-free crystal structure of CYP 2B4 (PDB ID: 1PO5, resolution: 1.6 Å) unveiled a large hydrophobic binding cavity in the core of the enzyme (Scott et al., 2003). The binding site is wide open to solvent and hosts a cysteine-bound heme cofactor, the position and coordination of which are modulated by the surrounding active site residues. These residues, along with the B′ and F′-G′ regions, are implicated in forming the substrate access channel to the active site and exhibit large conformational variations among different crystal structures (Cojocaru et al., 2007; Schlichting et al., 2000; Scott et al., 2002). The binding pocket's conformation is dynamic, with the heme proximal side being relatively rigid and the distal side, where the substrate binds, being more flexible, allowing the enzyme to adapt to various substrate geometries (Halpert, 2011).
The diversity of CYP 2B4's ligand-binding capabilities is evident from available crystal structures. These structures reveal different ligand binding modes and different conformations of the globular domain, such as the binding of the antiplatelet drug ticlopidine in a hydrophobic pocket near the heme group (Gay et al., 2010), and the closed conformation induced by the heme-coordinating inhibitor 4-(4-chlorophenyl)imidazole (CPI) (Scott et al., 2004). Ligand binding can also lead to conformational changes that affect the enzyme's specificity, as seen with the calcium channel blocker amlodipine, which induces an open configuration of the B′, F′, G′, and G helices around the binding cavity (Shah et al., 2012). The crystal structure of CYP 2B4 bound to the bulkier antifungal drug bifonazole (BIF) reveals an even more open conformation of the enzyme, with a wider active site cavity compared to other ligand-bound structures and the I-helix bent to accommodate the ligand (Zhao et al., 2006) (see Figure S9). Comparison of different crystal structures highlights the plastic regions of CYP 2B4 that undergo conformational changes upon ligand binding, showing how different substrates affect the active site cavity (Zhao et al., 2006).
The conformational plasticity of CYP 2B4 is of significant interest for xenobiotic metabolism and has implications for understanding the wider CYP superfamily, which includes human drug-metabolizing enzymes such as CYP 3A4, CYP 2D6, and CYP 2C9; these also exhibit conformational variations that contribute to their broad substrate specificity and metabolic capabilities. Understanding the conformational dynamics of these enzymes can provide insights into their individual functions and aid in the design of isoform-selective inhibitors or activators (Poulos, 2005). Computational approaches, including molecular dynamics (MD) simulations, have been extensively applied to study the conformational landscape of CYP 2B4 and other CYPs. These simulations have revealed the existence of multiple substrate access channels and the impact of the membrane environment on the structure and dynamics of CYP 2B4 (Hendrychova et al., 2012; Lüdemann et al., 2000a, 2000b; Urban et al., 2018).
The aim of this study is to provide a thorough understanding of the implications of the structural dynamics of membrane-bound CYP 2B4 for ligand access to and egress from the active site. For this purpose, we employed a multi-resolution MD simulation approach to build and simulate models of CYP 2B4 in a phospholipid bilayer in different conformational states of the enzyme and with different ligands. Initial coarse-grained (CG) MD simulations provided a broad sampling of the conformational landscape, which was then refined in all-atom (AA) MD simulations. CAVER (Chovancova et al., 2012; Pavelka et al., 2016) and AQUADUCT (Magdziarz et al., 2020) analyses were carried out to characterize active site tunnels and water channels, respectively, and Random Acceleration MD (RAMD) simulations were performed to identify pathways for ligand egress. Finally, simulations were performed of the re-entry of an inhibitor from the membrane into the heme binding cavity. Together, this combination of simulation and analysis techniques provides the most comprehensive understanding to date of the interplay between the conformational dynamics of membrane-bound CYP 2B4 and substrate, product, inhibitor, and water passage to and from the enzyme's active site.
| Membrane insertion CG MD simulations reveal two predominant orientations of the CYP 2B4 globular domain
CG MD simulations of CYP 2B4 were performed with the Martini 2.2 force field (Marrink et al., 2007) to efficiently sample the configurational space of the full-length protein in a phospholipid bilayer. The simulations started with either the closed or the open conformation of the globular domain resolved by X-ray crystallography (PDB ID: 1SUO, 1PO5; Scott et al., 2003, 2004); see Figure 1. These two conformations differ most in the region distal to the heme. The F′/G′ and G helices, along with the BC loop containing the B′ helix, are more tightly packed near the ligand binding cavity in the closed inhibitor-bound structure (PDB ID: 1SUO) than in the open ligand-free CYP 2B4 structure (PDB ID: 1PO5). The ligand-free structure has the F′/G′ helices positioned further from the I-helix and the heme, resulting in a more open heme cavity. For each simulation replica, the globular domain was initially positioned in a random orientation above the lipid bilayer and connected to the N-terminal TM helix via a 29-amino-acid-residue-long peptidic linker. The initial structures of the linker were assigned diverse conformations to ensure sampling of a wide range of possible arrangements of the CYP 2B4 domains and the membrane bilayer; see Figure S1. For each of the two structures of the globular domain, 10 replica CG MD simulations were run, each of 7-12 μs duration. The CG MD simulations of the closed and open CYP 2B4 structures were compared by analyzing the distribution of the two tilt angles that describe the orientation of the globular domain in the phospholipid bilayer and the axial distance between the center of mass (CoM) of the globular domain and the center of the lipid membrane; see Figure 2.
For the closed conformation, the α angle, which tracks the orientation of the I-helix in the globular domain with respect to the membrane normal, converged to around 100°, with a smaller population at 75°. The sub-population of the α angle was contributed by replicas 3, 8, and 9, which also converged to relatively lower values, between 80° and 100°, of the β angle, measuring the orientation of the globular domain with a reference vector from the C to F helices (Figure S2). Overall, the β angle converged to a distribution around 100°, with a right shoulder near 115° from replicas 1, 2, and 7. Simulations of the open structure of the globular domain revealed a tendency toward lower α and β values than those for the closed structure (Figure 2). For the open conformation, the α angle converged to around 65°, with a smaller population at approximately 95°. Simultaneously, the β angle converged with a major population at around 80° and a minor population at around 135°. This minor population of the β angle corresponded to the right shoulder of the α angle distribution, both originating from replica 3 (Figure S3). Thus, while the simulations of the open conformer converged with 9 out of the 10 replicas in the distribution with α angle around 65° and β angle around 80°, the simulations of the closed conformer showed three clusters in the orientation of the globular domain on the membrane, with the largest having converged α and β angles around 100° (replicas 4, 5, 6, and 10). These CG MD simulations thus indicate higher variation of the orientation of the closed globular domain on the phospholipid bilayer. One reason could be the higher axial distance of the globular domain CoM from the center of the phospholipid membrane in the CG MD simulations for the closed structure, at 46.4 ± 3.4 Å, compared to 44.1 ± 3.9 Å for the open structure.
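For illustration, the tilt-angle measure used here reduces to the angle between a direction vector in the protein and the membrane normal. The numpy sketch below shows that calculation; the helix-axis coordinates are invented placeholders, and in practice the vectors would be extracted frame by frame from the CG trajectories.

```python
import numpy as np

def tilt_angle(vec: np.ndarray, normal=np.array([0.0, 0.0, 1.0])) -> float:
    """Angle (degrees) between a helix-axis vector and the membrane normal."""
    cos_t = np.dot(vec, normal) / (np.linalg.norm(vec) * np.linalg.norm(normal))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

i_helix_axis = np.array([0.3, -0.2, -0.1])  # placeholder I-helix vector
print(f"alpha = {tilt_angle(i_helix_axis):.1f} deg")
```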
During all the CG MD simulations, the α and β angles underwent changes, showing the highly flexible nature of CYP 2B4, see Figures S2 and S3. The simulations for the closed conformation of the globular domain showed α and β angles converging at higher values than for the open conformation, around 100° or higher. In the closed conformation, the BC loop (containing the B′ helix) remains further from the lipid bilayer than in the open conformation, with most replicas having axial distances of the CoM of the BC loop to the lipid bilayer center that converge around 40 Å or higher. Although the distance of the B′ helix from the membrane decreases during the simulations, only the F′-G′ helices establish a stable direct interaction with the lipid membrane.
In one replica (replica 9, see Figure S2), however, the distance of the BC loop to the lipid bilayer decreased over time to around 35 Å, along with the convergence of the α and β angles to lower values around 75°. The closed conformation can thus adopt a similar orientation of the globular domain on the membrane to the predominant orientation of the open conformation, but this is a rare occurrence. The most commonly observed membrane-bound configuration for the closed conformation has a highly tilted globular domain, with the F′-G′ helix region interacting with the membrane while the side of the protein containing the B′ helix region stays further away from the membrane surface.
For the open conformation of the globular domain, the F′-G′ helices consistently moved closer to the lipid bilayer over time, resulting in direct interactions with the lipid bilayer in most replicas and a shortest distance of the CoM of the F′-G′ helices to the bilayer center of about 25 Å. In contrast, the BC loop only interacts directly with the lipid bilayer in half of the replica simulations, but in these cases (replicas 4, 5, 7, 9, and 10 in Figure S3), the hydrophobic residues of the BC loop enter inside the upper leaflet of the phospholipid membrane. Insertion of the BC loop (and B′ helix) into the membrane is accompanied by the C helix orienting nearly perpendicular to the membrane surface and by the globular domain tilting to a lower α angle. The replicas that have a higher distance between the BC loop and the lipid bilayer (replicas 3, 6, and 8 in Figure S3) tend to have higher α and β angles of around 80° or higher. In the replica in which the BC loop is furthest from the lipid bilayer after convergence (replica 3 in Figure S3), the α and β angles were 100° and 135°, respectively. These values are more similar to those for the closed conformation and to those previously observed in studies in which the closed conformation of the globular domain of different CYPs was simulated (Cojocaru et al., 2011; Mukherjee et al., 2021). However, for the set of CG simulations for the CYP 2B4 open conformation, membrane-bound conformations with lower α and β angles predominate (corresponding to a lower heme tilt angle), and these have both the BC loop and the F′-G′ helices interacting with the lipid bilayer.
Overall, the CG simulations of CYP 2B4 starting with random orientations of the globular domain at a distance from the membrane show that the globular domain consistently migrates toward the membrane surface and that this motion is accompanied by major movements of the flexible linker region. Both conformations of the CYP 2B4 globular domain show preferred membrane-interacting configurations with the F′-G′ helices in or near the bilayer surface, in agreement with crystallographic and H/D-exchange mass spectrometry studies (Treuheit et al., 2016). These configurations could allow ligand access into the heme binding cavity from the phospholipid membrane. As the CG simulations with the open and closed conformations showed distinct membrane-bound configurations, dependent on their globular domain conformations, but also showed the ability to adopt the membrane-bound orientation observed for the other conformation, we selected three membrane-bound CYP 2B4 configurations from the CG simulations for further simulation in atomic detail (see Methods for details). We refer to these as closed, open, and alternative-open. The structures of the globular domain of CYP 2B4 in the lipid bilayer environment were rather stable in the AA simulations, see the RMSD values in Figure 3a,b, secondary structures in Figure S2, and 3D structures in Figure 4a. For all three conformers, the distance of the globular domain from the membrane agreed with experimental data on the height of the CYP above the membrane (Bayburt & Sligar, 2002) and with other simulations of CYP-membrane systems on the distance of the globular domain CoM from the bilayer center (Mustafa et al., 2019; Yamamoto et al., 2013), converging to a value of about 45 Å, see Figure 4b and Table S1. The TM helix tilt angle fluctuated around 18 ± 10° (Figure 3i), which matches the value of 17° reported from a solid-state NMR study of this region of CYP 2B4 (Yamamoto et al., 2013). The most notable difference between the three conformations was in the local stability of the B′ and F′-G′ regions (see Figure 3c,d and Table S1).
Similarly to Li et al. (2020), the distances between three pairs of clusters of residues were computed to track the movements around the entrance to the active site cavity: between the A′ and F′ helices, between the B′ helix region and the G helix, and between the B′ region and the C-terminal loop, see Figure 4f-h. Hydrogen bonds between the heme propionates and the heme binding site residues in the AA MD simulations of the three conformers of CYP 2B4 were monitored with the aid of an MD-IFP analysis, see Figure 5 and Figure S5. The interaction profile of the alternative-open trajectory was the most dynamic, with the formation of stable contacts with H369 and R98 and a transition to transient contacts with S430 and W121 over the course of the trajectory.
The closed conformer showed values of the α and β angles that are within the range of those observed in our simulations of other full-length microsomal ligand-free CYPs (CYPs 2C9, 2C19, 17, 19, 1A1) in a 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) bilayer using the same CG + AA MD simulation protocol starting from crystal structures with the globular domains in a closed conformation (Cojocaru et al., 2011; Mukherjee et al., 2021; Mustafa et al., 2020). There was also little movement in the F′-G′ region compared to the open form. However, the C-terminal end of the F helix and the N-terminal end of the F′ helix formed an interaction with the C-terminal loop that resulted in the loss of helicity around the F-F′ junction. This reorientation of the F helix was accompanied by a repositioning of the G and H helices in which the H helix moved closer to the C helix, destabilizing the C and G helices (Figure 4a). In addition, the B′ helix and C helix stayed away from the membrane due to the B′ helix region interacting with the G and I helices. The stable interaction between the B′ loop and the G helix correlated with a noticeably lower fluctuation in the B′ loop and an RMSD of around 2 Å, compared to 7-8 Å in the open and alternative-open forms. The resulting globular domain configuration in the closed CYP 2B4 has a heme tilt angle of 67.9 ± 4°, which is higher than for the open and alternative-open structures (Figure 3j). Furthermore, it is also somewhat higher than obtained in simulations of other CYPs, for which the heme tilt angle ranged from 40° to 61° (Mustafa et al., 2020), and it is also higher than the values of 57°-62° measured by linear dichroism for CYP 17, CYP 19, and CYP 3A4 in POPC nanodiscs (Mustafa et al., 2019). A reason for the higher tilt angle could be the tight packing of the B′ helix against the G helix above the membrane in the closed conformation of CYP 2B4.
In the open CYP 2B4 state, the B′ loop immersed into the phospholipid bilayer while bringing the adjacent C helix along, which shifted the end of the I helix neighboring the C helix to a position closer to the membrane surface to accommodate a globular domain orientation with a low heme tilt angle of 39.9 ± 4° (Figures 4a and 3j). This is at the lower end of the range observed previously in similar simulations of other CYP isoforms (Mustafa et al., 2020), none of which had such an open active site as these conformations of CYP 2B4. Cojocaru et al. (2011) previously observed that CYP 2C9 tends to adopt a lower heme tilt angle when an F-G loop is present rather than the F′-G′ helices. Both the F′ and the G′ helices are present during the simulations of the closed conformation, but the G′ helix unwinds during the simulation of the open conformation of CYP 2B4, which might contribute to its low heme tilt angle. A similar but less pronounced trend is visible in the alternative-open conformation. The heme tilt angle for the open conformation is also lower than the heme tilt angles obtained by Berka et al. (2013) for the six major human drug-metabolizing CYPs in a DOPC bilayer in AA MD simulations employing a different protocol, which gave heme tilt angles ranging from 55° to 72°. The crystallographic structure of "open" CYP 2B4 (1PO5) is stabilized by the FG-loop region of a symmetry-related neighbor that enters into the "funnel-like" opening surrounding the active site (Scott et al., 2003). To rule out simulation artifacts caused by removing this interacting molecule, we compared the final structure from this simulation to the crystallographic structure, which aligned well with a soluble-domain RMSD of 2.29 Å. Over the course of the simulations, the F′ and G′ helices came closer to the BC-loop region. Such a repositioning of the FG loop is not unexpected, as the "funnel" in the crystal structure (1PO5) forms a large, hydrophobic surface area (see Figure 1), indicating that closer packing of the adjacent secondary structure elements upon removal of the symmetry-related protein monomer is reasonable.
In the alternative-open CYP 2B4 state, the heme tilt and α angles resembled those of the open conformation, whereas the β angle stabilized at a value between those of the open and closed conformations. While the B′ loop stabilized in a position closer to the membrane than in the closed form, the F′-G′ helices did not approach the membrane as much as in either the closed or the open conformations. Furthermore, the distance of the globular domain CoM to the membrane center was highest in the alternative-open conformer. This was because the linker between the TM helix and the globular domain was located at the active site cavity-membrane interface, thereby separating the residues at the entrance to the active site cavity, as discussed above. In addition, the alternative-open conformer showed an extensive kink in the middle of the I-helix and an unusual position of the TM helix adjacent to the globular domain. This apparent strain indicates that it would be possible for the globular domain to reorient on the membrane to allow repositioning of the linker loop away from the active-site cavity pocket. It is also possible that the lipid-bound orientation of the protein from the CG simulation is a result of the Martini 2 force field's tendency to overestimate protein-protein interactions (Javanainen et al., 2017; Lamprakis et al., 2021; Stark et al., 2013). Although it is unclear whether this unusual positioning of the loop has any physiological relevance, this alternative-open conformation shows the transient nature of the CYP 2B4 intermediates between the closed and open states.
The position of the globular domain above the lipid bilayer in all the AA MD simulations was consistent with atomic force microscopy measurements of ligand-free CYP 2B4 reconstituted in rHDL, from which the height of the protein above the membrane was reported to be 35 ± 9 Å (Bayburt & Sligar, 2002); this corresponds well to our simulation result of ≈41-44 Å for the CoM axial distance. The TM helix tilt angle for the closed, open, and alternative-open conformers was 10.1 ± 4.8°, 14.1 ± 6.1°, and 15.8 ± 4.9°, respectively, which is similar to the value of 17 ± 3° measured by solid-state NMR for CYP 2B4 reconstituted in DLPC/DHPC bicelles (Yamamoto et al., 2013).
As captured by the lower α, β, and heme tilt angles in the open and alternative-open forms and the higher values in the closed form of CYP 2B4, the degree to which the B′ and F′-G′ helices interact with lipids plays a role in determining the overall tilt of the globular domain toward the membrane. More hydrophobic patches are visible in the crystal structure of the open form (Figure 1b) because the heme and active site cavity are exposed toward the membrane. Comparison of the BC loop sequence of CYP 2B4 to those of selected drug-metabolizing CYPs shows that CYP 2B4 has fewer hydrophilic residues and a more clustered distribution of hydrophobic residues in this region (Figure S6), which could contribute to the BC loop establishing more stable lipid interactions. As a result, the orientation obtained in the open conformation places the active site cavity toward the lipid bilayer, which could allow for the entry of lipid-resident substrates.
Furthermore, it can be expected that the positioning of CYP 2B4 in membranes will be affected by substrate binding, which favors the closed active-site conformation, and by redox protein partner binding. Indeed, in MD simulations of membrane-bound CYP 1A1 starting from a crystal structure with a closed conformation, we observed a reduction in the heme tilt angle to approximately 40° upon complexation with CPR (Mukherjee et al., 2021), and NMR studies and MD simulations have shown stabilizing interactions between the CYP 2B4 and cyt b5 TM helices, affecting their orientation in the membrane (Sahoo & Ramamoorthy, 2023). In addition, the CYP positioning may be affected by interactions with different components of biological membranes, as CYP 2B4 has been reported to interact with sphingomyelin and induce lipid raft formation in nanodiscs (Barnaba et al., 2018).
| Analysis of tunnels and water paths reveals the altered specificity of functional routes for solvent molecules in different conformational states of CYP 2B4
To analyze protein tunnel formation in the simulated systems, we applied the CAVER software (Chovancova et al., 2012; Pavelka et al., 2016). This analysis was only possible for the closed conformation of CYP 2B4, as CAVER was not able to correctly define the surface of the wide-open active site in the open structure. In the closed conformation, only the solvent channel (following the nomenclature of Cojocaru et al., 2007) was present in all the analyzed frames. The corresponding "solvent" pathway was also observed as a minor pathway in the RAMD simulations for closed CYP 2B4 (see Section 4). Its bottleneck radius of 1.416 ± 0.003 Å and length of 22.4 ± 0.6 Å indicate a relatively long, narrow tunnel. This shape, together with its exit toward the aqueous solvent (see Figure 6a), suggests it is suited to the passage of very small molecular species, like water, ions, or oxygen, rather than substrates, inhibitors, or products (see Table S2 for details).
To identify the routes taken by water molecules and tunnels in the simulations of the closed and open conformations of CYP 2B4, we tracked water molecules entering and exiting the convex hull of CYP 2B4 using the AquaDuct software (Magdziarz et al., 2020). For the trajectory of the closed CYP 2B4, results consistent with those of CAVER were obtained. The majority (Cl. 1, 65%) of the surface entry and exit points (referred to as "inlets" by AquaDuct) of water molecules that pass through the active site belong to a cluster corresponding to the solvent channel, see Figure 6a. Thus, the hypothesis that this channel is mostly utilized by water molecules is supported by our simulations. A second, much smaller cluster of openings was detected (Cl. 2, 14%), corresponding to the "water" channel (see Table S3 for full results).
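The "object" criterion used for the AquaDuct analysis (water within 5 Å of the heme iron, see Section 4) can be emulated with a few lines of MDAnalysis to count active-site water visitors. This is a simplified stand-in for AquaDuct's full path tracing, and the file, residue, and atom names are assumptions for a generic AMBER-style setup.

```python
# Simplified stand-in for the AquaDuct "object": count distinct water
# molecules that come within 5 A of the heme iron during the trajectory.
# File, residue, and atom names are assumptions, not taken from the paper.
import MDAnalysis as mda

u = mda.Universe("cyp2b4.prmtop", "aa_traj.nc")      # hypothetical input files

visitors = set()
for ts in u.trajectory:
    # the selection is re-evaluated in every frame
    shell = u.select_atoms(
        "(resname WAT or resname HOH) and around 5.0 (resname HEM and name FE)")
    visitors.update(shell.residues.resids)

print(f"{len(visitors)} distinct water molecules visited the active site")
```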
CAVER analysis showed how the opening of CYP 2B4 to an intermediate structure (PDB ID: 2BDM) results in an increased number of tunnels and that there is a merging of pathways 2a, 2ac, and 2c in the open structure (PDB ID: 1PO5), which is held open by interaction with a second CYP 2B4 molecule in the crystal that coordinates the heme with a histidine. Our analysis of tunnels and water routes using CAVER and AquaDuct likewise shows that the open conformation of CYP 2B4 is highly permeable, allowing significant solvent exchange with the active site via several routes, potentially enabling the entry of small ions or molecules such as oxygen. In contrast, the observations with both CAVER and AquaDuct for the closed conformation (lower accessibility, almost exclusively via the solvent channel) indicate a more selective route for the passage of small molecules. Considering that, of the two simulated crystal structures, only the closed conformation contains a ligand at the location in the active site required for catalysis, the opening of a rather rigid and narrow tunnel to the solvent could channel the entrance and exit of reactants like water, protons, or oxygen upon substrate binding, which may be important for catalytic efficiency and specificity.
| Different preferred routes for ligand egress from the closed and open states of CYP 2B4
We performed a total of 195 RAMD simulations to further investigate how the conformation of CYP 2B4, either closed or open, impacts ligand release from the active site. For this, we initially selected benzphetamine (BZP), a weight-loss drug and substrate of CYP 2B4 that is commonly used as a reference substrate in biochemical studies of CYP 2B4 (Sheng et al., 2009), which we docked in the CYP 2B4 active site (see Table S4, Figure S7). We then selected its N-demethylated product of CYP 2B4-mediated turnover, norbenzphetamine (NZP), to investigate whether the egress routes and mechanisms differ between a substrate and a product of CYP 2B4. Both compounds are relatively small and displayed high mobility within the active site of open CYP 2B4 during bound-state simulations, with BZP almost leaving the binding site and attaching to the G helix surface. We therefore decided to also simulate the bulkier antifungal drug and monooxygenase inhibitor bifonazole (BIF), which displays specific interactions with the active site as well as imidazole-heme coordination in the crystal structure of its complex with CYP 2B4 determined by Zhao et al. (2006).

FIGURE 6 […] see Table S2. The exit points of water routes detected with AquaDuct are displayed as spheres on the protein structure from the initial frame of the analyzed part of the trajectory. The spheres are colored by cluster (Cl.) number and clusters are numbered by size, with Cl. 1 being the largest; see Table S3 for details. The directions of the routes followed by the clusters are indicated by arrows. The approximate position of the surface of the membrane is shown by a dashed line.
Due to the application of a randomly oriented force to the CoM of the ligand, the RAMD approach probes the enzyme for tunnel or channel opening induced by an accelerated small molecule, rather than sampling all the dynamic motions associated with the unbinding process. A total of 30 or 45 RAMD simulations were carried out per system. To achieve appropriate sampling of the egress routes of BZP and NZP within a maximal RAMD trajectory length of 4 ns, the random force magnitude had to be 2-fold higher for the closed than for the open CYP 2B4 conformation. This indicates that the compounds have a longer residence time in the closed conformation, even though the percentage of RAMD trajectories in which no egress was observed is higher in the open than in the closed CYP 2B4 conformation (see Figure 7a, Table S5). The distribution of egress routes differs markedly between the two conformational states but not between the substrate BZP and the product NZP. This suggests that the ligand entry and egress route is determined largely by the conformational state of CYP 2B4 rather than by the reaction stage of the ligand (substrate/product). This observation is further supported by the results for the inhibitor BIF, for which RAMD simulations were only carried out for the open CYP 2B4 conformation because it is bulkier than BZP or NZP. Despite requiring a higher random force magnitude for egress (8 rather than 6 kcal/mol/Å), the distribution of BIF egress routes from the open conformation is similar to that of the other ligands.
For the open conformation, the dominant route for all three compounds is pathway 2c, passing through the interface between the BC loop and the I and G helices, and leading to ligand exit close to or into the phospholipid bilayer. Pathway 2ac was also often observed and partially overlapped spatially with pathway 2c due to the rearrangement of the BC-loop region in the open conformation relative to the closed conformation. While the merging of channel 2 subclasses in "open" CYPs has previously been described (Yu et al., 2013), we distinguished between pathways 2c and 2ac in these simulations by assigning trajectories in which the ligand formed transient contacts with the I-helix during egress to pathway 2c. The highly hydrophobic nature of the three compounds (QPlogPo/w = 3.8 for BZP and NZP, and 4.8 for BIF) indicates that they tend to be located within the microsomal membrane rather than in the cytosol. Thus, the observed egress via pathway 2c or 2ac toward the membrane indicates the likely dominant route by which substrates and inhibitors enter and products leave the active site of CYP 2B4.
For closed CYP 2B4, in contrast, BZP and NZP tended to egress mostly via pathway 2e, going through the BC loop and exiting above the membrane surface. While egress via pathway 2c toward the membrane was occasionally observed, a significant portion of the trajectories had ligands exiting via pathway 1 on the membrane-opposed face of the protein. The broader distribution of egress routes from the closed conformation, combined with the need for a higher magnitude of the random force and the lack of routes for entry and exit from/toward the membrane, indicates that directed and specific substrate uptake and product release to/from the membrane occur predominantly in the open conformation of CYP 2B4. Conversely, the adoption of the closed conformation may serve to increase ligand residence time and retain the substrate or inhibitor in proximity to the catalytic heme.
| Simulation of re-entry of bifonazole into CYP 2B4 reveals binding to an additional transient subpocket in the active site cavity
The observation of generally high mobility of the ligands within the binding site of the open conformation of CYP 2B4 during the conventional MD simulations led us to simulate the re-entry of a compound into the active site of CYP 2B4 after almost complete egress in a RAMD simulation. Considering that, in open CYP 2B4, the surface lining the channels 2c/2ac leading toward the catalytic heme has large patches of hydrophobic residues (see Figure 1), we selected the inhibitor BIF for most of these re-entry simulations due to its longer residence time in RAMD simulations, its bulky nature, and its high hydrophobicity (see Table S6).
As a starting point for subsequent conventional MD simulations, a frame was selected from near the end of a RAMD simulation trajectory (at 1.288 ns, before full egress at 1.374 ns) in which BIF exited via channel 2c, when all but a single, unspecific contact with residue K225 had been lost and the compound had started to immerse into the phospholipid bilayer. From this starting position, three trajectories with randomly initiated velocities were generated. In two of these, BIF moved back into the funnel toward the active site. However, even after simulating each of these two replicas for 770 or 834 ns, respectively, BIF did not reach a position adjacent to the heme. Instead, in both replicas, BIF moved to a different position located between the I-helix on one side and the F- and G-helices on the other side, where it remained for at least the remaining ≈720 ns of the trajectories, see Figure 8a. In the third simulation, there was a complete unbinding of BIF after 60 ns. In addition, in several other simulations starting with a ligand position from the end of a RAMD egress trajectory, no re-entry was observed (see Table S6). Nevertheless, the observation of two ligand re-entries supports a 2-way route for substrate access and product egress via pathway 2c, although it does not exclude the use of separate 1-way routes for substrate entry and product release (Schleinkofer et al., 2005).
The additional subpocket observed in the BIF re-entry simulations was not present in the initial protein structure, and its formation required reorientation of several residues including F296 (I-helix), which rotated toward the active site, H172 (E-helix), and L201 and F202 (both in the F-helix). Notably, the opened subpocket is more hydrophobic and tightly packed than the region next to the heme (see Figure S8) and can thus coordinate the hydrophobic moieties of BIF better. From a superposition of all 21 available crystal structures of CYP 2B4 and related proteins using the PDBe-KB website (Varadi et al., 2022), this subpocket is only occupied in one structure (PDB ID: 2BDM; Zhao et al., 2006), which has a relatively open, intermediate conformation without contact between the B′ and G helices, and in which three BIF molecules are bound to each CYP 2B4 in the crystallographic unit cell. In addition to one BIF molecule in the canonical active site, there are two alternative binding sites that were suggested to be crystallographic artifacts due to the high inhibitor concentrations used for crystallization (see Figure 8b). One of these sites is located at the interface between two crystallographic unit cells and is at the same location as the poses identified in our simulations. Thus, the present simulations indicate that this subpocket can be formed under non-crystallographic conditions and that it could be transiently occupied upon ligand binding.
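A simple way to quantify when this transient subpocket is occupied in such trajectories is to track the minimum distance between the ligand and the subpocket-lining residues named above (F296, H172, L201, F202). The sketch below does this with MDAnalysis; the contact cutoff, file names, and ligand residue name are assumptions, not values from this work.

```python
# Sketch: fraction of frames in which BIF contacts the subpocket-lining
# residues (H172, L201, F202, F296). Cutoff and file names are assumptions.
import numpy as np
import MDAnalysis as mda
from MDAnalysis.analysis import distances

u = mda.Universe("cyp2b4_bif.prmtop", "reentry.nc")  # hypothetical input files
bif    = u.select_atoms("resname BIF")               # hypothetical resname
pocket = u.select_atoms("resid 172 201 202 296 and not name H*")

occupied = []
for ts in u.trajectory:
    dmin = distances.distance_array(bif.positions, pocket.positions).min()
    occupied.append(dmin < 4.0)                      # 4 A contact cutoff

print(f"subpocket contact in {100 * np.mean(occupied):.0f}% of frames")
```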
A comparison of the last frames of the two simulations showing re-entry with this crystal structure reveals that the orientation of BIF varies. It inserts into the interface between the F, G, and I helices with its biphenyl moiety in the crystal structure, and also in the Replica I simulation after some reorientations. On the other hand, it inserts stably with its monophenyl moiety in the Replica II simulation (see Figure 8b). During the initial binding of BIF to the subpocket in Replica I, a pose similar to that in Replica II was generated, yet the inhibitor remained very unstable and eventually reoriented to bury the biphenyl instead (see Figure 8c). The apparent difficulty of opening the transient subpocket wide enough for the biphenyl moiety in Replica II is indicative of tighter packing of the two helices in the simulations in the absence of the Cymal-5 detergent crystallization aid, of which one molecule is located near the G helix and forms additional contacts with BIF in the crystal structure. The conformation of the subpocket also differs between the final frame of Replica I and the crystal structure, with the pocket being more open in the crystal structure and BIF located deeper inside. Thus, the orientation of BIF at this site in the crystal structure may be influenced by the Cymal-5 compound and by the formation of contacts to the protein in the neighboring unit cell. However, it is also possible that the interface region between the helices may widen on timescales beyond those of our atomistic simulations. Gay et al. (2009) compared the crystal structures of the complexes of CYP 2B4 with three differently sized inhibitors, including BIF, identifying major structural rearrangements of the protein, including in the region of the additional subpocket. Remarkably, our simulations show the formation of this subpocket by an induced-fit mechanism during simulations that were based on the open conformation of ligand-free CYP 2B4 (PDB ID: 1PO5), in which the subpocket region between the F, G, and I helices is completely closed. Unlike in the crystal structure of the BIF-CYP 2B4 complex, the simulated protein did not display a rotation of the I-helix upon BIF binding but was still able to accommodate BIF in the additional subpocket, indicating that this pocket is not only present in the BIF-bound intermediate conformation of the 2BDM crystal structure.
The agreement between the crystal structure of an intermediate state of CYP 2B4 (PDB ID: 2BDM), which has BIF bound at this subpocket, and the simulations suggests that this subpocket may act as a transient binding site or participate in allosteric regulation. Binding at this region could be part of the substrate uptake process, with the substrate waiting in this position until the closing motion of the protein brings it toward the heme for reaction. Alternatively, it is possible that the binding of a compound at this site has an allosteric role, reducing the solvent-filled volume of the active-site access tunnel and providing additional hydrophobic contact points for a second molecule entering the active site from the membrane. To test the latter hypothesis, we performed further conventional MD simulations with one BIF molecule in the additional pocket and a second BIF molecule positioned, as in the re-entry simulations described above, at the opening of channel 2c (data not shown). In almost all cases, the second BIF remained at the entry site, forming hydrophobic contacts with the BIF inside the additional subpocket, indicating a stabilizing effect of the inhibitor in the additional subpocket on the second entering inhibitor molecule.
Cooperative mechanisms of CYP catalysis have been widely discussed, most notably for CYP 3A4 (Davydov & Halpert, 2008). Simultaneous binding of multiple ligands, not only to the active site but also to additional sites, may assist in keeping the substrate in position for turnover by enhancing packing in the wider and less specific binding sites of more promiscuous CYP isoforms (Davydov & Halpert, 2008). An additional "peripheral" pocket in CYP 2B4 adjacent to the subpocket identified here, also located behind the F and G helices, has been investigated by Jang et al. (2015), who found that mutation to tryptophan of F202 or I241, which form the inner surface of this pocket and of the additional subpocket at which BIF binds in our simulations, led to impaired turnover of 7-ethoxy-4-(trifluoromethyl)coumarin and 7-benzyloxyresorufin. A third mutation, F195W, at the entry of the "peripheral" pocket but far away from BIF in our simulations, had no such effect. Thus, a role in CYP 2B4 catalysis for the additional subpocket occupied by BIF in our simulations seems likely.
However, Zhao et al. (2006) did not detect the binding of multiple molecules of BIF to CYP 2B4 by isothermal titration calorimetry, leading them to surmise that the two additional sites in the crystal structure (PDB ID: 2BDM) that are not located in the active site are low-affinity binding sites for BIF.
Thus, it is possible that the observed binding of BIF to the additional subpocket in CYP 2B4 may not represent a mechanism in which multiple ligands bind simultaneously, but rather the first step in a complex ligand-uptake mechanism. For instance, Isin and Guengerich (2006) proposed a three-step substrate binding process for CYP 3A4 in which an initial noncatalytic "encounter" complex between the substrate and enzyme is rapidly formed, followed by conformational transitions relocating the substrate closer to the heme and facilitating catalysis. Similar results have been obtained by Davydov et al. (2008) for bacterial P450eryF with the fluorescent substrate Fluorol-7GA using FRET and pressure perturbation. The observed rapid association of BIF from a membrane-bound exit/entry point to the additional subpocket, followed by relatively stable occupancy of this binding mode over 700 ns of simulation time, points toward a similar mechanism in CYP 2B4.
More recently, Hackett simulated the entry of the substrate testosterone from a membrane into membrane-bound CYP 3A4 using accelerated MD and adaptive biasing force simulations. Ligand ingress via pathway 2 (2a, 2ac, or 2d/f) was observed, with a 2-step entry process involving an intermediate state in which testosterone was stabilized by aromatic residues along the entry pathway located between the I-helix and the FG and BC loops (Hackett, 2018). Furthermore, it was suggested that both the intermediate and active site poses may be simultaneously occupied at high substrate concentrations. This study also identified F304 in CYP 3A4 to be important in the formation of the metastable state. This residue has previously been found to be important for cooperativity in CYP 3A4 (Domanski et al., 1998) and is homologous to F297 in CYP 2B4. A conformational change of this residue is essential for the observed opening of the additional subpocket in our simulations. Among the major drug-metabolizing CYPs (1A2, 2A6, 2C9, 2D6, 2E1, and 3A4), this position is only conserved in CYPs 2C9 and 3A4, but it is also present in CYP 2B6, the human homolog of CYP 2B4. While the study by Hackett was done on a different CYP isoform in a different (closed) conformational state and described a different entry channel, the concept of a two-step entry mechanism starting with a membrane-bound ligand is similar.
F297 in CYP 2B4 has been found to display a degree of flexibility within crystallographic structures of closed CYP 2B4, depending on the ligand bound in the active site (Halpert, 2011). The F297A mutation resulted in roughly 1.6- and 1.2-fold decreases, respectively, of the Km and kcat values, yet no notable change in catalytic efficiency (Shah et al., 2013). This was, however, linked to the role of this residue in coordinating the substrate in the active site in the closed conformation, as a change between an inward- and an outward-facing orientation of F297 in CYP 2B4 occurred upon opening of the peripheral pocket.
To the best of our knowledge, our simulations provide the first observation of unbiased ligand re-entry into the active site from RAMD-derived egress points. In previous work applying a similar approach to investigate ligand orientations within the membrane after egress from CYP51, re-entry events were not observed (Yu et al., 2016). This approach of simulating ligand entry in conventional MD simulations using input structures generated from RAMD simulations can be expected to be applicable to other systems.
The entire process, comprising the RAMD trajectory chosen as a starting point for this approach, the selected point of last contact, and the subsequent re-entry simulation (Replica I), is depicted in Movie S1.
| CONCLUSIONS
Our integrative simulation study, employing coarse-grained and all-atom conventional MD simulations complemented by RAMD simulations, provides a detailed picture of the conformational dynamics and ligand interaction mechanisms of CYP 2B4 immersed in a phospholipid bilayer. The results elucidate the protein conformation-dependent modulation of ligand and solvent pathways and the enzyme's interaction with the lipid membrane, all of which are important for the enzyme's biological function.
| Dependence on globular domain conformation of the positioning of CYP 2B4 in the membrane
The CG simulations revealed that, depending on whether it has a conformation with an open or a closed active site cavity, the globular domain of CYP 2B4 adopts one of two major orientations when interacting with the lipid bilayer, with the BC loop and F′-G′ helices playing a central role in this interaction. Furthermore, we observed minor orientations, representing intermediate arrangements between these two major membrane-bound configurations.
| Dependence of ligand and solvent access and egress on globular domain conformation of membrane-bound CYP 2B4
We carried out a detailed analysis of protein permeability and ligand passage from the active site of the membrane-bound full-length CYP 2B4 by analyzing: (1) tunnel occurrence in conventional MD simulations with CAVER, (2) passage of water molecules through preformed and induced tunnels in conventional MD simulations with AquaDuct, and (3) ligand egress through existing and induced tunnels in RAMD simulations, which enhance ligand motion and thereby accelerate ligand egress. The results show the influence of the conformation of the globular domain on the egress of both water molecules and substrate, product, and inhibitor molecules from the active site. The open state of the enzyme supports a range of ligand egress routes, including those leading directly to the lipid bilayer, indicating that the open state supports both substrate entry and product egress, as well as being highly permeable to water molecules. In contrast, the closed state prolongs the residence time of the ligands, thereby enhancing the efficiency of catalysis by maintaining substrates in proximity to the catalytic heme center and channeling the access and egress of water via the solvent tunnel.
We observed the re-entry of the hydrophobic inhibitor BIF from an exit point to an additional transient hydrophobic subpocket in the CYP 2B4 active site cavity in conventional MD simulations started from a RAMD simulation snapshot with the ligand outside the protein. These results support ligand uptake/release by a single 2-way route and suggest a functional role of the transient subpocket, in a two-step entry mechanism or as an allosteric site. These insights advance our understanding of CYP 2B4's functionality and have broader implications for the pharmacological role of CYP enzymes in drug metabolism.
| Preparation of full-length CYP 2B4 models
Full-length models of CYP 2B4 were constructed using the following high-resolution crystal structures deposited in the Protein Data Bank (PDB): "open" ligand-free CYP 2B4 (PDB ID: 1PO5, Scott et al., 2003; 1.60 Å resolution; N-terminal residues 3-21 truncated) and "closed" CYP 2B4 bound to 4-(4-chlorophenyl)imidazole (PDB ID: 1SUO, Scott et al., 2004; 1.90 Å resolution; N-terminal residues 3-21 truncated). The globular domains from these experimental structures were used for homology modeling based on the UniProt sequence P00178, in which the missing residues (1-48) were built using Modeller v9.23 (Eswar et al., 2007), while the secondary structure prediction tool TMpred (Hofmann & Stoffel, 1993) was used to model the missing transmembrane helix (residues 1-20). For both the "open" and "closed" CYP 2B4 models, 10 different configurations were generated with varying orientations of and distances to the lipid bilayer (Figure S1). After removal of the heme cofactor, each model was converted to the CG Martini representation and inserted into a pre-equilibrated CG POPC bilayer, as described previously (Cojocaru et al., 2011; Mukherjee et al., 2021).
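For orientation, a minimal Modeller script of the kind used for such terminus building is sketched below; the alignment file, model codes, and number of models are hypothetical placeholders, and only the use of the automodel class with retained heteroatoms reflects the procedure described here.

```python
# Minimal Modeller sketch (v9.x API) for building missing N-terminal residues
# on a crystal-structure template. Alignment and code names are placeholders.
from modeller import environ
from modeller.automodel import automodel

env = environ()
env.io.hetatm = True                       # keep the heme from the template

mdl = automodel(env,
                alnfile='cyp2b4_full.ali', # alignment of P00178 vs. template
                knowns='1suo',             # template globular domain
                sequence='cyp2b4_full')    # full-length target entry
mdl.starting_model = 1
mdl.ending_model = 5                       # build several candidate models
mdl.make()
```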
| CG MD simulation
A total of 20 individual CG MD simulations were performed for 7-12 μs each, using the GROMACS (Abraham et al., 2015) v5.0.6 software package with the Martini 2.2 (de Jong et al., 2013) CG force field for the protein, lipids, and water, under periodic boundary conditions. Harmonic restraints were applied to the backbone particles of the protein, except for the flexible linker region (residues 21-50), with an elastic force constant of 500 kJ/mol/nm² and a distance cutoff of 5-9 Å to retain the overall secondary and tertiary structure during the simulations, based on the secondary structure information obtained from the DSSP (Kabsch & Sander, 1983) server. The steepest-descent algorithm was used for energy minimization for 10,000 steps with a maximum force of 5 kJ/mol/nm. A short equilibration of 20 ns with a time step of 20 fs was performed at a constant pressure of 1 bar at 310 K. The temperatures of the protein with POPC and of the solvent with ions were controlled separately with a velocity-rescale thermostat with a coupling constant of 1 ps. A Berendsen barostat was used to achieve constant pressure, using semi-isotropic pressure coupling with a compressibility of 3.0 × 10⁻⁴ bar⁻¹, a coupling constant of 3.0 ps, and a reference pressure of 1 bar. The long-range nonbonded interactions were computed using a reaction field (RF) with a dielectric constant of 15. The neighbor list was updated every 10 steps. The Verlet pair-list algorithm was used for calculating nonbonded forces with a cutoff of 1.1 nm and a verlet-buffer-tolerance (verlet-buffer-drift) value of 0.005 kJ/mol/ps. During the production simulations of around 7-12 μs, the pressure coupling method was switched to the Parrinello-Rahman barostat with a coupling constant of 15 ps, and the nonbonded interactions were calculated using an RF with a cutoff distance of 1.2 nm.
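The production settings quoted above map onto standard GROMACS .mdp options; the following sketch writes such a file from Python. The option names are standard GROMACS keys, but the thermostat group names and any value not stated in the text are assumptions.

```python
# Sketch: write the CG production .mdp options described in the text.
# Keys are standard GROMACS options; tc-grps names are assumed index groups.
mdp = {
    "integrator": "md",
    "dt": 0.02,                        # 20 fs CG time step
    "nstlist": 10,                     # neighbor list updated every 10 steps
    "cutoff-scheme": "Verlet",
    "verlet-buffer-tolerance": 0.005,  # kJ/mol/ps
    "coulombtype": "Reaction-Field",
    "epsilon_r": 15,                   # RF dielectric constant
    "rcoulomb": 1.2,                   # nm, production cutoff
    "rvdw": 1.2,
    "tcoupl": "v-rescale",
    "tc-grps": "Protein_POPC W_ION",   # assumed group names
    "tau-t": "1.0 1.0",
    "ref-t": "310 310",
    "pcoupl": "Parrinello-Rahman",
    "tau-p": 15.0,
    "compressibility": "3.0e-4",       # bar^-1
    "ref-p": 1.0,
}
with open("production.mdp", "w") as fh:
    for key, value in mdp.items():
        fh.write(f"{key:25s} = {value}\n")
```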
The CG MD simulation trajectories were analyzed for convergence by computing structural angle and distance parameters to track the orientation of the CYP 2B4 globular domain with respect to the membrane during the simulations, as previously described (Cojocaru et al., 2011; Mukherjee et al., 2021). Distributions were computed of the α and β angles and of the globular domain CoM-membrane CoM axial distance over the last 2 μs of the trajectories, when the positioning of CYP 2B4 in the bilayer was converged. Three representative structures were selected for conversion to atomic detail for AA MD. For the closed and open conformers, the representative structures were taken from the frames in the last 2 μs of the simulations that matched the average α and β angles over this time scale from all replicas of the respective CG simulation sets, see Table S1.
| AA MD simulation
The protein and lipid membrane in the three CYP 2B4 structures from the CG simulations were converted to atomic detail as previously described (Cojocaru et al., 2011; Mukherjee et al., 2021). The crystal structures of the open and closed globular domain conformations were superimposed on the converted structures to introduce the heme in the binding site. AA MD simulations were conducted using the AMBER ff14SB (Maier et al., 2015) force field for the protein, the LIPID14 (Dickson et al., 2014) force field for the POPC membrane, and GAFF parameters from Harris et al. (2004) for the ferric, low-spin, hexa-(water-)coordinated heme ("resting state").
The systems were immersed in a periodic box of TIP3P water molecules with a 150 mM ionic concentration. They were energy minimized using AMBER20 (Case et al., 2005), applying harmonic restraints with a force constant decreasing from 1000 to 0 kcal/mol/Å² on the heavy atoms of the protein, as described in previous studies (Cojocaru et al., 2011; Mukherjee et al., 2021). The NAMD 2.14 (Phillips et al., 2020) package was used for equilibration and production simulation runs. The NPAT ensemble was used during equilibration, with constant surface area, a constant pressure of 1 bar, and a constant temperature of 310 K. A time step of 1 fs was used for the initial 2.8 ns equilibration run with harmonic restraints with a force constant decreasing from 100 to 0 kcal/mol/Å². The pressure was controlled using the Nosé-Hoover Langevin piston method with an oscillation time of 100 fs and a damping time of 50 fs. The temperature was controlled by Langevin dynamics with a damping coefficient of 5.0 ps⁻¹. The system was equilibrated further for 12.5 ns without harmonic restraints, using a time step of 1 fs for the first 2.5 ns and a time step of 2 fs for the remaining 10 ns. The Nosé-Hoover Langevin piston method was used with an oscillation time of 200 fs and a damping time of 500 fs, and the temperature was kept constant by Langevin dynamics with a damping coefficient of 1.0 ps⁻¹. For the production simulations, the NPT isobaric-isothermal ensemble was used with a time step of 2 fs. The electrostatic interactions were calculated using the particle mesh Ewald (PME) method, and all bonds to hydrogen atoms were constrained using the SHAKE algorithm. The temperature was controlled by Langevin dynamics with a damping coefficient of 0.5 ps⁻¹ applied to non-hydrogen atoms at 310 K. Constant pressure was achieved using the Nosé-Hoover Langevin piston method with an oscillation time of 1000 fs and a damping time of 1000 fs.
The production runs of the three sets of simulations were analyzed using the same structural parameters as used for the CG MD trajectory analysis. The heme tilt angle was also monitored during the simulations, with the tilt defined as the angle between the plane defined by the four porphyrin nitrogens of the heme and the membrane normal (z-axis).
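As an illustration of this definition, the sketch below estimates the heme tilt angle per frame from the best-fit plane through the four porphyrin nitrogens; the heme atom names (NA/NB/NC/ND) and file names follow common force-field conventions and are assumptions.

```python
# Sketch: heme tilt angle = angle between the porphyrin-nitrogen plane and
# the membrane normal (z-axis). Atom and file names are assumptions.
import numpy as np
import MDAnalysis as mda

u = mda.Universe("cyp2b4.prmtop", "production.nc")   # hypothetical files
heme_n = u.select_atoms("resname HEM and name NA NB NC ND")

tilt = []
for ts in u.trajectory:
    xyz = heme_n.positions - heme_n.positions.mean(axis=0)
    normal = np.linalg.svd(xyz)[2][-1]      # plane normal: smallest-SV vector
    theta = np.degrees(np.arccos(abs(normal[2])))   # normal vs. z-axis
    tilt.append(90.0 - theta)                       # plane vs. z-axis

print(f"heme tilt = {np.mean(tilt):.1f} +/- {np.std(tilt):.1f} deg")
```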
| Secondary structure analysis
Residue-wise secondary structure classification was performed using the DSSP algorithm (Kabsch & Sander, 1983) with the "timeline" feature in VMD (Humphrey et al., 1996). Since the assignment of the heme and the covalently bound proximal cysteine to one "residue" caused this method to fail, and the cysteine is located within a long loop region, this residue was removed from the analyzed trajectory with CPPTRAJ (Roe & Cheatham, 2013). The analysis was then performed at time intervals of 200 ps.
| Analysis of heme interactions
Residue-wise interaction patterns of the heme and proximal cysteine were identified with the MD-IFP workflow (Kokh et al., 2020). Potential interacting residues were defined as those of the globular domain of CYP 2B4, and the IFP generation was set to include nonspecific interactions. The results from this analysis were combined with visual analyses of the trajectories using VMD to identify the hydrogen-bonding partners of the heme propionates. All hydrogen-bonding contacts, except those formed by the proximal cysteine residue, were recorded, and the minimal distances between any of the propionate oxygens and hydrogen-bond-competent atoms of the recorded residues were computed in a separate step using the MDAnalysis library (Michaud-Agrawal et al., 2011). All interactions were analyzed in steps of 200 ps.
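The distance step of this analysis can be reproduced with MDAnalysis as sketched below; the propionate oxygen names and file names are typical AMBER-style conventions and are assumptions, while the 3.3 Å threshold and the residue list are taken from the text and Figure 5.

```python
# Sketch: occupancy of propionate hydrogen-bond contacts using the 3.3 A
# threshold from the text. Atom names and files are assumed conventions.
import MDAnalysis as mda
from MDAnalysis.analysis import distances

u = mda.Universe("cyp2b4.prmtop", "production.nc")   # hypothetical files
prop_o = u.select_atoms("resname HEM and name O1A O2A O1D O2D")
partners = {r: u.select_atoms(f"resid {r} and name N* O*")
            for r in (98, 121, 125, 369, 430)}       # residues from Figure 5

counts = {r: 0 for r in partners}
n_frames = 0
for ts in u.trajectory[::100]:                       # stride ~ analysis step
    n_frames += 1
    for r, sel in partners.items():
        if distances.distance_array(prop_o.positions, sel.positions).min() < 3.3:
            counts[r] += 1

for r, n in counts.items():
    print(f"residue {r}: contact in {100 * n / n_frames:.0f}% of frames")
```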
| AquaDuct and CAVER analysis
Protein tunnels in the trajectories of the closed conformation of CYP 2B4 were identified with CAVER 3.0 (Chovancova et al., 2012; Pavelka et al., 2016). Analyzing the open trajectory with CAVER did not yield meaningful results, as the protein surface was detected as being formed by the heme residue itself and the open "funnel" leading toward the active site was classified as the protein surroundings.
For the closed conformation, a trajectory file of the last 100 ns of each trajectory, with frames saved at intervals of 20 ps, was prepared. Each frame of this trajectory was then aligned to the initial frame by superposition of the globular domain residues and exported as a PDB-format file for CAVER. CAVER was then run with default parameters, using a probe radius of 1.4 Å and a shell radius and shell depth of 4 Å. The starting points were defined as the heme iron and heme alpha-nitrogen (NA) atoms.
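The frame preparation described here (alignment on the globular domain, export of PDB snapshots) can be scripted with MDAnalysis as sketched below; the file names and output directory are placeholders, and CAVER itself is then run separately on the exported frames.

```python
# Sketch: align the last-100-ns frames on the globular domain of the first
# frame and export them as PDB files for CAVER. File names are placeholders.
import os
import MDAnalysis as mda
from MDAnalysis.analysis import align

top, trj = "cyp2b4.prmtop", "closed_last100ns.nc"    # hypothetical files
mobile, reference = mda.Universe(top, trj), mda.Universe(top, trj)
reference.trajectory[0]                              # align to first frame

align.AlignTraj(mobile, reference,
                select="name CA and resid 51:492",   # globular domain
                filename="aligned.dcd").run()

aligned = mda.Universe(top, "aligned.dcd")
sel = aligned.select_atoms("protein or resname HEM")
os.makedirs("caver_in", exist_ok=True)
for i, ts in enumerate(aligned.trajectory):
    sel.write(f"caver_in/frame_{i:04d}.pdb")
```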
Water routes in the trajectories of closed and open CYP 2B4 were analyzed with the AquaDuct software (v1.0.11) (Magdziarz et al., 2020). Trajectory files of the last 100 ns of each simulation were generated, including water molecules, ions, and the phospholipid bilayer, with snapshots at intervals of 6 ps. These trajectories were then centered on the catalytic heme-cysteine residue with GROMACS 2020. The AquaDuct analysis was mostly run with default parameters: the "scope" was defined as the globular domain of CYP 2B4 (residues 20-492) and the "object" was defined as water molecules within 5 Å of the iron atom of the heme. The option "scope_everyframe" was inactivated due to centering issues, and clusters were manually combined in a second run of the program to yield intuitive and biologically meaningful insights. An additional recursive clustering using the "birch" method was employed in the analysis of the open conformation to subdivide a large cluster into two. For this, the "recursive_threshold" parameter was set to <0.5.
The pose to be used in subsequent AA MD simulations was selected by requiring (1) a good docking score, (2) the reactive methyl group of BZP to be close to the heme iron, and (3) the pose to be close to the I-helix, where most ligands form contacts in crystal structures of CYP 2B4-ligand complexes. Accordingly, poses #2 (closed) and #8 (open) were selected for further simulation (see Table S4, Figure S7).
Due to the very high similarity between the two compounds, NZP was inserted into the active sites of the closed and open crystal structures by superposition on the selected binding poses of BZP. Since docking of BIF with AutoDock Vina did not yield satisfactory results, and due to the availability of a crystal structure of BIF bound to an intermediate conformation of CYP 2B4 (PDB ID: 2BDM; Zhao et al., 2006), this structure was used as a template for the binding pose of BIF. The ligand was inserted into the open crystal structure (PDB ID: 1PO5) by superimposing the two crystal structures on their heme cofactors (Kokh et al., 2020; Lüdemann et al., 2000a).
For the subsequent MD simulations, GAFF force field parameters for the three compounds were derived with the Antechamber program in AmberTools (Case et al., 2005). RESP partial atomic charges were derived from quantum-chemical calculations using Gaussian09 (Frisch et al., 2013). For this, the structures of the compounds were geometry optimized at the B3LYP level using a 6-31G* basis set, and the Hessian was then computed (via vibrational frequencies using the freq keyword) to rule out saddle-point geometries. Subsequently, molecular electrostatic potentials were computed for the optimized geometries at the Hartree-Fock level with a 6-31G* basis set using the iop(6/50=1) and pop=mk keywords. The docked complexes were energy minimized and then equilibrated to convergence in short MD simulations of 100-200 ns in NAMD using the protocol described above.
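As a hedged illustration of the ESP step, the helper below writes a Gaussian09 input with the keywords quoted above. The geometry source, charge/multiplicity, and file names are placeholders, and depending on the Gaussian version, iop(6/50=1) may additionally require the name of the output ESP file after the molecule specification.

```python
# Sketch: generate a Gaussian09 input for the ESP calculation used in RESP
# fitting (HF/6-31G*, pop=mk, iop(6/50=1), as quoted in the text).
def write_esp_input(name, charge, multiplicity, atoms):
    """atoms: list of (element, x, y, z) tuples from the optimized geometry."""
    with open(f"{name}_esp.gjf", "w") as fh:
        fh.write(f"%chk={name}.chk\n")
        fh.write("#P HF/6-31G* iop(6/50=1) pop=mk\n\n")
        fh.write(f"{name} ESP for RESP fitting\n\n")
        fh.write(f"{charge} {multiplicity}\n")
        for element, x, y, z in atoms:
            fh.write(f"{element:2s} {x:12.6f} {y:12.6f} {z:12.6f}\n")
        fh.write("\n")

# Example call with a placeholder geometry (neutral singlet):
write_esp_input("bzp", 0, 1, [("C", 0.000000, 0.000000, 0.000000)])
```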
| Random acceleration molecular dynamics simulations of ligand egress and re-entry
The egress of BZP, NZP, and BIF from the CYP active site was simulated with the RAMD method using a modified GROMACS 2020 version, GROMACS-RAMD 2.0, available at https://github.com/HITS-MCM/gromacs-ramd. Input structures were generated by extracting the final frames from the previous MD simulations run with NAMD (see above) after 113.3 ns (BZP, closed), 116.1 ns (NZP, closed), 217.7 ns (NZP, open), and 127.6 ns (BIF, open). Since BZP displayed very high mobility in the active site in the open conformation, an earlier frame at 20 ns, which more closely represented its binding pose, was used. GROMACS-compatible structure and topology files were generated by conversion using ParmEd (Shirts et al., 2017). All systems were equilibrated and subsequently sampled in two consecutive, unrestrained simulations in the NPT ensemble with a time step of 2 fs. The particle mesh Ewald method with a Verlet cutoff scheme (using a cutoff distance of 1.1 nm for the short-range neighbor list) was applied for computing long-range electrostatic interactions. Bonds to hydrogen atoms were constrained using the LINCS algorithm (Hess et al., 1997). The first simulation (equilibration 1) used the Berendsen thermostat (τ = 1.0 ps) and a semi-isotropic Berendsen barostat (τ = 5.0 ps, compressibility = 4.5 × 10⁻⁵ bar⁻¹) to heat the system to 310 K at a constant pressure of 1 bar. Three separate molecular groups, consisting of (1) the protein and ligands, (2) the phosphatidylcholine lipid bilayer, and (3) the water and ions, were used for thermal coupling. To achieve increased sampling of the bound state, the second equilibration of 20 ns was run with randomly initiated velocities. This simulation was repeated once (open: BZP/NZP) or twice (open: BIF; closed 2B4: BZP/NZP) with modified random number generation seeds. In these simulations, a semi-isotropic Parrinello-Rahman barostat (τ = 5.0 ps, compressibility = 4.5 × 10⁻⁵ bar⁻¹) and a Nosé-Hoover thermostat (τ = 1.0 ps) were used to equilibrate the systems to 310 K and 1 bar. Each resulting system was then subjected to 15 RAMD simulations in the NPT ensemble using parameters identical to those used in the previous simulations. The RAMD simulations were stopped after the ligand CoM had reached a distance of 60 Å from the heme iron atom of the CYP or after 4 ns of simulation time. The random force magnitude was varied according to the system simulated as follows: closed 2B4: 501 kJ/mol/nm (12 kcal/mol/Å); open 2B4: NZP: 250 kJ/mol/nm (6 kcal/mol/Å), BZP: 251 kJ/mol/nm (6 kcal/mol/Å), and BIF: 334 kJ/mol/nm (8 kcal/mol/Å). The randomly chosen direction of the force was evaluated every 50 simulation steps, and the direction was changed randomly if the ligand did not move further than the distance-traveled criterion of 0.0025 nm. The number of RAMD simulations per system depended on the results of the first two sets of 15 simulations, each of which was started from one of two preceding independent, conventional MD simulations of the bound state. If more than three unique egress routes (i.e., routes that were only sampled in one of the two sets) were observed, a third set of 15 trajectories was simulated. In addition, three sets of 15 trajectories were generated for BIF, despite high agreement between the first two sets, so as to yield more numerous and diverse poses at the point of last contact for the subsequent simulations of the re-entry of BIF into the protein.
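For bookkeeping over many RAMD replicas, the dissociation criterion quoted above (ligand CoM 60 Å from the heme iron, 4 ns cap) is easy to re-check per trajectory, as sketched below with MDAnalysis; the file patterns and the ligand residue name are placeholders.

```python
# Sketch: report the first frame at which the ligand CoM exceeds the 60 A
# RAMD dissociation threshold. File patterns and resnames are placeholders.
import glob
import numpy as np
import MDAnalysis as mda

def egress_time(top, trj, lig_resname="BZP", threshold=60.0):
    u = mda.Universe(top, trj)
    lig = u.select_atoms(f"resname {lig_resname}")
    fe = u.select_atoms("resname HEM and name FE")
    for ts in u.trajectory:
        if np.linalg.norm(lig.center_of_mass() - fe.positions[0]) > threshold:
            return ts.time            # ps; ligand counted as dissociated
    return None                       # no egress within the 4 ns cap

for trj in sorted(glob.glob("ramd_closed_bzp/rep*.xtc")):
    print(trj, egress_time("cyp2b4_bzp.tpr", trj))
```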
The ligand egress routes were analyzed by visual inspection using VMD 1.9 (Humphrey et al., 1996).
Entry of BIF into CYP 2B4 was simulated by extracting frames from the corresponding RAMD trajectories at a point where some of the last contacts between the protein and the ligand were still formed. These frames were then subjected to conventional MD simulations using parameters identical to those of the RAMD simulations (see above).
FIGURE 1 Crystal structures of CYP 2B4 in (a) closed (PDB ID: 1SUO; Scott et al., 2004) and (b) open (PDB ID: 1PO5; Scott et al., 2003) conformations. Important secondary structure elements are highlighted in color on the cartoon representations of the protein. The catalytic heme is displayed in blue stick representation. In the upper panels, the missing N-terminal transmembrane helix is indicated by a gray cylinder and the expected location of the surface of the membrane is indicated by a dashed line. These are absent from the lower panels, which show the view from the membrane. The protein surface is colored by hydrophobicity according to the residue-wise Eisenberg hydrophobicity scale. In the open conformation (b), a hydrophobic funnel-like opening toward the membrane is visible as a continuous red area.
FIGURE 2 Positioning of the globular domain of CYP 2B4 with respect to the membrane bilayer in coarse-grained (CG) molecular dynamics simulations. (a) Representative structures extracted from the CG simulations of closed (left) and open (right) CYP 2B4 (see Section 4 for details). Tilt angles are defined as the angles between the z-axis perpendicular to the membrane and predefined vectors: α angle, for the vector between the centers of mass (CoM) of the backbone atoms of the first four and last four residues of the I-helix; β angle, for the vector between the CoMs of the backbone atoms of the first four residues of the C-helix and the last four residues of the F-helix; N-terminal transmembrane (TM) tilt angle, for the vector between the CoMs of the backbone atoms of the first and last four residues of the TM helix. Phosphate atoms of the phosphocholine are shown as orange spheres to represent the phospholipid bilayer. (b) Probability densities of the α and β angles and of the axial distance of the CoM of the globular domain to the center of the lipid bilayer over the converged last 2 μs of all the CG simulations for the closed (red) and open (blue) conformations of the CYP 2B4 globular domain.
2.2 | All-atom MD simulations of closed, open, and alternative-open states of CYP 2B4 reveal different interactions with the membrane

Due to the elastic network model applied to the globular domain and TM helix, their overall conformations were retained during the CG simulations. To sample the motions within the domains as well as in the flexible regions of the protein, AA MD simulations were performed starting from three structures selected from the CG simulations: closed, open, and alternative-open. Considering the converged α and β angle values, two of these structures represented the predominant arrangements observed in the CG simulations for the closed and open globular domain conformations, and an additional structure (alternative-open) was taken from the CG simulations with the open conformation, showing α and β angles between those of the predominant configurations of the closed and open conformations.

FIGURE 3 Time evolution of the structure of CYP 2B4 and the positioning of the globular domain and the N-terminal transmembrane (TM) helix with respect to the lipid bilayer during all-atom molecular dynamics simulations of CYP 2B4 with three initial conformations of the globular domain: closed (yellow), open (cyan), and alternative-open (green). The Cα RMSD values, calculated with respect to the initial frame of the production run, are shown for (a) the globular domain (residues 51-492), (b) the heme+Cys436, (c) the BC loop (residues 97-118), (d) the FG loop (residues 208-229), and (e) the linker between the TM helix and the globular domain (residues 21-50). Water contacts to the heme ligand, quantified by the number of water molecules within 5 Å of the heme, are given in (f). The α (g), β (h), TM helix tilt (i), and heme tilt (j) angles are defined in Figure 2.
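Figure 3f quantifies heme hydration as the number of water molecules within 5 Å of the heme in each frame. A minimal per-frame counting sketch using an MDAnalysis updating selection is shown below; the file names are placeholders, and the water residue/atom names (TIP3/OH2 here) depend on the water model and force field actually used.

```python
import MDAnalysis as mda

# Hypothetical file names -- substitute the actual topology and trajectory.
u = mda.Universe("cyp2b4_open.psf", "cyp2b4_open.dcd")

heme = u.select_atoms("resname HEM")  # heme resname may differ between force fields
# Updating selection: re-evaluated at every frame of the trajectory.
waters = u.select_atoms("resname TIP3 and name OH2 and around 5.0 group heme",
                        heme=heme, updating=True)

counts = [waters.n_atoms for ts in u.trajectory]
print(f"mean water oxygens within 5 A of heme: {sum(counts) / len(counts):.1f}")
```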
The B′ and F′-G′ regions were more flexible in the open and alternative-open structures, with RMSD values of these parts in the 4-8 Å range, whereas the closed structure showed stable B′ and F′-G′ regions with RMSD values around 2 Å; see Figure 3c,d. The orientations of the globular domain in the lipid bilayer differed significantly between the three conformers, as the closed CYP 2B4 showed elevated α and β angles compared with the open and alternative-open structures; see Figure 3g,h. The tilt of the globular domain relative to the membrane remained lower in the open and alternative-open structures, with the α angle around 70°, than in the closed form, with α around 100°. The β angle was lowest in the open form, converging at around 74°. The closed form showed a much higher β angle of 132°, and the alternative-open form converged to an intermediate value of 98°; see also Table . The most notable difference in the alternative-open conformation was the positioning of the F′-G′ helical region, with a distance between the A′ and F′ helices of around 30 Å, whereas this distance was about 15 Å in the open and closed conformations. The two monitored distance pairs for the B′ loop, to the G helix and to the C-terminal loop, fluctuated at elevated levels above 20 Å in both the open and alternative-open states, whereas in the closed state the distances fluctuated less and remained in the 10-15 Å range. Overall, the B′ loop and F′-G′ helices, located near the substrate binding pocket on the distal side of the heme, had shorter distances to the respective opposing side of the pocket in the closed conformation than in the open and alternative-open conformations, which showed more flexible and diverse arrangements of these moieties.

FIGURE 4 Structural adjustments of CYP 2B4 in the membrane bilayer environment during all-atom molecular dynamics simulations. (a) Snapshots of the last frames of the CYP 2B4 trajectories started from the closed, open, and alternative-open structures. A′ helix in orange, B′ region in red, C helix in cyan, F′ helix in magenta, G helix in green, I helix in yellow, and C-terminal loop in blue. The approximate position of the membrane surface is indicated by dashed lines. (b-h) Time evolution of distances characterizing the structures during the simulations of the open, alternative-open, and closed structures in cyan, green, and yellow, respectively. Center-of-mass (CoM) axial distances are shown between the lipid bilayer and (b) the globular domain (residues 51-492), (c) the BC loop (residues 97-118), (d) the F′-G′ helix (residues 208-229), and (e) the linker between the N-terminal transmembrane helix and the globular domain (residues 21-50). CoM distances are also shown between (f) the A′ helix (residues 52-55) and the F′ helix (residues 213-218), (g) the B′ region (residues 102-106) and the G helix (residues 231-235), and (h) the B′ region (residues 102-106) and the C-terminal loop (residues 476-478).
and Figure S5. The closed conformation of CYP 2B4 displayed the most stable conserved hydrogen-bond interactions, with the heme propionates forming stable contacts with W121, R125, and H369 throughout the simulation. Other hydrogen-bonding residues include R98, to which a heme propionate established contact only after a local conformational change, and S430, which made occasional contacts. The open conformation showed the broadest range of interactions, with many alterations over the course of the trajectory but maintaining stable contacts of the

FIGURE 5 Heme-interaction patterns of the closed, open, and alternative-open states of CYP 2B4 during the all-atom molecular dynamics simulations. Residues that form hydrogen-bond interactions with the heme propionates in more than 5% of the analyzed frames are colored by percentage occupancy (according to a distance threshold of 3.3 Å). The heme is displayed in a white carbon stick representation with the propionate oxygens as spheres.
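The percentage occupancies reported in the Figure 5 caption follow from a simple per-frame distance criterion (3.3 Å between a donor atom and a propionate oxygen). The sketch below applies that criterion with MDAnalysis; it deliberately omits any donor-hydrogen-acceptor angle check, and the file names, heme atom names, and donor atom list for W121, R125, and H369 are assumptions for illustration.

```python
import numpy as np
import MDAnalysis as mda
from MDAnalysis.lib.distances import distance_array

u = mda.Universe("cyp2b4_closed.psf", "cyp2b4_closed.dcd")  # hypothetical names

# Heme propionate oxygens; atom names depend on the force field parameterization.
prop_o = u.select_atoms("resname HEM and name O1A O2A O1D O2D")
# Candidate side-chain donor atoms of the residues named in the text.
donors = u.select_atoms("resid 121 125 369 and name NE1 NE NH1 NH2 ND1 NE2")

hits = np.zeros(donors.n_atoms)
n_frames = 0
for ts in u.trajectory:
    d = distance_array(donors.positions, prop_o.positions)
    hits += d.min(axis=1) <= 3.3  # the distance threshold quoted in the caption
    n_frames += 1

for atom, h in zip(donors, hits):
    print(f"{atom.resname}{atom.resid}:{atom.name} {100 * h / n_frames:.1f}% occupancy")
```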
The overall count of water molecule passages through the protein ("separate pathways" in AquaDuct) in the trajectories increased almost 40-fold, from 56 for the closed conformation to 1997 for the open conformation. This increase indicates that the active site cavity is in constant exchange with the solvent in the open CYP 2B4. The dominant entry-point clusters in the trajectory for the open conformation correspond to pathways 2c/2ac (Cl. 1, 48.6%), the water channel (Cl. 2, 44.2%), channel 2f (Cl. 3, 3.8%), and a channel similar to pathway 5 (Cl. 4, 1.5%); see Figure 6b. These results highlight how different the closed and open conformations of CYP 2B4 are regarding solvent accessibility and exchange. Previous analysis (Cojocaru et al., 2007) of three crystal structures of CYP 2B4 with
FIGURE 6 Open tunnels leading to and from the active site of CYP 2B4 in the conventional all-atom molecular dynamics trajectories of the closed (a) and open (b) conformations, as identified by analysis with CAVER and AquaDuct. The solvent channel detected by CAVER is shown by blue probe spheres in the protein structure from the final frame of the trajectory, and its properties are given in Table
FIGURE 7 Distribution of egress routes from the active site for benzphetamine (BZP), norbenzphetamine (NZP), and bifonazole (BIF) in RAMD simulations starting from the last snapshots of conventional molecular dynamics simulations of the closed and open conformations of CYP 2B4. (a) Relative occurrence of each pathway over all the trajectories. (b) Chemical structures of the three ligands and random force magnitudes applied during the RAMD simulations. (c) The most common egress routes. Snapshots of egressing BZP molecules are shown as magenta van der Waals spheres with black arrows indicating the direction of egress. Pathways 1 and 2e are shown for the closed conformation of CYP 2B4, whereas pathways 2ac and 2c are shown for the open conformation. The protein is shown in cartoon representation with the flanking secondary structure elements identifying the pathways colored, and the N-terminal transmembrane helix is indicated by a cylinder. The approximate position of the surface of the membrane is shown by a dashed line.
FIGURE 8 Conventional all-atom molecular dynamics simulation of the re-entry of bifonazole into CYP 2B4, resulting in binding at an additional transient subpocket. (a) Depiction of the re-entry process from superimposed simulation frames from the start of the Replica I simulation. Bifonazole (BIF) is colored according to time. Later frames, displaying the internal reorientation of BIF within the subpocket, are not shown. (b) Comparison of the positions of BIF in the last frames of the two simulations in which BIF re-entered CYP 2B4 and a crystal structure of CYP 2B4 in an intermediate conformation (PDB ID: 2BDM) with three BIF molecules bound at different positions. The simulated BIF molecules and the BIF in the crystal structure in the position of interest are colored magenta. A Cymal-5 compound interacting with this BIF in the crystal structure is colored pale pink. The other two BIF molecules in the crystal structure are shown in blue. The protein is shown in cartoon representation with the secondary structures of interest colored and labeled. Residues F202 and 297 are explicitly displayed in stick representation. The approximate position of the surface of the membrane is shown by a dashed line. (c) Reorientation of BIF in the additional transient subpocket during the simulations, as monitored by the distances between the three "edge" atoms of BIF (C8, C14, and N2) and the CZ atom of F202, a residue deep inside the subpocket. A dynamic depiction of the egress and subsequent re-entry is provided in Movie S1.
orientations and the respective opposite orientation, that is, the closed conformation of the globular domain adopting the major orientation of the open conformation and vice versa. Thus, in addition to one orientation for the open and one for the closed state, we selected one intermediate orientation, labeled alternative-open, from the CG simulations of the open CYP 2B4 for subsequent AA MD. In the AA MD of the closed, open, and alternative-open CYP 2B4, the geometric parameters characterizing the position of the globular domain on the membrane span the range of membrane orientations of microsomal CYPs observed previously, likely due to the high degree of flexibility of the CYP 2B4 globular domain, in which the active site conformations range from tightly closed to wide open.
The alternative-open conformer representative structure was selected from the open CYP 2B4 set to have intermediate values of the α and β angles between those of the open and closed conformations. The closed representative conformer is a frame from replica 10 of the CG simulation set initiated from the closed CYP 2B4 globular domain crystal structure; frames from replica 9 and replica 8 of the CG simulation set initiated from the open CYP 2B4 globular domain crystal structure were chosen for the open and alternative-open conformers, respectively. | 2024-04-23T13:08:03.174Z | 2024-07-12T00:00:00.000 | {
"year": 2024,
"sha1": "36a0a00cadc4810926774108c6648866a790ae79",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1002/pro.5165",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "fbdd280a2908a18c28731770110963a00041dfad",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
258367406 | pes2o/s2orc | v3-fos-license | Establishing new cutoffs for Cohen's d: An application using known effect sizes from trials for improving sleep quality on composite mental health
Abstract Objective Cohen's d conventional effect size cutoffs [small (0.2), medium (0.5), and large (0.8)] might not be representative of the reported distribution of effect sizes across the different areas of health. Effect size cutoffs might vary not only depending on the area of research, but also on the type of intervention and population; that is, they are context dependent. Therefore, we present strategies to redefine small, medium, and large effect sizes based on the 25th, 50th, and 75th percentiles, respectively. Methods We illustrate these techniques by applying them to 72 effect sizes, derived from 65 randomized controlled trials described in a recent meta-analysis (10.1016/j.smrv.2021.101556) of improving sleep quality on composite mental health. Such percentiles are equally distanced from the average effect size, as suggested by Jacob Cohen, and were checked for potential attenuation effects (via a weight selection model) and outliers (via OutRules). Results New cutoffs of −0.177, −0.329, and −0.557 were found for small, medium, and large effect sizes, respectively. Applying Cohen's effect size thresholds (0.2, 0.5, and 0.8) to trials of improving sleep quality on composite mental health might over-estimate effect sizes compared to the real-world context, especially around medium and large effect sizes.
effect (effect size of −0.53; 95% CI, −0.69 to −0.38). This interpretation of effect sizes is based on a rule of thumb where 0.2, 0.5, and 0.8 are considered small, medium, and large effect sizes (Cohen, 2013). A medium-sized effect was originally suggested by Jacob Cohen to represent the average effect for a field if too few studies were available to calculate a distribution of effects within that respective field, whereas small and large effects were supposed to be equidistant from this average effect (Cohen, 2013).
Cohen proposed a medium effect as an effect size observable to the naked eye, for example, a change in a treatment group that is clinically meaningful when compared with the control group. In other words, when a meta-analysis of improving sleep quality on overall composite mental health reports a Cohen's d of 0.53 (Scott et al., 2021), there is a 64.6% chance that a person picked at random from the treatment group will have a higher score than a person picked at random from the control group (probability of superiority). Furthermore, in order to obtain one more favorable outcome in the treatment group compared with the control group, we would need to treat 5.63 people on average. Cohen's guidelines were originally intended for use when effect size distributions (ESDs) are unknown (Cohen, 2013; Glass et al., 1981; Thompson, 2009). Hence, we here introduce an easy way to determine small, medium, and large effect sizes by calculating the ESD within the RCTs of improving sleep quality on composite mental health. The ESDs provided in this article can be applied as guidance for better planning of studies and for interpreting the magnitude of effects. Researchers can find the data and R code to perform the ESD analysis in the supplementary materials.
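The 64.6% probability of superiority quoted above follows from the standard conversion PS = Φ(d/√2) for two equal-variance normal distributions. A minimal check in Python is shown below (the article's own analysis code is in R); note that number-needed-to-treat conversions, such as the 5.63 quoted above, depend on additional assumptions (e.g., the control event rate), so only the PS conversion is reproduced here.

```python
from math import sqrt
from statistics import NormalDist

def prob_superiority(d: float) -> float:
    """Probability that a random treated subject scores above a random control,
    assuming two normal distributions with equal variance: PS = Phi(d / sqrt(2))."""
    return NormalDist().cdf(d / sqrt(2))

print(f"{prob_superiority(0.53):.3f}")  # ~0.646, matching the 64.6% quoted above
```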
| METHODS
To illustrate the approach, we use a recent meta-analysis of RCTs (Scott et al., 2021) that included 72 effect sizes with composite mental health as an outcome. The medium effect (50th percentile), depicting the average effect size, was calculated, as well as the small (25th percentile) and large (75th percentile) effect sizes, as they are equally distanced from the average effect size (Cohen, 1992). The same percentiles were used by Quintana (2017), who applied them to case-control studies, serving as inspiration for this work.
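A minimal sketch of the percentile-based cutoff calculation is given below (in Python, whereas the supplementary code is in R). The file name is a placeholder, and because effects favoring the intervention are negative in this meta-analysis, percentiles are taken on magnitudes and the sign restored afterwards; this ranking-by-magnitude step is an assumption about how the ordering was done.

```python
import numpy as np

# Placeholder file: one standardized mean difference (Cohen's d) per line.
d = np.loadtxt("effect_sizes.csv")

# Rank by magnitude and reattach the negative sign; otherwise, with negative
# effect sizes, the 25th percentile would correspond to the *largest* effect.
small, medium, large = -np.percentile(np.abs(d), [25, 50, 75])
print(f"small={small:.3f}, medium={medium:.3f}, large={large:.3f}")
```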
We applied the GOSH plot (i.e., a graphical display of study heterogeneity) to detect effect sizes that could overtly influence the ESD. In this approach, a series of meta-analyses based on all possible combinations of studies is conducted, making it detectable whether a single study or a distinct subgroup of studies influences the summary effect size estimate. Because of computational restrictions, 50,000 random subset models were used. A weight selection model was used to detect publication bias (see Vevea and Hedges (1995) for more information about the weight selection model). The weight selection model assumes that studies with non-significant p-values are less likely to be published than those with significant p-values; hence, the former studies are given greater weight in the model. A likelihood ratio test was used to assess whether the model adjusted for publication bias and the unadjusted model were significantly different, with a threshold of 0.1 considered according to Begg and Mazumdar (1994). Scott et al. (2021) reported the effect size of sleep-improving interventions before and after excluding 11 outlier effect sizes (i.e., −0.53 and −0.42, respectively); hence, we also calculated effect sizes after removing those 11 effect sizes. Analyses were run in RStudio version 1.4.1103 and Weka version 3.9.5.
We also reran the weight selection model after excluding the outliers reported by Scott et al. (2021). Results showed that for the 61 studies,
| DISCUSSION
This study redefines traditional effect size cutoffs for group differences in sleep-improving interventions on composite mental health.
By calculating the 25th (small effect), 50th (medium effect), and 75th (large effect) percentiles based on 72 effect sizes derived from 65 RCTs, we found that −0.177, −0.329, and −0.557 represent small, medium, and large effect sizes, respectively, after accounting for the attenuation effect due to publication bias.
Hence, using Cohen's conventional rule of thumb would overestimate expected effect sizes in the context of sleep-improving interventions. An effect size of 0.5, for instance, would traditionally be considered a medium effect, whereas a medium effect corresponds to −0.329 (after accounting for the attenuation effect due to publication bias) based on our empirically derived effect size thresholds.
Similar results were obtained from the sensitivity analysis after excluding outlier effect sizes, specifically around medium and large effect sizes.
Notably, a selection model revealed evidence of publication bias, which would lead to inflated effect sizes. Therefore, we applied such attenuation to the larger set of effect sizes (n = 72). In terms of limitations, in the analysis involving the 72 effect sizes the weight selection model assumes that the effect sizes are independent. However, the systematic review we used had eight effect sizes from the same RCTs (e.g., studies with more than two arms); therefore, caution must be taken with our approach. There are two available ways to deal with such a multilevel issue (i.e., effect sizes nested in the same RCT given a more-than-two-arm design): (1) excluding dependent effect sizes, leaving only one effect size per study, or (2) considering all the effect sizes as independent measures.
Considering the first option, regardless of the exclusion rule (i.e., excluding the largest effects or the smallest effects), we would be introducing publication bias. Regarding the second option, we would not be accounting for the multilevel design. Given that no weight selection model is available for multilevel designs, one might introduce bias related to underestimation of the standard errors of the regression coefficients in the selection models, because we are not considering a multilevel/hierarchical model with cluster-robust standard errors. The robust standard errors, known as Huber-White (or Huber-White-Eicker, or "sandwich") estimators (White, 1980), are more accurate under model and distributional misspecification and can be applied to any model (i.e., multilevel, multivariate, etc.). New improvements in selection models might incorporate such features, allowing them to deal with multiple effect sizes derived from the same RCT.

TABLE 1 Effect size percentiles of the 72 and 61 studies (studies without the outliers reported by the original meta-analysis (Scott et al., 2021)).

The approach used here for redefining effect size cutoffs can be applied to different research areas (Nordahl-Hansen et al., 2022; Panjeh et al., 2023; Quintana, 2017). We encourage researchers to use the ESD analysis code in their own field of study to plan future research and to better understand the magnitude of effects.
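As a rough illustration of the cluster-robust ("sandwich") adjustment discussed above, the sketch below fits an intercept-only, inverse-variance-weighted meta-regression and clusters the standard error by RCT. The file name and column layout are assumptions; this reproduces only the robust-variance part, not the selection-model extension the text calls for.

```python
import numpy as np
import statsmodels.api as sm

# Assumed flat file: one row per effect size with columns d, v (sampling
# variance), and study (RCT identifier shared by multi-arm effect sizes).
d, v, study = np.loadtxt("effect_sizes_nested.csv", delimiter=",", unpack=True)

X = np.ones_like(d)  # intercept-only model: the pooled effect
fit = sm.WLS(d, X, weights=1.0 / v).fit(cov_type="cluster",
                                        cov_kwds={"groups": study})
print(fit.params[0], fit.bse[0])  # pooled effect with cluster-robust SE
```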
CONFLICT OF INTEREST STATEMENT
Hugo Cogo-Moreira, Sareh Panjeh and Anders Nordahl-Hansen declared no conflict of interest.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available in the supplementary material of this article. | 2023-04-28T15:19:38.105Z | 2023-04-25T00:00:00.000 | {
"year": 2023,
"sha1": "a3f3096c698fd6e2cb11d4231221e8fb5eba9b96",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/mpr.1969",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "83c5b398c184ae4fdfa7b8bf0b03462cf71b6be5",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
19975220 | pes2o/s2orc | v3-fos-license | Disorders of branched chain amino acid metabolism
The three essential branched-chain amino acids (BCAAs), leucine, isoleucine and valine, share the first enzymatic steps of their metabolic pathways, including a reversible transamination followed by an irreversible oxidative decarboxylation to coenzyme A derivatives. The respective oxidative pathways subsequently diverge and at the final steps yield acetyl- and/or propionyl-CoA, which enter the Krebs cycle. Many disorders in these pathways are diagnosed through expanded newborn screening by tandem mass spectrometry. Maple syrup urine disease (MSUD) is the only disorder of the group that is associated with elevated body fluid levels of the BCAAs. Due to the irreversible oxidative decarboxylation step, distal enzymatic blocks in the pathways do not result in the accumulation of amino acids, but rather of CoA-activated small carboxylic acids identified by gas chromatography-mass spectrometry analysis of urine, and they are therefore classified as organic acidurias. Disorders in these pathways can present as a severe neonatal-onset form or as chronic intermittent or progressive forms. Metabolic instability and increased morbidity and mortality are shared between inborn errors in the BCAA pathways, while treatment options remain limited, comprising mainly dietary management and, in some cases, solid organ transplantation.
Overview of branched-chain amino acid metabolism and regulation
Branched-chain amino acids (BCAAs), leucine, isoleucine and valine, are three of the nine essential amino acids and account for 35-40% of the dietary indispensable amino acids in body protein and 14% of the total amino acids in skeletal muscle. They share common membrane transport systems and enzymes for their transamination and irreversible oxidation. A detailed biochemical pathway is provided in Fig. 1. They can be glucogenic (valine), ketogenic (leucine) or both (isoleucine), since their end products, succinyl-CoA and/or acetyl-CoA, can enter the Krebs cycle for energy generation and gluconeogenesis or act as precursors for lipogenesis and ketone body production through acetyl-CoA and acetoacetate [1].
BCAAs first undergo a reversible transamination by BCAA aminotransferases (BCAT), pyridoxal phosphate-dependent enzymes with cytosolic (BCATc) and mitochondrial (BCATm) isoforms, followed by the irreversible oxidative decarboxylation and coupled thioesterification of the respective ketoacids by the single mitochondrial branched-chain keto-acid dehydrogenase (BCKDH) complex to form coenzyme A derivatives. The BCKDH multienzyme complex consists of E1, a thiamine pyrophosphate-dependent decarboxylase; E2, a lipoate-dependent transacylase; and E3, a dehydrogenase, the subunit of which is shared with the pyruvate and α-ketoglutarate dehydrogenases. The oxidation of BCAAs and branched-chain keto-acids (BCKAs) is tightly regulated primarily at the BCKD step [1][2][3], which commits BCAAs to oxidative metabolism. The next step in the BCAA metabolic pathway is dehydrogenation of the activated ketoacid by either isovaleryl-CoA dehydrogenase (leucine metabolism) or the α-methyl-branched-chain dehydrogenase (isoleucine and valine metabolism). After these first three steps, the metabolism of each of the BCAAs diverges and eventually yields acetyl-CoA and/or propionyl-CoA. Terminal valine metabolism is unique because a free acid, 3-hydroxyisobutyric acid, forms after the hydrolysis of the corresponding thioester. 3-Hydroxyisobutyric acid is dehydrogenated and then reacylated to complete its metabolism.
Similar to other large neutral amino acids (phenylalanine, tryptophan, leucine, methionine, isoleucine, tyrosine, histidine, valine, and threonine), they are transported into the brain and other organs primarily by the L1 neutral amino acid transporter (LAT1). Therefore, the relative concentrations of each amino acid and their competition for the same transporter affect brain amino acid uptake and the downstream synthesis of various neurotransmitters [4], an effect with significant pathophysiological and therapeutic implications for the diseases in the BCAA metabolic pathway. Moreover, leucine plays a central role in metabolism and participates in numerous signaling pathways, as summarized in Fig. 2. It is a potent stimulator of the mammalian target of rapamycin complex 1 and downstream targets that enhance translation elongation and protein synthesis [5][6][7]. In addition, leucine may act as an inhibitor of muscle protein breakdown, via interactions with the ubiquitin-proteasome and the autophagy-lysosome systems [8]. Furthermore, leucine stimulates insulin secretion from the pancreatic β-cell, serving as a metabolic fuel as well as an allosteric activator of glutamate dehydrogenase [9][10][11][12][13]. Lastly, it also plays a role in central nervous system food intake regulatory circuits and feeding behavior [14,15].

Fig. 2. Leucine metabolic effects in multiple organ systems. Leucine displays a multitude of effects in various organs: it enhances protein synthesis, inhibits muscle protein breakdown, stimulates insulin secretion and plays a role in central nervous system food intake regulatory circuits and feeding behavior. Leucine is transported via the large neutral amino acid transporter LAT1 at the blood-brain barrier (BBB), among other transporters, and can compete with other large neutral amino acids for uptake/transport, affecting neurotransmitter biosynthesis. Lastly, leucine-derived α-ketoisocaproate is a potent inhibitor of the branched-chain ketoacid dehydrogenase kinase, resulting in activation of branched-chain ketoacid dehydrogenase and increased BCAA (valine and isoleucine) oxidation.
MSUD is the only one of the BCAA disorders that can be detected by plasma amino acid analysis, as it is caused by a defect in the second step of BCAA metabolism, resulting in massive plasma elevations of primarily leucine, but also isoleucine and valine, as well as of their BCKAs, α-ketoisocaproic (KIC), α-keto-β-methylvaleric (KMV) and α-ketoisovaleric acid (KIV), respectively. For the rest of the BCAA metabolism disorders, the defect lies distal to the non-reversible BCKD reaction, and therefore amino acids and 2-oxo-acids do not accumulate. Each of the blocks in the subsequent steps leads to the accumulation of intermediates in the respective pathway, resulting in unique, identifiable patterns for each enzyme defect. Only a few of the relatively more common inborn errors, such as MSUD, IVA, 3MCC, 3MGA, PA and MMA, will be described in some detail, while the clinical presentation, outcomes and treatment for all other disorders are listed briefly in Table 1.
Clinical presentation and disease pathophysiology
Detection of classic MSUD infants through newborn screening, comprising about 80% of MSUD cases, has led to markedly improved treatment and neurological outcomes [16][17][18]. These early-detected patients have significantly lower concentrations of leucine at presentation and require fewer extracorporeal detoxification procedures for initial treatment [18,19]. It is possible, though, that newborn screening misses milder variants of the disease, due to insufficient accumulation of toxic metabolites in the immediate neonatal period [20][21][22]. An increased Leu/Phe or Leu/Ala ratio and second-tier testing for allo-isoleucine have been proposed to increase sensitivity, but some of the intermediate and intermittent variants of MSUD are expected to escape detection, emphasizing the limitations of expanded newborn screening in the diagnosis of all affected individuals. Patients with IVA or beta-ketothiolase deficiency can be detected by newborn screening prior to the onset of symptoms. For both disorders, early initiation of appropriate preventative therapy is expected to significantly reduce morbidity and mortality.
The situation is more complicated for the disorders in the propionate pathway, where newborn screening may help change the disease course in milder patients with methylmalonic acidemia (mut-MMA, the cobalamin A and B forms of isolated MMA, or the types of combined MMA and homocystinuria arising from the cobalamin metabolism pathways) and propionic acidemia, but will have little effect on the most severe end of the spectrum. Early-onset MMA and PA patients may present before screening is initiated in the immediate neonatal period with massive metabolite elevations and hyperammonemia resulting in demise or significant brain damage [23,24]. Milder variants can be missed by newborn screening (false negatives), and efforts to develop methods that improve sensitivity and specificity and integrate secondary markers into screening are underway [23,25,26].
In MSUD, the acute elevations of leucine and α-ketoisocaproic acid (αKIC) during intercurrent illness or other physiologic stress can cause severe acute metabolic encephalopathy and life-threatening cerebral edema, while chronic imbalance in the plasma levels of branched-chain amino acids or protein over-restriction can lead to abnormal brain amino acid uptake with subsequently decreased myelin and neurotransmitter synthesis, causing further brain damage that manifests as chronic encephalopathy [27].
One hypothesis about the metabolism of leucine in the CNS is that leucine is transaminated in the astrocytes with α-ketoglutarate to α-ketoisocaproate and glutamate (Glu) via the mitochondrial BCAA transaminase reaction. Glutamate is subsequently converted to glutamine (Gln). The glutamine and α-ketoisocaproate (KIC) are released from the astrocytes and taken up by the neurons, where glutamine is converted to glutamate via phosphate-dependent glutaminase and α-ketoisocaproate is converted back to leucine and pyruvate by reversal of a, in this case cytosolic, BCAA transamination reaction; the leucine is then released and transported back to the astrocytes, completing the so-called "leucine-glutamate cycle" [28][29][30][31]. Glutamine produced in glia is thus an essential precursor for the production of glutamate and GABA in the glutamate/GABAergic neurons.
In MSUD, accumulation of branched-chain ketoacids (αKIC) within astrocytes and neurons may drive the reverse transamination toward α-ketoglutarate, resulting in increased αKG/glutamate ratios. This mechanism may underlie the deficiencies in cerebral glutamate, GABA, glutamine and aspartate that have been described in MSUD mouse models, as well as in the post-mortem brain of an infant with MSUD [32,33]. This can also inhibit the malate/aspartate shuttle and result in an increased NADH/NAD+ ratio and impaired conversion of lactate to pyruvate [34], while high αKIC can inhibit pyruvate dehydrogenase (PDH) and α-ketoglutarate dehydrogenase (αKGDH), resulting in Krebs cycle dysfunction [35,36]. Defective oxidative phosphorylation is consistent with the high cerebral lactate levels observed in mice and humans during a metabolic crisis [32,37].
High plasma levels of leucine result in competitive inhibition of the transport of other large neutral amino acids (tyrosine, phenylalanine, tryptophan, isoleucine, valine, histidine, methionine, glutamine and threonine) across the blood-brain barrier through their shared transporter (LAT1) [32,38,39]. Reduced levels of these essential amino acids inhibit protein and neurotransmitter synthesis (such as that of dopamine, serotonin, histamine and S-AdoMet) by limiting available precursors. Furthermore, deficiency of branched-chain ketoacid dehydrogenase impairs the production of leucine-derived ketone bodies that are essential for myelin synthesis. Combined with impaired protein synthesis, this leads to severe dysmyelination [40,41].
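The transporter competition described above can be illustrated with a classical competitive-inhibition Michaelis-Menten sketch. All parameter values below are arbitrary illustrative numbers, not measured LAT1 kinetics.

```python
def lat1_uptake(s, km, v_max, competitors):
    """Uptake rate of one amino acid when other substrates compete for the same
    transporter: competitive inhibition raises the apparent Km.
    competitors: iterable of (concentration, Ki) pairs (arbitrary units)."""
    km_app = km * (1.0 + sum(c / ki for c, ki in competitors))
    return v_max * s / (km_app + s)

# Tyrosine uptake at normal vs. grossly elevated plasma leucine (arbitrary units):
normal = lat1_uptake(s=60.0, km=64.0, v_max=1.0, competitors=[(120.0, 29.0)])
crisis = lat1_uptake(s=60.0, km=64.0, v_max=1.0, competitors=[(3000.0, 29.0)])
print(f"relative tyrosine uptake: normal={normal:.2f}, leucinosis={crisis:.2f}")
```

Raising the competing leucine concentration sharply lowers uptake of the other large neutral amino acids, which is the mechanism invoked for the neurotransmitter-precursor deficiency described above.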
The illustration of the above mechanisms for the pathogenesis of acute and chronic brain injury in MSUD, and more importantly the implementation of that knowledge in the development of improved special metabolic formulas, has yielded evidence-based guidelines for the management of this challenging condition [16,42,43]. In contrast to the "intoxication" from excess leucine that plays a primary role in the pathogenesis of MSUD, the pathophysiology of each of the subsequent blocks in the BCAA metabolic pathways is less well characterized. Following the same paradigm, initial studies have focused on establishing the effects of the accumulation of toxic metabolites proximal to the block in many of the remaining disorders.
In methylmalonic acidemia, deficient activity of methylmalonyl-CoA mutase results in significant accumulation of methylmalonic acid, as well as of propionyl-CoA-derived metabolites, such as 3-OH-propionate, 2-methylcitrate (the product of the reaction of propionyl-CoA with oxaloacetate), and propionylglycine. Original studies focused on methylmalonic acid as the primary toxin, while subsequent studies suggested a key role for 3-OH-propionate and 2-methylcitrate in the various secondary biochemical alterations seen in MMA, including lactic acidosis, hyperglycinemia, hyperammonemia and hypoglycemia. It has been proposed that methylmalonyl-CoA, a known inhibitor of pyruvate carboxylase, blocks the formation of oxaloacetate and phosphoenolpyruvate, an important substrate for gluconeogenesis in the liver, thereby increasing lipid catabolism and resulting in hypoglycemia and ketoacidosis [44]. MMA has structural similarities to malonate, a known inhibitor of complex II of the respiratory chain (succinate dehydrogenase), and was shown to induce neuronal damage in vitro, as well as striatal lesions and seizures after intrastriatal administration in rats [45][46][47][48][49]. MMA was also shown to impair the transmitochondrial malate shuttle, another key step in gluconeogenesis, while the formation of methylcitrate disrupts the Krebs cycle, further contributing to the bioenergetic problems that manifest in MMA patients [50][51][52]. Propionyl-CoA accretion leads to a competitive inhibition of NAGS through the excess production of N-propionylglutamate, which in turn leads to the failure of CPS-1 to synthesize carbamoylphosphate, the first step in the urea cycle. Furthermore, Krebs cycle dysfunction caused by (a) the decreased production of succinate due to the block and (b) the depletion of oxaloacetate via the formation of 2-methylcitrate may result in insufficient production of α-ketoglutarate (a glutamate precursor) and underlie the paradoxical hyperammonemia in the presence of low glutamate/glutamine that is observed in propionic and methylmalonic acidemia patients during metabolic crises [53][54][55]. Toxic metabolites also cause decreased H-protein activity, resulting in inhibition of the glycine cleavage system and hyperglycinemia [56,57].
Based on the "toxic metabolite" hypothesis, treatment with protein restriction to reduce the load of the offensive metabolites, as well as supplementation with glycine and /or carnitine to promote the synthesis and excretion of less toxic conjugates have been the mainstay of treatment for organic acidemias. Disorders for which such conjugation occurs more efficiently, like isovaleric acidemia, are therefore more biochemically responsive compared to defects like propionic or methylmalonic acidemia.
It seems, though, that deficiencies of intermediates downstream of the metabolic block, as well as other secondary effects on associated pathways, like the Krebs cycle and oxidative phosphorylation, may have a more significant role in the pathogenesis of this group of disorders. Moreover, although they are often grouped together and managed in a similar fashion, it is obvious that they are very different diseases with unique characteristics distinguishing even defects in nearby metabolic steps, such as propionic and methylmalonic acidemia. Despite the similarities in their clinical phenotypes, propionic and methylmalonic acidemia patients have notable differences, with propionic acidemia commonly associated with dilated cardiomyopathy and methylmalonic acidemia with early-onset chronic kidney failure characterized by tubulointerstitial nephritis, neither of which is as common in the other disorder. The difference in the renal phenotype has led to theories about more nephrotoxic effects of MMA compared with the other metabolic intermediates shared by the two diseases, or about effects involving antagonism and inhibition of glutathione uptake by MMA via the dicarboxylic acid transporter in the proximal tubules, leading to glutathione depletion in the mitochondria and increased oxidative stress [58][59][60].
Intracellular CoA ester accumulation is considered a key pathogenetic mechanism for many of the organic acidemias and other disorders leading to Coenzyme A Sequestration, Toxicity or Redistribution (CASTOR) [61]. The high concentration of the acyl-CoA substrate of the respective enzyme and/or the subsequent depletion of acetyl-CoA or free CoA-SH species may lead to detrimental effects that are primarily localized inside the mitochondria and are characterized by cell autonomy and organ specificity [61].
Studies in the Mut−/− knock-out and transgenic mice have established that cell-specific mitochondrial ultrastructural changes are present in the liver, the proximal tubules and the pancreas. Moreover, structural pathology was associated with decreased complex IV (cytochrome c oxidase) enzymatic activity in the liver or the proximal tubules, and with increased serum and/or urine markers of oxidative stress, both in mice and in patients with MMA [60,62,63]. Subsequent studies confirmed increased oxidative stress (decreased tissue and plasma glutathione levels), decreased OXPHOS enzymatic activities and reduced mtDNA levels in patients with propionic or mut0 methylmalonic acidemia [64][65][66][67][68] and suggested a benefit of antioxidants or other mitochondria-targeted therapies in these patients (Fig. 3).
A similar picture, in which secondary mitochondrial dysfunction is considered a key player in the pathophysiology of the disorder, is observed in the late defects of each of the three biochemical pathways of BCAA metabolism, including 3-MGA, MHBD, HIBCH and HIBA, suggesting a need to study these disorders outside the classic biochemical pathway of BCAA metabolism and to focus instead on the intramitochondrial effects caused by the metabolic block. This would also move the treatment target away from dietary modifications and cofactors, and toward mitochondria-targeted therapeutic approaches.
3-Methylglutaconic aciduria caused by a deficiency of 3-methylglutaconyl-CoA hydratase, the enzyme converting 3-methylglutaconyl-CoA to 3-hydroxy-3-methylglutaryl-CoA in the last step of leucine metabolism, is the cause of 3-MGA type I. Four more types of 3-MGA have been described; they are characterized by various degrees of mitochondrial dysfunction and are only remotely, if at all, linked to leucine metabolism [69,70]. How mitochondrial dysfunction leads to increased 3-MGA is also unclear. Type I 3-MGA presents with non-specific symptoms, such as seizures and mental retardation in childhood, while recently an adult-onset, slowly progressive leukoencephalopathy was added to the clinical manifestations of this poorly defined entity [71]. 3-MGA type II, or Barth syndrome, is an X-linked recessive form of the disease caused by mutations in the TAZ gene, encoding the protein tafazzin. The pathognomonic finding in the disease is the abnormal cardiolipin profile in the patients' cells. Clinical symptoms include cardiomyopathy, cyclic neutropenia, skeletal myopathy associated with mitochondrial OXPHOS dysfunction, increased 3-MGA excretion and low plasma cholesterol levels [72]. In these patients, 3-MGA excretion is not related to protein loading and is thought to derive from a mevalonate shunt [73].
3-MGA type III, or Costeff syndrome, is characterized by infantile bilateral optic nerve atrophy, dysarthria, ataxia and extrapyramidal signs, and is caused by mutations in OPA3, encoding a protein localized in the outer membrane of the mitochondrion with a critical role in mitochondrial fission and apoptosis [74]. 3-MGA type V is caused by mutations in a mitochondrial inner membrane translocase, DNAJC19, and presents as dilated cardiomyopathy with ataxia and, in the Canadian Dariusleut Hutterite population, with testicular dysgenesis and growth failure. The remaining undefined cases with 3-MGA excretion are classified as type IV and encompass a variety of disorders affecting mitochondrial function that have increased 3-MGA as a secondary marker. Mutations in POLG1, SUCLA2, DNAJC19, TMEM70, and mtDNA have all been described to cause elevated 3-MGA in urine organic acid analysis.
2-Methyl-3-hydroxybutyryl-CoA dehydrogenase (MHBD) deficiency is a newly identified enzyme defect in the pathway of isoleucine metabolism, involving an interesting enzyme with substrates other than short branched-chain acyl-CoAs. These include 17β/3α-hydroxysteroids, compounds that play a role in sex hormone and neurosteroid metabolism, and the enzyme is therefore also referred to as 17β-hydroxysteroid dehydrogenase type 10 (17β-HSD10). It also shows affinity for the amyloid-β peptide and has the additional designation of endoplasmic reticulum-associated amyloid-β-binding protein (ERAB or ABAD) [75]. The disease presentation is that of a mitochondrial disorder with progressive neurodegeneration in the more severely affected males. More studies are needed to investigate the role of isoleucine restriction or antioxidant therapies in the management of these patients [76][77][78].
3-Hydroxyisobutyryl-CoA hydrolase deficiency is a very rare disease described in three families in the literature. The first patient presented with congenital malformations, including vertebral abnormalities, tetralogy of Fallot, and agenesis of the cingulate gyrus and corpus callosum, along with poor feeding, gross motor delay, and neurological regression in infancy [79]. The subsequent cases displayed progressive infantile neurodegeneration as well as episodes of ketoacidosis and Leigh-like changes in the basal ganglia, associated in one set of sibs with a combined deficiency of multiple mitochondrial respiratory chain enzymes and pyruvate dehydrogenase [80,81]. This defect involves an exceptional step in valine metabolism where free acids are generated, in contrast to all other intermediates, which are CoA thioesters. Patients had increased excretion of S-2-carboxypropyl-cysteamine and S-2-carboxypropyl-cysteine, and persistently elevated C4-OH acylcarnitine, which led to the identification of mutations in the HIBCH gene [81]. Methacrylyl-CoA, formed because of the block, is a highly reactive compound that reacts with thiol groups, such as glutathione, cysteine or cysteamine, causing significant oxidative stress.
The last enzymatic steps in the valine degradation pathway convert 3-hydroxyisobutyrate to (S)-methylmalonic semialdehyde (MMSA) by 3-hydroxyisobutyrate dehydrogenase, and (S)-methylmalonic semialdehyde to propionyl-CoA by the methylmalonate semialdehyde dehydrogenase (MMSDH). Thymine metabolism, on the other hand, generates (R)-aminoisobutyric acid (AIBA), which is then deaminated to (R)-methylmalonic semialdehyde. Both (S)- and (R)-methylmalonic semialdehyde are then handled by MMSDH, which catalyzes their oxidative decarboxylation to propionyl-CoA, suggesting that a single enzyme is involved in the catabolism of valine, thymine and uracil. Pathogenic mutations in the gene encoding MMSDH (ALDH6A1) have been identified in patients who displayed 3-hydroxyisobutyric aciduria [82], while others also manifested transient methylmalonic acidemia/aciduria [83].
Over the years, 3-hydroxyisobutyric aciduria and methylmalonic semialdehyde dehydrogenase deficiency have been recognized as heterogeneous conditions, both clinically and biochemically. The small number of described patients with 3-hydroxyisobutyric aciduria presented with dysmorphic features, including a triangular face, low-set ears, a long philtrum and microcephaly, and with widely different phenotypes ranging from mild vomiting attacks with normal cognitive development to profound intellectual impairment and early death [84][85][86]. Enzymatic or molecular testing of the predicted human cDNA for 3-hydroxyisobutyrate dehydrogenase (HIBADH) was negative in some of these patients [87]. Whether previously described but untested patients harbor mutations in ALDH6A1 or other gene(s) remains unknown.
Methylmalonate semialdehyde dehydrogenase deficiency can present at newborn screening as hypermethioninemia and is associated with developmental delay, hypotonia and dysmorphic features [88,89]. The urine metabolite pattern is variable, including beta-alanine, 3-hydroxypropionic acid, both isomers of 3-amino- and 3-hydroxyisobutyric acid, and mild methylmalonic aciduria. Mutations in ALDH6A1 have been identified only in a subset of patients with these biochemical findings [90].
Molecular genetics
All disorders in the BCAA metabolic pathway follow an autosomal recessive inheritance pattern, with the exception of Barth syndrome, which is X-linked. Table 2 lists the gene loci and genes identified for the enzymatic defects in this pathway.
Therapeutic approaches
In general, early detection and vigilant metabolic control maximize the chances of avoiding significant insults to the developing brain and of achieving an optimal outcome for each patient. General measures for all the disorders in this pathway include: (a) fasting avoidance, with scheduled frequent feeds to ensure that the daily requirements for calories and nutrients are met; for the most severe patients this is often achieved with continuous overnight feeding regimens through gastrostomy or jejunostomy tubes; (b) a high-calorie, low-protein diet to limit the BCAA load, often accompanied by specific amino acid mixtures deficient in selected BCAAs; (c) conjugation agents, such as L-carnitine and glycine, to enable excretion of toxic metabolites; (d) hemodialysis for hyperammonemia during acute metabolic decompensations; and (e) multidisciplinary care to address the multi-organ manifestations and specific complications associated with each of the disorders.
Dietary management
Therapy for all the disorders in these biochemical pathways is based on dietary restriction of protein, and particularly of the offending amino acids (leucine for IVA, or valine and isoleucine for PA and MMA, among the other less frequent disorders in the respective pathways), with regular clinical assessments of growth and monitoring of specific biochemical biomarkers. Primary goals of dietary management include: (1) the promotion of growth and anabolism, avoiding fasting or energy imbalance that may result in metabolic decompensation; this is achieved by providing a high caloric intake, continuous feeds through a gastrostomy tube, and hospitalization for parenteral nutrition during intercurrent illness; (2) the reduction of toxic intermediates through the provision of medical foods deficient in the offending amino acids, or of antibiotics to reduce precursor generation by propiogenic gut flora; (3) the enhancement of excretion of toxic metabolites through the provision of substrates for conjugation, such as carnitine for the acyl-CoA species or glycine in the case of IVA to form isovalerylglycine; (4) the administration of cofactors where indicated, such as thiamine in thiamine-responsive MSUD or hydroxocobalamin in MMA; and (5) vigilant monitoring for nutritional deficiencies, such as of micronutrients, vitamins and essential amino and fatty acids.
There are no evidence-based guidelines for the individual therapeutic measures employed above. This is reflected in the wide range of practices recorded in multicenter studies and in the controversies surrounding even the few available therapeutic measures, such as carnitine supplementation. Furthermore, there are only a few carefully conducted studies that document the actual dietary requirements and outcomes for these patients [91][92][93][94][95].
The experience gained in MSUD in recent years from clinical and animal studies has led to the design of improved formulas, which are enriched in amino acids that compete with leucine for brain uptake (such as Tyr, Trp, Phe, Met, Thr, His) and in essential fatty acids and micronutrients observed to be deficient under current management (omega-3 polyunsaturated fatty acids, zinc and selenium). Use of this modified formula resulted in improved metabolic control and biochemical parameters, fewer hospitalizations, and normal growth and development of the patients [96]. Based on the same pathophysiological paradigm, a synthetic analog of leucine, norleucine, was shown to compete effectively for brain uptake of leucine, improving survival and biochemical aberrations and delaying the encephalopathy symptoms in different mouse models of classic and intermediate MSUD [32], making it an interesting candidate treatment option for patients.
Similar principles guide the dietary management of MMA/PA. Natural protein needs to be carefully titrated to allow for normal growth, while avoiding an excessive load of propiogenic amino acids (isoleucine, valine, methionine and threonine) into the pathway. Adjustment of dietary whole (complete)-protein intake depends on growth parameters, metabolic stability, stage of renal failure, and other factors [97]. A propiogenic amino acid-deficient and/or protein-free formula is given to some individuals to provide extra fluid and calories. In patients with low protein tolerance, severe restriction of the propiogenic amino acid precursors (isoleucine, valine, methionine, and threonine) can produce a nutritional deficiency state. Moreover, a severe iatrogenic essential amino acid deficiency can be induced by the relatively high leucine intake from the MMA/PA formulas [98,99]. Ketoisocaproic acid derived from the supplemented leucine can inhibit the BCKDH kinase and thereby increase the oxidation rates of valine and isoleucine, resulting in very low plasma levels of these essential amino acids, which negatively affects long-term growth and possibly other outcomes. Moreover, leucine can compete with methionine for uptake through the blood-brain barrier, an effect that can be detrimental for patients with methionine synthesis defects, like cobalamin C disease [99]. These observations further highlight the tight interconnections between the three BCAAs.
Carnitine supplementation has been widely accepted as a means to restore the depleted intramitochondrial free CoA pool and to help excrete toxic acyl-CoA compounds in the urine as carnitine esters, for example propionyl-CoA in the form of the less toxic propionylcarnitine, thereby also preventing secondary carnitine deficiency [100][101][102]. The efficacy of this approach, given the renal handling of supplemental carnitine, has been debated [103]. However, a combination of L-carnitine and glycine conjugation in severe forms of isovaleric acidemia was found to provide an efficient means of eliminating isovaleryl-CoA [104,105]. More recently, treatment with phenylbutyrate was shown to increase the dephosphorylated, active form of the E1α subunit by preventing its phosphorylation by the BCKD kinase, resulting in reduced BCAA and BCKA levels in cases of iMSUD [106].
Mitochondria-targeted therapies
Given the secondary mitochondrial dysfunction affecting various aspects of mitochondrial metabolism that has been documented in an increasing number of disorders in the BCAA metabolic pathway, agents targeting the mitochondria could hold significant therapeutic benefit for these patients.
The use of antioxidants has been evaluated mainly in experimental animal models of organic acidemias, including maple syrup urine disease [107], isovaleric acidemia [108], 3-hydroxyisobutyric acidemia [109], 3-MGA [110] and methylmalonic acidemia [60,111], among others. Glutathione deficiency was documented during a metabolic crisis in a patient with isolated MMA, who responded to ascorbate therapy [112], while a regimen including coenzyme Q10 and vitamin E has been shown to prevent progression of acute optic nerve involvement in a different MMA patient [113]. Despite these encouraging case reports, systematic randomized multicenter studies addressing the role of antioxidants in the acute or chronic management of MMA or other disorders in these pathways are currently lacking. This is a well-recognized shortcoming of treatment studies for mitochondrial disorders in general, where only very few well-conducted randomized controlled studies have convincingly shown benefit of such therapeutic interventions [114][115][116][117].
The expectation from ongoing research studies in animal models is that, by gaining a better understanding of the pathophysiology underlying the mitochondrial involvement in each of these disorders, we will be able to design better disease-specific therapies.
Organ transplantation
Liver transplantation has been used to cure different inborn errors of metabolism, including urea cycle defects, tyrosinemia, familial hypercholesterolemia, primary hyperoxaluria, Wilson's disease, Crigler-Najjar syndrome and others. Among the BCAA metabolism disorders, classic MSUD is probably the only condition in which liver transplantation has been shown to have a significant therapeutic, though not completely curative, effect.
MSUD: Liver transplantation has been reported to greatly improve metabolic instability and to allow liberalization of protein intake in MSUD patients [118,119]. It is suggested that the donor liver introduces sufficient BCKD activity into the body, which is also subject to physiologic regulation, resulting in the maintenance of near-normal plasma BCAA and αKIC levels through various dietary and physiological challenges. Patients can liberalize their protein intake and are less vulnerable during intercurrent illnesses, although significant leucinosis can occur during periods of stress. Although there is no further deterioration, existing neurocognitive dysfunction or behavioral problems cannot be reversed.

Fig. 3. Organ pathology in methylmalonic acidemia. Liver and kidney pathology from patients with mut0 methylmalonic acidemia are presented. Mild steatosis (A) and lipid-laden stellate cells (white arrows) (B), with abnormal mitochondrial ultrastructure on transmission electron microscopy (EM) (pale mitochondria with absent or disorganized cristae, yellow arrowheads), are observed in patient livers. Tubulointerstitial nephritis with patchy interstitial chronic inflammation and tubular dilation (A), proximal tubule vacuolization (B) (white arrows), and enlarged mitochondria with disorganized cristae (yellow arrowhead), along with large remnant vacuoles containing amorphous membranous inclusions (black arrow) on transmission EM, are present in patient kidneys. Pathology was obtained from explanted organs after a combined liver and kidney transplantation procedure. Patients were enrolled in the clinical study NCT00078078.
There have been a number of cases in which livers explanted from classical MSUD patients were successfully transplanted into recipients ('domino' transplantation) with no adverse consequences for peripheral amino acid homeostasis on an unrestricted protein intake [118,120]. Such utilization of explanted organs from patients with MSUD or other organic acidurias may alleviate some of the ethical controversies surrounding allograft distribution.
Orthotopic liver transplantation (OLT) has been offered to patients with PA who suffer frequent metabolic decompensations, recurrent hyperammonemia, and/or restricted growth. There have been several reported cases of successful liver transplantation in PA [121][122][123]. Furthermore, it has been shown that transplantation can restore cardiac function in cases of PA complicated by dilated cardiomyopathy [124]. However, the procedure is not curative and is still associated with significant morbidity and mortality. It does not completely protect against metabolic stroke, hyperammonemia or metabolic decompensations, and in a number of patients the renal disease, as well as the cardiac manifestations, has progressed despite the procedure [125]. There have been few cases of early elective transplantation in this disorder with which to prospectively address its effects on neurological and other disease-related outcomes.
Although renal transplantation is absolutely indicated for patients with MMA in end-stage renal failure, the benefits of elective liver, kidney or combined liver and kidney transplantation are less conclusive [126][127][128][129][130][131][132]. Although organ transplantation stabilizes the patients and decreases the frequency of admissions associated with metabolic decompensations, it does not completely prevent devastating neurological complications, such as metabolic stroke [133]. Plasma MMA levels post-transplantation remain significantly elevated, most likely due to extra-hepatorenal MMA production by tissues such as the muscle and brain [63]. Continued dietary restriction and vigilant metabolic follow-up are therefore recommended post-transplant. | 2018-04-03T05:41:12.386Z | 2016-10-12T00:00:00.000 | {
"year": 2016,
"sha1": "1d36ba86e2b9ed497cd3df69d9da56f973a4ef4d",
"oa_license": "CCBYNC",
"oa_url": "https://content.iospress.com/download/translational-science-of-rare-diseases/trd009?id=translational-science-of-rare-diseases/trd009",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "1d36ba86e2b9ed497cd3df69d9da56f973a4ef4d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
208532393 | pes2o/s2orc | v3-fos-license | Mode of photoexcited C60 fullerene involvement in potentiating cisplatin toxicity against drug-resistant L1210 cells
Introduction: C60 fullerene has received great attention as a candidate for biomedical applications. Due to their unique structure and properties, C60 fullerene nanoparticles are expected to be useful in drug delivery, photodynamic therapy (PDT) of cancer, and reversion of tumor cells' multidrug resistance. The aim of this study was to elucidate the possible molecular mechanisms involved in the photoexcited C60 fullerene-dependent enhancement of cisplatin toxicity against leukemic cells resistant to cisplatin. Methods: A stable, homogeneous, pristine C60 fullerene aqueous colloid solution (10-4 M, purity 99.5%) was used in the study. Photoactivation of the C60 fullerene accumulated by L1210R cells was done by irradiation in microplates with a light-emitting diode lamp (420-700 nm light, 100 mW·cm-2). Cells were further incubated with the addition of Cis-Pt to a final concentration of 1 μg/mL. Activation of p38 MAPK was visualized by Western blot analysis. Flow cytometry was used to estimate the distribution of cells across the cell cycle. Mitochondrial membrane potential (Δψm) was estimated using the fluorescent potential-sensitive probe TMRE (tetramethylrhodamine ethyl ester). Results: Cis-Pt applied alone at a 1 μg/mL concentration failed to affect the mitochondrial membrane potential or cell cycle distribution of L1210R cells as compared with untreated cells. Activation of the ROS-sensitive proapoptotic p38 kinase and an enhanced content of cells in the subG1 phase were detected after irradiation of L1210R cells treated with 10-5 M C60 fullerene. Combined treatment with photoexcited C60 fullerene and Cis-Pt was followed by dissipation of Δψm in the early-term period, blockage of cell transition into S phase, and considerable accumulation of cells in the proapoptotic subG1 phase upon prolonged incubation. Conclusion: The synergistic cytotoxic activity of both agents suggests that photoexcited C60 fullerene promoted Cis-Pt accumulation in leukemic cells resistant to Cis-Pt. The data obtained could be useful for developing new approaches to overcome the drug resistance of leukemic cells.
Introduction

Multidrug resistance (MDR) is a major problem in anticancer therapy, and approximately 70-90% of patients do not respond to initial chemotherapy. 1 The mechanisms of MDR, including reduced drug uptake, active drug efflux by transporters of the ATP-binding cassette (ABC) superfamily, decreased intracellular drug concentration, altered cell cycle checkpoints, and induced expression of genes that impair apoptotic pathways of cell death, are well studied. 2,3 Nevertheless, the problem of MDR reversion in cancer remains: chemically developed ABC transporter inhibitors were found to increase the toxicity associated with chemotherapy. 4 Unique properties of C60 fullerene nanoparticles with sizes of 10-100 nm distinguish them from other cancer therapeutics; they are non-toxic for normal cells, 5,6 can bypass traditional drug resistance mechanisms, 7,8 have therapeutic properties of their own as photosensitizers in photodynamic therapy (PDT), [9][10][11] and enable combinatory treatment with anticancer drugs. 12,13

Due to its extended π-conjugated system of molecular orbitals, C60 fullerene is able to generate toxic reactive oxygen species (ROS) in polar solvents after UV/Vis light absorption. The photoexcited C60 molecule in its long-lived triplet state (3C60*) is reduced to the radical anion C60−•, which subsequently reduces O2 to O2−•, thereby initiating radical chain reactions with the generation of hydroxyl radical and hydrogen peroxide. 9,14 Photoexcited C60 fullerene and its derivatives have been shown to evoke oxidative stress and to induce apoptosis in cancer cells of different origins. [14][15][16][17][18] In this study, we used the photodynamic potential of C60 to enhance the cytotoxic effect of the chemotherapeutic drug cisplatin towards a murine leukemia cell line resistant to cisplatin. Cisplatin (cis-[Pt(NH3)2Cl2], Cis-Pt) belongs to the first-line, highly efficient cytotoxic agents in current cancer therapy. It is generally accepted that the main Cis-Pt target is nuclear DNA, but recent studies have demonstrated that activation of apoptosis signaling pathways in the cytoplasm is a mechanism of Cis-Pt toxicity alternative to DNA damage. 3,19,20 The therapeutic efficiency of Cis-Pt is substantially limited by the development of MDR in cancer cells.

In previous studies, we confirmed the penetration of C60 fullerene nanoparticles into leukemic L1210 cells and demonstrated the photoinduced cytotoxicity of accumulated C60, determined by ROS production. 21 The possibility of substantially decreasing the viability of cisplatin-resistant L1210R cells by combined treatment with photoexcited C60 fullerene and cisplatin was also shown, 22 but the mechanisms of this phenomenon still need further investigation.

The aim of this study was to elucidate the possible molecular mechanism of the photoexcited C60 fullerene-dependent enhancement of cisplatin toxicity against leukemic cells resistant to cisplatin.

Preparation and characterization of pristine C60 fullerene aqueous colloid solution

The C60 fullerene aqueous colloid solution was synthesized and characterized at the Ilmenau Technical University (Germany) as described by Scharff et al. 23 In brief, a toluene extract was obtained after graphite combustion; after toluene evaporation, C60 fullerene was transferred to the water phase, followed by prolonged ultrasonic treatment. The aqueous colloid solution of C60 fullerene (concentration 10^-4 M, purity 99.5%) was highly stable for 12 months when stored at room temperature. 24 The average hydrodynamic diameter of the C60 fullerene nanoparticles was 50 nm, and no changes in their size were detected in RPMI-1640 medium containing 5% FBS. 25

Cell culture

The murine cancer cell line of leukemic origin resistant to cisplatin, L1210R, was obtained from the Bank of Cell Lines from Human and Animal Tissues, R. E. Kavetsky Institute of Experimental Pathology, Oncology and Radiobiology, NAS of Ukraine (Kyiv, Ukraine). Cells were incubated in RPMI 1640 medium supplemented with 10% FBS, 50 μg·mL-1 penicillin and 100 μg·mL-1 streptomycin at 37°C in a humidified atmosphere with 5% CO2.

Photodynamic treatment

Cells were incubated for 2 hours with or without 10^-5 M C60 fullerene in the medium described above. Photoactivation of the accumulated C60 fullerene was performed by irradiating the samples in microplates with a light-emitting diode lamp (420-700 nm light, irradiance 100 mW·cm-2). Cells were then incubated for the indicated time periods with or without the addition of cisplatin to a final concentration of 1 μg/mL in the incubation medium.

Immunoblot analysis

Cells were washed with PBS and lysed with ice-cold lysis buffer containing protease inhibitors. Cell lysates were centrifuged (14,000 g, 15 minutes), protein concentration was determined using the DC Protein Assay kit (Bio-Rad, USA), and 30 μg of cell lysate protein was loaded onto a gradient 8%-15% SDS-polyacrylamide gel. After electrophoresis, the proteins were transferred onto a polyvinylidene difluoride membrane 26 and incubated overnight at 4°C with a monoclonal antibody against phospho-p38 kinase (dilution 1:1000). The membranes were washed and incubated for 1 hour with an anti-rabbit peroxidase-linked secondary antibody. Immunoreactive bands were visualized with the enhanced chemiluminescence plus western blotting detection system (Amersham, USA). The membranes were then reprobed with antibodies against β-actin to provide the loading control.

Cell cycle analysis

For cell cycle analysis, cells (1x10^6) were resuspended in 0.1 mL PBS (pH 7.4), fixed by adding 0.9 mL of 90% ethanol at -20°C overnight, and centrifuged at 13,000 g for 1 minute. The fixed cells were rinsed twice with PBS and resuspended in propidium iodide solution (10 µg/mL) containing RNase A (100 µg/mL) in PBS. The stained cells were analyzed with a COULTER EPICS XL flow cytometer (Beckman Coulter, USA) and FCS Express 3 Flow Cytometry Software (DeNovo Software, USA).
Statistics
The data are presented as mean (M) ± standard deviation (SD) of more than four independent experiments; the mean and SD were calculated for each group. Statistical analysis was performed using two-way ANOVA followed by a post hoc Bonferroni test. A value of P < 0.05 was considered statistically significant. Data processing and plotting were performed on an IBM PC using GraphPad Prism 7 (GraphPad Software Inc., USA) and Gel-Pro Analyzer 6.3 (Media Cybernetics Inc., USA).
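As a rough illustration of this workflow, the sketch below runs a two-way ANOVA followed by Bonferroni-corrected pairwise comparisons in Python; the study itself used GraphPad Prism, so the column names and measurements here are purely hypothetical.

```python
# Hedged sketch: two-way ANOVA + Bonferroni post hoc tests, mirroring the
# described statistics. All data below are made up for illustration only.
import itertools
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multitest import multipletests
from scipy import stats

df = pd.DataFrame({
    "treatment": ["control"] * 4 + ["C60_light"] * 4 + ["C60_light_CisPt"] * 4,
    "time_h":    [1, 1, 2, 2] * 3,
    "signal":    [1.0, 1.1, 0.9, 1.0, 2.8, 3.1, 2.5, 2.9, 3.9, 4.2, 3.7, 4.0],
})

# Two-way ANOVA: main effects of treatment and time plus their interaction.
model = ols("signal ~ C(treatment) * C(time_h)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Pairwise t-tests between treatments, Bonferroni-adjusted (alpha = 0.05).
pairs = list(itertools.combinations(df["treatment"].unique(), 2))
pvals = [stats.ttest_ind(df.loc[df.treatment == a, "signal"],
                         df.loc[df.treatment == b, "signal"]).pvalue
         for a, b in pairs]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
for (a, b), p, r in zip(pairs, p_adj, reject):
    print(f"{a} vs {b}: adjusted p = {p:.4g}, significant = {r}")
```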
Activation of p38 MAPK in L1210R cells after C60 fullerene photoexcitation
Mitogen-activated protein kinase p38 (p38 MAPK) is one of the important redox-sensitive and stress-activated targets involved in apoptosis induction through phosphorylation of the proapoptotic proteins p53 and Bax. 28,29 We examined p38 MAPK activity in L1210R cells by estimating the level of its active phosphorylated form (pp38) using Western blot analysis. As shown in Fig. 1, no statistically significant changes in the level of active p38 kinase were detected after 2 hours of incubation of cells loaded with C60 fullerene or irradiated with 420-700 nm light alone, whereas photoexcitation of the C60 fullerene accumulated in L1210R cells was followed by an increase of the p38 MAPK level, which was found to be 3 times higher than in the control at 1 hour of incubation and remained elevated at 2 hours. This finding is in agreement with the data presented by Li et al, 30 where substantial activation of p38 MAP kinase was detected after light irradiation of MCF-7 cells loaded with the C60 derivatives C60-phe or C60-gly; that increase was prevented by the antioxidant N-acetyl-L-cysteine and thus proved to be ROS dependent. Activation of p38 MAP kinase as a result of H2O2-induced oxidation of its thiol groups was also shown by Olson and Hallahan. 29 A growing body of evidence suggests that p38 MAPK is able to control the p53-mediated response to several genotoxic stimuli and could be a specific target in cancer therapy. 31,32 The increase in the viability of HaCaT cells pretreated with p38 MAPK-specific inhibitors before incubation with cisplatin, as well as data obtained in an experimental head and neck cancer model indicating that lower or absent activation of p38 MAPK correlates with a more resistant phenotype, 33 suggest that inhibition of p38 MAPK is a potential mechanism of resistance and that activation of this pathway could help to overcome the drug resistance of cancer cells. Interference with cell cycle transition is suggested to be one of the mechanisms of p38 MAP kinase involvement in apoptosis induction. To elucidate whether the ROS-dependent effects of photoexcited C60 fullerene could disturb cell cycle checkpoints in cisplatin-resistant L1210R cells, we further studied the cell cycle distribution after combined treatment with photoexcited C60 and cisplatin.
Cell cycle distribution of L1210R cells after the combined action of cisplatin and photoexcited C60 fullerene
Flow cytometric analysis showed that after 48 hours of incubation of control L1210R cells, the largest fraction of cells (47.6±4.6%) was detected in the G0/G1 phase (Figs. 2A, B). Accumulation of cisplatin-resistant cancer cells of different origins in the G0/G1 phase of the cell cycle is believed to ensure the transition from G1 to S phase, DNA doubling, and mitosis. 34,35 No effect on the cell cycle profile was detected after light irradiation of L1210R cells (data not shown) or treatment with cisplatin alone, while treatment with C60 fullerene was followed by an increase of the cell content in the subG1 phase (8.7±3.5% vs. 2.7±1.5% in the control) (Figs. 2A, B). After C60 fullerene photoactivation, this effect was enhanced (14.4±2.9% vs. 2.7±1.5% in the control), with a simultaneous decrease of the cell content in the G0/G1 phase. After combined treatment with photoexcited C60 fullerene and cisplatin, a decrease of the cell content in the G2/M phase was shown, while the number of cells accumulated in the subG1 phase was substantially higher than after C60 fullerene photoexcitation alone (28±4% vs. 14.4±3%, respectively) (Fig. 2B). Accumulation of cells in the subG1 phase is considered a marker of the blockage of cell transition into the S phase and of switching to the apoptotic pathway. 8 Activation of proapoptotic MAP kinases in L1210R cells after C60 fullerene photoexcitation could be the initial step of the cell death program, but its realization needs reinforcement of apoptotic signals, particularly at the level of mitochondria. Since acute apoptosis induced by cisplatin is known to be associated with a mitochondrial ROS response, 20,36 we tested whether combined treatment of L1210R cells with photoexcited C60 fullerene and cisplatin had an impact on the mitochondrial redox status.
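To make the subG1 readout above concrete, the sketch below estimates cell cycle phase fractions from simulated propidium iodide intensities with simple gating thresholds; real analyses (FCS Express in this study) fit histogram models instead, and all numbers here are invented.

```python
# Hedged sketch: naive threshold gating of a DNA-content (PI) histogram into
# subG1 / G0-G1 / S / G2-M fractions. Simulated data, illustrative gates only.
import numpy as np

rng = np.random.default_rng(0)
pi = np.concatenate([
    rng.normal(100, 8, 500),    # G0/G1 cells (2N DNA content)
    rng.normal(150, 20, 150),   # S phase (between 2N and 4N)
    rng.normal(200, 10, 200),   # G2/M (4N)
    rng.uniform(20, 70, 60),    # subG1: fragmented DNA of apoptotic cells
])

gates = {"subG1": (0, 75), "G0/G1": (75, 125), "S": (125, 175), "G2/M": (175, 260)}
for phase, (lo, hi) in gates.items():
    frac = np.mean((pi >= lo) & (pi < hi)) * 100
    print(f"{phase}: {frac:.1f}% of events")
```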
Mitochondrial membrane potential in L1210R cells after the combined action of cisplatin and photoexcited C60 fullerene
The relative value of the mitochondrial membrane potential (Δψm) in L1210R cells incubated for 2 hours after treatment with either cisplatin or photoexcited C60 fullerene alone, or with their combination, was estimated using the fluorescent probe TMRE.
No effect of light irradiation (data not shown) or of 1 µg/mL cisplatin alone on the Δψm value in L1210R cells was detected, whereas a tendency toward a decrease was revealed after cell treatment with C60 fullerene (Fig. 3).

C60 fullerene photoexcitation was followed by a 1.7-fold decrease of the relative Δψm value in L1210R cells as compared with the control. Combined treatment with photoexcited C60 fullerene and cisplatin was followed by a further substantial decrease of the TMRE fluorescence signal: the relative Δψm value was decreased 3.9-fold as compared with the control and 2-fold as compared with the effect of photoexcited C60 fullerene alone, indicating cisplatin involvement in the disturbance of the mitochondrial redox status.
Discussion
The ability of photoexcited C60 fullerene to induce apoptosis in human leukemic cells was confirmed in our previous studies, in which the depletion of the mitochondrial Ca2+ pool, cytochrome c release from mitochondria to the cytosol, caspase-3 activation, and DNA fragmentation with the formation of the "ladder pattern" were demonstrated after UV/Vis irradiation (320-600 nm) of cells treated with 10^-5 M C60 fullerene. 37,38 As we have shown earlier, treatment of cisplatin-resistant L1210R leukemic cells with cisplatin in the range of 0.1-10 µg/mL had no effect on cell viability, while a substantial C60-mediated photodamaging effect was detected, with a 50% decrease of cell viability at the 48-hour time point after photoexcitation (420-700 nm) of the accumulated carbon nanostructure. 21 The intense ROS production detected at the 3-hour time point after irradiation of L1210R cells loaded with C60 fullerene confirmed the ability of photoactivated C60 fullerene to generate O2−• in the intracellular space. 22 In this study, we demonstrated the activation of p38 MAP kinase in L1210R cells after treatment with C60 fullerene and light irradiation. These data indicate that p38 MAPK could be a target of the ROS produced by photoexcited C60 and thus be involved in the molecular mechanisms of the toxic effect of photoexcited C60 against leukemic cells resistant to cisplatin. This suggestion was confirmed by the ability of photoexcited C60 to evoke L1210R cell accumulation in the proapoptotic subG1 phase of the cell cycle.
We showed that under combined treatment with photoexcited C60 and cisplatin at a low concentration of 1 μg/mL, the synergic effect of both agents became apparent both in the drop of the mitochondrial membrane potential and in the induction of L1210R cell accumulation in the proapoptotic subG1 phase. The synergic cytotoxic activity of photoexcited C60 and cisplatin could be realized on the condition that cisplatin enters L1210R cells and is accumulated in the intracellular space. The ability of C60 fullerene derivatives to reactivate cisplatin endocytosis in cancer cells and thus to circumvent tumor resistance to cisplatin, 8 as well as the incapability of the P-gp type of ABC transporters to recognize pristine C60 fullerene nanoparticles and to prevent their accumulation in drug-resistant K562R leukemic cells, 7 confirm this assumption. Additionally, the expression of ABC family transporters responsible for drug efflux from cancer cells is known to be ROS-regulated: ROS-induced down-regulation of both P-glycoprotein in prostate tumor cells 39 and MDR-associated protein (MRP1) expression in urinary bladder cells 40 has been demonstrated. The results of our study suggest that photoexcitation of C60 fullerene may affect the components of the system controlling cisplatin influx and accumulation in L1210R cancer cells, thereby promoting the overcoming of drug resistance.
Conclusion
A series of genetic and metabolic rearrangements allows cancer cells to evade the cytotoxic effects of anticancer drugs. The phenomenon of cancer cell MDR substantially reduces the effect of anticancer therapy and the efficiency of cisplatin as a commonly used drug. In this respect, the application of C60 fullerene nanoparticles seems promising, as they penetrate into cancer cells, avoid efflux by transporters of the ABC family, facilitate drug delivery, produce toxic ROS after photoexcitation, and enable combinatory treatment with anticancer drugs.
In this study, the activation of p38 MAP kinase and the decrease of the Δψm value, as markers of the induction of ROS-dependent apoptotic pathways, were shown in cisplatin-resistant leukemic L1210R cells treated with 10^-5 M C60 fullerene and irradiated with visible light. The data obtained indicated that combined treatment of L1210R cells with photoexcited C60 fullerene and cisplatin at a low dose of 1 µg/mL was followed by a more intense proapoptotic effect as compared with treatment with photoexcited C60 fullerene alone. Dissipation of Δψm at the early-term period and blockage of cell transition into the S phase and mitosis, with accumulation in the proapoptotic subG1 phase of the cell cycle at the long-term period, were detected after combined treatment of L1210R cells. The synergic cytotoxic activity allowed us to suppose that photoexcited C60 fullerene promoted cisplatin accumulation in L1210R cells. The data obtained could be useful for the development of approaches to overcome the drug resistance of leukemic cells and to extend the methods of PDT.

What is the current knowledge?
√ C60 fullerene has potential for PDT and for addressing the phenomenon of cancer cell MDR.

What is new here?
√ Activation of the ROS-sensitive proapoptotic p38 kinase and an enhanced content of cells in the proapoptotic subG1 phase were detected when the Cis-Pt-resistant leukemic cell line L1210R was treated with 10^-5 M C60 fullerene and irradiated with visible light.
√ Combined treatment of L1210R cells with photoexcited C60 fullerene and a low concentration of Cis-Pt was followed by an intensification of the proapoptotic effects.
√ The synergic cytotoxic activity of both agents allowed us to suppose that photoexcited C60 fullerene promoted Cis-Pt accumulation in leukemic cells resistant to Cis-Pt.
Funding sources
None to be declared.
Ethical Statement
Not applicable.
Whole Genome Sequencing and Comparative Genomic Analyses of Lysinibacillus pakistanensis LZH-9, a Halotolerant Strain with Excellent COD Removal Capability
Halotolerant microorganisms are promising for the bio-treatment of hypersaline industrial wastewater. Four halotolerant bacterial strains were isolated from a wastewater treatment plant, of which strain LZH-9 could grow in the presence of up to 14% (w/v) NaCl and removed 81.9% of the chemical oxygen demand (COD) at 96 h after optimization. Whole genome sequencing of Lysinibacillus pakistanensis LZH-9 and comparative genomic analysis revealed the metabolic versatility of different species of Lysinibacillus, and abundant genes involved in xenobiotics biodegradation and resistance to toxic compounds and salinity were found in all tested species of Lysinibacillus; phylogenetic analyses revealed that Horizontal Gene Transfer (HGT) contributed to the acquisition of many important properties of Lysinibacillus spp., such as toxic compound resistance and osmotic stress resistance. In addition, genome-wide positive selection analyses revealed seven genes containing adaptive mutations in Lysinibacillus spp., most of which were multifunctional. Expression assessment with the Codon Adaptation Index (CAI) also reflected the high metabolic rate of L. pakistanensis in digesting potential carbon or nitrogen sources in organic contaminants, which is closely linked with the efficient COD removal ability of strain LZH-9. The high COD removal efficiency and halotolerance, as well as the genomic evidence, suggest that L. pakistanensis LZH-9 is promising for treating hypersaline industrial wastewater.
Introduction
Hypersaline industrial wastewaters generated from processes such as food production, petroleum refining, pharmaceutical manufacturing, printing, and dyeing often contain large amounts of toxic compounds [1][2][3][4][5], most of which are recalcitrant to conventional biological treatment due to salt inhibition and generally require expensive physico-chemical treatments to remove the salts as well as the organic matter [6]. Under this background, halophilic and halotolerant microorganisms with high chemical oxygen demand (COD) removal efficiency and the capability to convert hazardous compounds into relatively simple compounds, such as H2O, CO2, CH4, and NH3, under hypersaline conditions have attracted increasing attention.

Whole Genome Sequencing and Assembly

Genomic DNA of Lysinibacillus pakistanensis LZH-9 was extracted using the Qiagen Genomic DNA Extraction Kit. After the DNA sample passed the quality test, large fragments were recovered by gel size selection on a BluePippin automatic nucleic acid recovery instrument; the DNA was damage-repaired, and after purification the DNA fragments were end-repaired and A-tailed. After further purification, the adapters of the kit LSK108 (Oxford Nanopore Technologies, Oxford, United Kingdom) were ligated, and finally Qubit [29] was used to accurately quantify the constructed DNA library. After the DNA library was built, a defined concentration and volume of the library was loaded onto a flow cell, and the flow cell was transferred to the Nanopore GridION sequencer for real-time single-molecule sequencing (Nextomics Biosciences Institute, Wuhan, China). Cutoffs including mean_qscore_template (>= 7) and sequence length (>= 1000 bp) were applied for quality control of the raw data. The reads were first corrected and assembled with Canu version 1.7 [30]. Pilon version 1.2 [31] was further applied to correct sequencing errors with default parameters. The corrected genome was tested for circularization by an in-house script. Circlator (parameter: fixstart) [32] was used to move the starting point of the sequence to the replication origin of the genome after removing redundant parts. Sequencing yielded a total of 1,415,478,110 bp of raw data, of which 1,343,227,710 bp passed quality control. After assembly, correction, and optimization, the final genome consisted of a circular chromosome (5,038,663 bp) and a plasmid (66,276 bp), with a total size of 5,104,939 bp. The 16S rRNA sequences of strains LZH-9, LZH-13, LZH-22, and LZH-24 have been deposited in the NCBI database under the accession numbers MN121313, MN121312, MN121253, and MN121251, respectively. The whole genome sequence of strain LZH-9 has been deposited in the JGI IMG-ER database under the IMG Taxon OID 2823662158 and in the NCBI database under accession numbers CP045835-CP045836. Lysinibacillus pakistanensis LZH-9 was deposited in the China Center for Type Culture Collection (CCTCC) under the accession number CCTCC AB 2019361.
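The long-read workflow above (Canu correction/assembly, Pilon polishing, Circlator start fixing) can be scripted roughly as below; the flags follow Canu 1.7/Pilon/Circlator conventions but should be checked against the locally installed versions, and all file names are placeholders.

```python
# Hedged sketch of the assembly/polishing pipeline described above.
import subprocess

# 1. Correct and assemble Nanopore reads with Canu (v1.7-era flag names).
subprocess.run(["canu", "-p", "lzh9", "-d", "canu_out",
                "genomeSize=5.1m", "-nanopore-raw", "reads.fastq"], check=True)

# 2. Polish residual errors with Pilon; the sorted BAM would come from a
#    separate step mapping reads back to the draft assembly.
subprocess.run(["java", "-jar", "pilon.jar",
                "--genome", "canu_out/lzh9.contigs.fasta",
                "--bam", "mapped.sorted.bam",
                "--output", "lzh9_polished"], check=True)

# 3. Rotate the circular chromosome to start at the replication origin.
subprocess.run(["circlator", "fixstart",
                "lzh9_polished.fasta", "lzh9_fixed"], check=True)
```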
Average Nucleotide Identity (ANI) and Whole Genome Alignments
Pyani (https://pypi.org/project/pyani/) was used to calculate the average nucleotide identity (ANI) [33] based on the BLAST algorithm with default parameters. BLASTN-based whole genome comparisons of strains L. pakistanensis LZH-9, L. pakistanensis JCM 18776, L. contaminans DSM 25560, L. xylanilyticus t26, Lysinibacillus sp. UBA7518, L. sphaericus OT4b.31, L. mangiferihumi M-GX18, and L. parviboronicapiens VT1065 were performed and represented with BRIG-0.95 [34], with each of these strains used as the reference in turn. Table 2 lists a summary of features of the eight Lysinibacillus genomes involved in this study, and BUSCO [35] was used to estimate the completeness of each genome against a bacterial core gene set. Gene family clustering, followed by genome-wide comparisons of the eight Lysinibacillus representative strains listed above, together with UniProt searches, GO Slim annotation, and GO enrichment analyses (default cutoff p-value 0.05), was performed via OrthoVenn [36] with default parameters. The Bacterial Pan Genome Analysis tool (BPGA) pipeline [37] was further used to extrapolate models of the Lysinibacillus pan/core-genome with default parameters. The size of the Lysinibacillus pan-genome was fitted to a power-law regression function Ps(n) = κ·n^γ with a built-in program of BPGA [37], in which Ps is the total number of gene families, n is the number of tested strains, and γ is a free parameter. If the exponent γ < 0, the pan-genome of Lysinibacillus would be considered 'closed', because the size of the pan-genome remains relatively constant as additional genomes are included; in contrast, the pan-genome is considered 'open' if 0 < γ < 1. In addition, the size of the core genome of Lysinibacillus was fitted to an exponential decay function Fc(n) = κc·exp(−n/τc) with a built-in program of the BPGA pipeline [37], in which Fc is the number of core gene families, whereas κc and τc are free parameters.
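For readers who want to reproduce the model fitting outside BPGA, the sketch below fits the same two functions with SciPy; the gene-family counts are illustrative stand-ins, not the study's values.

```python
# Hedged sketch: fitting pan-genome (power law) and core-genome (exponential
# decay) curves as defined above. Counts below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

n = np.array([1, 2, 3, 4, 5, 6])                        # genomes considered
pan = np.array([4097, 4610, 4890, 5060, 5180, 5272])    # pan-genome families
core = np.array([4097, 3310, 2910, 2610, 2360, 2182])   # core-genome families

def power_law(n, kappa, gamma):          # Ps(n) = kappa * n**gamma
    return kappa * n ** gamma

def exp_decay(n, kappa_c, tau_c):        # Fc(n) = kappa_c * exp(-n / tau_c)
    return kappa_c * np.exp(-n / tau_c)

(kappa, gamma), _ = curve_fit(power_law, n, pan, p0=(4000.0, 0.5))
(kappa_c, tau_c), _ = curve_fit(exp_decay, n, core, p0=(4500.0, 5.0))

# 0 < gamma < 1 indicates an "open" pan-genome; gamma < 0 a "closed" one.
print(f"pan-genome:  Ps(n) = {kappa:.1f} * n^{gamma:.3f}")
print(f"core-genome: Fc(n) = {kappa_c:.1f} * exp(-n/{tau_c:.3f})")
```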
Phylogenetic Analyses
A phylogenetic tree based on 16S rRNA sequences was constructed with the Neighbor-Joining (NJ) method using MEGA-X [38] with 1000 bootstrap replicates, and phylogenetic trees based on the protein sequences of functional genes were constructed using PhyML [39] with the Maximum Likelihood (ML) method and 1000 bootstrap replicates, followed by visualization with iTOL [40]; the sequences were aligned with Muscle [41] and then trimmed with Gblocks [42] applying the "less stringent" option before tree construction.
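A rough scripted version of this alignment-trimming-tree pipeline is sketched below; the MUSCLE and PhyML flags follow the v3-era command-line conventions and Gblocks' relaxed settings, so they are assumptions to verify against the installed versions.

```python
# Hedged sketch: MUSCLE alignment -> Gblocks trimming -> PhyML ML tree.
import subprocess

# 1. Align protein sequences (MUSCLE v3-style flags).
subprocess.run(["muscle", "-in", "genes.faa", "-out", "genes.aln.faa"],
               check=True)

# 2. Trim poorly aligned regions with relaxed ("less stringent") settings:
#    protein type (-t=p), shorter minimum block length, gaps allowed.
subprocess.run(["Gblocks", "genes.aln.faa", "-t=p", "-b4=5", "-b5=h"])
# Gblocks writes the trimmed alignment to "genes.aln.faa-gb"; a conversion
# to PHYLIP format (e.g., via Biopython) would precede the next step.

# 3. Maximum-likelihood tree with 1000 bootstrap replicates.
subprocess.run(["phyml", "--input", "genes.aln.phy", "-d", "aa",
                "-b", "1000"], check=True)
```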
Prediction of Mobile Genetic Elements
We applied ISfinder [52] to predict and classify insertion sequences (IS) and transposases within the Lysinibacillus genomes with BLASTP (cutoff e-value 1e-5). We applied IslandViewer 4 [53] to detect putative genomic islands (GIs) distributed over the Lysinibacillus genomes. We applied PHASTER (PHAge Search Tool Enhanced Release) [54] for the detection and annotation of prophage and prophage remnant sequences within the Lysinibacillus genomes. We also applied CRISPRCasFinder [55] for the detection of CRISPRs and Cas genes within the Lysinibacillus genomes. Correlation coefficients (Rs) and two-sided p values were obtained by applying Spearman rank correlation analysis (https://www.wessa.net/rwasp_spearman.wasp/).
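The same Spearman statistics can be reproduced in Python instead of the web tool, as in the sketch below; the count and length vectors are placeholders for the per-genome MGE tallies of Table S3.

```python
# Hedged sketch: Spearman rank correlation with a two-sided p value,
# equivalent to the web-based analysis used above. Data are illustrative.
from scipy import stats

transposon_counts = [158, 121, 95, 140, 88, 102, 77, 131]         # per genome
transposon_total_kb = [20.9, 15.2, 11.8, 18.4, 10.9, 13.5, 9.6, 17.1]

rho, p_two_sided = stats.spearmanr(transposon_counts, transposon_total_kb)
print(f"Spearman rho = {rho:.3f}, two-sided p = {p_two_sided:.4g}")
```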
Genome-wide Detection of Positively Selected Genes and Codon Adaptation Index (CAI) Calculation
Comparisons of the non-synonymous (dN) to synonymous (dS) substitution rate (ω = dN/dS) have been widely applied to determine whether mutations that change the amino acid (dN) at a specific position are adaptive (ω > 1, positive selection), deleterious (ω < 1, negative selection), or neutral (ω = 1, neutral evolution). We used the PosiGene pipeline [56] for genome-wide detection of positively selected genes in the above-mentioned genomes of Lysinibacillus spp. (Table 2), in which L. pakistanensis LZH-9 was used as the anchor, reference, and target species. Genes were considered to be PSGs if the branch-wide test resulted in a false discovery rate (FDR) of <0.05 and an adjusted p value of <0.05. We used the CAI as a numerical estimator of gene expression level, and CAIcal [57] was used to calculate CAI values for the genes of the above-mentioned strains. PCoA based on Bray-Curtis distance was performed with Origin Pro 2017 software (OriginLab, Northampton, MA, USA).
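As a conceptual complement to the CAIcal run, the sketch below computes a toy CAI following the Sharp and Li definition: each codon's relative adaptiveness w is its frequency in a highly expressed reference set divided by that of the most-used synonymous codon, and the CAI of a gene is the geometric mean of w over its codons. The two-amino-acid codon table is deliberately minimal.

```python
# Hedged sketch of the Codon Adaptation Index on a tiny, made-up codon table.
import math
from collections import Counter

SYNONYMS = {"F": ["TTT", "TTC"], "K": ["AAA", "AAG"]}  # toy table only

def relative_adaptiveness(reference_codons):
    """w(codon) = freq in reference set / freq of best synonymous codon."""
    counts = Counter(reference_codons)
    w = {}
    for codons in SYNONYMS.values():
        top = max(counts.get(c, 0) for c in codons) or 1
        for c in codons:
            w[c] = counts.get(c, 0) / top
    return w

def cai(gene_codons, w):
    """Geometric mean of w over the gene's codons (zero-weight codons skipped)."""
    vals = [w[c] for c in gene_codons if w.get(c, 0) > 0]
    return math.exp(sum(math.log(v) for v in vals) / len(vals))

w = relative_adaptiveness(["TTC", "TTC", "TTT", "AAA", "AAA", "AAG"])
print(f"CAI = {cai(['TTC', 'AAA', 'AAG'], w):.3f}")   # -> 0.794
```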
Isolation and Identification of Halotolerant Bacteria
After enrichment, isolation, and purification, four strains, LZH-9, LZH-13, LZH-22, and LZH-24, were obtained, and a phylogenetic analysis based on 16S rRNA sequences was conducted (Figure 1). All of the strains were capable of removing COD, and strain LZH-9 was selected for further optimization and analyses since it possessed the highest COD removal efficiency, 69.8% (Table 1). The colony of strain LZH-9 on LB (Lysogeny Broth) solid medium was light yellow, smooth, moist, and with neat edges, and under a scanning electron microscope strain LZH-9 showed a rod shape and a folded surface, with a size of ~0.4 µm × (1.5-3) µm (Figure 2). Notably, strain LZH-9 could grow in the presence of up to 14% (w/v) NaCl (Figure 3a).
Optimization of COD Removal Efficiency with Strain LZH-9
The COD removal efficiency increased markedly during the first 96 h and reached 74.3% at 96 h (Figure 3b). As the pH increased from 5.0 to 7.0, COD removal consistently increased and reached its maximum at pH 7.0 (Figure 3c), whereas as the initial pH increased from 7.0 to 9.0, the COD removal percentage decreased from 74.3% to 54.1%. Thus, with increasing initial pH, the COD removal percentage first increased and then decreased.
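For clarity, the removal percentages quoted here follow the standard definition (initial minus final COD over initial COD); a minimal helper, with illustrative values, is sketched below.

```python
# Hedged sketch: COD removal efficiency under the standard definition;
# the concentrations below are invented to reproduce a 74.3% figure.
def cod_removal_percent(cod_initial_mg_l: float, cod_final_mg_l: float) -> float:
    return (cod_initial_mg_l - cod_final_mg_l) / cod_initial_mg_l * 100.0

print(f"{cod_removal_percent(1000.0, 257.0):.1f}% COD removed")  # 74.3%
```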
Osmotic pressure caused by saline concentrations (>1% salt) can cause plasmolysis or loss of biological activity in microbes [58]. It was reported that the COD removal efficiency of a rotating biological disc wastewater treatment system fell from 85% to 59% when the salinity increased from 0 to 5%, and that the COD removal efficiency decreased to below 80% when the salt concentration exceeded 50 g/L in fed-batch operation [58,59]. In this study, we found that strain LZH-9 had a maximal salt tolerance of 14% (w/v) NaCl, higher than that of L. halotolerans LAM612T, which resists up to 10% (w/v) NaCl [18]; LZH-9 thus presents the highest salt tolerance among the reported strains of the genus Lysinibacillus thus far. In synthetic wastewater with an NaCl concentration lower than 5%, the COD removal efficiency varied from 68.1% to 81.9% at 96 h (Figure 3d), whereas at NaCl concentrations above 5%, the COD removal efficiency fell below 60.0%. These results suggest that strain LZH-9 can remove COD under high-salt conditions (1%-5% NaCl). Kubo et al. [3] isolated two salt-tolerant bacteria (resistant to up to 15% NaCl); in a mixed culture of both strains, the COD removal efficiency was approximately 70% over 72 h in a flask and increased to about 90% when they were applied in a pilot plant (working volume 1 m3) for 7 d. Mehdi Ahmadi et al. [2] isolated three salt-tolerant bacteria and observed COD removal efficiencies of 78.7%-61.5% in the treatment of real saline wastewater, with a decreasing trend as the organic loading rate increased. Comparatively, strain LZH-9 was among the most efficient COD-removing bacteria, illustrating notable potential as a microbial resource for the bio-treatment of hypersaline wastewater.
[Figure 3. (b) Time course of COD removal efficiency in synthetic wastewater; the initial NaCl concentration was 1%, the inoculum concentration was 5%, and strain LZH-9 was cultured at an initial pH of 7.0. (c) Effect of initial pH on the COD removal efficiency in synthetic wastewater, tested on day 4 with an initial NaCl concentration of 1%. (d) Effect of initial NaCl concentration on the COD removal efficiency in synthetic wastewater, tested at 96 h.]
Genomic Features
The genome of LZH-9 consisted of a circular chromosome (5,038,663 bp) and a plasmid (66,276 bp), with a total size of 5,104,939 bp. A total of 5263 CDS, including 108 tRNA and 34 rRNA genes, were predicted in the complete genome of strain LZH-9 using the IMG Annotation Pipeline v.5.0.1 [44]. Whole genome BLASTN-based average nucleotide identity (ANI) analyses showed that strain LZH-9 had ANI values above the cutoff (96%) with strains L. pakistanensis JCM 18776 (99.0%) and Lysinibacillus sp. UBA7518 (96.8%) (Figure S1); thus, these three strains were classified into the species L. pakistanensis. We also chose the genomes of five other Lysinibacillus spp. phylogenetically close to strain LZH-9 from public databases, and a summary of features of the eight Lysinibacillus genomes involved in the comparative study is listed in Table 2. The G+C contents of the 8 genomes ranged from 36.3% to 37.8%, and their coding densities varied from 71.5% to 83.7%. In addition, whole genome comparison of Lysinibacillus spp. using the BLAST Ring Image Generator (BRIG) revealed that the genome of strain LZH-9 was rather conserved within the species L. pakistanensis, showing high similarity, whereas many short, non-shared genomic regions were also found in each Lysinibacillus genome, most of which harbored poorly characterized proteins (Figures S2 and S3).
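The species assignment applied here reduces to a simple cutoff rule, sketched below with the reported ANI values for strain LZH-9 and one hypothetical sub-cutoff value.

```python
# Hedged sketch: group strains into the same species as the query when
# pairwise ANI >= 96%. The sub-cutoff entry is a made-up example.
ANI_CUTOFF = 96.0

ani_to_lzh9 = {
    "L. pakistanensis JCM 18776": 99.0,
    "Lysinibacillus sp. UBA7518": 96.8,
    "L. contaminans DSM 25560": 80.5,   # hypothetical value below the cutoff
}

conspecific = [s for s, ani in ani_to_lzh9.items() if ani >= ANI_CUTOFF]
print("Conspecific with LZH-9:", conspecific)
```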
Core and Pan-genome of Lysinibacillus
The pan-genome of the three strains of L. pakistanensis comprised 4427 gene families, of which the core genome accounted for 2847 gene families, or 64.3% of the total (Figure 4). Clusters of Orthologous Groups (COG) annotation of the pan-genome of the three L. pakistanensis strains revealed that the core genome had a higher proportion of genes in COG categories [J], [F], [O], [U], [E], [D], [C], [H], and [N], associated with basic biological functions linked to the ribosome, nucleotides, translation, amino acids, division, energy, and motility, in comparison with the accessory genome and strain-specific genes, whereas the accessory genome had a higher proportion of genes related to other COG categories (Figure S4). In addition, Gene Ontology (GO) enrichment analyses showed that the functions significantly enriched (p-value < 0.05) in strain-specific gene families of L. pakistanensis LZH-9 were mostly related to the metabolic processes of sucrose and protein (Figure 4).
The pan-genome analyses of six strains from six different Lysinibacillus species showed that 2182 (41.4%) of the 5272 gene families in total were shared by all tested strains (Figure 5), and L. pakistanensis LZH-9 had the most gene families (4097) in its genome. Mathematical modeling revealed an "open" pan-genome fitted to a power-law regression function [Ps(n) = 4276.74·n^0.444721], with the parameter γ = 0.444721 falling into the range 0 < γ < 1, whereas the core genome was fitted to an exponential regression [Fc(n) = 4475.34·e^(−0.177936n)] (Figure S5). COG annotation showed that the core genome had a higher proportion of genes involved in COG categories [J], [F], [O], [U], [E], [D], [C], [H], and [I], associated with central biological functions, than the accessory genome and unique genes, whereas the accessory genome contained a higher proportion of genes related to COG [T] and [P], and the unique genes had a higher proportion of genes in other categories (Figure S6). GO enrichment analyses showed that the only GO term significantly enriched (p-value < 0.05) in the core genome was the glycolytic process (Table S1), reflecting the considerable carbohydrate catabolic capability of Lysinibacillus. Additionally, the functions significantly enriched (p-value < 0.05) in the accessory genome and strain-specific gene families were mostly related to substrate transport, signal transduction and regulation, catabolic processes of various carbon sources and diverse nitrogen sources, and metabolic processes of antibiotics and other toxic compounds, which reflects that these Lysinibacillus strains harbor potential for biodegradation applications, in view of the above-mentioned enriched catabolism-related pathways.
Carbon Metabolism
Pathways of central carbohydrate metabolism, including glycolysis and gluconeogenesis, the oxidative tricarboxylic acid (TCA) cycle, the pentose phosphate pathway (PPP), the glyoxylate bypass, acetogenesis, and methylglyoxal metabolism, as well as genes involved in the metabolism of the organic acids glycerate and lactate, were found in all tested genomes of Lysinibacillus. Genes involved in the biosynthesis of butanol, butyrate, and acetolactate, the metabolism of acetoin, butanediol, and glycerol, and the utilization of chitin and N-acetylglucosamine were also present in all tested genomes, and genes related to ethanolamine metabolism were present in all of the tested genomes except for L. contaminans DSM 25560. Genes related to the metabolism of monosaccharides such as D-ribose, deoxyribose and deoxynucleosides, D-gluconate, and ketogluconates were also detected in all of the tested genomes, and genes related to mannose metabolism were present in all tested genomes except for L. xylanilyticus t26. Phylogenetic analyses indicated that the genes encoding mannose-6-phosphate isomerase, gluconate 2-dehydrogenase, and ribokinase of Lysinibacillus spp., involved in the metabolism of mannose, D-gluconate, and ketogluconates, and in the PPP, respectively, were likely acquired via cross-family gene exchange or HGT events from Planococcaceae or Paenibacillaceae, and the genes encoding glycerophosphodiester phosphodiesterase of Lysinibacillus, involved in glycerol metabolism, were likely acquired via cross-order HGT from Lactobacillales (Figures S7-S10). Annotation against dbCAN (database of carbohydrate-active enzymes) [48] also revealed that Lysinibacillus spp. harbored an abundant repertoire of carbohydrate-active enzymes (CAZymes), including carbohydrate esterases (CEs), carbohydrate-binding modules (CBMs), glycosyltransferases (GTs), glycoside hydrolases (GHs), auxiliary activities (AAs), and a small number of polysaccharide lyases (PLs), of which GTs were the most abundant, and strain L. pakistanensis JCM 18776 possessed the most carbohydrate-active enzymes (Figure 6). All of these enzymes involved in carbon metabolism are closely linked with the COD removal ability of Lysinibacillus: through them, carbohydrate-containing contaminants can be consumed by the microbes while supplying energy at the same time.
Nitrogen and Sulfur Metabolism
Nitrogen- and sulfur-containing contaminants also contribute to COD concentrations. In all tested genomes of Lysinibacillus we found genes encoding nitric oxide synthases, which oxidize L-arginine to nitric oxide (NO) and might thereby protect the bacteria against oxidative stress [61,62], as well as nitric oxide dioxygenases encoded by hmp genes, which oxidize nitric oxide to nitrate. Genes encoding nitrilase, which catabolizes organic nitrogen sources to produce ammonia, were found in the genomes of L. contaminans DSM 25560 and L. sphaericus OT4b.31. Gene clusters encoding urease, composed of the functional subunits (ureAB and ureC) and accessory proteins (ureD, ureE, ureF, and ureG), which converts urea into ammonia and carbon dioxide [63], were only found in L. parviboronicapiens VT1065 and L. sphaericus OT4b.31. Additionally, genes encoding ammonium transporters, glutamate dehydrogenase, glutamine synthetase, and carbamoyl-phosphate synthase were found in all of the tested genomes of Lysinibacillus; through these enzymes, a series of important biosynthetic reactions is carried out with ammonia as the entry point. However, denitrifying reductase genes were missing in all the tested genomes of Lysinibacillus. Genes encoding sulfate and thiosulfate permease, sulfate adenylyltransferase, adenylyl-sulfate reductase, phosphoadenylyl-sulfate reductase, and assimilatory sulfite reductase, involved in reversible assimilatory sulfate reduction or indirect sulfite oxidation, were also found in all the tested genomes of Lysinibacillus. It appears that the utilization of ammonia, organic nitrogen sources, and sulfate for growth is the main strategy of the tested Lysinibacillus spp.
Energy Conservation and Transduction
All of the tested genomes of Lysinibacillus contained the gene clusters qcrABC, involved in the synthesis of menaquinone:cytochrome c reductase complexes, which function preferentially under anaerobic to microaerobic conditions and couple the transfer of electrons from quinol in the membrane to c-type cytochrome [64,65]. Succinate:quinone oxidoreductase (complex II), which links the TCA cycle to the quinone pool [66], cytochrome c oxidases (complex IV), which transfer electrons from cytochrome c to oxygen [67], and the oxygen-reducing bd-type oxidase encoded by cydAB genes were also present in all of the tested genomes. In contrast, genes encoding NADH:quinone oxidoreductase (complex I) were not detected.
Resistance to Antibiotics and Toxic Metals
Genes involved in vancomycin resistance, such as vanW encoding vancomycin B-type resistance proteins, were present in all tested genomes of Lysinibacillus, likely acquired via cross-family HGT events (Figure S11), whereas vanRS, encoding the related two-component signal transduction systems [68,69], were found in all the tested genomes of Lysinibacillus except for L. contaminans and L. mangiferihumi, likely acquired via cross-class HGT events from members of Clostridiales (Figures S12 and S13). Genes fosB, encoding fosfomycin resistance proteins [70], and the tetracycline resistance genes tet(M) and tet(O), encoding paralogs of the translational GTPase elongation factor EF-G, were present in all of the tested genomes; through the latter, tetracycline is actively removed from the bacterial ribosome [71,72], and phylogenetic analyses suggested that the fosB genes of Lysinibacillus were likely acquired via cross-family gene exchange or HGT events (Figure S14). Genes encoding aminoglycoside adenylyltransferases, which adenylate streptomycin and spectinomycin [73], were present in all of the tested genomes except for L. contaminans DSM 25560 and L. sphaericus OT4b.31, whereas genes satA, encoding N-acetyltransferases that inactivate streptothricin via acetyl-CoA-dependent lysine acetylation [74,75], were only found in the strains of L. pakistanensis and strain L. xylanilyticus t26, likely acquired via cross-order HGT (Figure S15). Genes encoding beta-lactamases, involved in resistance to beta-lactam antibiotics, were present in all tested genomes except for L. contaminans DSM 25560, probably acquired via cross-class HGT from members of Clostridiales or Tissierellales (Figure S16). The gene clusters bceRSAB and yvcSRQP, encoding bacitracin export systems involved in responses and resistance to bacitracin [76][77][78], and genes cbrC, encoding colicin E2 tolerance proteins [79], were only present in the strains of L. pakistanensis and strains L. xylanilyticus t26 and L. sphaericus OT4b.31, probably acquired via cross-order HGT from Clostridiales (Figure S17). As for resistance to heavy metals, we found that genes chrA encoding chromate transport proteins were present in all of the tested genomes except for L. contaminans DSM 25560. The arsenic resistance genes arsC, encoding arsenate reductases that convert arsenate to arsenite, and arsM, encoding arsenic methyltransferases that convert inorganic arsenic into volatile derivatives, were present in all the tested genomes of Lysinibacillus. Arsenite can then be expelled from the cells by the arsenite efflux pumps encoded by acr3 or arsB of Lysinibacillus spp., which were probably acquired via cross-family HGT events (Figures S18-S20); acr3 was present in all tested genomes of Lysinibacillus. Gene clusters involved in the efflux of divalent heavy metal cations, including cadAC, encoding putative ATP-dependent efflux systems, and genes czcD, encoding cation diffusion facilitator (CDF) proteins, were also present in all of the tested genomes of Lysinibacillus, both likely acquired via cross-family HGT events (Figures S21-S23) [80,81]. Genes copA, encoding copper-translocating P-type ATPases that transport Cu(I) ions from the cytosol to the periplasm, were also present in all tested genomes. All of these genes may confer on Lysinibacillus spp. resistance to toxic compounds such as antibiotics, heavy metal ions in polluted water, and other environmental contaminants, enhancing their environmental adaptability and bioremediation ability.
Capsular and Extracellular Polysaccharides
Capsular and extracellular polysaccharides (EPS) play important roles in cell adhesion and biofilm formation, which are closely related to the colonization, biodegradation, desiccation resistance, and toxic compound resistance of bacteria [82][83][84][85]. The gene clusters rfbABCD-wbbL, encoding proteins that convert glucose-1-phosphate to the EPS precursor dTDP-rhamnose, were detected in the strains L. pakistanensis JCM 18776 and L. contaminans DSM 25560 [86,87], whereas the gene clusters epsBCD and epsEF, also involved in the biosynthesis of EPS [88,89], were detected in all of the tested genomes except those of L. pakistanensis. In addition, genes hasA, encoding hyaluronan synthases, and hasC, encoding UTP-glucose-1-phosphate uridylyltransferase, involved in the biosynthesis of the hyaluronic acid capsule [90], were found in strains L. mangiferihumi M-GX18 and L. parviboronicapiens VT1065 and in the strains of L. pakistanensis. Genes pda, encoding polysaccharide deacetylases, and genes pgd, encoding peptidoglycan N-acetylglucosamine deacetylases mediating peptidoglycan deacetylation in protection against lysozyme [91][92][93], were present in all of the tested genomes. We also found luxS genes encoding S-ribosylhomocysteinase present in all tested genomes, acquired via cross-family gene exchange or HGT events (Figure S24). Autoinducer-2 (AI-2), produced by S-ribosylhomocysteinase (LuxS), forms a universal quorum sensing system that facilitates both inter- and intra-species communication and plays important roles in growth regulation, EPS production, and biofilm formation [94][95][96][97]; induction of the luxS gene under salt and chloride stress has also been observed in a member of the Bacillaceae [98].
Halotolerance and Resistance to Osmotic Stress
Biological processes including sodium efflux, potassium uptake, and compatible solute uptake and synthesis are known to counteract osmotic stress [99]. The gene clusters kdpEDABC, which enhance resilience to salt stress by scavenging K+ [100][101][102][103], and genes kefA, encoding mechanosensitive channel family proteins that regulate ion homeostasis and the turgor pressure of bacteria upon growth at high osmolarity [104,105], were present in all of the tested strains except for L. contaminans DSM 25560, and phylogenetic analyses revealed that the kdp gene clusters clustered with those from Clostridiales and Planococcaceae, indicating that the kdp gene clusters were likely acquired via HGT (Figures S25-S28). Another ATP-dependent transporter of monovalent cations (K+ and Na+) contributing to salt resistance, present in all of the tested strains, was KtrAB, composed of cytosolic octameric regulatory proteins (KtrA) and dimeric membrane proteins (KtrB) [100,106,107], likely acquired via cross-family gene exchanges or HGT events (Figure S29). The accumulation of small, uncharged compatible solutes such as glycine betaine in the cytoplasm is also a common strategy of bacteria to counteract external salt stress, and previous reports have also illustrated possible additional functions of glycine betaine as a cold and heat stress protectant [108][109][110]. Most bacteria take up glycine betaine with different transporters or synthesize glycine betaine from choline. Choline dehydrogenase (BetA), induced by salt and/or choline, together with glycine betaine aldehyde dehydrogenase (BetB), catalyzes the two-step oxidation of choline to glycine betaine [111]; we found genes betA present in L. xylanilyticus t26 and the strains of L. pakistanensis, and betB present in L. xylanilyticus t26 and L. contaminans DSM 25560. Phylogenetic analyses showed that the betA genes of Lysinibacillus clustered with those from Paenibacillaceae and Thermoactinomycetaceae, suggesting cross-family gene exchanges (Figure S30). The OpuA system is the main ATP-binding cassette (ABC) transporter for glycine betaine, consisting of three components: a hydrophilic polypeptide encoded by opuAC, which is a glycine betaine-binding protein (GBBP), an integral membrane protein encoded by opuAB, and an ATPase encoded by opuAA [110,112,113]. We found that the genes opuAA, opuAB, and opuAC were present in all of the tested strains, probably acquired from members of Lactobacillales via cross-order HGT (Figures S31-S33). We also found the gene clusters proXV, involved in the transport of glycine betaine, present in all tested strains [114,115]. In addition, genes proCBA, involved in the biosynthesis of proline, an effective osmolyte important in salt tolerance [116][117][118], were also present in all tested genomes.
Xenobiotics Biodegradation and Metabolism
KEGG (Kyoto Encyclopedia of Genes and Genomes) annotation revealed multiple genes involved in xenobiotics biodegradation and metabolism, closely correlated with contaminant removal ability, in the tested genomes of Lysinibacillus spp. As for genes associated with benzoate degradation, genes encoding 3-hydroxybutyryl-CoA dehydrogenase (fadB), 4-oxalocrotonate tautomerase (xylH), acetyl-CoA C-acetyltransferase (atoB), acetyl-CoA acyltransferase (fadA), and catechol 2,3-dioxygenase (dmpB) were found in all of the tested genomes of Lysinibacillus, of which xylH is essential in the conversion of many aromatic compounds to intermediates of the TCA cycle [119]; the atoB and fadB genes were likely acquired via cross-family HGT events (Figures S34 and S35), and genes encoding 2-keto-4-pentenoate hydratase (mhpD), 2-oxo-3-hexenedioate decarboxylase (dmpH), and aminomuconate-semialdehyde/2-hydroxymuconate-6-semialdehyde dehydrogenase (dmpC) were only present in strains of L. pakistanensis. Genes encoding 3-hydroxyacyl-CoA dehydrogenase/enoyl-CoA hydratase (fadJ) and 4-carboxymuconolactone decarboxylase (pcaC) were missing in strain L. sphaericus OT4b.31, and the fadJ genes were likely acquired via cross-class HGT events from members of Burkholderiales or Clostridiales (Figure S36). As for chloroalkene degradation, genes encoding 2-haloacid dehalogenase, alcohol dehydrogenase (adh), and aldehyde dehydrogenase (NAD+) (aldh) were also present in all of the tested genomes of Lysinibacillus. Additionally, genes encoding 4-hydroxy-2-oxovalerate aldolase (mhpE) and acetaldehyde dehydrogenase (mhpF), involved in xylene and dioxin degradation, were only present in strains of L. pakistanensis. As for genes associated with aminobenzoate degradation, genes encoding acylphosphatase and 4-nitrophenyl phosphatase, likely acquired via cross-family HGT events (Figure S37), were present in all the tested genomes of Lysinibacillus. As for aromatic amine catabolism, genes encoding 4-hydroxyphenylacetate 3-monooxygenase (hpaB) and nitrilotriacetate monooxygenase (ntaB/nmoB), likely acquired via cross-family HGT events (Figure S38), were present in all of the tested genomes of Lysinibacillus. Gene clusters dmpR-hpaB-dmpB-pnpA-mhpD-dmpFGH-xylH-hpaE with an identical arrangement, involved in xenobiotics biodegradation and metabolism, were present in the genomes of Lysinibacillus; their non-uniform gene contexts and GC contents deviating from those of the genomes suggested that they were likely acquired via HGT after the speciation of Lysinibacillus (Figure S39). Annotation against the MEROPS database [50] revealed numerous peptidases that could help to hydrolyze proteinaceous contaminants in the genomes of Lysinibacillus, of which strain JCM 18776 contained the most genes encoding peptidases (287), followed by strain LZH-9 (159); both strains belong to the species L. pakistanensis. Cytochromes P450 represent a superfamily of heme-containing monooxygenases that play critical roles in the adaptation of microbes to diverse environments by modifying harmful environmental chemicals, and annotation against the Cytochrome P450 Database [49] revealed that strain L. pakistanensis JCM 18776 contained the most genes encoding cytochrome P450 (66), followed by strain L. xylanilyticus t26 (44) and strain L. pakistanensis LZH-9 (43) (Table S2).
Mobile Genetic Elements and CRISPR-Cas Systems
Mobile genetic elements (MGEs) are mobile genome segments such as insertion sequences, transposases, genomic islands (GIs), and phages, and the amount of MGEs correlates positively with the frequency of HGT [120]. The results showed that an abundant repertoire of MGEs, as well as CRISPR-Cas (clustered regularly interspaced short palindromic repeats and associated genes) systems, existed in the Lysinibacillus genomes. The number and total length of transposon sequences per genome reached up to 158 and 20.9 kb (L. pakistanensis LZH-9), the number and total length of GI-related sequences per genome reached up to 621 and 317.8 kb (L. mangiferihumi M-GX18), and the number and total length of phage-related sequences reached up to 550 and 430.8 kb (L. mangiferihumi M-GX18) (Table S3). On the other hand, CRISPR-Cas systems are the immune system of prokaryotes against viral attack [121], and Type I-B CRISPR-Cas systems were found in the tested genomes, of which L. pakistanensis LZH-9 contained the most (76) CRISPR-Cas-related genes or spacers (Table S3). We found in the tested genomes that the numbers of identified transposon sequences correlated positively (rho = 0.881, p = 0.007) with the total length of transposon sequences, as did the numbers of phage sequences with the total length of phage regions (rho = 0.922, p = 0.001) and the number of CRISPR-Cas sequences with the total length of CRISPR-Cas sequences (rho = 0.878, p = 0.004). However, the number of genomic island sequences in the tested genomes did not significantly correlate with the total length of genomic islands (rho = 0.476, p = 0.243) (Figure S40). The abundant MGEs present in the tested genomes of Lysinibacillus indicated that HGT may contribute to the genomic evolution of Lysinibacillus during niche adaptation, and CRISPR-Cas systems may help to protect the genomes of Lysinibacillus by eliminating harmful genomic intrusions.
Positive Selection Analyses
Positive selection was also found to be an important driving force in the evolution of Lysinibacillus. Genes can be changed by positive selection through the fixation of beneficial gene variants in a population or species over time if they increase fitness. Genome-scale positive selection analyses were performed on the eight genomes of Lysinibacillus in this study (Table 2). Seven genes (Lp_411, Lp_718, Lp_1054, Lp_2135, Lp_3474, Lp_3540, Lp_4098) were identified as being under positive selection (Table S4), two of which (Lp_2135, Lp_3540) were annotated as hypothetical proteins. Gene Lp_1054, encoding an uncharacterized conserved protein, was located in a known gene cluster related to flagellum biosynthesis. Gene thiJ (Lp_411) encodes a protein performing multiple functions, including protease/amidase activity of broad specificity, acid resistance, oxidative stress resistance, and holdase chaperone activity [122][123][124][125], and gene surA (Lp_4098) encodes a protein that functions as a periplasmic chaperone and peptidyl-prolyl isomerase (PPIase) involved in cell envelope functions, the biogenesis of β-barrel outer membrane proteins (OMPs), and virulence mediation [126][127][128][129]; both may play important roles in maintaining cellular homeostasis. Gene fabB (Lp_718), encoding β-ketoacyl-ACP synthase, capable of catalyzing the elongation of longer-chain ACPs during fatty acid synthesis [130][131][132], and gene trpF (Lp_3474), encoding phosphoribosylanthranilate isomerase involved in tryptophan synthesis, were also found to contain adaptive changes. Taken together, the genes under positive selection in the tested Lysinibacillus genomes tended to play a variety of functional roles with broad substrate specificity; even small adaptive changes in the coding sequences of such genes may bring considerable benefits in evolution.
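The decision rule behind these PSG calls (ω > 1 plus a multiple-testing-corrected p value below 0.05) can be sketched as below; the gene names, ω values, and p values are invented and do not reproduce PosiGene output.

```python
# Hedged sketch: flag genes as positively selected when omega = dN/dS > 1
# and the FDR-adjusted p value from a likelihood-ratio test is < 0.05.
from statsmodels.stats.multitest import multipletests

candidates = {                     # gene -> (omega, raw p value); invented
    "gene_A": (1.8, 0.001),
    "gene_B": (1.4, 0.004),
    "gene_C": (0.6, 0.30),
}

genes = list(candidates)
pvals = [candidates[g][1] for g in genes]
reject, fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

for g, ok, q in zip(genes, reject, fdr):
    omega = candidates[g][0]
    verdict = "positive selection" if ok and omega > 1 else "not significant"
    print(f"{g}: omega = {omega}, FDR = {q:.3g} -> {verdict}")
```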
Expression Assessment with the Codon Adaptation Index (CAI)
In this study, the expression levels of all genes in the above-mentioned genomes of Lysinibacillus were assessed utilizing the Codon Adaptation Index (CAI) as a numerical estimator. The CAI, developed by Sharp et al. [133], measures the synonymous codon usage bias of a nucleic acid sequence against a reference set of confirmed highly expressed genes and has been widely applied to estimate gene expressivity, predict highly expressed genes, predict the likelihood of successful heterologous gene expression [134-139], and characterize genome lifestyles [140]. The results showed that the top four highly expressed COG classes in L. pakistanensis LZH-9, based on average CAI values, were COG [J], [F], [O] and [C], associated with the most essential biological processes, including nucleotide metabolism, translation, and energy production. These were followed by COG [M] (cell wall/membrane/envelope biogenesis), [E] (amino acid transport and metabolism), [Q] (secondary metabolites biosynthesis), and [G] (carbohydrate transport and metabolism), reflecting a high metabolic rate for digesting potential carbon or nitrogen sources in contaminants, which is closely linked with the efficient COD removal ability of LZH-9; in contrast, genes related to COG [X] (mobilome: prophages, transposons) were predicted to be the most inactively expressed (Figure 7a). The top four highly expressed COG classes in the other strains were mostly consistent with L. pakistanensis LZH-9, whereas the strains differed in the expression levels of other COG classes. In the other two strains of L. pakistanensis, genes related to COG [L] (replication, recombination and repair) in strain JCM 18,776 and genes related to COG [V] (defense mechanisms) in strain UBA7518 were predicted to be the most inactively expressed (Figure 7b,c; Figure S41). Principal coordinates analysis (PCoA) of average CAI values based on COG classes was further conducted to visualize the similarity or dissimilarity of expression patterns among different strains of Lysinibacillus. Results showed that the genomes analyzed in this study clustered into three groups by PCo2 (accounting for 73.4% of the variance) based on COG classes (Figure S42): strain L. pakistanensis LZH-9 clustered with L. contaminans DSM 25560, L. mangiferihumi M-GX18 and L. pakistanensis JCM 18,776, indicating a similar gene expression pattern among these strains, whereas strains L. parviboronicapiens VT1065, L. sphaericus OT4b.31 and L. pakistanensis UBA7518 formed another cluster, and both clusters were clearly separated from strain L. xylanilyticus t26. Divergent niches and environmental stresses faced by these strains may drive the differentiation of general gene expression patterns; however, a larger-scale examination is needed to support this view.
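For reference, the CAI is the geometric mean of the relative adaptiveness (w) of a gene's codons, where w is each codon's frequency divided by that of the most-used synonymous codon in a highly expressed reference gene set. A minimal sketch follows; the w values are hypothetical, whereas real analyses derive them from a reference set for each genome.

```python
# Codon Adaptation Index as the geometric mean of relative adaptiveness.
import math

# Hypothetical relative adaptiveness values for a few codons.
W = {"GCT": 1.00, "GCC": 0.35, "AAA": 1.00, "AAG": 0.42, "GGT": 0.88}

def cai(sequence):
    codons = [sequence[i:i + 3] for i in range(0, len(sequence) - 2, 3)]
    logs = [math.log(W[c]) for c in codons if c in W]
    return math.exp(sum(logs) / len(logs))

print(f"CAI = {cai('GCTAAAGGTGCC'):.3f}")
```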
Conclusions
In this study, we isolated and identified four halotolerant strains from a wastewater treatment plant, and found that strain LZH-9 could grow in the presence of up to 14% (w/v) NaCl and remove 81.9% of COD at 96 h after optimization. Whole genome sequencing of strain LZH-9 and comparative genomic analysis of eight strains of Lysinibacillus revealed the metabolic versatility of different Lysinibacillus genomes, and we also found a multitude of genes involved in xenobiotics biodegradation and in resistance to toxic compounds and salinity in all tested genomes of Lysinibacillus, pointing to promising applications of Lysinibacillus in bioremediation. Genome-scale positive selection analyses showed that the genes under positive selection in Lysinibacillus spp. tended to be multifunctional. Additionally, genes related to COG [M], [E], [Q] and [G] were relatively highly expressed in L. pakistanensis LZH-9, in addition to those related to basic biological functions, reflecting the high metabolic rate of L. pakistanensis for digesting potential carbon or nitrogen sources in contaminants, which is closely linked with the efficient COD removal ability of strain LZH-9. In all, the high COD removal efficiency and halotolerance, together with the genomic evidence, suggest that L. pakistanensis LZH-9 possesses great potential for application in the bio-treatment of hypersaline industrial wastewater.
Supplementary Materials
Table S1: GO enrichment analyses (p-value < 0.05) of gene families in six representative strains of Lysinibacillus.
Table S2: Numbers of cytochrome P450 and peptidases in genomes of Lysinibacillus spp.
Table S3: Statistics of mobile genetic elements (MGEs) present in the genomes of Lysinibacillus spp.
Table S4: Genes under positive selection in Lysinibacillus detected by the PosiGene pipeline.
Figure S1: Heat map of whole-genome BLASTN-based average nucleotide identity (ANI) values of nine strains of the genus Lysinibacillus.
Figure S2: BLASTN-based whole-genome comparisons of eight representative strains of the genus Lysinibacillus using BRIG, with L. pakistanensis LZH-9 as reference.
Figure S3: … L. mangiferihumi M-GX18 and L. parviboronicapiens VT1065, L. pakistanensis LZH-9.
Figure S4: Bar chart showing proportions of COG classes in the different parts (i.e., core, accessory, unique) of the pan-genome of 3 strains of L. pakistanensis (L. pakistanensis JCM 18776, Lysinibacillus sp. UBA7518, L. pakistanensis LZH-9).
Figure S5: Mathematical modeling of the pan-genome and core genome of 6 Lysinibacillus type strains: L. contaminans DSM 25560, L. xylanilyticus t26, L. sphaericus OT4b.31, L. mangiferihumi M-GX18, L. parviboronicapiens VT1065 and L. pakistanensis LZH-9.
Figure S6: Bar chart showing proportions of COG classes in the different parts (i.e., core, accessory, unique) of the pan-genome of 6 Lysinibacillus type strains.
Figure S7: Maximum likelihood phylogenetic tree of mannose-6-phosphate isomerase protein sequences derived from Lysinibacillus spp. strains and other representative species.
Figure S8: Maximum likelihood phylogenetic tree of gluconate 2-dehydrogenase protein sequences derived from Lysinibacillus spp. strains and other representative species.
Figure S9: Maximum likelihood phylogenetic tree of ribokinase protein sequences derived from Lysinibacillus spp. strains and other representative species.
Figure S10: Maximum likelihood phylogenetic tree of glycerophosphodiester phosphodiesterase protein sequences derived from Lysinibacillus spp. strains and other representative species.
Figures S11-S33: …
Figure S34: Maximum likelihood phylogenetic tree of acetyl-CoA C-acetyltransferase (AtoB) sequences derived from Lysinibacillus spp. strains and other representative species.
Figure S35: Maximum likelihood phylogenetic tree of 3-hydroxybutyryl-CoA dehydrogenase (FadB) sequences derived from Lysinibacillus spp. strains and other representative species.
Figure S36: Maximum likelihood phylogenetic tree of 3-hydroxyacyl-CoA dehydrogenase/enoyl-CoA hydratase (FadJ) sequences derived from Lysinibacillus spp. strains and other representative species.
Figure S37: Maximum likelihood phylogenetic tree of 4-nitrophenyl phosphatase sequences derived from Lysinibacillus spp. strains and other representative species.
Figure S38: Maximum likelihood phylogenetic tree of nitrilotriacetate monooxygenase component B sequences derived from Lysinibacillus spp. strains and other representative species.
Figure S39: Synteny analysis of a xenobiotics biodegradation and metabolism-related gene cluster dmpR-hpaB-dmpB-pnpA-mhpD-dmpFGH-xylH-hpaE derived from L. pakistanensis LZH-9 and other representative species in Lysinibacillus, and GC content comparison against the genome of L. pakistanensis LZH-9.
Figure S40: Scatterplots of the relationships between (a) the number of transposon sequences and the total length of transposons, (b) the number of genomic island sequences and the total length of genomic islands, (c) the number of phage sequences and the total length of phage regions, and (d) the number of CRISPR-Cas sequences and the total length of CRISPR-Cas sequences detected in the genomes in Table 2.
Figure S41: …
"year": 2020,
"sha1": "3f9fedbc6e3fa119b81adf590a089eb137121338",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2607/8/5/716/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "db02ef07fe62cd36a185ed4982930d44075088c3",
"s2fieldsofstudy": [
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Glass fiber-reinforced composites in dentistry
Enormous improvements in dental materials manufacturing have been made with the aim of producing durable dental materials without compromising aesthetic properties. One of the approaches that fulfills this aim is the use of reinforcing glass fibers as fillers in dental materials, typically resin polymers, to obtain glass fiber-reinforced composites. Glass fiber-reinforced composites offer many advantages to dental materials, though some limitations have been recorded in the literature. In this review, a study of the types of glass fibers, the factors affecting their properties, and the properties of glass fiber-reinforced materials was carried out; in addition, research papers that experimentally studied their applications in dentistry are presented. The success of glass fiber-reinforced composites in dentistry depends on the glass fibers' composition, orientation, distribution, amount, length and adhesion; these factors, once tailored to the required clinical situation, provide the essential reinforcement to dental restorations and appliances.
Background
Glass fibers are thin strands of silica-based glass extruded into small-diameter fibers. These fibers are embedded in a resin matrix to produce glass fiber-reinforced composites.
Glass fiber-reinforced composites consist of a polymerized monomer matrix filled with fine glass fibers that are chemically bonded to the matrix using silane coupling agents. The reinforcing effect of the fiber fillers depends on the transfer of stress from the polymer to the fibers, as well as on the role of each fiber in preventing crack propagation.
Glass fibers exist in different compositions, namely A-glass, C-glass, D-glass, AR-glass, S-glass and E-glass. They have different properties and uses, but generally all glass fibers are amorphous and formed of a three-dimensional network of silica, with oxygen and other atoms arranged randomly.
Glass fibers are employed in different fields, such as engineering, plastics industries, electronic boards, radar housings and dentistry. They are applied in the manufacturing of different dental products such as fixed partial dentures, endodontic post systems and orthodontic fixed retainers.
Glass fiber-reinforced composites offer many advantages to dental materials, as they provide acceptable aesthetics, non-corrosiveness, high toughness, a metal-free and non-allergenic composition, practical chairside handling, biocompatibility and the ability to be tailored to meet the specific requirements of many dental applications (Ferracane and Condon 1992; Ramakrishna et al. 2001).
Methods
Different types of reinforcing fibers exist, such as carbon/epoxy, polyaramide, ultra-high molecular weight polyethylene (UHMWPE) and glass fibers. Each has its own advantages and disadvantages; for instance, carbon/epoxy fibers have high tensile strength, fatigue strength and modulus of elasticity, but poor esthetics, while polyaramide fibers are difficult to handle and polish; on the other hand, UHMWPE fibers have poor adhesion to the polymer matrix; meanwhile, glass fibers have enhanced adhesion to the polymer matrix with better esthetics, which is why glass fibers have gained widespread use in dentistry (Khan et al. 2015).
Glass fiber types (Khan et al. 2015)
1. A-glass fibers: high-alkali glass with 25% soda and lime. They are used as fillers in the plastics industry as they are cheap and easy to manufacture, though they have poor chemical resistance to water and alkaline media.
2. C-glass fibers: they have high chemical resistance to corrosion, so they are used in contact with acidic materials; however, they have low strength.
3. D-glass fibers: they have greater electrical properties and low dielectric permittivity, so they are used in electronic boards as a reinforcing material. However, they have low strength and poor chemical resistance.
4. S-glass fibers: they have high strength, modulus of elasticity and corrosion resistance with low dielectric permittivity. Unfortunately, they are difficult to manufacture and thus expensive.
5. AR-glass fibers: they have high resistance to crack propagation and great impact strength due to the presence of zirconium; however, their high melting temperature restricts their application.
6. E-glass fibers: electric-grade glass fibers and the most used type of glass fibers (50% of the glass fiber market) (Kolesov et al. 2001), due to their low cost, high electrical insulation and high water resistance. Unfortunately, their boron oxide and fluorine are volatile elements that might disturb the glass chemical homogeneity and pollute the environment.
Of all these types, only E-glass and S-glass fibers have been used in dentistry. Many dental products reinforced with glass fibers are available commercially, such as pre-impregnated E-glass fiber-reinforced composite (Vectris Pontic, Ivoclar Vivadent, Schaan, Liechtenstein), pre-impregnated S-glass fiber-reinforced composite (FiberKor, Pentron Corporation, Wallingford, CT, USA) and PMMA-impregnated E-glass fiber-reinforced composite (Stick Tech, Turku, Finland) (Khan et al. 2015).
Chemical composition of glass fibers
The content of alkali metals (Li, Na, K, etc.) and alkaline earth metals (Mg, Ca, etc.) in glass fibers plays an important role in their physical and mechanical properties. The chemical composition and properties of the glass fibers most used in dentistry, S-glass and E-glass, are listed in Table 1. It is well documented that the constitutional elements of glass fibers are critical factors for their hydrolytic stability. Boron oxide (B2O3), for example, can react with saliva in the oral cavity with subsequent leaching of B2O3; this reaction induces corrosion effects on glass fibers, causing a negative impact on glass strength. E-glass fibers contain 6-9 wt% B2O3, while S-glass fibers contain less than 1 wt% B2O3 (Li et al. 2014; Miettinen et al. 1999). This problem was overcome by pre-impregnating (prepreg) the glass fibers with a polymer matrix, or by the use of impregnated fibers, which are glass fibers impregnated with highly porous PMMA during manufacturing (Takahashi et al. 2006).
Table 1: A comparison between S-glass and E-glass fibers, composition and properties (Meriç et al. 2005; Chong and Chai 2003; Vallittu 1998). Columns: point of comparison, S-glass fibers, E-glass fibers.
Orientation of glass fibers in the polymer matrix
Glass fibers are present in different orientations in the polymer matrix; these orientations provide different properties and strengthening effects. They may have random or longitudinal orientation. Randomly (chopped) oriented fibers give isotropic properties, i.e., the same mechanical properties in all directions. Longitudinally oriented fibers take the form of (a) unidirectional continuous fiber laminates, which are anisotropic, i.e., they have different properties in different directions, and show the highest strength and stiffness of any composite, but only in the one direction of the fibers; or (b) bidirectional discontinuous short- and long-fiber or textile-fabric (woven, knitted and braided) laminates, which are orthotropic, i.e., they have the same properties in two directions and different properties in the third, orthogonal direction. A combination of two or more types of orientation in one composite is called a hybrid fiber composite (Tezvergil et al. 2003).
In general, the parallel orientation of the glass fibers to the applied force results in strength reinforcement, while those perpendicular to the applied force yield low reinforcement (Khan et al. 2015).
Composites with longer fibers display higher wear resistance (Callaghan et al. 2006) and higher ultimate strength and fracture resistance (Xu et al. 2000). Garoushi et al. (2007b) and Manhart et al. (2000) stated that short glass fibers can detach readily from the matrix, resulting in high wear.
In addition, glass fiber orientation affects the thermal behavior and polymerization shrinkage of the composites (Tezvergil et al. 2006).
Distribution of fibers
The distribution of glass fibers, whether even or concentrated at a particular site, affects the composite's mechanical properties. If the fibers are evenly distributed, fatigue strength increases; if they are concentrated in one area, stiffness and strength increase (Khan et al. 2015). Usually, the distribution of glass fibers is dictated by the application; however, in most dental literature, glass fibers were positioned in the center of the specimens (Dos Santos et al. 2000).
A study conducted by Alander et al. in 2021 showed that FRC reinforcement positioning is very important in cantilever fixed partial dentures; they suggested that glass fibers should be placed on the tension side, which is near the occlusal surface of the cantilever bridge (Alander et al. 2021).
Amount of glass fibers
Glass fibers should be covered with a sufficient layer of polymer composite to avoid wear; therefore, a very high concentration of glass fibers is not preferred. It was found that more than 7.6 wt% glass fiber loading may result in clusters of fibers with diminutive matrix in between, resulting in poor fiber bonding. The optimal fiber loading for superior wear resistance is 2.0 to 7.6 wt% (Lassila and Vallittu 2004).
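For intuition, because glass is roughly twice as dense as the resin matrix, the weight fractions above correspond to even smaller volume fractions. A minimal sketch, assuming illustrative densities (about 2.54 g/cm³ for E-glass and 1.2 g/cm³ for a dimethacrylate matrix) that are not taken from the cited studies:

```python
# Convert fiber weight fraction to volume fraction from assumed densities.
def fiber_volume_fraction(wt_frac, rho_fiber=2.54, rho_matrix=1.2):
    v_f = wt_frac / rho_fiber
    v_m = (1.0 - wt_frac) / rho_matrix
    return v_f / (v_f + v_m)

for wt in (0.020, 0.076):  # the 2.0-7.6 wt% loading window noted above
    print(f"{wt * 100:.1f} wt% -> {fiber_volume_fraction(wt) * 100:.2f} vol%")
```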
Critical fiber length and fiber aspect ratio
In order to transmit stress from the matrix to the fibers, the fibers should have a length equal to or greater than the so-called critical fiber length (Landel and Nielsen 1993). The critical fiber length can be determined using a fiber fragmentation test. The critical length of E-glass in a Bis-GMA polymer matrix has been estimated at 0.5 to 1.6 mm (Cheng et al. 1993). Weak adhesion between the fibers and the polymer matrix can be compensated for by increasing the length of the glass fibers (Karacaer et al. 2003).
Additionally, a certain length-to-diameter ratio of the fiber, called the "fiber aspect ratio", requires attention to achieve optimum properties. Several studies examined a range of fiber aspect ratios added to dental composites and concluded that aspect ratios of 50-500, i.e., low fiber aspect ratios, are the best range for reinforcing dental composites (Shouha et al. 2014).
In an experiment conducted by Behl et al. in 2020, flowability, mechanical properties and degree of conversion were tested for a variety of experimentally prepared GFRCs containing micro-sized fibers (5 μm diameter) at low fiber aspect ratios of 50, 70 and 100. They concluded that micro-sized fibers can enhance flexural and compressive properties without significantly affecting flowability or degree of conversion, and that the best composition is 5% of fibers with an aspect ratio of 70 (Behl et al. 2020).
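A common back-of-the-envelope estimate for the critical fiber length discussed above is the Kelly-Tyson relation, l_c = (sigma_f x d) / (2 x tau), where sigma_f is the fiber tensile strength, d the fiber diameter, and tau the fiber-matrix interfacial shear strength. The sketch below uses assumed, illustrative values rather than measurements from the cited studies, and also reports the corresponding aspect ratio:

```python
# Kelly-Tyson critical fiber length and the aspect ratio it implies.
def critical_length_mm(sigma_f_mpa, d_um, tau_mpa):
    d_mm = d_um / 1000.0
    return sigma_f_mpa * d_mm / (2.0 * tau_mpa)

sigma_f = 2000.0   # assumed E-glass tensile strength, MPa
diameter = 5.0     # fiber diameter, um (as in Behl et al.'s micro-fibers)
tau = 20.0         # assumed interfacial shear strength, MPa

l_c = critical_length_mm(sigma_f, diameter, tau)
aspect_ratio = (l_c * 1000.0) / diameter  # length/diameter
print(f"critical length = {l_c:.2f} mm, aspect ratio at l_c = {aspect_ratio:.0f}")
```

With these assumed inputs the critical length comes out at 0.25 mm, i.e., an aspect ratio of 50, at the lower end of the 50-500 window cited above; weaker interfaces (smaller tau) push both values up.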
Bond between glass fibers and polymer matrix
Glass fiber reinforcement is achieved only when load is transferred from the matrix to the glass fibers; therefore, the fibers should bond strongly to the matrix to attain good reinforcement (Lastumäki et al. 2003). To achieve this, impregnation of the glass fibers, as well as their adhesion to the polymer matrix, is of primary concern.
Poor impregnation creates voids between the matrix and the fibers, resulting in poor flexural strength, low elastic modulus and high water sorption, which causes hydrolytic degradation of the polysiloxane network and subsequent discoloration (Miettinen and Vallittu 1997; Lassila et al. 2002). Good impregnation can be obtained through pre-impregnation of the fibers with monomers and/or polymers, such as light-polymerizable bifunctional acrylate or methacrylate monomers (Lastumäki et al. 2002).
On the other hand, poor adhesion results in stress concentration at the glass fibers' interface (Kallio et al. 2001; Cheikh et al. 2001). It is worth mentioning that the attraction between glass fibers and the polymer matrix results from many factors, for instance van der Waals forces, chemical bonding, electrostatic attraction and mechanical interlocking (DiBenedetto et al. 1995). Chemical adhesion between glass fibers and the polymer matrix is obtained using the silane coupling agent 3-(trimethoxysilyl)propyl methacrylate (TMSPMA) (Rosentritt et al. 2001).
Silanization and impregnation of fibers with a resin improve hydrolytic stability and prevent water sorption of the composite. Multiple studies concluded that water sorption reduces the flexural strength and load-bearing capacity of denture base polymers through a plasticizing effect (Garoushi et al. 2007a).
Many commercially available glass fibers, such as S-glass fibers, are coated with lubricants, antistatic agents, polymeric binders and dust; this coating should be removed to enable an appropriate bond to resin (Tomao et al. 1998). In addition, glass fibers should be etched using hydrochloric or sulfuric acid to selectively remove Al2O3 and MgO on the surface of the fibers without destroying SiO2; this selective atomic-level etching technique increases the surface roughness of the fibers and thus provides mechanical interlocking. Moreover, etching exposes plentiful hydroxyl groups, thus providing strong chemical bonding with silane coupling agents. This treatment showed 11-40% increases in interfacial shear strength, and it improves the flexural strength and modulus of composites filled with modified S-glass fibers (Cho et al. 2019; Wang et al. 2020).
Mechanical properties
The geometry of the reinforcing fibers, as well as the fiber-resin interfaces in a GFRC system, dramatically affects many mechanical properties, such as strength, stiffness, toughness, and static, impact and fatigue properties (Table 2). Additionally, silanization of glass fibers increases hardness and diametral tensile strength (Debnath et al. 2004). The efficiency of fiber reinforcement varies according to fiber orientation (Tuusa et al. 2007). Krenchel (1964) suggested that the efficiency of fiber reinforcement (Krenchel's factor; value 0 to 1) estimates the strength of FRCs. As shown in Fig. 1, if fibers are oriented in a continuous unidirectional manner, the reinforcing efficiency is 1 (100%), but only in one direction (Murphy 1998), while continuous bidirectional (woven) fibers have a reinforcing efficiency of 0.5 (50%) or 0.25 (25%), equal in two directions. Yet woven fibers are advantageous in many clinical situations where the direction of the load is unknown or where there is no space for unidirectional fibers; woven fibers also provide additional toughness to the polymer, as they prevent crack propagation. On the other hand, randomly chopped short FRCs provide a Krenchel's factor of 0.38 (38%) in two dimensions and 0.2 (20%) in three dimensions, where the mechanical properties are the same (isotropic) three-dimensionally (Chong and Chai 2003).
Fig. 1 Krenchel's factor of fibers according to their orientation in the plane
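A minimal sketch applying the Krenchel factors quoted above in a simple rule-of-mixtures strength estimate, sigma_c = eta x sigma_f x V_f + sigma_m x (1 - V_f); the fiber and matrix strengths and the volume fraction are illustrative assumptions, not values from the cited studies:

```python
# Krenchel orientation factors combined with a rule-of-mixtures estimate.
KRENCHEL = {
    "unidirectional": 1.0,
    "bidirectional_woven": 0.5,   # 0.25 for some weaves
    "random_2d": 0.38,
    "random_3d": 0.2,
}

def composite_strength(eta, sigma_f=2000.0, sigma_m=80.0, v_f=0.3):
    """Approximate composite strength (MPa) in the favorable direction."""
    return eta * sigma_f * v_f + sigma_m * (1.0 - v_f)

for name, eta in KRENCHEL.items():
    print(f"{name:20s} eta={eta:.2f} -> ~{composite_strength(eta):.0f} MPa")
```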
Optical properties
Glass fibers possess a refractive index similar to that of resin; therefore, they allow efficient light transmittance (Khan et al. 2015). Accordingly, the addition of glass fibers to dental composites will improve their mechanical properties without affecting the degree of conversion of the resin matrix, unlike opaque colored Kevlar, carbon or zirconia fibers (Behl et al. 2020).
Viscoelastic properties
Studies revealed that the viscoelastic modulus of polymers reinforced with glass fibers was 15.32 GPa, which is comparable to that of dentin (17 GPa) (Khan et al. 2008).
Adhesive properties
Adhesion is an important property in dental practice, as the success of different restorative systems depends on it. In a study conducted by Le Bell et al. on the adhesion of titanium posts, carbon fiber-reinforced posts and glass fiber-reinforced posts to cements, only the GFRC posts showed no adhesive failure, while titanium and carbon fiber-reinforced composite posts showed 70% and 55% failure rates, respectively (Le Bell et al. 2005).
Thermal properties
The orientation of glass fibers has an impact on the linear coefficient of thermal expansion; the linear coefficient of thermal expansion of unidirectional glass fiber composites averages 5.0 × 10⁻⁶ °C⁻¹ (Tezvergil et al. 2003). Interestingly, studies revealed that continuous unidirectionally reinforced composites have two coefficients of thermal expansion: a lower value in the direction parallel to the fibers and a higher value in the direction perpendicular to the fibers, as the rigidity of the fibers inhibits expansion of the matrix longitudinally while allowing expansion in the transverse direction.
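The direction dependence described above can be rationalized with a stiffness-weighted rule of mixtures for the longitudinal CTE (a Schapery-type estimate): because the stiff fibers dominate the denominator, the composite expands much less along the fibers than across them. All material constants in the sketch are illustrative assumptions:

```python
# Longitudinal CTE of a unidirectional composite, stiffness-weighted.
def longitudinal_cte(e_f, a_f, e_m, a_m, v_f):
    v_m = 1.0 - v_f
    return (e_f * a_f * v_f + e_m * a_m * v_m) / (e_f * v_f + e_m * v_m)

E_f, alpha_f = 72.0, 5.0e-6    # E-glass: GPa, 1/degC (assumed)
E_m, alpha_m = 3.5, 60.0e-6    # resin matrix: GPa, 1/degC (assumed)

alpha_L = longitudinal_cte(E_f, alpha_f, E_m, alpha_m, v_f=0.5)
print(f"longitudinal CTE ~ {alpha_L * 1e6:.1f} x 10^-6 /degC")
```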
Biocompatibility
Many studies revealed that glass fiber-reinforced filling materials show low adhesion of Streptococcus mutans compared to dentin and enamel (Murphy 1998). For instance, in a study conducted on Candida albicans adhesion to GFRC, it was observed that hydrophobic resins impregnated with E-glass fibers reduced microbial adhesion (Assif et al. 1993). Evaluation of the biocompatibility of BisGMA/TEGDMA reinforced with E-glass fibers revealed good proliferation and differentiation of cultured cells (Waltimo et al. 1999); in another study on a fiber-reinforced oral implant, inspected using micro-CT scans, bone trabeculae were observed between the FRC implant's threads, demonstrating its excellent biocompatibility (Ballo et al. 2014).
Prosthodontic applications
The use of glass fibers as a reinforcement agent in denture bases may be the earliest application of GFRC in dentistry (1960s) (Khan et al. 2015), and it proved to have a successful influence on mechanical properties. Stress transfer can be reduced by adding glass fibers to the denture base (Duraisamy et al. 2019). Results revealed that a denture base reinforced with 6-mm chopped glass fibers showed increased transverse strength, elastic modulus and impact strength (Selvan and Ganapathy 2016). Conversely, other studies reported that GFRC removable dentures provide high fatigue resistance but low flexural modulus (Karmaker et al. 1997). Meanwhile, most researchers consider GFRC an excellent option for making denture bases, due to its high fatigue resistance, ability to resist extremely high temperature, moisture and oil, and good polishability (Subasree and Murthykumar 2016).
In a study that compared the bonding strength of the polymer matrix with denture base polymers containing either carbon, aramid, woven polyethylene or glass fibers, glass fibers yielded the best esthetics and ease of bonding to the polymer matrix.
Glass fiber-reinforced autopolymerizing resin can be used to repair a broken denture, with a 45° bevel joint design of the broken surfaces and surface pretreatment; this approach proved to minimize stress concentration and to improve the transverse strength of the repaired denture base (Mamatha et al. 2020).
Glass fiber-reinforced composites are also used in the fabrication of fixed partial dentures instead of conventional cast-metal resin-bonded fixed partial dentures; they present an adhesive, esthetic, metal-free, high-elastic-modulus, high-fracture-strength, low-allergy-risk and low-cost option for tooth replacement (Van Heumen et al. 2009a). Studies reported that GFRC FPDs had a 71% success rate and a 78% survival rate after 5 years in the posterior area (van Heumen et al. 2009b).
Additionally, an FRC CAD/CAM composite was evaluated for the fabrication of fixed dental prostheses; the FRC consisted of parallel glass fibers dispersed in a multi-layer bidirectional manner in a resin matrix. Results confirmed high reliability under the expected physiological masticatory load in the molar region (Bergamo et al. 2021).
Another study investigated the mechanical properties of short GFRC fabricated as temporary crowns and bridges and found high flexural strength (117 MPa) and compressive load-bearing capacity (730 MPa) compared to conventional temporary crowns and bridges (Garoushi et al. 2008).
Endodontic applications
GFRC endodontic posts are another option introduced in endodontics; they are either prefabricated posts or individually polymerized GFRC posts. Individually polymerized GFRC posts show higher flexural strength and better bonding to composite resin luting cement than prefabricated posts (Le Bell et al. 2005; Biały et al. 2020).
Studies revealed that glass fibers oriented parallel to the long axis of the post provide high strength and elastic modulus in this direction (Chieruzzi et al. 2014).
GFRC endodontic posts have the advantage of allowing light to be transmitted deep into the root canal, thus increasing bonding of the cement to the post and dentin (Vieira et al. 2021).
Tooth restoration applications
One of the applications of glass fiber-reinforced composites is dental restorations. Short glass fibers have a positive impact on the polymerization shrinkage stresses of composite resin and, accordingly, on marginal microleakage; therefore, they are an ideal choice for posterior and bulk composite restorations. Experimental studies on short GFRCs displayed high fracture toughness, flexural strength and flexural modulus (Garoushi et al. 2012).
Short GFRC (everX Posterior, GC, Tokyo, Japan) is a dental restorative composite resin product that was introduced to the market as a dentin replacement in large cavities, placed below conventional composite to reinforce it and prevent fracture (Fallis and Kusy 1999). It consists of 8.6 wt% randomly oriented short E-glass fibers, 67.7 wt% barium glass fillers and a resin matrix; this composite restoration showed high load-bearing capacity, flexural strength and fracture toughness (Säilynoja et al. 2013).
Another impressive glass fiber-reinforced resin disk (TRINIA, SHOFU, Kyoto, Japan) that uses the CAD/CAM technique for tooth restoration was introduced. It contains 55 wt% multi-directionally interlaced glass fibers, aligned as woven layers parallel to the top surface of the disk. The E-glass fibers were 1.2-1.5 mm in width and 0.1-0.4 mm in thickness. It demonstrated high flexural strength (248.8-254.2 MPa) and fracture toughness (9.1 ± 0.4 MPa·m^1/2), but these properties are anisotropic; therefore, this material can be used only in the specific directions recommended by the manufacturer (Suzaki et al. 2020).
A novel dental composite filler, composed of nano-hydroxyapatite (nHA) and E-glass fibers, was created using the microwave irradiation technique; these fillers combine the advantages of a bioceramic (nHA) with the high strength of E-glass fibers. The degree of conversion, flexural strength and micro-hardness results were very promising; however, the flexural strength and water sorption behavior of the experimental composites decreased with increasing nHA/E-glass fiber content (Syed et al. 2020).
Orthodontic applications
An esthetic orthodontic retainer was presented as a new clinical use of GFRC. It provides high fracture strength and high adhesive bond strength to enamel and to orthodontic attachments (Meiers et al. 2003).
A study that compared the bond strength of glass fiber and stainless steel bonded lingual orthodontic retainers to maxillary and mandibular teeth over six years revealed that the detachment and breakage rates of glass fiber retainers are lower than those of stainless steel retainers (Rosenberg 1980).
First-generation GFRC retainers presented by Burstone and Kuhlberg (2000) were too rigid to allow tooth movement; more recently, glass fibers (EverStick Ortho) pre-impregnated with PMMA polymer were introduced, offering micromechanical and chemical adhesion (Lastumäki et al. 2002). However, GFRC space maintainers placed on primary teeth are prone to failure, either due to the presence of prismless enamel or due to moisture contamination (Zachrisson 1977).
Periodontal applications
Periodontal splints made of fiber-reinforced resin have provided clinicians with the sufficient mechanical strength of metal splints combined with the satisfactory aesthetics of resins; in addition, they are simple in design, durable (Meiers et al. 1998), do not interfere with the occlusion and facilitate keeping good oral hygiene (Kumbuloglu et al. 2011).
Limitations of glass fiber-reinforced materials
Though glass fibers have been investigated as a reinforcing agent in dental polymers for almost forty years, some of these materials still have limitations, for example:
1. It is not always feasible to include sufficient glass fibers.
2. Some GFRCs can only be used in the particular direction recommended by the manufacturer, due to their anisotropy.
3. The overlying veneering composite is prone to wear.
4. Deficient rigidity for use in long-span bridges.
5. Handling requires adequate moisture control for the adhesive technique.
6. Posterior occlusal situations should have sufficient space to allow enough room for the glass fibers and the overlying veneering composite (Butterworth et al. 2003).
7. Relatively higher density of glass fibers compared to other fibers such as carbon and organic fibers.
8. Self-abrasiveness if not treated, with the tensile modulus prone to decrease.
9. Relatively low fatigue resistance (Zhang and Matinlinna 2012).
10. S-glass is very costly, though its service life is short.
Conclusions
The interest in using glass fiber-reinforced dental materials is growing; these materials offer strength and toughness equivalent to dental tissues, with very satisfactory aesthetics.
In this review, the types of glass fibers and the factors affecting the properties of fiber-reinforced materials were presented, and the properties and applications of fiber-reinforced composites were discussed. This extensive research proved the effectiveness of glass fiber reinforcement in many dental restorations, as long as the glass fibers' composition, orientation, distribution, amount, length and adhesion are well managed in accordance with each clinical situation. In conclusion, the reported success of glass fibers as a reinforcing material surpasses their limitations.
"year": 2021,
"sha1": "c40b87e7397f653ee6908ef82c947633517061d5",
"oa_license": "CCBY",
"oa_url": "https://bnrc.springeropen.com/track/pdf/10.1186/s42269-021-00650-7",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d13fb7fd360a7c28932d9da3c9a67810720e819b",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": []
} |
A guanine-flipping and sequestration mechanism for G-quadruplex unwinding by RecQ helicases
Homeostatic regulation of G-quadruplexes (G4s), four-stranded structures that can form in guanine-rich nucleic acids, requires G4 unwinding helicases. The mechanisms that mediate G4 unwinding remain unknown. We report the structure of a bacterial RecQ DNA helicase bound to resolved G4 DNA. Unexpectedly, a guanine base from the unwound G4 is sequestered within a guanine-specific binding pocket. Disruption of the pocket in RecQ blocks G4 unwinding, but not G4 binding or duplex DNA unwinding, indicating its essential role in structure-specific G4 resolution. A novel guanine-flipping and sequestration model that may be applicable to other G4-resolving helicases emerges from these studies.
G-quadruplexes (G4s) are highly stable nucleic acid secondary structures that can form in guanine-rich DNA or RNA 1. G-quartets, the repeating structures within G4s, are formed by an extensive hydrogen-bonding network that links four guanine bases around a cationic core. G4 structures, in turn, comprise G-quartets stacked upon one another, stabilized by base stacking between the layers. Their stability can make G4s impediments to numerous cellular processes, including replication 2, transcription 3, and translation 4. Despite their potential hazards, G4-forming sequences are well represented in genomes, particularly within promoter regions 5 and telomeric DNA ends 6,7, indicating cells have developed mechanisms of abating the negative consequences of G4 DNA and have even co-opted the structures as regulatory and protective genomic elements.
G4 unwinding is essential for both G4 tolerance and G4 regulatory functions. Accordingly, cells have evolved a range of helicases that can unwind G4 structures, including DHX36 8, the Pif1 2 and XPD 9,10 families of helicases, and members of the RecQ helicase family, including bacterial RecQ 11, yeast Sgs1 12, and human WRN 13 and BLM 14. The importance of these helicases is highlighted by the profound genomic instability that results from their dysfunction, observed in xeroderma pigmentosum (XPD) 15, Fanconi anemia (FANCJ, an XPD paralog) 16, Werner (WRN) 17 and Bloom (BLM) 18 syndromes. In spite of the diverse clinical presentations caused by their absence, these enzymes operate on a range of G4 substrates using an apparently shared mechanism that relies on repetitive cycles of unwinding and refolding 19,20. However, the small number of structural studies that have provided insights into the G4 unwinding process has limited our current understanding of the physical mechanisms underlying G4 resolution.
In this study, we report the X-ray crystal structure of the RecQ helicase from Cronobacter sakazakii (CsRecQ) bound to a resolved G4 DNA. Surprisingly, the 3′-most guanine base, which is the first base in the quadruplex that the 3′-5′ translocating RecQ would encounter, is bound in a guanine-specific pocket (GSP) in the helicase core. Residues within the GSP satisfy all of the hydrogen bonds that are normally formed by guanines within G-quartet structures, which highlights the remarkable guanine selectivity of the binding site. Guanine docking within the GSP is incompatible with a folded G4 structure, implying that the base must flip from the quartet to be sequestered within the GSP. Consistent with an important and selective role for the GSP in G4 unwinding, changes to the guanine-coordinating residues in RecQ block G4 DNA unwinding but do not alter duplex DNA unwinding. These data lead to a guanine-flipping and sequestration model of G4 unwinding by RecQ helicases that may also be shared with other G4 unwinding helicases.
Results
Structure of RecQ bound to a resolved G4. To better understand how G4 structures are resolved by helicases, the catalytic core domain of CsRecQ (Fig. 1a) was crystallized in complex with G4 DNA. In the previously determined CsRecQ/duplex DNA structure, the duplex portion was bound in a cleft between the helicase and winged-helix domains, whereas the 3′ ssDNA end was bound in an electropositive channel in the helicase domain (Supplementary Fig. 1a) 21. Because G4 and duplex DNA bind to the same surface of RecQ 22, we hypothesized that RecQ would bind G4 DNA in the same orientation. Surprisingly, the 2.2 Å-resolution structure revealed a product complex of CsRecQ bound to unwound G4 DNA rather than a folded quadruplex (Fig. 1b and Table 1). The RecQ/G4 product structure was very similar to the RecQ/duplex DNA structure, with a root mean square deviation of 0.68 Å among 511 Cα atoms (Supplementary Fig. 1). As was seen in the RecQ/duplex structure, the 3′ ssDNA is bound in an electropositive groove across the face of the helicase domain, and it extends to dock in the ATP binding site of a symmetrically related molecule. Moreover, the helicase and winged-helix domains were closed around the unfolded G4. However, electron density was only observed for the three 3′-most guanines of the G4-forming DNA, with the rest of the DNA apparently disordered within the crystal lattice. The positions of the resolved guanine bases deviated significantly from their expected placement within a folded G4, indicating that the quadruplex was unwound in the structure. The structure, therefore, suggested that binding by RecQ was sufficient to unwind G4 DNA, despite the presence of cations that otherwise stabilize the G4 (Supplementary Fig. 2).
RecQ contains a guanine-specific pocket. Examination of the structure revealed an unexpectedly specific arrangement for binding to the unwound G4 product (Fig. 1b-d). The 3′-most guanine base of the G4-forming sequence (G21), which is the first base within the folded G4 that would be encountered by the 3′-5′ translocating RecQ enzyme, was found sequestered in a guanine-specific pocket (GSP) on the surface of RecQ. The GSP forms hydrogen bonds with the guanine base using the sidechain hydroxyl and backbone amide of Ser245 and the sidechain carboxyl group of Asp312 of RecQ (Fig. 1c). These contacts are uniquely selective for guanine and, strikingly, they substitute for all of the hydrogen bonds that stabilize guanines within G4 structures. The base is further stabilized by base stacking against a cytosine base two nucleotides 3′ of the flipped base (C23). The GSP is capped on the 5′ end by the hydrophobic portion of the Lys248 sidechain and by Trp347 on the 3′ side (Fig. 1d). Lys222 and Lys248 make additional contacts with the phosphodiester backbone of the unfolded DNA, anchoring it against the helicase domain (Supplementary Fig. 3). Given this arrangement, guanine binding to RecQ is incompatible with its position within a folded G4. Instead, it appears that the guanine must flip from within a G-quartet to be sequestered in the RecQ GSP. In both DNA-free and duplex DNA-bound bacterial RecQ structures, access to the GSP is occluded by Lys248, which folds to interact with Asp312 of the GSP 21,23. However, the GSP is open to accept the guanine base in the RecQ/G4 product complex (Supplementary Fig. 4). These observations suggested a possible model in which guanine flipping and GSP-mediated base-specific sequestration support RecQ unwinding of G4 DNA.
Binding of RecQ variants to duplex and G4 DNA substrates. A guanine-flipping and sequestration model predicts that sequence changes in the GSP would impair G4, but not duplex, DNA unwinding. To test this prediction and allow for comparison with prior studies, Escherichia coli (Ec) RecQ (92.5% similar to CsRecQ, relevant residue numbering identical to CsRecQ) and CsRecQ catalytic core domain variants with compromised GSPs (Ser245Ala and Asp312Ala) were purified. The biochemical activity of these variants was tested alongside the wild-type EcRecQ and CsRecQ catalytic core domains. The CsRecQ Asp312Ala protein was unstable and difficult to purify; therefore, this protein was excluded from analysis.
Affinity for FAM-labeled duplex DNA with a 3ʹ ss extension was measured first for the RecQ panel (Supplementary Fig. 5a and Table 2). Each variant bound the DNA, although the CsRecQ proteins had lower affinities than their EcRecQ counterparts. The EcRecQ Asp312Ala variant had a ~3-4-fold higher affinity for the partial duplex DNA, which may be due to the removal of a negative charge in the duplex DNA-binding groove. The DNA affinities reported here are consistent with those reported previously for the RecQ catalytic core 24. Next, the affinity of each variant for G4 DNA with a 3ʹ ss extension was measured. EcRecQ, EcRecQ Asp312Ala, and CsRecQ all bound G4 DNA with affinities very similar to those measured with the partial duplex (Supplementary Fig. 5b and Table 2). Unfortunately, we were unable to measure the equilibrium G4 affinity for either Ser245Ala variant; both were able to bind DNA, but we observed a time-dependent decrease in anisotropy that made measurement of the binding constant impossible. This is likely due to modest instability/insolubility of the variants under the conditions tested. Nevertheless, each of the variants could bind G4 and duplex DNA, indicating that residues within the GSP are not essential for G4 binding.
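For context, apparent Kd values from anisotropy titrations of this kind are typically obtained by fitting a simple 1:1 (hyperbolic) binding isotherm, valid when protein is in large excess over the 5 nM labeled DNA. A minimal sketch with hypothetical data points:

```python
# Fit fluorescence anisotropy vs. protein concentration to a 1:1 isotherm.
import numpy as np
from scipy.optimize import curve_fit

def isotherm(p, a_free, a_bound, kd):
    """Anisotropy as a function of protein concentration p (nM)."""
    return a_free + (a_bound - a_free) * p / (kd + p)

protein_nm = np.array([0.6, 2.4, 9.8, 39, 156, 625, 2500, 10000, 20000])
anisotropy = np.array([62, 65, 74, 95, 130, 158, 170, 174, 175])  # hypothetical

popt, _ = curve_fit(isotherm, protein_nm, anisotropy, p0=(60, 175, 100))
print(f"apparent Kd = {popt[2]:.0f} nM")
```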
Disruption of the GSP inhibits G4 but not duplex unwinding. Single-molecule (sm) FRET assays were carried out to determine the impact of GSP sequence changes on RecQ DNA unwinding. These assays were designed to test unwinding of substrates with a 3ʹ ss loading site that contain either a duplex structure preceded by a G4 element or a duplex structure alone (Fig 2a, e, respectively). The substrates consist of an immobilized Cy5-labeled 18mer annealed to a Cy3-labeled strand comprising the complementary 18mer along with either dT 15 or both a G4 element and dT 15 . Unwinding of the substrate releases the Cy3containing DNA strand and can be measured as a reduction of the number of Cy3 spots over time (Fig. 2b).
In contrast to the results with the wild-type RecQ proteins, none of the GSP-variant RecQ proteins was able to unwind the G4 DNA structures. Single-molecule traces (Table 2, Fig. 2c, bottom, and Supplementary Fig. 6) showed that each of the GSP variants failed to elicit the repetitive unwinding/refolding FRET signature observed with the wild-type RecQ proteins, and G4 unwinding was not observed even after long (12 min) incubation periods. These data are consistent with an essential role of the RecQ GSP in G4 unwinding.
To test whether the GSP RecQ variants retained duplex helicase activity, the assay was repeated using a substrate that lacked the G4-forming sequence (Fig. 2e). The single-molecule traces (Fig. 2f and Supplementary Fig. 7) and FRET histograms before and after the addition of the proteins (Fig. 2g) demonstrate robust unwinding of the duplex DNA substrate by all of the variants. Each protein unwound the DNA at rates very similar to those observed with EcRecQ and CsRecQ (Table 2). Thus, the GSP in RecQ is required uniquely for unwinding G4 DNA.
In an attempt to visualize folded G4 DNA bound to RecQ, crystals of the Ser245Ala CsRecQ catalytic core variant were generated with G4 DNA. Diffraction data were collected from over a dozen crystals, and molecular replacement revealed several crystals in which the guanine base was not found in the altered GSP. In these cases, discontinuous electron density consistent with the dimensions of a folded G4 structure was observed in the cleft formed by the helicase and winged-helix domains (Supplementary Fig. 8). Unfortunately, the fragmented nature of the electron density did not permit modeling of the full G4 structure. Nonetheless, the structural study was consistent with the significantly reduced activity of the variant predicted from the FRET experiments.
Discussion
Despite the importance of G4 homeostasis in cells, our mechanistic understanding of quadruplex resolution has been hampered by a lack of structural information for G4-processing helicases. In this report, we have described the X-ray crystal structure of a RecQ helicase bound to a resolved G4. The structure identified a guanine-specific pocket, or GSP, in RecQ that sequesters a guanine base from the resolved G4. Guanine is selectively bound within the GSP via residues that form a pattern of hydrogen bond donors and acceptors that mimic the bonding pattern for a guanine within a G-quartet structure. As such, guanine-binding to the GSP is incompatible with a folded G4 structure and instead requires the base to be flipped away from the G4. These observations suggested a possible role for the GSP in G4 unwinding. In agreement with such a role, RecQ variants with altered guanine-binding residues failed to unwind G4 DNA, but they maintained their ability to unwind duplex DNA. Our data collectively support an unexpectedly specific helicase mechanism for RecQ unwinding of G4 structures that relies on guanine base flipping and sequestration for G4 resolution.
In the G4 unwinding model, RecQ first recognizes a ssDNA/G4 junction, placing the G4 in a position adjacent to the GSP and leaving the pocket poised to receive the 3′-most guanine from a G-quartet as it flips from the folded structure (Fig. 3). For the structural studies described here, guanine sequestration appears sufficient to unfold a G4 with three guanine quartet planes. ATP-dependent RecQ translocation would then slide the 3′-most guanine base out of the GSP, moving it along the face of the helicase domain and allowing the next guanine to be sequestered within the GSP as the G4 structure is resolved. What then gives rise to the repetitive cycles of G4 unwinding and refolding that have been observed in single-molecule experiments 11,14 ? Two possibilities may explain this phenomenon. First, since RecQ must release the first guanine to advance along the DNA, it may be that the base can either slide along the ssDNA binding face of RecQ to promote unwinding or it can flip back and allow the G4 structure to refold (Fig. 3). It is possible that G4 reformation is more efficient than processive translocation, which would lead to repetitive rounds of unwinding and refolding. Second, although the GSP matches the hydrogen-bonding pattern for a guanine in a folded G4, it may form a complex that is less stable than that found in the context of a G4, which includes base stacking and ionic stabilization in addition to hydrogen bonding. If RecQ transiently captures a frayed guanine from the 3′ end of the G4 and if translocation is slower than the rate at which the guanine can transition back into the folded G4, this difference could allow the captured guanine to be released and the G4 to reform, resulting in a cycle of G4 unwinding and refolding.
Base-flipping activities have been observed in several enzymes that act on nucleic acids, including polymerases 25 , endonucleases 26 , glycosylases 27 , and methyltransferases 28 . In these enzymes, base flipping is accompanied by a distortion of B-form DNA near the flipped base, facilitating extraction of the base by the enzyme while extensive protein-DNA contacts hold the enzyme in position. Similarly to RecQ, base-flipping enzymes coordinate the isolated nucleobase through a hydrogen-bonding pattern that selects for the targeted base. This specificity allows repair enzymes, for example, to survey the integrity of the flipped base prior to initiation of a repair process. RecQ binding may similarly distort G4 DNA to allow guanine base flipping. It is also possible that RecQ simply traps transiently frayed guanine bases at the ssDNA/G4 interface. Additional studies are needed to examine these possibilities.
Because the RecQ GSP is specific for a canonical base, it is possible that the GSP may inadvertently sample guanines outside of G4 structures, hindering RecQ unwinding of guanine-rich duplex DNA. Indeed, RecQ pauses have been observed while unwinding GC-rich duplex DNA 29, which could possibly result from guanine occupancy in the GSP. However, examination of the structure of the GSP reveals a mechanism that appears to counteract such non-productive base flipping. In the absence of G4 DNA, Lys248 and Asp312 interact with one another to occlude access to the GSP (Supplementary Fig. 4). This closure is maintained when RecQ is bound to duplex DNA 21. However, interaction with resolved G4 DNA appears to favor GSP opening through an interaction formed between Lys248 and the phosphodiester backbone of the G4 product. This interaction could make the GSP accessible to guanine bases under conditions where resolved or, presumably, folded G4 DNA is bound to RecQ. This interaction may attenuate guanine binding by the GSP during duplex DNA unwinding while promoting it during G4 unwinding.
Fig. 3 Model of RecQ-mediated G4 unwinding. RecQ (domains colored as in Fig. 1) binds the folded quadruplex, trapping it between the helicase and winged-helix domains. This positions the GSP near the G4, allowing a guanine to be flipped out of the G-quartet and sequestered in the GSP. The guanine can either release back into the G-quartet, allowing the G4 to refold and leading to the observed repetitive FRET cycling, or RecQ can translocate to the next guanine.
It remains to be seen how prevalent a guanine base-flipping mechanism is among G4 helicases. Among the bacterial RecQ helicases, the GSP sequence is conserved but not invariant. Some variability may be tolerated in the GSP while still allowing for G4 helicase activity. It may also be the case that the GSP is structurally conserved even where the sequence is not. For example, examination of the structure of BLM helicase, a human RecQ homolog with G4 helicase activity, reveals a potential GSP situated at the duplex/ssDNA junction comprising Ser965 and either Glu900 or Asp997 (Supplementary Fig. 9a, b) 30. We are unable to assess whether the other RecQ G4 helicases WRN and Sgs1 possess a GSP, due to the lack of structures of their catalytic cores. However, even outside of the RecQ family, GSP-like pockets can be found. One instance is the bacterial helicase UvrD, which also contains a GSP-like structure poised to potentially receive a guanine flipped from a G4 substrate (Supplementary Fig. 9c) 31.
While the base flipping described here provides a simple method of G4 resolution, other mechanisms may also exist. A very recent structure of G4 DNA in complex with the helicase DHX36 has been reported, suggesting a mechanism of G4 resolution in which the G4 is bound by the extended N-terminal DHX-specific motif (DSM) 32. This binding triggers repetitive conformational shifts in the G4 that are thought to reorganize and destabilize the quadruplex before ultimately releasing the resolved DNA in an ATP-dependent manner. The broader applicability of this mechanism may be limited to proteins with a DSM or an analogous domain. Furthermore, the DSM best recognizes and unfolds parallel G4s, whereas this is not a requirement of the GSP mechanism. Indeed, different RecQ helicases are known to unwind both parallel and antiparallel G4s 20,33.
In summary, our studies have identified a remarkably specific mechanism for G4 DNA unwinding by RecQ DNA helicases. This model relies on base flipping in a manner that was first envisioned as a possible helicase mechanism shortly after the discovery of enzyme-mediated DNA base flipping 34, although experimental evidence for such a mechanism had been lacking prior to the structural work described here. Discovery of this novel mechanism also underscores the apparent importance of G4 regulation by helicases in vivo. In what ways do the G4-specific functions of RecQ helicases impact cells? Several RecQ pathways have been linked to recognition and/or processing of G4 structures, including those involved in recombination regulation 35 and telomere maintenance 36 in eukaryotes, and antigenic variation in bacteria 37. Investigations of the cellular activities of RecQ variants with selectively blocked G4 resolution functions could pave the way to a better understanding of the general roles of G4 structures in vivo.
Methods
Protein purification. The catalytic cores of CsRecQ and EcRecQ and all variants were overexpressed in Rosetta 2 (DE3) E. coli cells transformed with pLysS (Novagen, Darmstadt, Germany) and a RecQ overexpression plasmid. Cells were grown at 37°C in Luria broth supplemented with 50 μg/mL kanamycin and 1 μg/mL chloramphenicol. Once the cells reached an OD600 of 0.6, protein expression was induced with 1 mM IPTG for 4 h at 37°C before the cells were pelleted and stored at −80°C. Cell pellets were resuspended in lysis buffer (20 mM Tris·HCl (pH 8.0), 500 mM NaCl, 1 mM 2-mercaptoethanol (BME), 1 mM phenylmethanesulfonyl fluoride, 100 mM dextrose, 10% (vol/vol) glycerol, 15 mM imidazole), lysed by sonication and clarified by centrifugation. The supernatant was incubated with Ni-NTA agarose resin at 4°C before being washed extensively with lysis buffer. The N-terminally His-tagged proteins were eluted from the resin with elution buffer (lysis buffer containing 250 mM imidazole) before the His tag and HRDC domains were removed by overnight thrombin cleavage while the protein was dialyzed into dialysis buffer (20 mM Tris·HCl (pH 8.0), 300 mM NaCl, 1 mM BME, 10% (vol/vol) glycerol). The cleaved protein was diluted to 100 mM NaCl, loaded onto a HiPrep QFF ion exchange column (GE Healthcare, Chicago, IL) and eluted with a 0.1-1 M NaCl gradient. RecQ-containing fractions were pooled, concentrated, and then further purified with an S-100 size exclusion column (GE Healthcare) before dialysis into storage buffer (20 mM Tris·HCl (pH 8.0), 1 M NaCl, 4 mM BME, 40% (vol/vol) glycerol, 1 mM ethylenediaminetetraacetic acid) and storage at −20°C.
X-ray diffraction data were collected at the Advanced Photon Source (LS-CAT beamline 21ID-F) and were indexed and scaled using HKL2000 38. The structure of the CsRecQ/G4 DNA complex was determined by molecular replacement using the CsRecQ/duplex DNA structure (PDB ID code 4TMU) 21 as a search model in the program Phaser 39, followed by rounds of manual fitting using Coot 40 and refinement using PHENIX 41. The quality of the electron density map of the refined structure is shown in Supplementary Fig. 10. Coordinate and structure factor files have been deposited in the Protein Data Bank (PDB ID code 6CRM [https://doi.org/10.2210/pdb6CRM/pdb]). The Ser245Ala CsRecQ variant was phased by molecular replacement using the CsRecQ/G4 product complex as a search model in the program Phaser 39, followed by rounds of manual fitting using Coot 40 and refinement using PHENIX 41.
DNA-binding assay. G4 DNA containing a 3′ FAM modification (F-G4) was solubilized to 50 µM in G4 folding buffer [10 mM Tris·HCl (pH 8.0), 100 mM KCl]. Using a heat block, the DNA was heated to 95°C for 5 min, after which the block was removed from heat and allowed to cool to room temperature over approximately 4 h. Folded DNA was then stored at 4°C. RecQ proteins were serially diluted from 20,000 to 0.6 nM in G4 binding buffer [20 mM Tris·HCl (pH 8.0), 100 mM NaCl, 1 mM MgCl2, 1 mM β-mercaptoethanol, 0.1 mg/mL bovine serum albumin, 4% (vol/vol) glycerol], then incubated with 5 nM F-G4 for 30 min at room temperature in a total volume of 100 µL. The fluorescence anisotropy of each sample was measured at 25°C with a Beacon 2000 fluorescence polarization system. Measurements were made in duplicate and error bars represent 1 SEM. Binding affinities and uncertainties were determined using Prism version 5.0c (GraphPad Software, La Jolla, CA, USA). Duplex binding assays were performed in the same way as the G4 binding assays, using a 3ʹ FAM-labeled ssDNA (duplex 1) annealed to an unlabeled 18mer (duplex 2) to create a substrate with an 18-bp duplex and a 3ʹ overhang of 12 nucleotides21. Duplex binding assays were performed in triplicate and error bars represent 1 SEM.
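As a concrete illustration of how binding parameters can be extracted from such anisotropy titrations, the sketch below fits a single-site binding isotherm with ligand depletion to mock data. The protein concentrations, anisotropy values, and the specific model are illustrative assumptions, not the authors' actual analysis (which used Prism).

```python
import numpy as np
from scipy.optimize import curve_fit

def anisotropy(P, Kd, r_free, r_bound, D=5.0):
    """Single-site binding with ligand depletion (D = 5 nM labeled G4)."""
    b = P + D + Kd
    frac_bound = (b - np.sqrt(b**2 - 4.0 * P * D)) / (2.0 * D)
    return r_free + (r_bound - r_free) * frac_bound

# Hypothetical titration data (nM protein vs. observed anisotropy)
protein_nM = np.array([0.6, 2.4, 9.8, 39, 156, 625, 2500, 10000, 20000])
r_obs = np.array([0.05, 0.06, 0.09, 0.14, 0.21, 0.26, 0.28, 0.29, 0.29])

popt, pcov = curve_fit(anisotropy, protein_nM, r_obs,
                       p0=[100.0, 0.05, 0.30],
                       bounds=([0, 0, 0], [np.inf, 1.0, 1.0]))
Kd, r_free, r_bound = popt
print(f"fitted Kd ~ {Kd:.0f} nM")
```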
smFRET DNA substrates. ssDNAs with an amino modifier at the labeling sites were purchased from Integrated DNA Technologies (Coralville, IA, USA). The DNAs were labeled using Cy3/Cy5 monofunctional NHS esters (GE Healthcare, Princeton, NJ, USA). Amino-modified oligonucleotides (10 nmol in 50 μL ddH2O) and 100 nmol of Cy3/Cy5 NHS ester dissolved in dimethyl sulfoxide were combined and incubated with rotation overnight at room temperature in the dark. The labeled oligonucleotides were purified by ethanol precipitation.
Both the G4 and non-G4 substrates consist of 18 base pairs of dsDNA and a 3′ ssDNA tail of specific sequence (Supplementary Table 1). For the non-G4 DNA substrate, the 18mer DNA is immediately followed by a dT18 tail. For the G4 DNA substrates, a G4 sequence lies between the 18mer dsDNA and the dT tail. The Cy5 and Cy3 dyes of the FRET pair are placed at the junction and the 3′ end of the ssDNA, respectively. DNA substrates were annealed by mixing the biotinylated and non-biotinylated oligonucleotides in a 1:2 molar ratio in T50 buffer [10 mM Tris·HCl (pH 8.0), 50 mM NaCl] to a final concentration of 10 μM. The mixture was then incubated at 95°C for 2 min, followed by slow cooling to room temperature over just under 2 h to complete the annealing reaction. The annealed DNAs were stored at −20°C and diluted to a 10 nM single-molecule stock concentration in K100 buffer [10 mM Tris·HCl (pH 8.0), 100 mM KCl] at the time of the experiment.
smFRET unwinding assays. A custom-built total internal reflection fluorescence microscope was used for the single-molecule unwinding assays. A solid-state 532 nm laser (75 mW, Coherent CUBE) is used to excite the donor dye in the Cy3-Cy5 FRET pair used in FRET experiments. Emitted fluorescence signals collected by the microscope are separated by a dichroic mirror with a cutoff of 630 nm to split the Cy3 and Cy5 signals, which are then detected on an EMCCD camera (iXon DU-897ECS0-#BV; Andor Technology). Custom C++ programs control the camera, and IDL software is used to extract single-molecule traces from the recorded data. The traces are displayed and analyzed using Matlab and Origin software. All custom code is in the smFRET package available at the Center for the Physics of Living Cells (https://cplc.illinois.edu/software/, Biophysics Department, University of Illinois at Urbana-Champaign).
Biotinylated FRET DNA (50 to 100 pM) was immobilized on a polyethylene glycol-coated quartz surface via a biotin-neutravidin linkage. RecQ or mutant proteins (100 nM) were added at room temperature to initiate unwinding. Then 10-20 short movies (10 s) and 3-4 long movies (3 min) were taken, monitoring the Cy3 and Cy5 emission intensities over time. These were analyzed to produce the FRET histograms and trajectories used to monitor unwinding activity.
To calculate the unwinding rate, note that as the DNA is unwound, the Cy3 strand is freed from the immobilized DNA substrate and the Cy3 signal disappears. Snapshots of the Cy3 spots detected in an imaging area are taken via short movies (2 s) and the spots are counted over time. The counts are then plotted and fitted to an exponential curve to obtain the rate of disappearance of the Cy3 spots over time as the indicator of unwinding. For each rate calculation, 400-500 single molecules were monitored and the standard error of the measurement was reported. During imaging, a fraction of the G4 molecules were unwound by a protein-dependent but GSP-independent mechanism. The number of G4s lost through this process (~20% over 12 min) was insufficient to allow rate calculations, and the GSP-independent unwinding was assumed to be negligible relative to the GSP-dependent mechanism.
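The rate extraction described above can be illustrated with a short script: snapshot spot counts are fit to a single exponential whose decay constant is taken as the unwinding rate. The counts below are fabricated placeholders; only the fitting procedure mirrors the description.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, n0, k, baseline):
    # Cy3 spots vanish as unwinding releases the labeled strand.
    return n0 * np.exp(-k * t) + baseline

t_s   = np.array([0, 60, 120, 180, 240, 360, 480, 720], dtype=float)
spots = np.array([480, 395, 330, 278, 236, 175, 138, 96], dtype=float)

(n0, k, base), pcov = curve_fit(decay, t_s, spots, p0=[500.0, 0.003, 50.0])
k_err = np.sqrt(np.diag(pcov))[1]
print(f"apparent unwinding rate: {k:.4f} +/- {k_err:.4f} s^-1")
```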
Circular dichroism. G4 DNA used for the crystallographic studies was refolded by diluting it to 10 μM in either 10 mM Tris·HCl (pH 8.0) or 35 mM sodium acetate-acetic acid, 500 mM ammonium acetate, 4% (vol/vol) PEG 4K and 15% (vol/vol) glycerol, heating to 95°C for 10 min and slowly cooling to room temperature. These conditions represent unfolded ssDNA or crystallization conditions, respectively. CD spectra were recorded on an AVIV 420 circular dichroism spectrometer at 25°C over a range of 200-340 nm in a 1-mm path length quartz cuvette. Data were collected using a 1 nm step size with 5 s averaging, and a blank reading containing no DNA was subtracted from each reading. | 2018-10-27T14:06:02.138Z | 2018-10-10T00:00:00.000 | {
"year": 2018,
"sha1": "4cdc8137e36106e594cd75b441a32d9bf170b70f",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-018-06751-8.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4cdc8137e36106e594cd75b441a32d9bf170b70f",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
30338901 | pes2o/s2orc | v3-fos-license | Clustering of Mueller matrix images for skeletonized structure detection
This paper extends and refines previous work on clustering of polarization-encoded images. The polarization-encoded images used in this work are treated as multidimensional parametric images to which a clustering scheme based on Markovian Bayesian inference is applied. The Hidden Markov Chain Model (HMCM) and the Hidden Hierarchical Markovian Model (HHMM) are shown to handle Mueller images effectively and give very good results for biological tissues (vegetal leaves). Pretreatments attempting to reduce the image dimensionality based on Principal Component Analysis (PCA) turn out to be useless for Mueller matrix images. © 2004 Optical Society of America
OCIS codes: (120.5410) Polarimetry; (110.2960) Image analysis; (100.5010) Pattern recognition and feature extraction.
References and links
1. J. Zallat, C. Collet, and Y. Takakura, "Clustering of polarization-encoded images," Appl. Opt. 43, 283-292 (2004).
2. J. M. Bueno and P. Artal, "Double-pass imaging polarimetry in the human eye," Opt. Lett. 24, 64-66 (1999).
3. P. Y. Gerligand, M. H. Smith, and R. A. Chipman, "Polarimetric images of a cone," Opt. Express 4, 420-430 (1999). http://www.opticsexpress.org/abstract.cfm?URI=OPEX-4-10-420
4. G. D. Lewis, D. L. Jordan, and P. J. Roberts, "Backscattering target detection in a turbid medium by polarization discrimination," Appl. Opt. 38, 3937-3944 (1999).
5. J.-N. Provost, C. Collet, P. Rostaing, P. Pérez, and P. Bouthemy, "Hierarchical Markovian segmentation of multispectral images for the reconstruction of water depth maps," Computer Vision and Image Understanding 93, 155-174 (2004).
6. C. Collet, M. Louys, C. Bot, and A. Oberto, "Markov model for multispectral image analysis: application to Small Magellanic Cloud segmentation," in International Conference on Image Processing ICIP'03, Barcelona, Spain (2003).
7. P. A. Devijver, "Baum's forward-backward algorithm revisited," Pattern Recognition Lett. 39, 369-373 (1985).
8. N. Giordana and W. Pieczynski, "Estimation of generalized multisensor hidden Markov chains and unsupervised image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence 19, 465-475 (1997).
9. J. M. Laferté, P. Pérez, and F. Heitz, "Discrete Markov image modeling and inference on the quad-tree," IEEE Transactions on Image Processing 9, 390-404 (2000).
10. C. H. Chen, L. F. Pau, and P. S. P. Wang, Handbook of Pattern Recognition and Computer Vision (World Scientific, 2000).
11. A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," J. Royal Statistical Society B 39, 1-38 (1977).
12. R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd ed. (Wiley, 2000).
13. Movie of Mueller images after PCA transform (2004). ftp://picabia.u-strasbg.fr/pub/www/collet/Publis/Animations/acp.gif
14. S. D. Baker, Unsupervised Pattern Recognition, PhD thesis, Antwerpen University (2002). http://www.ruca.ua.ac.be/visielab/theses/debacker/SteveThesis.pdf
15. A. Hyvärinen, J. Karhunen, and E. Oja, Independent Component Analysis (John Wiley & Sons, 2001).
16. Movie of HMCM segmentation (2004). ftp://picabia.u-strasbg.fr/pub/www/collet/Publis/Animations/Iteration_map.gif
17. Movie of HMCM segmentation maps with different numbers of classes (2004).
ftp://picabia.u-strasbg.fr/pub/www/collet/Publis/Animations/3-6classes_resample.gif
18. Movie of HHMM segmentation (2004). ftp://picabia.u-strasbg.fr/pub/www/collet/Publis/Animations/HHMM_animation.gif
19. S. R. Cloude and E. Pottier, "Concept of polarization entropy in optical scattering," Opt. Eng. 34(6), 1599-1610 (1995).
20. S. Y. Lu and R. A. Chipman, "Interpretation of Mueller matrices based on polar decomposition," J. Opt. Soc. Am. A 13, 1106-1113 (1996).
Introduction and objectives
We have previously reported on Mueller image clustering algorithms [1] after noticing that no available literature took into account the image structure of the measurements. Indeed, all available references use only a physical, pixel-based processing approach. In that study, the aim was to consistently consider the two-dimensional structure of polarization-encoded images. The clustering procedure based on a Markovian algorithm was able to segment the Mueller matrix image properly under poor illumination conditions. The task of segmenting skeletonized structures remained pending, due to possible block effects induced by HHMM algorithms based on a quad-tree topology. In this paper, we present our approach to deal with this peculiar case.
In the following, the Mueller matrix image is arranged in a three-dimensional structure (M × N × p), where M × N defines the image size in pixels and p = 1,…,16 indexes the Mueller matrix elements. We will use the terms 'Mueller matrix image' and 'Mueller image cube' interchangeably. For computing-time reasons, we use the Hidden Markov Chain Model (HMCM) or the Hidden Hierarchical Markovian Model (HHMM) instead of Markov field models, for their fast convergence. Both methods (based on a chain or a quad-tree model) are unsupervised (model parameter estimation is stand-alone) and robust: the segmentation maps obtained are quite similar. These approaches are validated on raw Mueller images of leaves, where one can easily observe the great robustness and the capacity of these Markov models to extract the finest details with low computing time.
In this work, we address the problem of analyzing Mueller images of physical objects and explore the potential of this technique for classification issues. We briefly describe our image acquisition system and, after recalling some definitions and surveying the current state of the art concerning the interpretation of Mueller matrices, we propose a procedure for analyzing fully polarimetric images based on a Markovian assumption in a Bayesian inference framework.
Experimental setup
A fully polarimetric imaging system that records spatially dependent intensity patterns of polarized light diffusely scattered from a target allows acquisition of the whole Mueller matrix image [1]. We define the Mueller matrix image as the two-dimensional measurement of the Mueller matrix attached to each pixel. Many such imaging systems have been designed and built for a wide variety of applications, including medical [2], metrologic [3] and remote sensing [4] applications, among others. This study was made possible by a fully polarimetric, high-precision, well-calibrated imaging system that we developed. The experimental setup is a classical dual rotating quarter-wave-plate arrangement. It uses an incoherent source coupled with an interferential filter centered at λ = 632.8 nm with a 10 nm bandwidth, positioned in front of the camera. The incident beam is polarized through a polarizing optic before impinging on the target. The object is imaged in backlight illumination (see Fig. 1) through the polarization analysis optic and onto a digital CCD camera (12-bit image definition). For given orientations of the rotating retarders, the recorded intensity at pixel (i, j) is a linear combination of the elements of M_ij, the pixel's 4×4 Mueller matrix.
To fully determine M_ij, sixteen intensity measurements are acquired for different orientations α_k^(1) and α_l^(2) (k, l = 1,…,4) of the two retarders. Appropriate values of α_k^(1) and α_l^(2) that maximize the determinants of the analysis matrix A and the polarization matrix P are obtained numerically so as to avoid matrix singularities.
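For intuition, a minimal numerical sketch of the per-pixel recovery is given below, assuming the common dual-rotating-retarder formulation in which the sixteen intensities form a matrix I = A M P, so that M = A⁻¹ I P⁻¹. The matrices A and P here are random, well-conditioned stand-ins for the ones fixed by the retarder angles; this is an assumption for illustration, not the authors' calibration.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 2.0 * np.eye(4)   # stand-in analysis matrix
P = rng.normal(size=(4, 4)) + 2.0 * np.eye(4)   # stand-in polarization matrix

M_true = np.diag([1.0, 0.3, 0.3, 0.2])          # e.g. a partial depolarizer
I_meas = A @ M_true @ P                         # the 16 recorded intensities

# Invert the measurement model to recover the pixel's Mueller matrix.
M_est = np.linalg.solve(A, I_meas) @ np.linalg.inv(P)
print(np.abs(M_est - M_true).max())             # ~0 up to round-off
```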
Calibration procedures ensure a maximum relative error over the whole image of less than 0.1% for the free-space Mueller matrix, and less than 1% for the Fresnel reflection matrix of a BK7 prism's surface.
To enhance the signal level, every acquired intensity image is the result of six frame integrations.
Physical analysis of Mueller matrices
Mathematically, Mueller matrices act as linear operators on the incoming Stokes vectors, yielding the outgoing Stokes vectors. This provides a mathematical formalism, known as the "Stokes-Mueller formalism", for treating polarized interactions in a coherent way. The main interest of this approach lies in the addition theorem of Stokes vectors and the optical equivalence principle, allowing decomposition of a general matrix as the incoherent sum of matrices representing "simpler" processes. Furthermore, Mueller matrices are able to represent depolarizing systems, in contrast with the Jones vector-matrix formalism. Although it is always possible to associate any (non-depolarizing) system's Jones matrix with its Mueller matrix, the reverse is not always true. This is due to the fact that Mueller matrices have sixteen degrees of freedom while Jones matrices admit only eight. The extra degrees of freedom come from the correlations existing between the elements of the Jones matrices. Analyzing these correlations may be of great interest for understanding the underlying physical interactions. This task is simplified by using the coherency approach introduced by Cloude [5].
Cloude shows that there exists a one-to-one correspondence between any Mueller matrix M and a 4×4 complex Hermitian positive semi-definite matrix T called the "system coherency matrix". In this paper, we denote by O the linear operator that maps a Mueller matrix to the system coherency matrix, and by O⁻ its inverse. Cloude showed that a necessary and sufficient condition for M to be physically realizable is that all eigenvalues λ_i (i = 0,…,3) of its associated coherency matrix T are real positive numbers.
Moreover, M is non-depolarizing if and only if the rank of T equals one. One can combine the coherency matrix eigenvalues into a scalar quantity H, called the "entropy", that characterizes the polarimetric disorder of the physical system. H is defined from the normalized eigenvalues p_i = λ_i / Σ_j λ_j as H = −Σ_i p_i log₄ p_i. The system is non-depolarizing if its entropy is zero, and a perfect depolarizer has an entropy equal to one. Practically, systems showing low entropy can be approximated by a non-depolarizing Mueller matrix associated with the largest eigenvalue and its corresponding eigenvector.
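A small script makes the coherency/entropy machinery concrete. The sketch below builds the coherency matrix with one common Pauli-basis convention for the operator O (the authors' exact ordering and normalization may differ, which does not affect the entropy) and evaluates H from the normalized eigenvalues.

```python
import numpy as np

PAULI = [np.eye(2),
         np.array([[1, 0], [0, -1]]),
         np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]])]

def coherency(M):
    """One convention for O: map a 4x4 Mueller matrix to its coherency T."""
    return sum(M[i, j] * np.kron(PAULI[i], PAULI[j].conj())
               for i in range(4) for j in range(4)) / 4.0

def entropy(M):
    lam = np.clip(np.linalg.eigvalsh(coherency(M)), 0.0, None)
    p = lam / lam.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(4.0))  # log base 4

print(entropy(np.eye(4)))                # non-depolarizing system -> 0.0
print(entropy(np.diag([1.0, 0, 0, 0])))  # ideal depolarizer       -> 1.0
```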
Another interesting approach to physically analyze experimentally obtained Mueller matrices is the polar decomposition developed by Lu and Chipman [6], which yields three Mueller matrices, a pure diattenuator M_D, a pure retarder M_R, and a pure depolarizer M_dep, related to M by matrix multiplication. From these matrices the diattenuation, retardance, and depolarization properties are readily determined.
In order to extract high-level information from the Mueller matrix image, one may need to segment the image properly prior to any post-processing algorithms. Image segmentation is one of the fundamental computer vision problems; it consists in partitioning an image into non-overlapping regions, each of which exhibits homogeneity features (intensity distribution, spatial clustering, spectral or in-scale homogeneity) that depend on the application needs. We show here that recent Markovian developments in image segmentation, applied to Mueller matrix images, prove to be robust and efficient for skeletonized structure detection.
Markovian Model and Bayesian Inference
Our motivation for using hidden Markov models is that they provide fast computations and efficient structures for analyzing parametric data cubes. Indeed, different communities need tools to interpret large data sets corresponding to multispectral observations (e.g., remote sensing [7], astronomical images [8]), parametric observations (e.g., polarimetric imaging [1]), or volume information (e.g., multimodal slices of MRI, SPECT, or X-ray cubes for medical applications), etc.
The segmentation problem is to estimate the unobserved realization X = x from the observed realization Y = y = {y_s, s ∈ S}, where S is the set of voxel positions in the data cube to be segmented. Each label X_s takes its values in a finite set of K classes Ω = {ω_1, ω_2, …, ω_K}, while each observation Y_s is a vector in R^n. One assumes that each observation y_s is a filtered and noisy measurement of the observed scene: the segmentation process tries to determine which label (or class ω_k) generated each observation. The segmentation step is thus of great importance to cluster the pixels into sets with similar spectral, statistical or geometric features, in order to be able to extract their characteristics with further discriminant analysis (classification step). In the context of unsupervised Bayesian segmentation, we need to estimate all the parameters defining the joint distribution p(X, Y) between the label field X to be estimated and the noisy observed field Y; p(X) is the marginal distribution of X, and p(Y|X) is the distribution of Y conditional on X. Denoting by C the set of cliques (a clique being either a singleton or a set of mutually neighboring sites), the distribution of X is a Gibbs distribution of the form p(X = x) = Z⁻¹ exp(−Σ_{c∈C} Ψ_c(x_c)), where Z is a normalization coefficient and the Ψ_c are potential functions [10], [11]. Moreover, a trade-off with the data-driven term is necessary to maximize the global joint distribution p(X, Y) [12], and the goal consists in minimizing the average risk of misclassification [12]. This trade-off converges slowly in the case of Markov fields because the optimization algorithm requires Monte Carlo sampling and a large number of iterations before convergence is reached (simulated annealing). More recent studies in image analysis have suggested replacing the purely spatial priors used in Markov fields with hierarchical priors, in which interactions between variables are not supported by the grid over the image pixels but are defined from scale to scale. Thus, the HHMM modeling scheme captures, over a pyramidal label lattice (quad-tree, Fig. 2(b)), significant inter-scale statistical dependencies, whereas the intra-scale dependencies with the observation vector are statistically defined at the finest resolution by a probability density function linking label and observation. This approach, described in [1], [7], may induce block effects due to the quad-tree structure.
The usefulness of the HMCM stems from its ability to learn hidden Markov model parameters through the Baum-Welch re-estimation procedure [9]. First, the images are scanned along a Hilbert-Peano path [10] (cf. Fig. 2(a)) in order to transform the cube into a chain of Mueller vectors; then a Markov chain, defined by its transition matrix, is estimated. The Baum-Welch algorithm is an iterative update algorithm that re-estimates the parameters of a given hidden Markov model to produce a new model with a higher probability of generating the given observation sequence. This re-estimation procedure is continued until no more significant improvement in probability can be obtained: a local maximum of the likelihood is thus found.
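A hedged, minimal stand-in for this pipeline is sketched below: the Mueller cube is scanned along a Hilbert-Peano path, a Gaussian hidden Markov chain is fit by Baum-Welch (here via the third-party hmmlearn package rather than the authors' code, and decoded with Viterbi rather than the MPM rule), and the labels are mapped back to the grid. The demo cube and all names are illustrative.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # third-party: pip install hmmlearn

def hilbert_path(n):
    """Visit order (x, y) of a Hilbert curve on an n x n grid (n = 2^k)."""
    path = []
    for d in range(n * n):
        t, s, x, y = d, 1, 0, 0
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x          # rotate quadrant
            x, y = x + s * rx, y + s * ry
            t //= 4
            s *= 2
        path.append((x, y))
    return path

def hmcm_segment(cube, n_classes=4):
    """cube: (n, n, 16) Mueller image; returns an (n, n) label map."""
    n = cube.shape[0]
    path = hilbert_path(n)
    chain = np.array([cube[y, x] for x, y in path])        # (n*n, 16)
    model = GaussianHMM(n_components=n_classes,
                        covariance_type="diag", n_iter=50).fit(chain)
    labels = model.predict(chain)          # Viterbi; the paper uses MPM
    out = np.zeros((n, n), dtype=int)
    for (x, y), lab in zip(path, labels):
        out[y, x] = lab
    return out

demo = np.random.rand(64, 64, 16)          # placeholder Mueller cube
print(hmcm_segment(demo).shape)            # (64, 64)
```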
The foremost importance of the HHMM and HMCM models lies in performing exactly, in a non-iterative way, Maximum a Posteriori Mode (MPM) inference [11]. Such non-iterative approaches can also provide substantial gains in speed and result quality.
These models are justified by the fact that, for many natural scenes, neighboring pixels are more likely to belong to the same class than pixels that are farther away from each other: this property is translated onto a 1D Markov chain (HMCM, cf. Fig. 2(a)) [10] or in scale onto a Markovian quad-tree (HHMM, cf. Fig. 2(b)) [7]. One important contribution of this paper consists in showing that these two approaches give segmentation maps of similar quality on real data cubes. From the observed cube Y to the segmented image X, the HHMM and HMCM algorithms can be decomposed into three main phases. Initialization: this phase provides a first estimation of the data-driven parameters; the K-means algorithm can be used [12].
Parameter iteration: the model parameters are re-estimated from the current data and label statistics. Segmentation: the label map is then obtained using the Maximum a Posteriori Mode (MPM) segmentation rule [14]; return to the "parameter iteration" step until convergence (weak label variations in the map X between two iterations).
The number of desired classes for the segmentation map is the single parameter given by the user. The software for the complete segmentation chain is available online at http://voltairemiv.u-strasbg.fr/.
Experimental results and discussions
Figure 4(a) contains a movie of the raw Mueller images (16 parameters for each pixel: m_ij; i, j = 0,…,3). These parametric images form a data cube of size 256 by 256 by 16 floating-point values. While the conventional image (Fig. 3(a)) seems to be unsuitable for segmentation, the animation of Fig. 4(a) (normalized to the m_00 element image) shows clearly that the polarimetric response of the object provides the appropriate candidate for the segmentation task. The goal is to segment this data cube into different classes corresponding to physical properties of the leaf. We observe further that the images of the m_11/m_00, m_22/m_00, and m_33/m_00 Mueller elements provide a better contrast between the leaf constituents (nervures and tissue). This indicates a smoothing effect in the transmittance heterogeneity over the leaf surface. The segmentation map obtained with the HMCM analysis (Fig. 3(b)) reveals the finest details of the object; this cannot be reached with classical intensity-based imaging.

Some authors suggest reducing the data cube by means of a Principal Component Analysis (PCA) transform [14]. PCA consists in projecting the 16 Mueller images onto different axes (eigenvectors) corresponding to the largest energy; the goal is to project the image onto axes maximizing the variance, in order to decorrelate the different parameter bands. A movie of the Mueller images after the PCA transform is available online [15]. This transform, generally used for data reduction (e.g., in multispectral or hyperspectral imagery [16]), is useless in our case. Indeed, the segmentation map obtained by feeding the Markovian algorithms with such pictures does not improve the result (Fig. 5(a)). One may assume that an energy-based projection is not adapted to Mueller parameters. Another route for data reduction that we explore is based on Independent Component Analysis (ICA) [17]. Finally, some results are displayed online: the HMCM map [18] (Fig. 4(b)), maps with different numbers of classes [19], and HHMM results [20]. Both Markovian algorithms give similar results (weak block effect, Fig. 5(b)).

Once the label maps are obtained from the Markovian procedure, different classes can be merged to provide appropriate masks corresponding to nervures and tissue. These masks were used to extract the mean Mueller matrices of nervures, M_N, and tissue, M_T, from the Mueller image cube. We found

M_N =
1.0000 0.0227 0.0031 0.0028
0.0077 0.2066 0.0038 0.0096
0.0009 0.0121 0.2225 0.0024
0.0035 0.0118 0.0082 0.1306

M_T =
1.0000 0.0269 0.0021 0.0018
0.0101 0.3236 0.0087 0.0023
0.0008 0.0024 0.3276 0.0009
0.0026 0.0023 0.0029 0.2754

The entropies of M_N and M_T are found to be nearly equal, 0.96 and 0.91, suggesting that the two classes approach an ideal depolarizer. Figure 6 shows the eigenvalue distribution. We observe a dominant eigenvalue, while the three remaining eigenvalues have almost the same order of magnitude. This makes it reasonable to approximate the Mueller matrices as the sum of a main interaction mechanism and an isotropic depolarizer (i.e., M ≈ M_0 + M_iso). The Mueller matrix of an isotropic depolarizer has all its elements equal to zero except m_00, which equals d (the depolarizer strength); its associated coherency matrix is proportional to the 4×4 identity matrix I_4. The matrix M_0 is constructed from the dominant eigenvalue λ_0 of the coherency matrix and its corresponding eigenvector v_0. Table 1 summarizes the results obtained in this way. These results indicate the following: i) the two classes composing our object depolarize a large fraction of the incoming polarization states; ii) they act equally, in magnitude, on all polarization states; however, the handedness and the tilt of the incoming wave are inverted. We point out the performance of the clustering based on Markovian Bayesian inference in view of the very low signal level (10 nm bandwidth for the interferential filter) and the 20% (nervures) and 30% (tissue) remaining unpolarized fraction of light that escapes the vegetal leaf.
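The split M ≈ M_0 + M_iso can be illustrated in coherency space. In the sketch below, the dominant eigenpair of an illustrative coherency matrix T provides the main mechanism and the remaining eigenvalues are lumped into an isotropic term; the exact construction behind Table 1 is not given in the extracted text, so this is only one plausible realization, and the matrix T is a placeholder, not the leaf data.

```python
import numpy as np

# Illustrative Hermitian coherency matrix with one dominant eigenvalue.
T = np.diag([0.70, 0.11, 0.10, 0.09]).astype(complex)

lam, V = np.linalg.eigh(T)
lam, V = lam[::-1], V[:, ::-1]                 # sort eigenvalues descending
v0 = V[:, [0]]

lam_iso = lam[1:].mean()                       # lump the minor eigenvalues
T0 = (lam[0] - lam_iso) * (v0 @ v0.conj().T)   # main interaction mechanism
T_iso = lam_iso * np.eye(4)                    # isotropic depolarizer part

print(np.linalg.norm(T - (T0 + T_iso)))        # small residual
```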
The polar decomposition of the two Mueller matrices under study shows that the pure retarder and pure diattenuator matrices are nearly equal to the identity matrix. The two classes act primarily as general pure depolarizers, which confirms the conclusion of the analysis based on Cloude's theory and validates the Markovian segmentation approach: the classes produced by this process have physical significance.
Conclusion
A new analysis framework for Mueller matrix images has been introduced, allowing an original study of the polarimetric properties of skeletonized shapes. The major interest of this study comes from the association of up-to-date image processing algorithms with physical interpretation of the results. Markovian-based algorithms turn out to be of great benefit for such inference tasks: the only image-dependent parameter that must be provided by the user is the number of classes; all other parameters are computed automatically. Moreover, this approach is quite general and can be extended to a large variety of samples, e.g., Mueller images of biological tissues. Work in progress pursues two lines of inquiry: i) Bayesian tree-structured image modeling using wavelet-domain hidden Markov models, and ii) the variation of noise features and its impact on the performance of the clustering algorithms.
Fig. 1. Schematic of the Mueller matrix imaging polarimeter used in this study.

Fig. 2. (a) Markov chain model and Hilbert-Peano paths for different image sizes (2², 4², 8² and 16² pixels). The image becomes a vector after the Hilbert-Peano scan. The Markovian prior model describes the transitions within the chain between labels X_n and X_n+1, whereas the data-driven parameter links X_n+1 and Y_n+1. (b) Markovian quad-tree with inter-scale transition probabilities a_ij (s⁻ stands for the father of site s in the tree). The data-driven probability density function linking observations (white circles) and labels (black circles) is f_i(y_s) = p(y_s | x_s = ω_i).

Fig. 3. (a) The m_00 Mueller element image, corresponding to the conventional intensity image. This image is quite hard to segment since the finest details (finest veins) do not appear clearly, as pointed out in [1]. (b) HMCM segmentation map.

Fig. 4. (a) (1.54 Mb) Movie of the raw Mueller images (16 parameters for each pixel: m_ij; i, j = 0,…,3). (b) (1.04 Mb) Sequence of HMCM maps from initialization (based on the K-means algorithm) up to the final label map: convergence is reached in 6 iterations, taking less than 1 min on a PC (Pentium IV, 2 GHz, 4 Gb RAM). Each class is displayed with a random gray level.

Fig. 5. (a) Segmentation map after the PCA transform. (b) HHMM map: convergence is reached in 6 iterations, taking less than 1 min on a PC (Pentium IV, 2 GHz, 4 Gb RAM). One observes weak blocky effects. This label map is similar to the HMCM map of Fig. 4(b). | 2017-06-11T21:23:26.394Z | 2004-04-05T00:00:00.000 | {
"year": 2004,
"sha1": "57cc696e7983566fc9fec8c3718242f0567f4760",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/opex.12.001271",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "57cc696e7983566fc9fec8c3718242f0567f4760",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
219443635 | pes2o/s2orc | v3-fos-license | The Constrains of Green Building Implementation in Indonesia
Various evidence of the success of green buildings in saving energy and reducing environmental impacts has been widely shared in scientific and popular media. The information is not merely theoretical but also practical, including information about the added investment value and the savings and profit earned at the operation stage. The government and the private sector have also encouraged accelerated implementation by establishing regulations and rating tools that assign responsibility to property industry actors to participate in implementing green building concepts. This study aims to identify the obstacles to green building implementation in Indonesia through a literature review, a policy review, and interviews with stakeholders. Apparently, efforts are still needed to complete the various regulations required for stakeholders to implement the green building concept throughout the building's life cycle.
Introduction
The issue of environmentally friendly development has been discussed for more than three decades, but to date efforts to reduce environmental damage and energy consumption remain unresolved. CO2 emissions, as an indicator of environmental damage, continue to increase from year to year. Globally [1], in 2013 CO2 emissions were recorded at 32.2 Gt, an increase of 2.2% from 2012. Energy consumption also shows an increasing trend: in 2014, world energy consumption was 3.5 times that of 1965 [2]. Fossil energy still dominates despite increases in nuclear and other renewable energy. Data from the National Energy Council [3] state that total energy consumption in Indonesia in 2014 amounted to 1,415 million BOE (barrels of oil equivalent), an average increase of 4.9% per year, of which the largest share was oil at 48%, followed by coal at 31% and gas at 17.2%. Data from the International Energy Agency in 2014 also show that the available electricity supply is still largely produced from fossil fuels, namely coal and natural gas; in the USA, for example, 67% of the electricity supply is produced from fossil energy and natural gas.
In the building sector, the issue of eco-buildings, green buildings, or environmentally friendly buildings has been raised since the 1970s [4,5]. Some building designs were inspired by the ideas of Victor Olgyay (Design with Climate), Ralph Knowles (Form and Stability) and Rachel Carson (Silent Spring); for example, in 1977 Norman Foster applied a grass roof, a daylighted atrium, and mirrored windows to the Willis Faber and Dumas Headquarters in England. Although the seeds of green building emerged long ago, efforts to reduce environmental damage and energy consumption, especially of non-renewable energy, are in fact still constrained. In America, for example, the building sector is still the largest consumer of energy, and most of that energy is used for room conditioning. The building sector is also one of the biggest contributors to CO2 emissions [6]. One cause of the persistent environmental damage and energy consumption is an incomplete understanding of the principles and criteria of green buildings. Some actors in the building industry, including architects, are still trapped in practical matters, such as using efficient air-conditioning (AC) equipment while forgetting essential things such as building design (form, orientation, envelope, layout), which significantly affects a building's performance in saving energy and reducing environmental impacts. Such a limited understanding risks obscuring the essence of environmentally friendly buildings. Efficient mechanical and electrical equipment (an active strategy) is indeed needed in buildings, as are good material specifications, but both can only be optimized if they are preceded by proper planning and design of the shape, orientation, spatial layout and building envelope elements (a passive strategy).
Another issue that is often misunderstood is that environmentally friendly buildings are frequently conflated with human-friendly buildings. Practitioners try to provide a comfortable environment for the building's occupants but, unconsciously, these efforts can damage the environmental conditions outside the building. The use of air-conditioning equipment to cool a room has a direct impact on heating the outside environment. For this reason, serious consideration must be given during the design of the building and its interior spaces so that air conditioning becomes an additional option only where natural ventilation is not possible. Likewise, it is necessary to consider which rooms require air conditioning and which require only natural ventilation. Efforts to design spaces that are comfortable for humans must be accompanied by consideration of, and efforts not to damage, the environment.
Efforts to expand the implementation of environmentally friendly buildings are carried out by governments and non-governmental organizations. One example of a country that is consistent in its implementation of environmentally friendly buildings is Singapore. By the end of September 2013 (approximately 8 years after the program began in 2005), a total of 1,696 buildings had been certified, approaching 50 million m² of building floor area. The BCA (Building and Construction Authority), the authority in charge of regulating buildings in Singapore, targets green building certification of 80% of all buildings in Singapore by 2030 [7]. Singapore's persistence in applying the green concept to buildings should be an example for other countries.
The Indonesian government has also sought to encourage the implementation of the green concept in buildings as part of its promise to the world to reduce CO2 and greenhouse gas emissions. To follow up on the Bali Action Plan of the 13th Conference of the Parties (COP) of the United Nations Framework Convention on Climate Change (UNFCCC), the results of COP-15 in Copenhagen and COP-16 in Cancun, and to fulfill commitments made at the G-20 meeting in Pittsburgh, the government established Presidential Regulation No. 61 of 2011, which contains the government's commitment to reduce CO2 and greenhouse gas emissions by 26% by 2020. The government, represented by the Governor of DKI Jakarta, has also promised to reduce CO2 emissions. In addition to the government, a non-governmental organization, the Green Building Council Indonesia (GBCI), is also attempting to expand the implementation of green buildings. GBCI is a non-profit institution established in 2009, supported by practitioners, government, industry, academia and professional associations. GBCI is committed to educating the community to apply environmental practices and to facilitating the transformation toward a sustainable building industry. In June 2010, GBCI, which is a member of the World Green Building Council (WGBC), adopted the Greenship Rating Tools as a reference for determining green ratings for buildings in Indonesia. The Greenship Rating Tools are voluntary, meaning that there is no requirement for every building in the territory of Indonesia to follow these guidelines.
The immediate benefit of implementing the green building concept is the production of high-performance buildings that reduce energy consumption and environmental impact. Reports from the USGBC (United States Green Building Council) in November and December 2002 state that implementing the green concept in buildings saves operational energy use by an average of 28% relative to conventional buildings, not including savings from the use of renewable energy. Another report, from the BCA (Building and Construction Authority) Singapore, entitled 'Leading the Way for Green Buildings in the Tropics' (2015), states that implementing the green concept in buildings can save between 30% and 80% of energy consumption using currently available technology.
The Kats report [8] on the costs and financial benefits of implementing green buildings shows that the cost of designing and constructing green buildings, especially in California, is only around 2% higher than for ordinary buildings, while the financial benefits are actually greater than the additional costs, in the form of energy savings, water savings, waste management savings, savings in operational and maintenance costs, increased productivity and improved health conditions for users. This report also invalidates the view that green buildings are expensive and require large additional costs compared to ordinary buildings. Kats adds that the additional costs associated with implementing the green building concept break even against the benefits in just 2 to 3 years, and that over a period of 15 years the financial benefits reach 10 times the additional costs.
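A toy calculation makes these orders of magnitude concrete. All monetary figures below are assumptions chosen only to match the quoted 2% premium and 2-3-year payback; Kats' 10-fold figure additionally counts productivity and health benefits not modeled here.

```python
construction_cost = 10_000_000            # assumed conventional cost (USD)
green_premium = 0.02 * construction_cost  # the ~2% premium quoted above
annual_savings = 80_000                   # assumed yearly operational savings

payback_years = green_premium / annual_savings
ratio_15y = (15 * annual_savings) / green_premium

print(f"extra cost        : ${green_premium:,.0f}")
print(f"simple payback    : {payback_years:.1f} years")
print(f"15-yr benefit/cost: {ratio_15y:.1f}x (direct savings only)")
```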
By June 2019, about 7 years after the Jakarta governor's regulation on green buildings was established, a total of 392 buildings, or approximately 25 million m2 of building area, had met the minimum green building criteria stipulated in the regulation. That achievement is still modest compared to Singapore's. For this reason, it is important to know the obstacles encountered by stakeholders in their efforts to apply the green building concept.
Methodology
To identify the constraints on green building implementation in Indonesia, this research uses a literature review, a policy review and interviews with green building stakeholders. The literature review is carried out to determine all factors associated with the implementation of green buildings. The policy review is carried out by tracing regulations related to the implementation of green buildings that have been established so far by the government; this review aims to determine the suitability and completeness of the regulations needed to support the implementation of green buildings in Indonesia. The interviews were conducted to find out the obstacles faced by stakeholders in implementing the green building concept.
Results and Discussion
William [9] emphasized that green design utilizes environmentally sensitive materials, creating a healthy environment that does not have a negative impact before, during or after the manufacturing, construction and demolition processes. This needs emphasizing because a design that considers the green concept will be useless if the processes of making the materials, construction, utilization and demolition do not consider the green concept as well. The green concept covers the entire life cycle of the project, from the idea to demolition. According to Wonoraharjo [10] there are three main considerations in the green building concept (Figure 1): minimizing the environmental impact over all stages of the building's life; conserving energy over all stages of the building's life; and attending to human comfort and health.

Reducing the impact on the environment as much as possible can be done from the design process through to the utilization process [11]. This can start by making the design process a holistic thinking process [12]. If we want to change a product, one option is to change the design process, for example by collaborating across disciplines from the beginning to the end of the design process so that the resulting product is an optimization of various considerations. The building design process is then no longer a linear hand-off from the architect to the structural engineer and on to the mechanical, electrical and plumbing engineers, but a process of intensive and interactive discussion between disciplines from concept to detail. The next option is to change the way of thinking, for example by making the environment a subject rather than an object so that its interests remain a priority, or by treating the environment not as an 'enemy' of the building but as a 'friend', exploiting its potential and minimizing its threats. Sunlight is used as much as possible as a source of natural light, its heat is kept out to reduce the heating of the building, or the heat is even harvested by converting it into electrical energy. Likewise, wind is used for natural ventilation while the heat or dust it carries is reduced and its pressure is kept within the comfort threshold. Efforts to minimize the impact of buildings on the environment can also be made in material selection, by considering a material's availability in nature and the consequences if the material is used up. Materials that are difficult to obtain from nature should be restricted in use, and materials proven to harm the health of occupants should not be reused [11].
The construction stage is the part of the building life cycle considered most important by construction industry players because it is directly related to the amount of investment. The usual consideration at this stage is saving investment costs, not environmental considerations. Environmental considerations at this stage raise the issue of a fairly high increase in investment costs, even though in reality this does not occur, as described above [8]. The increase in investment costs, which ranges only from 2% to 6% [13], is not significant when compared to the benefits gained during the operational period in the form of energy savings, reduced environmental impacts, and increased comfort and health for the occupants. In principle, efforts to minimize environmental impacts at the construction stage are made by designing appropriate construction methods so as to minimize damage to existing ecosystems and reduce air pollution in the form of dust or CO2 emissions [14]. In addition to planning construction methods, other efforts include reducing activities that require non-renewable materials, using recycled materials, and using energy and other mineral resources minimally [15]. Factors considered to hamper efforts to minimize environmental impacts at the construction stage are a lack of knowledge, a lack of firmness in enforcing regulations, and the fact that this understanding has not yet been widely embraced [11].
The utilization (operational) stage often escapes the attention of construction industry players, partly because the burden of building operating costs is no longer their responsibility and is borne by building users, even though the operational period is long (decades). Most of the conditions at this stage are a direct result of the two previous stages, design and construction. If the design and construction processes have done everything possible to reduce environmental impact, then this stage simply enjoys the results. Nevertheless, there are still things that can be done, especially by the occupants, to improve the building's environmental performance, such as reducing the amount of waste produced every day.
Figure 1. Green Building Scheme
Although every space-conditioning activity (heating, ventilating, air conditioning) requires energy, there are several strategies to minimize energy requirements or optimize energy use [16]; the various strategies are shown in Figure 2. The designer can play a large role in minimizing energy requirements by using his or her competence in designing the shape, orientation and envelope of buildings. To minimize energy requirements, buildings in tropical climates are designed to avoid direct solar heating, and where heating has occurred, the building is cooled by evaporative, convective and conductive means. The wind can also be used to cross-ventilate the building, flowing hot air out, and sunlight can be used for room lighting for as long as possible.
The construction phase is often considered not to use much energy because its duration is relatively short compared to the operational period of the building. Nevertheless, savings during the construction period are still needed, by choosing the right construction method and using energy only as needed.
Regarding energy use in buildings, Sarte [14] proposed two efforts. First, reduce energy requirements through building designs that make maximum use of conditions outside the building for room conditioning. Second, once the first effort has been optimized, use energy as efficiently as possible, for example by choosing energy-efficient equipment or adding equipment such as sensors that help save energy automatically. The next step is to look for technology options that allow CO2 emission reductions, as well as renewable energy options that allow partial or full replacement of the building's energy use. Technically, efforts to reduce a building's energy requirements can begin with an appropriate site analysis, clearly defining the project's needs and objectives, and matching them to the energy available on site [14]. Because each site is unique, the designer's job is to find and choose the most suitable energy application so that the building is in harmony with its environment, responds to environmental demands, and provides added value in the form of comfort and inspiration for its occupants.

The relationship between humans and the environment is unique. In one sense, humans are part of the environment, so efforts to reduce impacts on the environment also apply to humans. At the same time, humans can position themselves as entities outside the environment because of their ability to change the environment as they wish. The human desire to change the surrounding environment to be comfortable and healthy has, in certain conditions, the potential to damage it. Cutting down trees and leveling land for shelter, making gardens or rice fields, digging wells, building roads and so on are part of human efforts to fulfill these desires, and all have the potential to damage the environment.
Humans also seek comfort inside buildings. In tropical climates, most energy is used to cool rooms, while in subtropical countries most energy is spent on heating, using air-conditioning systems. In tropical climates, for example, such a system cools the room and discharges hot air outside the room or building. Consciously or not, this reflects a willingness to sacrifice the environment outside the building in order to obtain comfort inside it.
One option for obtaining comfort without compromising the environment, or damaging it as little as possible, is to use natural ventilation and lighting. A building can be designed around natural air and lighting, resorting to mechanical or electrical equipment only when the expected conditions cannot be met.
Human comfort is not merely a physical condition; it is related to intermediary conditions such as the clothes worn and to physiological conditions such as gender and age [16]. The physical conditions comprise: thermal conditions (heat), namely temperature, humidity and air movement; visual conditions (vision), namely contrast, glare and color; acoustic/audial conditions (hearing), namely frequency and sound power level (noise); olfactory conditions (smell), namely odor, CO2 and dust; and other conditions, such as air pressure. Hegger and Zeimer's view of comfort is in line with Bougdah and Stephen [17], who state that comfort is a state of mind obtained from the physical condition of the environment, the human ability to control it, and other physiological conditions. Humans, as part of the environment, also emit and absorb heat: the heat emitted depends on activity, body size and age, while the heat absorbed depends on the type of clothing worn and on sex [17].
Regarding visual conditions, three factors influence the quality of sunlight in a room [18]: the size of the room, the presence of obstructions, and the lighting technology installed in the building facade. A room is said to have good visual quality if the brightness inside the building matches the conditions outside, so that activities can be carried out comfortably. Air quality in the room also plays an important role in supporting the occupants' activities. Several points concern indoor pollutants [17]: pollutants enter the room through various means, including cleaning materials and equipment, office equipment (photocopiers and printers) and finishing materials (paint). The best approach to reducing pollutants is to be careful in specifying equipment and materials and to supply enough clean air to remove them.
In connection with policy on green buildings, this research maps the various regulations established so far by the Government of the Republic of Indonesia, identifying regulations that are directly or indirectly related to the implementation of green buildings in Indonesia. The identification process is based on the understanding that regulations on green buildings should consider the principles of green buildings (energy-friendly, environmentally friendly and human-friendly) throughout the building's life cycle. A building can be considered less green if the mining of its materials, the production of those materials, their transport, the construction process, the operational process, the maintenance process or the demolition process is not environmentally friendly, human-friendly and energy-friendly. Thus regulations were identified across various ministries, such as the ministries of mining and energy, industry, trade, transportation and public works, and the national planning agency. Identification also covered all levels of regulation: acts, presidential regulations, ministerial regulations, regional regulations, governor regulations, and mayoral or regency regulations.
Figure 3. Regulation Identification
The analysis shows that the currently available regulations still do not fully accommodate green building principles, and that various regulations are not synchronized. This condition directly affects the optimization of green building performance. When a stakeholder requires 'green material', the material is not necessarily available on the market: products available in the market may come from illegal mining, may be processed by methods that are not environmentally friendly, or may be shipped from places far enough away that their embodied energy is high, or using transportation that is not environmentally friendly. Downstream green building regulations need to be supported by various upstream regulations so that green building products achieve high performance in their true context. In principle, the government tries to place itself amid various interests, trying to ensure that all parties involved in the construction industry carry out their obligations while obtaining their rights. In certain areas, however, it is normal for the government to favor particular interests for the sake of greater future interests, one of which is the government's alignment with the environment. The desire to accommodate the various interests in the community often delays the establishment of regulations or makes their contents very general, because they must accommodate those who could not implement them otherwise; meanwhile, those who are able use these conditions to apply the rules only to the minimum requirements, even though they could do better.
Problems with implementing and enforcing regulations also remain an obstacle. Established regulations usually go through a socialization process for a year before being fully implemented, yet a year of socialization is sometimes still inadequate, so implementation in the second year remains constrained. Another obstacle is the government's lack of readiness in implementation, due to insufficient staff, an incomplete understanding of the regulations, incomplete technical guidelines for implementation, or trade-offs involved in enforcing regulations.
The main consideration of developers when designing and constructing buildings is profit. Efforts to bring the green building issue into the property business tend to be used as marketing objects for the more basic interest of profit. A developer will only adopt the green building concept if applying it will not make the building more expensive and/or unsellable. Developers tend to choose which parts of the green concept to adopt in line with their main interest, making a profit. As long as applying the concept improves profit (relative to the investment), developers will happily implement it; however, if it only adds investment or operational costs without the possibility of additional benefits, or yields benefits not commensurate with the effort, developers cannot be expected to apply it.
As the owner or manager of the funds, the developer has enough power to determine the shape of the resulting building. The philosophical or technical considerations put forward by planning consultants when designing buildings are often overridden by practical considerations of finance and prevailing trends. Financial interest is the main consideration in determining the design of the buildings to be built.
An obstacle often faced by architects is convincing their clients, in this case the developer or building owner, that a design is optimal given the interests of the environment, users, and developers. Architects are often confronted with developers' objections about construction costs as a consequence of optimizing these three interests, including the application of a green building strategy to accommodate environmental interests. Successful implementation of a green building therefore depends heavily on the architect's ability to convince the developer that the chosen strategy is the result of design optimization that has weighed the various interests, including the developer's. Architects and designers also need various energy and lighting simulation tools to design buildings with better environmental performance. However, not all architectural firms have a special division or dedicated staff to handle this. Some architects use such software only when needed, for instance to obtain green building certification. These conditions indicate that, for the time being, such software is used mainly to justify designs that have already been made, rather than to compare building design options.
The contractor is comfortable in the position of carrying out what has been planned by the architect and developer. The contractor's responsibility is limited to implementing the construction process, following the design produced by the designer as efficiently as possible in order to obtain financial benefit. In some conditions, the contractor also carries out a re-design process (value engineering) aimed at minimizing costs without reducing the quality of the final output. Generally the contractor will propose changes in certain materials with consideration of ease of | 2020-05-21T00:12:49.565Z | 2020-03-01T00:00:00.000 | {
"year": 2020,
"sha1": "21efac0240453f30c6e7d0bad8bb21b9a5fdf714",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1485/1/012050",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "6328dc5818873852eb6668ae2b75cc5eaea8b322",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
1794714 | pes2o/s2orc | v3-fos-license | Design of High data Rate FM-QCSK Chaotic Communication System
The frequency modulated quadrature chaos shift keying (FM-QCSK) system is one of the most efficient systems in the chaotic communication literature. One problem with this system is that half of each bit duration is used for sending a chaotic reference signal, which increases energy losses and reduces the data rate. In this paper, a novel scheme to enhance the performance of the FM-QCSK system is proposed. With the proposed scheme, FM-QCSK can operate at higher data rates with reduced bit-error rate (BER) and energy consumption. The basic modification introduced by the proposed scheme is the use of one reference chaotic chip to transmit multiple information-bearing chips in both the in-phase and quadrature-phase channels. The results show that the proposed scheme achieves more than 3 dB and 5 dB gains in SNR over the conventional scheme for AWGN and Rayleigh multipath fading channels, respectively, at BER = 10^-3. The results also show that the optimum number of information-bearing chips that can be sent per reference is 8. A theoretical expression for the BER in the AWGN channel is also derived for the proposed scheme.
Introduction
As research into chaos-based communication systems has progressed, more and more methods have been applied in this area and more and more modulation schemes have been proposed. Among them, differential chaos shift keying (DCSK) [1], which uses correlation for demodulation, was proposed to solve the problem of chaotic synchronization. These communication systems have the wideband characteristic of chaotic signals and are advantageous in resisting multipath fading [2]. To enhance the noise performance of DCSK, the FM-DCSK [3] scheme was proposed, in which frequency modulation is used to achieve constant energy per bit for the chaotic carrier. DCSK is a transmitted-reference signaling scheme. For each symbol period, the DCSK signal consists of a piece of chaotic waveform (called the reference chip), followed by its non-inverted or inverted copy (called the information-bearing chip), depending on the binary symbol ("0" or "1") to be transmitted.
QCSK (quadrature chaos shift keying) [4], which can transmit 2 bits per sample function, was designed by Zbigniew Galias and Gian Maggio in 2001 to improve the speed of chaos shift keying. Then in 2006, Yiwei Zhang devised FM-QCSK [5], which enhanced the noise performance of QCSK by using frequency modulation, since it generates constant energy per bit and its frequency spectrum is wideband.
Several methods have been proposed in the literature to increase the data rate of both DCSK and QCSK systems [6][7][8][9][10]. The simplest options consist of scaling the information and/or reference parts of the signal, as in [6][7][8]. More sophisticated approaches use multi-level signal constellations such as QAM or M-ary phase shift keying, or multiple chaotic basis functions obtained by dividing the symbol period into M-ary time slots [9] or by defining a set of orthogonal vectors [10]. All of these previous works concentrated, in principle, on increasing the space dimension of the chaotic carrier as a way to increase the data rate; there was no real attention to the number of information-bearing chips referred to one reference. In this paper, an enhanced version of the FM-QCSK scheme is proposed, based on transmitting more than one information-bearing chip per reference chip and averaging the correlation results in both the in-phase and quadrature-phase channels, in order to improve the noise performance, reduce the energy consumption, and increase the data rate of FM-QCSK.
Background Theory
Z. Galias and G. M. Maggio described in their article [4] that an orthogonal basis of chaotic functions [x(t) and y(t)] can be easily generated through the Hilbert transform. These chaotic sample functions over the interval [0,T] carry equal energy, where E_b is the energy associated with the chaotic signals x(t) and y(t) over a sample period [0,T]: ∫_0^T x²(t) dt = ∫_0^T y²(t) dt = E_b. The quadrature signal of x(t) is y(t), so the two basis functions satisfy the orthogonality condition ∫_0^T x(t) y(t) dt = 0. The constellation distributions corresponding to two typical versions of QCSK modulation are shown in Figure 1, where the dashed lines represent the respective decision boundaries. In an FM-QCSK system, the chaotic reference chip c_x(t), modulated by the FM modulator, is transmitted in the first half of the symbol period, while the information chip m_i(t), also FM-modulated, is transmitted in the second half. The ith symbol of the modulated information signal m_i(t) can be defined as m_i(t) = a_i x(t) + b_i y(t) (4), where a_i and b_i are the mapping coordinates of the signal symbol in the constellation. Therefore, the signal transmitted by FM-QCSK in one symbol period T can be expressed as s_i(t) = c_x(t) for 0 ≤ t < T/2, and s_i(t) = m_i(t − T/2) for T/2 ≤ t < T (5). The combinations of a_i and b_i that denote the different symbols are shown in Table 1.
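As a rough illustration of the basis construction described above, the following Python sketch (the chaotic source, chip length and logistic-map parameters are assumptions for illustration, not taken from the paper) generates a chaotic chip x and its quadrature y via the Hilbert transform, then checks the equal-energy and orthogonality conditions numerically:

import numpy as np
from scipy.signal import hilbert

# Chaotic chip from a logistic map (illustrative source, assumed here).
def chaotic_chip(n, r=3.99, u=0.37):
    out = np.empty(n)
    for k in range(n):
        u = r * u * (1.0 - u)
        out[k] = u
    return out - out.mean()          # zero mean, so the Hilbert pair is clean

x = chaotic_chip(4096)
y = np.imag(hilbert(x))              # quadrature basis function y(t)

print(np.sum(x * x), np.sum(y * y))  # near-equal energies (both approximate E_b)
print(np.sum(x * y))                 # near zero: orthogonality of x and y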
Enhanced FM-QCSK Modulation
The time slots of the original FM-QCSK signal are shown in the upper part of Figure 2. In this figure, R_i denotes the reference chip of the ith bit, while a_i and b_i denote the information-bearing chips of the ith bit for the in-phase and quadrature-phase channels, respectively. One of the drawbacks of conventional FM-QCSK is that every information bit is transmitted using two chips (the reference and the information-bearing chip). Hence, the bit rate (as well as the symbol rate) is halved and the transmitted energy per bit is doubled compared with conventional binary modulation schemes, where every sample function represents one bit. A possible enhancement of the conventional FM-QCSK scheme is as follows: instead of transmitting only one information-bearing chip after each reference chip, N bits in each channel are transmitted using the same reference. This idea was first discussed in [6], but it was tested in a DCSK system, where there are no orthogonal chaotic carriers and no symbol transmission as introduced in FM-QCSK. The waveform of the enhanced FM-QCSK scheme is shown in the lower part of Figure 2, where T_s denotes the duration of one chip and E_s is the energy carried by a chip. Observe that in a block containing N+1 chips, every chip except the first one carries information. The enhanced modulation scheme offers two advantages over the conventional one. First, the bit duration T is decreased from 2T_s to ((N+1)/N)T_s, i.e., the data rate is increased. Second, the transmitted energy per bit E_b is reduced from 2E_s to ((N+1)/N)E_s (equivalently, the energy per symbol is reduced from 4E_s to 2((N+1)/N)E_s). However, the enhanced system may also suffer from the drawbacks of an increased periodic component at frequency 1/T_s and its harmonics, as well as increased system complexity. At the receiver there are two observation signals of the mth symbol, one for the in-phase and one for the quadrature-phase channel, defined by ř_mI(t) = ř_m(t) and ř_mQ(t) = H{ř_m(t)}; that is, ř_m(t) and ř_mI(t) are the same signal, while ř_mQ(t) is ř_m(t) after applying the Hilbert transform. Figure 3 shows the implementation of the enhanced FM-QCSK modulator. First the chaotic signal is applied to an FM modulator to obtain constant energy per bit. The enhanced modulator contains a delay line with N taps, and the output of each tap is input to a QCSK modulator. The transmission of each block of N symbols is preceded by a reference chip s_0(t), after which the information-bearing chips s_m(t) are transmitted. This is done by changing the switch positions at each T_s time instant.
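The rate and energy advantages quoted above follow directly from this frame structure. The short calculation below (an illustrative sketch, not code from the paper) tabulates the bit duration and energy per bit of the enhanced scheme, in units of the chip duration T_s and chip energy E_s, against the conventional values of 2T_s and 2E_s:

# Bit duration and energy per bit of the enhanced FM-QCSK frame structure.
for N in (1, 2, 4, 8, 10):
    f = (N + 1) / N
    print(f"N={N}: T_b = {f:.2f}*T_s (conventional 2*T_s), "
          f"E_b = {f:.2f}*E_s (conventional 2*E_s)")

For N = 1 the factor is 2, recovering the conventional scheme; as N grows, both quantities approach the single-chip values T_s and E_s.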
Transmitter and Receiver Configurations
The block diagram of the demodulator contains N delay lines and correlator pairs, as shown in Figure 4. The correlator pair outputs, sampled at kT_s, k = 1, 2, …, N, constitute the elements of the observation vectors for the in-phase and quadrature-phase channels. They are denoted by z_1I, z_1Q, z_2I, z_2Q, …, z_NI, z_NQ in Figure 4, and their values are given in Table 2. The transmitted information is carried by the sign and relative orthogonality of the correlation between the reference and the information-bearing chips. For the mth symbol, this information is available at the output of the mth correlator pair at the (m+1)th sampling instant, as shown in Table 2. The estimated information is denoted by the vector Ď = (Ď_1, Ď_2, …, Ď_N) and is obtained from the output of the symbol/bit converter.
Performance Evaluation in AWGN Channel
Based on the article by Yiwei Zhang et al. [5], the BER of FM-QCSK is defined by equation (9), where T_c is the chip duration of the discrete chaotic signal. In [5], the term T/(2T_c) is denoted K, which represents the number of chaotic samples in the reference signal. It is also shown in [5] that as K increases, the noise performance of QCSK improves, since K is a measure of the length of the correlation interval. To obtain the BER of the enhanced FM-QCSK, we substitute the new values of the bit duration, ((N+1)/N)T_s, and the energy per bit, ((N+1)/N)E_s, into (9), which yields the BER expression of the enhanced FM-QCSK.
Simulation Results
The chaotic spreading signal is generated by a discrete-time Hénon map. The discrete signal is offset by -0.5 and scaled by 2 (to obtain zero mean) so that the signal range becomes [-1, 1]. Following the article by Jiamin Pan and He Zhang [9], we define T_c = 0.05 μs and T = 4 μs. Subsequently, the FM modulator is defined with A_c = 1 V, f_c = 36 MHz, and K_f = 7.8 MHz/V. Figure 5 shows the plot of BER versus E_b/N_0 for conventional FM-QCSK (N=1) and the enhanced FM-QCSK with N = 2, 4, 6, 8 and 10 in the AWGN channel. The performance of BPSK (the best possible noise performance achievable by any digital modulation scheme over AWGN) is also plotted for comparison. It can be seen in this figure that increasing N yields a significant improvement in noise performance. For example, at BER = 10^-3, gains in SNR of 2 dB, 2.7 dB, 3.1 dB, 3.3 dB and 3.4 dB are obtained with the enhanced FM-QCSK scheme over the conventional one for N = 2, 4, 6, 8 and 10, respectively. The reason behind this improvement is that the noise associated with the received signal is averaged as a result of dividing the correlation into N time slots. Above a certain limit, however, increasing N has little effect on the system noise performance, i.e., a threshold effect is observed. In our simulations this threshold occurs at N = 8. Performance can also be relatively improved by increasing K (the length of the correlation interval). Figure 6 shows the plot of BER versus E_b/N_0 for conventional FM-QCSK (N=1) and the enhanced FM-QCSK with N = 2, 4, 6, 8 and 10 in a Rayleigh multipath fading channel. In this case the simulations used two paths; the second path had a delay of 75 ns and an attenuation of -3 dB, representing a typical multipath environment inside office buildings. This figure shows that the performance of FM-QCSK likewise improves as N increases. For example, at BER = 10^-3, gains in SNR of 3 dB, 4.6 dB, 5.2 dB, 5.6 dB and 5.8 dB are obtained for N = 2, 4, 6, 8 and 10, respectively. For N greater than 8, a saturation region is reached in which no more than 0.1 dB of further improvement can be gained. Compared with BPSK, the enhanced FM-QCSK offers superior performance starting from SNR = 5 dB. For example, at BER = 10^-3, more than 10 dB of SNR gain can be obtained using the enhanced FM-QCSK with N = 8.
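A minimal sketch of the signal generation described above is given below. The Hénon-map coefficients (a = 1.4, b = 0.3), the initial conditions, the normalization of the raw map output to [0,1] before the offset-and-scale step, and the oversampling factor are assumptions for illustration; T_c, A_c, f_c and K_f follow the values quoted in the text:

import numpy as np

# Henon map u_{k+1} = 1 - a*u_k**2 + v_k, v_{k+1} = b*u_k
# (classical coefficients a = 1.4, b = 0.3 assumed; not stated in the paper).
def henon(n, a=1.4, b=0.3, u=0.1, v=0.1):
    out = np.empty(n)
    for k in range(n):
        u, v = 1.0 - a * u * u + v, b * u
        out[k] = u
    return out

raw = henon(80)
raw = (raw - raw.min()) / (raw.max() - raw.min())  # assumed mapping to [0,1]
m = 2.0 * (raw - 0.5)              # offset by -0.5 and scale by 2 -> [-1, 1]

Tc = 0.05e-6                       # chip duration from the paper (0.05 us)
ns = 64                            # samples per chip (assumed oversampling)
fs = ns / Tc
m_t = np.repeat(m, ns)             # hold each chaotic sample for one chip
t = np.arange(m_t.size) / fs

# FM modulator: s(t) = A_c*cos(2*pi*f_c*t + 2*pi*K_f * integral of m(tau))
Ac, fc, Kf = 1.0, 36e6, 7.8e6      # A_c = 1 V, f_c = 36 MHz, K_f = 7.8 MHz/V
s = Ac * np.cos(2 * np.pi * fc * t + 2 * np.pi * Kf * np.cumsum(m_t) / fs)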
Conclusions
The increase of the data rate and the reduction of energy consumption are important requirements for any modulation system. In FM-QCSK, these requirements can be optimized by associating multiple information-bearing chips with one reference chip. Simulations in both AWGN and multipath fading channels show that the performance of the FM-QCSK system is enhanced by introducing this optimization criterion, and the bit error rate is also reduced: gains of 3 dB and 5 dB in SNR are achieved at BER = 10^-3 over the conventional scheme for the AWGN and Rayleigh multipath fading channels, respectively. The performance of the enhanced scheme improves as the number of information-bearing chips per reference chip increases, up to a certain threshold (N = 8), after which the system complexity increases without considerable further improvement. A theoretical expression for the error probability of the enhanced scheme has been derived, and its plot versus SNR is very close to the simulation results. Possible future work includes analyzing the effect of changing the correlation length on the system performance and a hardware implementation of the proposed system. | 2019-04-13T13:06:02.939Z | 2012-08-31T00:00:00.000 | {
"year": 2012,
"sha1": "3656e3c7daa614835683b67f2e2af9de8b5876f3",
"oa_license": "CCBY",
"oa_url": "http://www.sapub.org/global/showpaperpdf.aspx?doi=10.5923/j.jwnc.20120204.04",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2105d1e9ea86c26864d81a976f8a002c650770a3",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
9877938 | pes2o/s2orc | v3-fos-license | An improved scoring matrix for multiple sequence alignment
The usual criterion for performing multiple sequence alignment is the maximum scored information content computed from a weight matrix, but two or more alignments may share the same highest score, leading to ambiguity in selecting the best alignment. This paper addresses this issue by introducing the concept of a joint weight matrix to eliminate the randomness in selecting the best multiple sequence alignment. Alignments with equal scores are iteratively rescored with joint weight matrices of increasing level (nucleotide pairs, triplets, and so on) until a single best alignment is eventually found. This method for resolving ambiguity in multiple sequence alignment can be easily implemented through the improved scoring matrix.
Introduction
In the search for DNA regulatory elements such as binding sites, promoters, donor sites, TATA boxes and genes, the multiple sequences containing these elements have to be aligned against each other. These elements are highly, but not absolutely, conserved, and a weight matrix is used to represent and score the multiple sequences [1]. However, current motif discovery algorithms based on the weight matrix technique for scoring multiple sequence alignments in terms of information content are not without their limitations [2]. From the analysis of these algorithms, the highest performance coefficient at the binding site level of search is only 30.2%, achieved by Motif Sampler [3], an algorithm modified from the widely adopted Gibbs sampling method [4]. This may be a result of randomness in selecting the best alignment in cases where there are multiple peaks. Hence, there is room for improvement, as is evident from the many different approaches that have been developed [5][6][7][8][9].
In this paper, a method for removing the randomness in selection is proposed. Randomness in selection occurs when there is more than one alignment with the highest information content [10]. If one peak is randomly selected, the accuracy of the multiple sequence alignment is compromised. This may be the reason that methods based on applied information theory cannot achieve much higher sensitivity, specificity and performance. For example, when randomly selecting between two peaks of similar information content, there is a 50% chance of selecting the wrong peak.
To overcome this problem, a simple method is proposed in this paper to eliminate the randomness of peak selection and provide the best alignment, through the use of a joint weight matrix (JWM). Its flexibility means that a higher-level JWM can be used for cases with multiple peaks: the higher the level of JWM used, the smaller the number of peaks, until eventually a single peak is obtained. In this paper, the JWM is shown to successfully reduce the number of peaks in multiple sequence alignment.
Systems and Methods
The concept of the JWM is presented here to demonstrate how two or more ambiguous selections can be reduced. Two sequences are used in this example: the longer one represents the DNA sequence and the shorter one represents a motif sequence, which is aligned against the former. The motif is assumed to be a perfect weight matrix with 100% base weightage at each position. For simplicity of demonstration, the score is then either 1 for a match or 0 for a mismatch at each position.
Since the sequence is 7 bp (base pairs) long and the motif is 4 bp, the total number of possible shift positions without introducing gaps is 4, as shown in Figure 1. Table 1 shows the sequence alignment. The scores for the four possible alignments present an ambiguous choice between positions 1 and 2, which are both alignments with the highest score of 2. Since there is more than one peak (alignment), the second-level JWM is used to rescore the alignments. Table 2 shows the result of the second-level comparison.
The result clearly shows that, between positions 1 and 2, the better match of the motif to the DNA is position 2, with a matching score of 1, compared with position 1, which scores 0.
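The tie-breaking idea can be reproduced in a few lines of code. The sequences below are hypothetical stand-ins (the paper's Figure 1 sequences are not reproduced here), chosen so that two shift positions tie at the first level and the second level breaks the tie:

# Level-1 scoring counts matching bases; level-2 counts matching adjacent
# base pairs, i.e. the behaviour of a perfect second-level JWM.
dna, motif = "ACTCATG", "ACGT"     # assumed 7 bp sequence and 4 bp motif

def score(window, motif, level):
    n = len(motif) - level + 1
    return sum(window[i:i + level] == motif[i:i + level] for i in range(n))

for shift in range(len(dna) - len(motif) + 1):
    w = dna[shift:shift + len(motif)]
    print(shift, w, score(w, motif, 1), score(w, motif, 2))
# Shifts 0 and 2 tie at level 1 (score 2 each); level 2 separates them (1 vs 0).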
Algorithm
Here it is shown how the JWM can be integrated with a sequence alignment tool to remove the randomness of selection during the alignment process. The following steps are added when using the JWM: Step 1: Determine a weight matrix at each position i.
Step 2: Calculate the second-level JWM. For the second-level JWM, the number of possible combinations of the four bases is 4² = 16. Hence the JWM is a matrix of size 16 by the window length.
Step 3: From the weight matrix, compute the uncertainty of each combination of bases at position i, Hs_i = −Σ_b f(b,i) log₂ f(b,i), where the sum runs over the 4^m possible base combinations b and f(b,i) is the frequency of combination b at position i. For the second-level JWM, m takes the value 2.
Step 4: The information content at each position is then R_i = 2m − (Hs_i + e_i), where e_i is a small-sample correction for Hs_i [11].
Step 5: The score for one shift position is then R_shift(sp) = Σ_i R_i, summed across the window. The shift position (sp) ranges from the negative to the positive shifting parameter.
Step 6: Shift the JWM over the predetermined range to obtain the alignment score plot of information content versus shift position. From this plot, the highest peak is chosen from among the previously ambiguous set of peaks.
Step 7: If ambiguity remains after using the second-level JWM, a higher-level JWM (level three or higher) should be calculated. Repeat Steps 3 to 6 using the higher-level JWM whenever peak selection remains ambiguous at any lower level.
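A minimal sketch of Steps 2 to 5 is given below: it computes the per-position information content of an alignment for a chosen JWM level m. The toy alignment is an assumption for illustration, and the small-sample correction e_i is set to zero for brevity:

import itertools
import numpy as np

def info_content(seqs, m=2):
    combos = ["".join(c) for c in itertools.product("ACGT", repeat=m)]
    L = len(seqs[0]) - m + 1                    # number of m-mer positions
    R = np.empty(L)
    for i in range(L):
        col = [s[i:i + m] for s in seqs]        # column of m-mers at position i
        f = np.array([col.count(c) for c in combos], float) / len(col)
        Hs = -np.sum(f[f > 0] * np.log2(f[f > 0]))  # uncertainty Hs_i (bits)
        R[i] = 2 * m - Hs                       # R_i = 2m - (Hs_i + e_i), e_i = 0
    return R

seqs = ["ACGTA", "ACGTT", "ACGCA", "ACGTA"]     # assumed toy alignment
print(info_content(seqs, m=1))                  # conventional weight matrix
print(info_content(seqs, m=2))                  # second-level JWM

The alignment score for one shift position (Step 5) is then the sum of R_i across the window, and peaks in this score as a function of shift give the candidate alignments.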
Implementation
An example of how the JWM is used to eliminate or reduce ambiguity is shown using data from 16 randomly generated sequences of 15 bp (Tables 3 and 4) that bind to OxyR [12]. For illustration purposes, the centre (9th) base is taken to be the start site of transcription, labeled as position 0. The alignment score is obtained using a window of 5 bases, from -1 to 3, with the range of shift positions set from -8 to 6 with respect to the start site. The sequences are shifted one base at a time and the alignment score is recalculated, based on the simplified sequence logo [13] in Figure 2(a).
The window and shifting parameters are selected such that an ambiguous choice between more than one peak arises and must be resolved. By shifting one of the sequences from -8 to 6, the alignment score based on the window from -1 to 3 shows two peaks, at shift positions -5 and 0, in Figure 2(b). From the simplified sequence logo, the information content prior to shifting any sequence is computed, and the shifted information content R_shift is obtained by the end of the shift. The amount of shift required for the 16 sequences to produce R_shift is plotted in Figure 2(b), where the two peaks are located at shift positions -5 and 0. The situation is ambiguous, and a higher-level search using the JWM is required. The weight matrix is replaced by the second-level JWM in the new search. The new R_shift plot based on the higher-level JWM is shown in Figure 2(d).
The new alignment score using the JWM shows clearly that shift position 0 has higher information content than shift position -5. Hence, the best alignment is the original position 0. Instead of randomly selecting one of the peaks, it is rational to select the peak with the higher information content.
Discussion and Conclusions
In selecting the best multiple sequence alignment using the conventional weight matrix, it is assumed that the probability of each base is independent of its neighbors. The output of a multiple sequence alignment program is not always the same, which can be attributed to several factors. One important contributing factor is the conventional scoring matrix. The best alignment at each stage is decided by the highest score under the conventional scoring matrix. However, there are cases in which more than one alignment attains this score, creating ambiguity in selecting the best alignment. A random choice can be made, but it may result in a less than optimal alignment.
The following examples show ambiguities found when using the conventional scoring matrix. The benchmark database (Table 5) consists of DNA sequences containing the amelogenin protein, used in the study of its origin and evolutionary path [14]. Cases of ambiguity during multiple sequence alignment using the conventional scoring matrix are shown in Figure 3. For example, ambiguity is found for sequence 2 (DMSPARC) at positions 0 and 3 of the window, with the window placed at the 18th base from the start (the first base on the left).
The examples above show that ambiguities are frequent enough to be of concern during multiple sequence alignment, and may result in a suboptimal alignment. This problem can be overcome by using the proposed joint weight matrix for scoring. The proposed scoring matrix allows a closer look at each alignment by considering two or more bases for each scoring element. By comparing two bases at a time, the probability of the next base is conditioned on what appears before it. In fact, there are 16 probabilities for a pair of bases, compared with just 4 probabilities when only one base is considered. This increases the depth of the search and reduces the number of peaks. In the Implementation section it was shown how the second-level JWM can identify the highest peak when a conventional weight matrix could not. This reduces the error that may occur when "conflicts are resolved" by making a "pseudorandom choice" [10].
The higher-level JWM can be used depending on the level of accuracy required. For example, the second-level JWM may reduce the number of peaks from 5 to 3; the randomness is reduced when one chooses the best peak from 3 rather than 5 possible sites. However, if the application requires a match of greater accuracy, a higher-level JWM may be needed, which can filter out further peaks until only one obvious choice is left. Although the higher-level JWM requires more computation time and an additional scan, this may be compensated by faster convergence of results, since a better alignment is selected early in the iterations. This is true especially for cases in which a large number of iterations are required before satisfactory convergence is found [15]. The JWM can be used to improve applications that use the conventional weight matrix system in bioinformatics. Besides aligning DNA sequences, the JWM can also be implemented in protein sequence alignment. | 2014-02-28T14:20:00.000Z | 2012-01-11T00:00:00.000 | {
The higher-level of JWM can be used depending on the level of accuracy required. For example, the second-level JWM may be able to reduce the number of peaks from 5 to 3 . The randomness is reduced when one is choosing the best peak from 3 instead of 5 possible sites. However, if the application requires a level of match to be of greater accuracy, a higher-level of JWM may be needed to proceed. The higher-level of JWM can further filter out more peaks till only one obvious choice is left. Although the higher-level of JWM may require more computation time and additional scan, this may be compensated by the faster convergence of results as a better alignment is selected early in the iterations. This is true especially for cases whereby a large number of iterations are required before a satisfactory convergence can be found [15]. JWM can be used to improve applications using conventional weight matrix system in bioinformatics. Besides aligning DNA sequences, JWM can also be implemented in protein sequence alignment. | 2014-02-28T14:20:00.000Z | 2012-01-11T00:00:00.000 | {
"year": 2014,
"sha1": "88fb36422af94eae15106c9012229205528aded2",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/mpe/2012/490649.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "42dd5c0eec5627f3a78b8bfa3cef694acbbd6d0c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Biology",
"Mathematics"
]
} |
18587155 | pes2o/s2orc | v3-fos-license | Evaluation of a research diagnostic algorithm for DSM-5 neurocognitive disorders in a population-based cohort of older adults
Background There is little information on the application and impact of revised criteria for diagnosing dementia and mild cognitive impairment (MCI), now termed major and mild neurocognitive disorders (NCDs) in the DSM-5. We evaluate a psychometric algorithm for diagnosing DSM-5 NCDs in a community-dwelling sample, and characterize the neuropsychological and functional profile of expert-diagnosed DSM-5 NCDs relative to DSM-IV dementia and International Working Group criteria for MCI. Methods A population-based sample of 1644 adults aged 72–78 years was assessed. Algorithmic diagnostic criteria used detailed neuropsychological data, medical history, longitudinal cognitive performance, and informant interview. Those meeting all criteria for at least one diagnosis had data reviewed by a neurologist (expert diagnosis) who achieved consensus with a psychiatrist for complex cases. Results The algorithm accurately classified DSM-5 major NCD (area under the curve (AUC) = 0.95, 95% confidence interval (CI) 0.92–0.97), DSM-IV dementia (AUC = 0.91, 95% CI 0.85–0.97), DSM-5 mild NCD (AUC = 0.75, 95% CI 0.70–0.80), and MCI (AUC = 0.76, 95% CI 0.72–0.81) when compared to expert diagnosis. Expert diagnosis of dementia using DSM-5 criteria overlapped with 90% of DSM-IV dementia cases, but resulted in a 127% increase in diagnosis relative to DSM-IV. Additional cases had less severe memory, language impairment, and instrumental activities of daily living (IADL) impairments compared to cases meeting DSM-IV criteria for dementia. DSM-5 mild NCD overlapped with 83% of MCI cases and resulted in a 19% increase in diagnosis. These additional cases had a subtly different neurocognitive profile to MCI cases, including poorer social cognition. Conclusion DSM-5 NCD criteria can be operationalized in a psychometric algorithm in a population setting. Expert diagnosis using DSM-5 NCD criteria captured most cases with DSM-IV dementia and MCI in our sample, but included many additional cases suggesting that DSM-5 criteria are broader in their categorization. Electronic supplementary material The online version of this article (doi:10.1186/s13195-017-0246-x) contains supplementary material, which is available to authorized users.
Background
Revised criteria for diagnosing dementia and mild cognitive impairment (MCI), now termed major and mild neurocognitive disorders (NCDs), respectively, in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) [1], have the potential to significantly impact clinical and research settings. Recent reviews [2,3] note the increased clarity and structure in DSM-5 NCD for assessing cognitive impairment, decline, and functional impact when compared to DSM-IV dementia or International Working Group (IWG) criteria for MCI [4]. The clearer criteria and greater emphasis on objective measures mean that the DSM-5 NCD categories should be easier to operationalize in large-scale studies of ageing using a psychometric algorithm. Algorithmic approaches to diagnosing NCDs are particularly valuable in resource-intensive population studies [5] and in settings where there is limited access to biomarkers and clinical services. Globally, most dementia cases occur in such settings [6]. Algorithmic approaches to DSM-IV and DSM-III-R dementia diagnosis have been previously published, with agreement ranging from κ (Cohen's kappa) = 0.63 to 0.84 [5,7,8]. No study has as yet examined the algorithmic diagnosis of DSM-5 NCD. The present study fills this gap.
Given that both major and mild categories of NCD are designed to be age- and etiology-independent syndromes, it is expected that, when applied to older adults, the prevalence estimates would be higher than for the more 'Alzheimer's-centric' DSM-IV dementia category [2,9], whereas MCI criteria [4,10] are much broader and are not age- or Alzheimer's disease (AD)-specific. Field trials of DSM-5 suggested a similar prevalence of DSM-IV dementia and DSM-5 major NCD [11]. However, a number of recent studies [12][13][14] report differences between the DSM-5 and existing diagnostic systems, with one reporting increased prevalence of diagnosis with DSM-5 criteria relative to DSM-IV and MCI [14], and others reporting decreased diagnosis relative to systems such as 10/66 criteria [12], Petersen MCI criteria [13], and IWG-MCI criteria [14,15]. The variance in findings may reflect differences in the diagnostic systems used for comparison, sensitivity of different cognitive batteries, as well as the samples studied (e.g., memory clinic [14], population-based cohort [12,13,15], middle-income nations [12,14]). In the context of these mixed findings, it is important to better understand the implications of applying DSM-5 NCD criteria to existing epidemiological studies with well characterized samples that have been followed longitudinally with neurocognitive diagnoses.
The aims of the present study were twofold. The first aim was methodological and sought to develop and evaluate a psychometric algorithm to assess participant data against criteria for the following diagnoses: DSM-5 major NCD, DSM-5 mild NCD, DSM-IV dementia, and IWG MCI. Algorithmic classification was compared to diagnosis of the same categories by experienced clinicians (expert diagnosis). The second aim was to examine the overlap between expertly diagnosed DSM-5 NCDs, DSM-IV dementia, and MCI, and characterize the groups in terms of their neuropsychological and functional profiles.
Participants
The participants were from the Personality and Total Health Through Life Project (PATH), which has been previously described [16]. Briefly, we recruited participants who were residents of the city of Canberra and the adjacent town of Queanbeyan, Australia. Participants within three narrow age cohorts (20-24, 40-44, and 60-64 years) were sampled randomly from the electoral roll and invited to participate in a study on the risk and protective factors for common mental disorders. Enrolment to vote is compulsory for all Australian citizens. The study protocol was approved by the Australian National University's Human Research Ethics Committee (Protocols: 2009/039; 2009/308; 2012/074; 2006/0314; 2002/0189) and participants provided written informed consent after receiving a complete description of the study. A total of 7485 consented to participate. The present study focuses on the older age cohort, whose sample size at wave 1 (data collection 2001-2002) was 2551 (58.3% of the cohort's random sample). Participants were re-assessed every 4 years on a broad range of sociodemographic, health, lifestyle, and neuropsychological measures. Sample retention has been high at each wave (between 85.4% and 88.8%). This study reports data from the 12-year follow-up of the older cohort, who were aged 72-78 at wave 4 (data collection 2014-2015).
Interview and assessment
Of the 2048 participants contacted for follow-up at wave 4, 116 were deceased, 259 refused, and 14 were not found (Fig. 1). Data were obtained from individual face-to-face or telephone interviews conducted with 1644 participants by trained research personnel, including demographic, general health, anthropometric, physiological, and neurocognitive measures.
Demographics, depression and general health survey
An interviewer-administered survey collected data on the level of education, psychological measures, substance and medication use, psychiatric and medical history, including recent major surgery, activities of daily living, housing, home or personal care, and non-English speaking background. Depressive symptoms were screened using the self-report screen for DSM-IV criteria for depression, the Patient Health Questionnaire (PHQ-9) [17].
Cognitive assessment
A battery of neurocognitive measures was developed to address each of the domains described in the DSM-5 [1] (see Additional file 1: Table S1), and administered by trained research interviewers. Measures were selected on the basis of sensitivity to dementia and age-related cognitive impairment as well as efficiency of administration and scoring. Data on behavioral changes were obtained through the informant interview (see later). Briefly, the following measures were used to assess each of the domains: complex attention (Symbol Digits Modalities Test [18], Trail Making Test A [19], Reaction Time Test [20]); executive function (Digit Span Backwards [21], Trail Making Test B [19], Stroop Color Word Test [22], Zoo Map Test [23], Game of Dice Test [24]); learning and memory (California Verbal Learning Test [25], Benton Visual Retention Test (Administration B) [26]); language (Letter Fluency [19], Boston Naming Test-15 item [27], Spot The Word Test [28]); perceptual motor (Purdue Pegboard [29], Ideomotor Apraxia Test (IAT) [30], Benton Visual Retention Test (Administration C) [26]); social cognition (Reading the Mind in the Eyes [31]). Details on test measures are provided in a supplementary methods section (see Additional file 1). Scores were converted to z scores by normalizing relative to the whole wave 4 PATH sample data, stratified by gender and education (low: 5-10 years; medium: 10-15 years; high: 15+ years). (Fig. 1: Flow of participants through the PATH study and through wave 4. Diagnosis refers to DSM-5 neurocognitive disorders, IWG MCI, and DSM-IV dementia.)
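The following sketch shows one way the stratified z-scoring described above could be implemented; the column names and values are hypothetical placeholders, not the PATH variables:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "gender": rng.choice(["F", "M"], 200),
    "edu_band": rng.choice(["low", "medium", "high"], 200),
    "cvlt_total": rng.normal(45, 10, 200),   # hypothetical raw test score
})
# z score each raw score against the mean and SD of its gender x education stratum
grp = df.groupby(["gender", "edu_band"])["cvlt_total"]
df["cvlt_z"] = (df["cvlt_total"] - grp.transform("mean")) / grp.transform("std")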
Screen 1
The data for the 1644 participants assessed at wave 4 were screened for signs of decline based on the criteria detailed in Additional file 1. Briefly, this included either a previous PATH diagnosis of dementia or a mild cognitive disorder, or evidence of current objective cognitive impairment (based on performance ≤6.7th percentile on at least one cognitive measure, or Mini-Mental Status Examination (MMSE) ≤24), and evidence of subjective decline on the Memory and Cognition Questionnaire (MAC-Q) [32] or decline on the MMSE of >3 points since wave 3, or consistent MMSE ≤24 at waves 3 and 4. Of the participants meeting criteria for any of the above (n = 623), the majority (n = 426) had a detailed informant interview. Of the remaining 1021 participants not meeting the criteria, most (n = 746) received a basic informant interview (Fig. 1).
Informant interview
Participants (n = 1438) consented to have an informant (spouse, friend, neighbor or relative) interviewed by telephone regarding the participant's changes in cognition and activities of daily life. The basic informant interview comprised the Bayer instrumental activities of daily living (IADL) questionnaire [33] and the Informant Questionnaire of Cognitive Decline in the Elderly 16-item Short Version (IQCODE) [34]. The detailed informant interview comprised the Bayer IADL, IQCODE, Dysexecutive Questionnaire (DEX-Q) [23], and Neuropsychiatric Inventory (NPI) [35], as well as questions on medical history (Parkinson's disease, Alzheimer's disease, other dementia, stroke, psychiatric diagnoses, memory complaints), recent behavior including symptoms of delirium, psychosis, hallucinations, alertness and physical function, sensory or motor loss, and onset and progression of cognitive difficulties. The DEX-Q [23] collected data on executive difficulties affecting social and daily activity. The NPI [35] collected data on non-cognitive symptoms of MCI and dementia.
Psychometric algorithm
Those identified by screen 1 (n = 623) had all interview and informant data entered into a case file spreadsheet. To minimize effects of non-response bias, case files with missing informant data (n = 59) were also screened by the algorithm. The algorithm combined the neurocognitive assessment data with the informant and survey data on medical history to operationalize criteria (criterion met/not met) for each diagnostic category: DSM-5 major NCD, mild NCD, DSM-IV dementia, and MCI (see Tables 1 and 2). Details of the neuropsychological battery are provided in Additional file 1. Cognitive scores were standardized relative to the gender- and education-stratified norms (from the whole PATH 60s sample at wave 4) and converted to z scores. Severe cognitive impairment was defined as a z score < -2.0. Given a lack of consensus in the literature regarding appropriate cutoffs for defining mild cognitive impairment, separate algorithmic categories were created using z score > -2.0 and ≤ -1.0, and > -2.0 and ≤ -1.5. In addition to the diagnostic categories of interest to the current study, the algorithm also classified participants according to other categories (e.g., age-associated memory impairment [36], age-associated cognitive decline [37], DSM-IV mild NCD, etc.). Participants not meeting criteria for any diagnostic category were classified as "normal". Those meeting criteria for at least one diagnosis (n = 368) had their data reviewed by the research neurologist (Fig. 1).
Expert diagnosis and consensus
Case files (n = 368) were reviewed by an experienced research neurologist (CM); these included neuropsychological test data, informant data, structural brain magnetic resonance imaging (MRI) scans to aid differential diagnosis of dementia subtypes (n = 54), a self-reported medication list, and contact details of the participant for further clarification of details relevant to diagnosis (n = 21). The neurologist based her decisions on all available data, guided by the DSM-5 NCD, DSM-IV, and MCI diagnostic criteria, and used clinical judgement to determine whether each criterion was supported by the data. Inter-rater reliability with an experienced psychiatrist (RK) independently reviewing a subsample of 29 cases indicated high agreement for dementia (DSM-IV and DSM-5 major NCD: κ = 0.79, 95% confidence interval (CI) 0.54-1.0, p < 0.01), and moderate agreement for mild cognitive disorders (MCI and DSM-5 mild NCD: κ = 0.47, 95% CI 0.13-0.73, p < 0.01), which are within the ranges reported in field trials [7,11,38].
Further to estimating inter-rater reliability, consensus diagnosis was conducted by the two physicians and a neuropsychologist (RE) on complex cases identified as meeting at least one of the following criteria: (1) comorbid depression (moderate to severe on PHQ-9); (2) other comorbid psychiatric conditions; (3) stroke; (4) dementia or DSM-5 major NCD without memory impairment. A total of n = 60 met the above criteria and diagnoses were reviewed for consensus.
Statistical analysis
To evaluate the accuracy of algorithmic classification relative to the expert diagnoses, we used the binary algorithmic criteria (equally weighted) as predictors of expert diagnosis in logistic regression models, saving the model predicted probabilities. We then conducted receiver operating characteristic (ROC) analyses of each probability variable against the corresponding binary diagnosis variable. Cross-tabulation and kappa (κ) statistics were used to evaluate agreement between algorithmic and expert diagnosis, with bootstrapping of 1000 samples to estimate 95% CIs on the kappa. Overlap between the different diagnostic criteria when used by clinicians was examined using crosstabs. Generalized linear models (GLM) were used to examine mean differences in each cognitive domain between diagnostic groups identified by the clinicians.
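A compact sketch of this evaluation pipeline is shown below, using synthetic placeholder data rather than the PATH data; the number of criteria, sample size and decision threshold are assumptions for illustration:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(300, 6))        # binary algorithmic criteria (0/1)
y = (X.sum(axis=1) + rng.normal(0, 1, 300) > 4).astype(int)  # expert diagnosis

# Binary criteria entered as predictors; save model-predicted probabilities
prob = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(y, prob))        # ROC of probabilities vs diagnosis

algo = (prob > 0.5).astype(int)              # algorithmic classification
kappas = [cohen_kappa_score(y[idx], algo[idx])
          for idx in (rng.integers(0, len(y), len(y)) for _ in range(1000))]
print("kappa 95% CI:", np.percentile(kappas, [2.5, 97.5]))  # 1000 bootstraps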
Predictive value of individual algorithmic criteria for identifying algorithm and expert diagnosis
Positive (PPV) and negative predictive values (NPV) of individual criteria (see Additional file 1: Table S2) are presented as functions of source of diagnosis (i.e., algorithm or expert). Predictive values were obtained using crosstabs of observed frequencies of those meeting each criterion against those achieving diagnosis. In general, the pattern of PPV for individual criteria was similar for algorithmic and expert diagnosis.
Overlap between expert diagnosed DSM-5 NCDs and DSM-IV dementia and MCI
Cross-tabulation of expert-diagnosed DSM-5 major NCD against DSM-IV dementia showed a moderate level of overlap (κ = 0.49, standard error (SE) = 0.06, p < 0.001) (Table 3). Of the 30 cases meeting criteria for DSM-IV dementia, 27 (90%) also met criteria for DSM-5 major NCD. The three cases meeting DSM-IV dementia but not DSM-5 major NCD all received AD etiological specifiers and met criteria for DSM-5 mild NCD. The DSM-5 identified 41 additional cases as dementia, representing a 127% increase in dementia diagnosis in the sample relative to DSM-IV, and a high positive predictive value (PPV = 0.88; NPV = 0.90). These additional cases included a few with vascular, fronto-temporal, and Parkinson's specifiers. They also had a higher rate of previous diagnoses (36.6%) relative to cases without any expert-diagnosed dementia (3.4%) (p < 0.001), and a similar rate to those meeting criteria for both DSM-5 and DSM-IV dementia diagnoses (40%) (p > 0.05). Cases qualifying for both DSM-5 major NCD and DSM-IV dementia were also more likely to carry at least one APOE e4 allele (55.2%) compared with those meeting only the DSM-5 major NCD diagnosis (14.6%) (p < 0.001), with the latter being statistically no different from the APOE e4 allele frequency in cognitively normal participants (25.8%) (p > 0.05).
There was a moderate level of overlap (κ = 0.58, SE = 0.04) between DSM-5 mild NCD and MCI diagnosis. Of the 144 cases qualifying for MCI, 119 (82.6%) were also given DSM-5 mild NCD diagnosis. The 25 MCI cases missed by DSM-5 mild NCD did not qualify for a diagnosis of DSM-5 major NCD or any other diagnostic category. They were mostly of the amnestic multi-domain (n = 9) and non-amnestic single domain (n = 9) subtypes. An additional 52 cases also received mild NCD diagnosis, representing an overall 19% increase in mild cognitive disorder diagnoses in our sample (PPV = 0.78; NPV = 0.82).
Characterization of neuropsychological profiles as a function of expert diagnosis overlap
A series of GLMs compared neurocognitive profiles as a function of diagnosis. GLM analysis revealed that cases diagnosed with only DSM-5 major NCD had significantly better language (p < 0.01), memory encoding (p < 0.001), and IADL function (p < 0.05) compared with cases that also met DSM-IV dementia criteria (Fig. 3a). Figure 3b presents neuropsychological profiles as a function of DSM-5 mild NCD and MCI. (Table 3: Overlap between expert diagnoses using DSM-5 criteria and DSM-IV for dementia and MCI.)
Algorithm accuracy
We report the first algorithmic approach to classifying DSM-5 NCDs. The algorithm used had good accuracy when classifying major NCD (κ = 0.72, AUC = 0.95) and DSM-IV dementia (κ = 0.64, AUC = 0.91) and was reasonably accurate when classifying MCI (κ = 0.42, AUC = 0.75) and mild NCD (κ = 0.43, AUC = 0.76). The findings indicate that a psychometric algorithm is capable of predicting clinical diagnosis in a population-based sample of older adults, and is consistent with previous work suggesting better algorithmic prediction of more severe diagnoses compared to milder diagnoses [5,7]. Our findings also support field trials of the DSM-5 NCD [11] which found that the reliability of mild NCD was generally lower and less consistent than that of major NCD, which was very good. The algorithm for DSM-5 criteria produced slightly more accurate prediction of expert diagnosis compared to DSM-IV dementia criteria or IWG MCI criteria, supporting our hypothesis that the clearer, more structured DSM-5 criteria may be easier to operationalize. Agreement between algorithmic and expert diagnosis ranged between κ = 0.42 and κ = 0.72, consistent with previously published algorithms [5,7,8].
We also found that the cognitive cut-off used to define mild impairment (either 1.0 or 1.5 SD below the mean) had minimal impact on the rate of either DSM-5 mild NCD or IWG MCI diagnosis.
The individual diagnostic criteria that were predictive of expert-diagnosed major NCD and DSM-IV dementia were similarly predictive of algorithm-defined major NCD and dementia, with cognitive impairment and IADL impact having the highest PPV. Individual criteria were less predictive for the mild diagnoses, but those with highest PPVs included cognitive impairment, subjective concern, and exclusion of dementia (in the case of MCI). The lower predictive value of algorithmic criteria for delirium and other disorders for expert diagnoses suggest greater reliance on clinical judgement when determining their likely impact.
DSM-5 overlap with DSM-IV and MCI, and comparison of neurocognitive profiles
We also found that expert diagnosis of dementia according to DSM-5 had excellent overlap with DSM-IV (90%); however, a large number of additional cases were identified by DSM-5, resulting in a 127% increase in diagnosis. This confirms the findings of Tay et al. [14] in a memory clinic sample (n = 234), where DSM-5 major NCD criteria captured all cases of DSM-IV dementia, but with 39.7% additional cases. These additional cases, however, had a similar rate of previous diagnoses (either MCI or dementia) to cases meeting only DSM-IV dementia, and a significantly higher rate than those without dementia, suggesting that the more inclusive criteria captured additional cases with similarly chronic deficits.
Aside from the different populations, our higher rate of additional diagnosis may reflect our use of more detailed neurocognitive measurement, detailed informant report, and inclusion of etiological specifiers and structural MRI evidence. In the absence of sufficient data on the degree of impairment or biological evidence of change, cases not meeting DSM-IV dementia are more likely to be labeled as mild. While Tay et al. [14] labeled as MCI most of those who were DSM-5 major NCD but not DSM-IV dementia, none of our additional DSM-5 major NCD cases met criteria for MCI. Instead, they were more likely to receive a vascular specifier, or a frontotemporal or Parkinson's dementia diagnosis. Although memory impairment was less severe in the group with only DSM-5 major NCD, the relative severity of impairment in other cognitive domains, as well as the reported impact on IADLs, show that this group should be considered as dementia. Thus, our findings suggest that additional dementia cases identified by DSM-5 are not necessarily at a milder stage but present with a different neuropsychological profile, and possibly different etiologies, compared with cases meeting dementia criteria under both DSM-5 and DSM-IV, where the pattern of impairment and APOE e4 allele distribution is more supportive of AD. Future research including additional biomarkers will enable evaluation of this finding.
Although the mild NCD criteria were not developed as an explicit replacement for IWG MCI, in the context of ageing-associated progressive NCDs, clinicians may consider them as an alternative. Accordingly, diagnosis of DSM-5 mild NCD was highly sensitive to MCI (83%) and showed moderate agreement with MCI diagnosis (κ = 0.58), albeit with an overall 19% increase in the rate of diagnosis. This contrasts with Tay et al. [14], who reported a decrease of 54% using DSM-5 mild NCD criteria and attributed this to difficulties defining the level of IADL impairment appropriate for mild NCD. Population-based samples are more likely to contain individuals with very little functional impairment but sufficient cognitive deficits and decline to warrant a mild NCD diagnosis.
Luck et al. [15] reported a much higher agreement between MCI and DSM-5 mild NCD, but assessed each neurocognitive domain with a single test. Our use of a range of tests, averaging performance across each domain, is likely more sensitive to true impairment but also more variable. In fact, in our sample, 17.4% of MCI cases failed to be captured by DSM-5, and there were differences in neuropsychological profile, such that cases meeting only DSM-5 mild criteria had poorer social cognition and memory, supporting previous findings [15], but better performance on planning and decision-making. This suggests that the inclusion of a greater range of neurocognitive domains in DSM-5, and particularly the inclusion of social cognition as a criterion, may help capture impaired individuals not detected by MCI criteria. Follow-up studies are required to examine the progression and predictive value of these cases.
Our study is limited by expert diagnosis based on case file review rather than clinical interview; however, this meant that our clinical diagnoses were based on the same data as those operationalized in the algorithm. Nevertheless, further work is required to validate these findings in independent data sets. Strengths include the large, population-based sample, detailed neurocognitive assessment, comparison of different cognitive cut-offs, and a systematic approach to collecting and analyzing evidence for impairment. The findings suggest that clinicians, trialists, and epidemiologists using the DSM-5 criteria should expect higher estimates of disease prevalence and incidence, and the ability to capture a broader range of etiologies and severities compared to DSM-IV and MCI. The findings also suggest that while MCI and mild NCD do overlap, MCI is not fully captured within the mild NCD construct. A similar pattern may be apparent for the forthcoming ICD-11 criteria if it adopts an approach analogous to DSM-5 [39].
Conclusions
In summary, an algorithm-based approach to DSM-5 diagnosis of NCD is feasible in cohort studies. This approach is more accurate at identifying major NCD than mild NCD. DSM-5 is more inclusive of the variety of clinical profiles of major NCD, resulting in higher rates of diagnosis but with good negative predictive power. The findings have implications for understanding the impact on rates of diagnosis when using the revised diagnoses. | 2017-08-01T07:57:04.236Z | 2017-03-04T00:00:00.000 | {
"year": 2017,
"sha1": "59bcfd89c0aa609d0d8e35b720653233da6553f8",
"oa_license": "CCBY",
"oa_url": "https://alzres.biomedcentral.com/track/pdf/10.1186/s13195-017-0246-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "af7a6452aeeddcd4146810b033fe4aab2a20fa22",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
238864816 | pes2o/s2orc | v3-fos-license | pH-driven enhancement of anti-tubercular drug loading on iron oxide nanoparticles for drug delivery in macrophages
Nanoparticle deployment in drug delivery is contingent upon controlled drug loading and a desired release profile, with simultaneous biocompatibility and cellular targeting. Iron oxide nanoparticles (IONPs), being biocompatible, are used as drug carriers. However, to prevent aggregation of bare IONPs, they are coated with stabilizing agents. We hypothesize that zwitterionic drugs like norfloxacin (NOR, a fluoroquinolone) can manifest dual functionality, nanoparticle stabilization and antibiotic activity, eliminating the need for a separate stabilizing agent. Since these drugs carry different charges depending on the surrounding pH, drug loading enhancement could be pH dependent. Hence, after synthesizing IONPs, we coated them with NOR either at pH 5 (predominantly as the cation, NOR+) or at pH 10 (predominantly as the anion, NOR−). We observed that drug loading at pH 5 exceeded that at pH 10 by 4.7-5.7 times. Furthermore, only the former (pH 5) system exhibited a desirable, slower drug release profile compared with the free drug. NOR-coated IONPs also enable 22 times higher drug accumulation in macrophages compared with identical extracellular concentrations of the free drug. Thus, lowering the drug coating pH to 5 imparts multiple benefits: improved IONP stability, enhanced drug coating, higher drug uptake in macrophages at reduced toxicity, and slower drug release.
Introduction
Nanoparticles have taken center stage in drug delivery applications, where they can improve drug pharmacokinetics and pharmacodynamics and may also increase drug accumulation in both animal cells and bacteria, proving beneficial for overcoming drug resistance [1,2]. Iron oxide nanoparticles (IONPs), owing to their biocompatibility and magnetic properties, have found applications in drug delivery, magnetic resonance imaging, and treatment of iron deficiencies [3][4][5][6]. Their hyperthermia capability has been found beneficial for localized drug release, particularly in cancer therapy [7]. In anti-cancer therapy, IONPs have also proven beneficial for overcoming multidrug resistance by enabling increased drug uptake [8]. Similarly, the physical combination of stabilized IONPs and anti-tuberculosis drugs improves intracellular drug accumulation through efflux pump inhibition [9] or through enhanced membrane permeabilization [10], in turn also improving the bactericidal activity of the drug. For biological applications, it is essential that IONPs be stabilized with stabilizing agents [11,12]. This helps to reduce nanoparticle toxicity and facilitates the synthesis of stable nanoparticle dispersions with reduced size or aggregation [11,13,14]. In this regard, the use of drugs as both stabilizing agents and antibiotics could prove beneficial. To this end, an understanding of drug-nanoparticle interactions can enable the identification of key parameters for optimal drug loading and drug release, too [15].
Fluoroquinolones, a class of broad-spectrum DNA gyrase inhibiting antibiotics, are used as therapeutics against many intracellular pathogens [16][17][18][19][20]. Recently, they have been explored for their activity as anti-TB drugs [16,17]. Currently, they are used as second-line anti-TB drugs against mycobacteria that are found to be resistant to rifampicin and/or isoniazid, the first-line drugs [17]. Although at present moxifloxacin, ofloxacin and levofloxacin are the major fluoroquinolones used in tuberculosis therapeutics [21], many clinical isolates of M. tuberculosis have shown resistance towards these drugs, too. The use of iron oxide nanoparticles as drug delivery agents could assist the uptake of drugs and thus overcome drug resistance [8][9][10]. Fluoroquinolones are known to form complexes with metal ions through bidentate or unidentate co-ordination bonds [22]. Thus, IONPs, through their surface Fe 2+/3+ moieties, could exhibit significant drug loading and therefore have potential as fluoroquinolone delivery agents.
Norfloxacin (NOR), the most basic fluoroquinolone [23], also exhibits anti-mycobacterial activity [24]. Chemically, it consists of a quinolone carboxylic acid with a fluorine atom and a piperazine ring [25]. It exhibits a zwitterionic nature, with pKa1 and pKa2 values of 6.2 and 8.5, respectively [26]. Zwitterionic molecules like amino acids and amphoteric hydroxy compounds adsorb onto iron oxide nanoparticles predominantly via electrostatic interaction [27,28]. Furthermore, their interaction with IONPs may occur via carboxylate groups, amine groups or neither [27]. pH variations can thus play a key role in promoting interactions between amino acids and metal oxide surfaces [29]. NOR has also been reported to form stable complexes with Fe2+/3+ [22]. It was also observed in our previous study that IONPs can be loaded with NOR in the absence of stabilizing agents. Drug loading in our previous study was carried out without monitoring the pH and was observed to be just 17% [30]. This low drug loading limits the therapeutic applicability of the nanoparticles at high drug concentrations due to toxicity concerns. Thus, we hypothesized that the overall zwitterionic and chelating properties could enable a pH-dependent enhanced loading of NOR onto IONPs, independent of any additional stabilizing agent in the formulation, which would in turn permit their application at therapeutically relevant drug concentrations.
In the present work, we used NOR as a model fluoroquinolone and zwitterionic drug to explore its interaction with IONPs and to pursue any potential improvement in intra-macrophage delivery and drug accumulation. Being a zwitterionic drug, NOR exists in 3 forms: NOR+ (at pH < 6.2), NOR± (around pH 7) and NOR- (at pH > 8.5). We believe that altering the coating pH would alter the attraction of the drug to the IONPs, in turn affecting drug loading on the IONPs. Thus, we selected an acidic pH of 5 and an alkaline pH of 10. To further test the drug delivery capacity of the particles, we investigated drug and nanoparticle uptake in macrophage cells in vitro, as macrophages are the primary site of infection for many intracellular pathogens, including Mycobacterium [31].
Results and Discussion
Iron oxide nanoparticles were successfully synthesized, as indicated by the appearance of a black coloration upon the addition of ammonia solution (Figure 1). NOR was loaded onto the IONPs at pH 5 or pH 10, and the nanoparticles were subsequently analyzed. It was noted that the efficacy of NOR against Mycobacterium smegmatis remained unaffected by the change of pH to either 5 or 10 (Supporting Information File 1, Table S1).
Characterization of uncoated iron oxide nanoparticles and NOR
Uncoated iron oxide nanoparticles (UIONPs) exhibited a hydrodynamic diameter (from DLS) greater than 1000 nm, which was due to the aggregation of ≈10 nm individual UIONPs, as observed by TEM (Figure 2a,b). The size distribution of these particles was also broad (FWHM of 670.94 nm), resulting from the variation in particle aggregate sizes. The XRD pattern obtained for the synthesized particles was in accordance with the pattern in the XRD database for iron oxide (COD: 9013529) [32] (Figure 2c). The zeta potential of the UIONPs was found to depend on the pH of the dispersion medium, varying from positive to negative as the pH was changed from acidic to alkaline (Figure 2d). The standard deviations of the zeta potentials at pH 8 and 9 were negligible and therefore not discernible in Figure 2d. This pH dependence is due to the interaction of water molecules with the Fe ions on the surface of the IONPs, which in turn facilitates protonation and deprotonation with varying pH [33]. Characteristic iron oxide and NOR peaks were observed in their respective FTIR spectra, in accordance with reported FTIR peaks [34], and were used as a reference for comparison with the coated samples (Figure 2e,f). The O-H vibrations present in the iron oxide nanoparticles possibly arise from the association of oxygen from the aqueous solution with Fe present on the surface of the nanoparticles. Such Fe-OH associations are often found on iron oxide nanoparticles due to their high reactivity [28,35]. Characteristic peaks for NOR observed at 1258 cm-1, 1615 cm-1, 1734 cm-1, 2852 cm-1 and 3418 cm-1 in Figure 2f are indicative of COOH, NH (quinolone), C=O, CH and NH (piperazine) vibrations, respectively [36].
Characterization of NOR-coated IONPs, coated at pH 5
NOR@IONP coated at pH 5 (NOR@IONP_pH5) exhibited a size distribution of 45 to 110 nm (Figure 3a), which was confirmed by TEM to consist of aggregates of 10-12 nm individual nanoparticles (Figure 3b), clearly indicating a reduced aggregate size in comparison to the UIONPs; this was also indicated by the reduced FWHM of 40.72 nm. The XRD pattern confirmed the presence of iron oxide nanoparticles (Figure 3c). The FTIR spectrum of NOR@IONP_pH5 indicated the presence of Fe-O stretching from the IONPs, along with characteristic peaks of amines: Fe-O stretching, C-F stretching, N-H bending of quinolones and N-H stretching of piperazinyl were observed at 550-650, 1000-1050, 1650 and 3300-3500 cm-1, respectively, indicating the presence of NOR (Figure 3d) [36][37][38]. It was noted that the N-H stretching of piperazinyl and O-H lie within the same wavenumber range; thus, the peak observed at 3422.9 cm-1 could be attributed to both NH from NOR and OH from the surface Fe-OH groups present in the sample. Fluoroquinolones are believed to chelate Fe ions through their carboxylate and amine groups [22]. The FTIR peak shifts observed at 1258 cm-1 (COOH stretching) and 1615.6 cm-1 (NH bending) to higher wavenumbers could possibly be due to such interactions of NOR with the Fe2+/3+ existing on the surface of the iron oxide nanoparticles [39,40]. This was also confirmed by the disappearance of the C=O stretching vibration at 1734.5 cm-1 (Supporting Information File 1, Figure S1). Such peak shifts are also observed when NOR interacts with metal oxides like NiO [41]. The zeta potential of these particles was found to be +29 mV, validating the loading of the drug and indicating nanoparticle stability at neutral pH.
Characterization of NOR-coated iron oxide nanoparticles coated at pH 10
NOR@IONP coated at pH 10 (NOR@IONP_pH10) had a size distribution ranging from 25 to 120 nm, as examined through DLS (Figure 4a). TEM further confirmed these to be aggregates of 10-13 nm sized individual nanoparticles (Figure 4b).
The FWHM was observed to be 46.97 nm, exhibiting a narrower size distribution in comparison to the UIONPs but a slightly broader one in comparison to NOR@IONP_pH5. Thus, coating at pH 10 also promoted a reduction in the nanoparticle aggregate size. A slight shoulder in the hydrodynamic size distribution at 32-40 nm is not a true peak, and the distribution is unimodal. The hydrodynamic and TEM data, however, do indicate the presence of a fraction of smaller-sized particles in comparison even to NOR@IONP_pH5. This could be due to a greater charge on the nanoparticles during drug coating, which possibly reduces particle aggregation. The XRD pattern of NOR@IONP_pH10 was indicative of the presence of iron oxide nanoparticles (Figure 4c). Further confirmation of iron oxide was found through the Fe-O stretching observed at 590 cm-1 in the FTIR spectrum [37,38]. The FTIR spectrum of the sample indicates either the absence of NOR or its presence in very minute quantities (Figure 4d). A shift observed in the FTIR peak for OH stretching from 3420 cm-1 to 3440 cm-1 (Supporting Information File 1, Figure S2) could be a result of changes in the intermolecular H bonding, whereby we believe that a fraction of the hydrogen bonding in NOR@IONP_pH10 could occur through the amine groups present in NOR and the negatively charged IONPs at pH 10 during drug coating. The low drug loading could be due to the occurrence of negative species of both IONP and NOR at pH 10, resulting in a large electrostatic repulsion. The zeta potential of these particles was indeed found to be -16.5 mV, supporting this.
Comparing the different nanoparticles synthesized, i.e., UIONPs, NOR@IONP_pH5 and NOR@IONP_pH10, we observed that the drug-coated IONPs have a much lower aggregate size with a reduced hydrodynamic size distribution (Table 1). The NOR@IONPs also carry a surface charge at neutral pH, indicating that these particles indeed have improved stability, possibly due to the coating of the drug. NOR@IONP_pH5 also exhibits a higher drug loading.
It was noted that both NOR@IONP_pH5 and NOR@IONP_pH10 had a 2-3 nm larger individual particle size in comparison to the NOR@IONPs synthesized in our previous study. However, the hydrodynamic diameter in this work is smaller, with the majority of the particles lying between 70-80 nm and 40-70 nm for NOR@IONP_pH5 and NOR@IONP_pH10, respectively. Additionally, the zeta potential resembled that of NOR@IONP_pH5, but the drug loading achieved was 3 times lower than that achieved in the NOR@IONP_pH5 system of this study (Table 2). Thus, NOR@IONP_pH5 resembled the NOR@IONPs from our previous study in surface potential but differed in size.
Drug release and coating estimation
Intracellular pathogens are often contained in vesicles within phagocytic cells like macrophages. These vesicles are known to present an acidic environment [42], while the normal physiological pH of the cell remains neutral [43]. Thus, nanoparticles used in drug delivery to macrophages would experience both neutral (pH 7.4) and acidic (pH 5) conditions. To investigate any alterations in the drug release profile from the NOR@IONPs based on these cellular pH variations, drug release was monitored over 48 h in phosphate-buffered saline (PBS) release media maintained at either pH 5 or pH 7.4. We observed that the release kinetics and saturation amounts were identical at both release-medium pH values (5 and 7.4), irrespective of whether NOR@IONP_pH5 or NOR@IONP_pH10 was used (Figure 5). The free drug exhibited a rapid release profile that saturated by 4 h. NOR@IONP_pH5 displayed an initial burst release up to 1-2 h, after which a slow release of NOR was observed. This initial burst release could arise from weakly bound NOR on the surface of the nanoparticles, while the subsequent slow release could arise from more strongly interacting NOR on the nanoparticles (Figure 5a, inset). The drug release profile of NOR@IONP_pH10 resembled that of free NOR, where the release was rapid over the first 4-6 h and saturated thereafter (Figure 5b, inset). The release of NOR from metal oxides such as NiO is found to follow first-order rate kinetics; thus, we too fitted our drug release plots to a first-order model [41]. The drug release rate constants were calculated to be 1.3, 0.7 and 1.1% h-1 for free NOR, NOR@IONP_pH5 and NOR@IONP_pH10, respectively (Supporting Information File 1, Figure S3a), confirming a slower drug release rate for NOR@IONP_pH5. This release profile was also in accordance with the previously synthesized NOR@IONPs, where NOR was released rapidly over the first 3-4 h but slowly and in a sustained manner after 4 h [30]. Additionally, the rate constant obtained for the previously synthesized (non-pH-characterized) NOR@IONPs was estimated to be 0.47% h-1, which is slightly reduced in comparison to the NOR@IONP_pH5 system as well (Supporting Information File 1, Figure S3c). Thus, NOR@IONP_pH5 enables a slow drug release, although the rate of release is marginally greater than that of the NOR@IONPs in our previous work.
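To illustrate how such rate constants can be extracted, the sketch below fits synthetic data to the first-order form M(t) = M_inf(1 - exp(-kt)) with SciPy; the time points, plateau and noise are invented for the example and are not the study's raw data.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, m_inf, k):
    """Cumulative release (%) for a first-order release model."""
    return m_inf * (1.0 - np.exp(-k * t))

# Synthetic release data (time in h, cumulative release in %), illustration only
t = np.array([0.5, 1, 2, 4, 8, 12, 24, 48])
release = first_order(t, 92.5, 0.12) + np.random.default_rng(0).normal(0, 1.0, t.size)

(m_inf, k), _ = curve_fit(first_order, t, release, p0=(90.0, 0.1))
print(f"fitted plateau = {m_inf:.1f} %, rate constant k = {k:.3f} 1/h")
```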
The estimated drug coatings for NOR@IONP_pH5 and NOR@IONP_pH10 were 50.2 ± 7.4 (mean ± standard deviation) and 6.5 ± 2.1 µg per mg of nanoparticle, respectively, confirming that a greater drug-nanoparticle attraction occurs at an acidic coating pH of 5, which in turn enhances drug loading on the IONPs even in the absence of any extraneous linker molecules. The low drug loading at a coating pH of 10 is corroborated by the absence of any prominent NOR spectral peaks in the FTIR spectrum of NOR@IONP_pH10, as described earlier (Figure 4d). It was also noted that the drug coating achieved through acidification of the medium during drug loading is 3 times greater than that determined for the NOR@IONPs synthesized in our previous study, where the drug coating was estimated to be 17.13 µg/mg of nanoparticle [30].
At pH 5, we know from the zeta potential that IONPs express a positive charge (Figure 2d). Therefore, an electrostatic interaction would occur between the IONP and the negatively charged carboxylate group of NOR, as the carboxylate would be present on the zwitterionic NOR± molecule at this acidic pH. The percentage of the zwitterionic NOR± form, estimated through the Henderson-Hasselbalch equation, is only 5.9% (Supporting Information File 1, Table S2) [29,44,45], with the positively charged NOR+ form making up the remaining ~94%. The IONP cannot electrostatically interact with NOR+ at pH 5, so this fraction would not be expected to coat the IONP. In spite of this, however, the coating efficiency achieved at pH 5 ranged as high as 43-51%, which is much greater than the percentage of NOR± at this pH (Supporting Information File 1, Table S3). This is possible because NOR contains other electronegative groups, like fluoride, and π-electron-rich regions (the quinolone ring and ketone), which can interact with the positively charged IONP at pH 5. This in turn has resulted in the observed enhanced drug coating efficiency of 43-51% (Supporting Information File 1, Figure S5) [28].
On the contrary, at pH 10, as per the zeta potential, the IONP has a negative charge and can only interact with the positive part of the zwitterionic NOR±. However, the percentage of NOR± is only 3.1% at pH 10 (Supporting Information File 1, Table S2). In fact, the coating efficiency also ranges from 4.9-9.2% (Supporting Information File 1, Table S3) [29,44], which is statistically identical to the 3.1% zwitterionic fraction of NOR (NOR±) at pH 10. This again indicates that, at pH 10, the drug-nanoparticle interaction occurs only through the electropositive (-NH) group present in NOR± (Supporting Information File 1, Figure S5). Furthermore, as NOR does not contain any additional electropositive groups in its structure, no further enhancement in drug coating can be achieved at pH 10 (unlike at pH 5); it remains low, as measured in the experiments.

[Figure legend: The black columns depict free drug treatment, blue diagonal patterned columns depict treatment via NOR@IONP_pH5 and red checked columns depict treatment with NOR@IONP_pH10. Statistical significance was estimated using the Student's t-test, where "***" represents α ≤ 0.001. All data are plotted as the mean and standard deviation of 3 biological replicates.]
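To make the speciation argument concrete, the fractions of NOR+, NOR± and NOR- at a given pH can be estimated from the Henderson-Hasselbalch relation using the reported pKa values (6.2 and 8.5). The sketch below is illustrative and assumes a simple diprotic model with independent ionizations; it reproduces the ~5.9% (pH 5) and ~3.1% (pH 10) zwitterionic fractions quoted above.

```python
def nor_fractions(pH, pKa1=6.2, pKa2=8.5):
    """Estimate fractions of NOR+ (protonated), NOR± (zwitterion) and NOR-
    (deprotonated) from a simple diprotic Henderson-Hasselbalch model."""
    r1 = 10 ** (pH - pKa1)               # [NOR±] / [NOR+]
    r2 = 10 ** (2 * pH - pKa1 - pKa2)    # [NOR-] / [NOR+]
    denom = 1.0 + r1 + r2
    return 1.0 / denom, r1 / denom, r2 / denom

for pH in (5.0, 10.0):
    plus, zwit, minus = nor_fractions(pH)
    print(f"pH {pH}: NOR+ {plus:.1%}, NOR± {zwit:.1%}, NOR- {minus:.1%}")
# pH 5.0:  NOR+ ~94.1%, NOR± ~5.9%,  NOR- ~0.0%
# pH 10.0: NOR+ ~0.0%,  NOR± ~3.1%,  NOR- ~96.9%
```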
Drug delivery application of NOR@IONPs
The efficacious concentration of NOR (in RPMI medium supplemented with 10% fetal bovine serum (FBS)) against extracellular M. smegmatis is 8 µg/mL (Supporting Information File 1, Figure S6). To enable intra-macrophage bacterial clearance, however, a higher NOR concentration would be required.
Therefore, to investigate the use of nanoparticles for drug delivery in macrophage cells, a NOR concentration exceeding 8 µg/mL (i.e., 32 µg/mL) was selected. When treated with an extracellular NOR concentration of 32 µg/mL, the NOR uptake in macrophage cells was found to be 0.3 pg/cell over 48 h. Drug delivery via NOR@IONP_pH5 and NOR@IONP_pH10 at an extracellular NOR concentration of 32 µg/mL resulted in increased uptakes of 7 and 12 pg/cell, respectively, under identical treatment conditions (Figure 6a). The increased uptake could be due to active engulfment of the nanoparticles by macrophages, owing to their larger size (20-120 nm range) and surface charge (+29 mV or -16.5 mV) [46], which results in the simultaneous internalization of larger amounts of drug. Figure 6b shows that the nanoparticles are also taken up efficiently by the macrophages. Additionally, the higher particle uptake for NOR@IONP_pH10 is entirely a consequence of the larger amount of nanoparticles required to achieve identical drug concentrations, a result of its low drug loading. To achieve an extracellular NOR concentration of 32 µg/mL, the nanoparticle concentrations of NOR@IONP_pH5 and NOR@IONP_pH10 were 0.625 and 3.5 mg/mL, respectively (Figure 6b). Thus, via the NOR@IONP_pH5 nanoparticle system, a 22-fold increase in the intra-macrophage NOR concentration was achieved, even at a lower nanoparticle concentration. This drug uptake also exceeds that observed in our previous work, where the NOR@IONPs enhanced drug uptake only 7-fold. Furthermore, the relative viabilities for NOR, NOR@IONP_pH5 and NOR@IONP_pH10 at a 32 µg/mL drug concentration were found to be 102.1, 107.5 and 30.1%, respectively. Thus, both NOR and NOR@IONP_pH5, administered at this concentration, exert no toxicity towards the macrophage cells. This is in accordance with our previous study [30] on macrophages, where we reported that NOR becomes toxic at concentrations greater than 100 µg/mL, while the toxicity of IONPs greatly increases above concentrations of 1 mg/mL. It was also noted that NOR@IONP_pH5 exhibits reduced toxicity in comparison to the NOR@IONPs from our previous study, which exhibited a relative viability of 50% when administered at a NOR concentration of 32 µg/mL (data not shown) [30].
We therefore concluded that the NOR@IONPs enhance drug uptake in comparison to the free drug. In the case of NOR@IONP coated at pH 10, however, the large amount of nanoparticles required to achieve the desired extracellular drug concentration may preclude practical application (Figure 7). In our present study, NOR@IONP_pH5, by contrast, attains the desired extracellular drug levels with a much lower nanoparticle concentration (0.625 mg/mL), at which no toxicity was observed.
Since the increased drug loading (achieved in the case of NOR@IONP_pH5) enables the use of a reduced nanoparticle concentration (to reach the desired extracellular drug concentration), the limitation imposed by nanoparticle toxicity is overcome by the NOR@IONP_pH5 system. Thus, the use of pH 5 for drug loading onto IONPs provides the desirable additional benefit of reduced toxicity towards macrophage cells.
Conclusion
The general inability to increase the intracellular drug concentration in macrophages (having engulfed pathogens like Mycobacterium) results in a high drug dosage requirement for pathogen clearance. This is a major hurdle in tuberculosis treatment. In turn, a high dosage causes toxic side effects from either the drug or the carrier nanoparticle, necessitating new delivery systems that enhance the intracellular antibiotic concentration while avoiding toxicity. Furthermore, the choice of linkers and stabilizing agents can also elicit toxicity. In an attempt to overcome these limitations of low drug loading and toxicity of stabilizing agents, we studied the ability of a zwitterionic drug to coat iron oxide nanoparticles while also enhancing particle stability. In this regard, two different NOR-coated IONP systems (NOR@IONPs) were synthesized, with the drug coating performed at either pH 5 or pH 10. These nanoparticles were stable in aqueous dispersion, due to electrostatic repulsion from the charge existing on their surfaces. We find that, compared to pH 10, an acidic pH of 5 enhances the drug coating on IONPs by 4.7 to 5.7 times, achieving a NOR loading efficiency almost equivalent to that of polymeric nanoparticles composed of poly(3-hydroxybutyrate) [41]. This high drug loading was also reflected by the presence of prominent NOR peaks in the FTIR spectrum of NOR@IONP_pH5, compared to that of NOR@IONP_pH10. Moreover, as desired in a formulation, the rate of drug release from NOR@IONP_pH5 over the initial 4 hours was slower than the release of the free drug, while the drug release from NOR@IONP_pH10 was identical to that of the free drug. This combination of higher drug loading and a slower release profile indicates a beneficial increased attraction between NOR and IONPs at the lower coating pH of 5. This enhanced interaction, we believe, is due to the electronegative nature of the drug, which facilitates the interaction with positively charged IONPs at an acidic pH. Furthermore, considering the similarity in structure and basic chemical composition of fluoroquinolones, we expect this to be applicable to other antibiotics of this class, like ciprofloxacin and moxifloxacin.
The quantification of the intra-macrophage accumulation of NOR shows that cellular uptake is also greatly facilitated by the drug-coated systems, compared to the free drug. Although both drug-coated nanoparticle systems enhance drug uptake, the lower drug loading of NOR@IONP_pH10 means that a higher concentration of nanoparticles would be required to achieve the same extracellular drug concentration. This is disadvantageous, as the higher concentration of iron oxide will induce toxicity in macrophage cells [30].
Thus, our study shows that zwitterionic drugs can serve as both stabilizing agents and antibiotics for drug delivery via iron oxide nanoparticles. Additionally, a mere adjustment of the drug coating pH to 5 can greatly enhance drug loading and achieve slower drug release. Interestingly, however, irrespective of the drug coating pH, the amount of drug released is independent of the pH of the release medium. NOR@IONP_pH5 nanoparticles thus constitute a stable iron oxide nanoparticle system with higher drug loading, slower drug release, reduced toxicity and enhanced uptake in macrophages, which can lead to their application in the treatment of intracellular pathogens.
Synthesis
The synthesis of IONPs and NOR@IONPs was carried out analogously to our previous study [30]. Specifically, IONPs were synthesized by co-precipitation of ferrous and ferric chloride salts [47]. An aqueous solution of Fe2+:Fe3+, taken in a molar ratio of 1:2, was stirred at 700 rpm with nitrogen purging for 10 min at 80 °C. The reduction to Fe3O4 was carried out with the addition of 15 mL of 25% ammonia solution. Stirring was continued for a further 20 min at 80 °C. The dispersion was then allowed to cool, and the nanoparticles were magnetically separated and washed with milli-Q water. IONPs synthesized from a 100 mL reaction were dispersed in 100 mL milli-Q water, and coating with NOR was carried out using a solution with a 1 mg/mL drug concentration. The pH of the nanoparticle-drug dispersion was adjusted to allow coating at either pH 5 or pH 10, while being sonicated for 20 min. This step was added in order to ensure pH monitoring during drug coating and was not present in our previous work. A step involving the stirring of the coated nanoparticle suspension at 80 °C was omitted in this work. Finally, the coated nanoparticles were separated from non-adsorbed drug by centrifugation at 10,000 rpm for 1-2 h (the centrifugation speed and time were increased relative to our previous work to facilitate the settling of the smaller nanoparticles synthesized here). The pellet was oven-dried at 60 °C and then powdered using a mortar and pestle. The desired amount of particles was re-suspended in water by probe sonication for 30 s (50% amplitude, 2 s on, 2 s off pulse) (Sonics, Vibra Cell, USA). NOR@IONP synthesis at each respective pH was performed in 3 distinct replicates.
Nanoparticle characterization
The hydrodynamic diameter of the nanoparticles was determined after re-dispersing them in milli-Q water and loading the sample into a cuvette for dynamic light scattering (DLS) measurements using a ZetaSizer (Malvern Instruments, UK). The refractive index and temperature used for the size measurements were 2.34 and 25 °C, respectively. Three replicates were measured for each sample. In addition, the nanoparticle diameter was also measured from transmission electron microscopy (TEM) images, obtained using a JEOL JEM-2100F TEM (200 kV, USA). For TEM imaging, re-dispersed nanoparticle samples were diluted and 10 µL was loaded onto a Formvar-coated Cu grid. The sample was dried using an infrared lamp. The images obtained were analyzed using the ImageJ software.
X-ray diffraction (XRD) patterns were obtained by loading powdered nanoparticle samples into a PANalytical X'Pert Pro (UK) diffractometer equipped with a Cu Kα radiation source. The Fourier transform infrared (FTIR) spectra of the nanoparticles and drug were collected using a 3000 Hyperion microscope with a Vertex 80 FTIR system. Nanoparticles re-dispersed in milli-Q water were loaded into a folded capillary cuvette, and the zeta potential was measured using a NanoS Zeta Sizer (Malvern Instruments, UK).
Drug coating and release
NOR coating on the nanoparticles was estimated via drug release, assuming that complete drug release occurs by 48 h, when equilibrium is reached between the drug concentration within the dialysis bag and that in the release medium; this equilibrium corresponds to 92.5% of the loaded drug being released. The drug release was monitored over 48 h by sampling 1 mL of release medium at each time point and subsequently correcting for the volume reduction due to sampling. The concentration of NOR released was estimated fluorometrically (Ex/Em: 280/420 nm; SpectraMax M5, Molecular Devices) using a standard curve of known NOR concentrations versus fluorescence, prepared by dilution in phosphate-buffered saline (PBS), the same as the release medium.
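The sampling correction mentioned above is commonly implemented with a running-sum formula; the sketch below is one plausible implementation, assuming a fixed 1 mL withdrawal from a total release volume V_t. The volumes and concentrations are hypothetical placeholders, not the study's raw data.

```python
def corrected_cumulative(conc_measured, v_sample=1.0, v_total=50.0):
    """Correct sampled concentrations (ug/mL) for drug removed at each
    withdrawal: C_corr[n] = C[n] + (v_sample / v_total) * sum(C[0..n-1])."""
    corrected, running = [], 0.0
    for c in conc_measured:
        corrected.append(c + (v_sample / v_total) * running)
        running += c
    return corrected

# Hypothetical time course of measured NOR concentrations (ug/mL)
measured = [2.0, 4.5, 6.0, 6.8, 7.1]
print(corrected_cumulative(measured))
```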
Drug and nanoparticle uptake in macrophages
THP1 cells, a monocyte cell line, were differentiated into macrophage cells through treatment with 25 ng/mL of phorbol 12-myristate 13-acetate (PMA) for 24 h at 37 °C and 5% CO2. Post incubation, the PMA was washed off, fresh RPMI-1640 supplemented with 10% FBS was added, and the cells were allowed to stabilize for 24 h under identical conditions of temperature and CO2. Differentiation was confirmed by visualizing cells adhered to the bottom of the culture well, as THP1 monocytes are suspension cells. The differentiated THP1 cells, i.e., the macrophages, were then treated with 32 µg/mL of NOR, in either free or nanoparticle-coated form, for 48 h. The extracellular drug and nanoparticles were washed off with ice-cold PBS, and the cells were lysed overnight with 0.1 N glycine-HCl at pH 3.5. A fixed volume of each cell lysate was sampled for ICP-AES (inductively coupled plasma-atomic emission spectrometry) (ACROS, Simultaneous ICP Spectrometer, SPECTRO Analytical Instruments GmbH, Germany) to estimate the amount of Fe present in each sample. Cell lysates were pelleted, and the supernatant was used for the spectrofluorometric estimation of intracellular NOR (Ex/Em: 280/420 nm) (SpectraMax M5, Molecular Devices). The NOR standard curve in this study was prepared by diluting the drug in 0.1 N glycine-HCl, pH 3.5.
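The per-cell drug content reported in the Results (pg/cell) follows from the lysate concentration, lysate volume and cell number; the sketch below shows that conversion with hypothetical input values (the actual lysate volumes and cell counts are not given in the text).

```python
def uptake_pg_per_cell(lysate_conc_ug_per_ml, lysate_volume_ml, n_cells):
    """Convert lysate drug concentration to per-cell content (pg/cell)."""
    total_pg = lysate_conc_ug_per_ml * lysate_volume_ml * 1e6  # ug -> pg
    return total_pg / n_cells

# Hypothetical example: 2.1 ug/mL NOR in 1 mL lysate from 3e5 cells -> 7.0 pg/cell
print(uptake_pg_per_cell(2.1, 1.0, 3e5))
```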
Toxicity
THP1 monocytes were seeded at 3 × 10^5 cells/mL and differentiated into macrophages by treatment with 25 ng/mL PMA for 24 h. Differentiated THP1 cells (macrophages) were stabilized in fresh RPMI medium, supplemented with 10% FBS, for 24 h at 37 °C and 5% CO2. These macrophage cells were subsequently treated with 32 µg/mL of NOR, either in its free drug form or via NOR@IONP, for 48 h. Following the drug/nanoparticle treatment, the cells were trypsinized using 1X trypsin-EDTA (HiMedia, India). The cell counts of the trypsinized cells were then determined using the trypan blue assay. Briefly, a fixed volume of the cell sample was diluted using a 0.4% solution of trypan blue (HiMedia, India). The diluted cell suspension was then gently vortexed, and 10 µL was loaded onto a hemocytometer for cell counting. The relative viability of each sample was determined with respect to the non-treated control.
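Relative viability from trypan blue counts is a simple ratio; the sketch below assumes a standard hemocytometer conversion (mean count per large square × dilution × 10^4 cells/mL) and uses made-up counts for illustration.

```python
def cells_per_ml(live_counts, dilution=2.0):
    """Hemocytometer conversion: mean live count per large square x dilution x 1e4."""
    return sum(live_counts) / len(live_counts) * dilution * 1e4

def relative_viability(treated_counts, control_counts, dilution=2.0):
    """Viable treated cells as a percentage of the non-treated control."""
    return 100.0 * cells_per_ml(treated_counts, dilution) / cells_per_ml(control_counts, dilution)

# Made-up live counts from four hemocytometer squares
print(f"{relative_viability([48, 52, 50, 51], [46, 49, 47, 48]):.1f} % of control")
```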
Supporting Information
Supporting Information File 1 Supplementary Information. Data that supports the experimental choices and data analysis. | 2021-10-15T12:16:36.740Z | 2021-10-07T00:00:00.000 | {
"year": 2021,
"sha1": "39b65ab0b674b7cb5ced59275371407d3ff0970f",
"oa_license": "CCBY",
"oa_url": "https://www.beilstein-journals.org/bjnano/content/pdf/2190-4286-12-84.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "39b65ab0b674b7cb5ced59275371407d3ff0970f",
"s2fieldsofstudy": [
"Medicine",
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52347194 | pes2o/s2orc | v3-fos-license | Galectin‐3 promotes CXCR2 to augment the stem‐like property of renal cell carcinoma
Abstract Although targeted therapy is usually the first‐line treatment for advanced renal cell carcinoma (RCC), some patients can experience drug resistance. Cancer stem cells are tumour‐initiating cells that play a vital role in drug resistance, metastasis and cancer relapse, while galectins (Gal) participate in tumour progression and drug resistance. However, the exact role of galectins in RCC stemness is yet unknown. In this study, we grew a subpopulation of RCC cells as tumour spheres with higher levels of stemness‐related genes, such as Oct4, Sox2 and Nanog. Among the Gal family, Gal‐3 in particular was highly expressed in RCC tumour spheres. To further investigate Gal‐3's role in the stemness of RCC, lentivirus‐mediated knockdown and overexpression of Gal‐3 in RCC cells were used to examine both in vitro and in vivo tumorigenicity. We further assessed Gal‐3 expression in RCC tissue microarray using immunohistochemistry. Upon suppressing Gal‐3 in parental RCC cells, invasion, colony formation, sphere‐forming ability, drug resistance and stemness‐related gene expression were all significantly decreased. Furthermore, CXCL6, CXCL7 and CXCR2 were down‐regulated in Gal‐3‐knockdown tumour spheres, while CXCR2 overexpression in Gal‐3‐knockdown RCC restored the ability of sphere formation. Gal‐3 overexpression in RCC promoted both in vitro and in vivo tumorigenicity, and its expression was correlated with CXCR2 expression and tumour progression in clinical tissues. RCC patients with higher co‐expressions of Gal‐3 and CXCR2 demonstrated a worse survival rate. These results indicate that highly expressed Gal‐3 may up‐regulate CXCR2 to augment RCC stemness. Gal‐3 may be a prognostic and innovative target of combined therapy for treating RCC.
cell RCC (ccRCC), its overall efficacy rate is restricted by its toxicity. Molecular targeting drugs, including tyrosine kinase and mTOR inhibitors, have been approved for treating advanced RCC. 1,2 Nevertheless, long-lasting treatment responses cannot be achieved, and the overall survival rate is still poor due to drug resistance.
Cancer stem cells (CSCs) may contribute to drug resistance in human solid tumours. 3,4 The frequency of functionally defined CSCs varies among different patients. With self-renewal as one of their hallmarks, CSCs can initiate tumour formation and metastasis. Furthermore, CSCs can be identified by in vitro sphere-forming assays and common cell surface markers, especially CD133 and CD44.
CSCs can express ATP-binding cassette (ABC) transporters to become more resistant to chemotherapy compared to the bulk of a tumour cell mass. 3,4 The tumour microenvironment supports cancer progression and CSC formation through growth factors, cytokines and chemokines. For example, within the tumour microenvironment, endothelial cells produce angiocrine factors and myofibroblasts secrete the stem cell factor, CXCL12 and Wnt to modulate the stemness of CSCs. 5 Another critical component of the tumour microenvironment, galectin can control immune surveillance and aid tumour metastasis. 6 Galectins (Gals) are galactoside-binding lectins that contain conserved carbohydrate-recognition domains (CRDs) to bind β-galactose.
According to their structural features, galectins are classified into the following three categories: prototype, chimera type and tandem-repeat type. The chimera galectin type has only one member, Gal-3, 7 whose expression is required for initiating the transformed phenotype of tumours by interacting with oncogenic Ras. 8 The Gal-3-RAS interaction promotes the RAS anchorage to the plasma membrane, which results in the constitutive activation of phosphatidylinositol 3-kinase and Raf-1. 9 The tumorigenic potential of Gal-3 may also function through binding with β-catenin or transcription factors to increase the expressions of cyclin D and c-MYC and augment cell cycle progression. 10 Furthermore, intracellular Gal-3 inhibits cell death induced by cisplatin and paclitaxel, thus contributing to cancer cells' drug resistance and CSC formation. 11,12 Extracellular Gal-3 has in vitro angiogenic activity by inducing the migration of endothelial cells. 13 Increased protein levels of Gal-3 are correlated with the poor survival of various cancers, including leukaemia, lymphomas, breast cancer and thyroid cancer. 6 Gal-3 was overexpressed in RCC patients with distant metastasis. 14 While chemokines and their receptors influence the initiation and progression of tumours in the tumour microenvironment, their role in the Gal-3-promoted CSC formation and drug resistance of RCC remains unclear. 5 In this study, we found that Gal-3 was highly expressed in the CSCs of RCC, as well as in the clinical tissues of advanced RCC. Silencing Gal-3 in RCC cells decreased CSC formation, drug resistance and CXCR2, while CXCR2 overexpression in Gal-3-knockdown cells restored the tumorigenic ability. Our results indicate that highly expressed Gal-3 may enhance the stemness property of RCC by promoting CXCR2.
| Cell lines
We obtained the human RCC cell lines Caki-1 and ACHN (VHL wild type) from the American Type Culture Collection. We purchased the human RCC cell line A-498 (VHL mutation) from Bioresource Collection and Research Center (BCRC; Hsinchu, Taiwan). These cell lines were cultured as described in a previous study. 15
| Silencing and overexpression of Gal-3 and CXCR2 by a lentivirus-mediated system
Lentivirus-mediated silencing and overexpression of Gal-3 and CXCR2 of the RCC cells were performed as described in a previous study. 15 We obtained pLKO.1 plasmid containing shRNA targeting human Gal-3 (shGal-3#1, Clone ID TRCN0000029305; shGal-3#2, Clone ID TRCN0000029307) and CXCR2 (shCXCR2, Clone ID TRCN0000009138) from the National RNAi Core Facility (Academia Sinica, Taipei, Taiwan). Full-length DNA encoding Gal-3 and CXCR2 genes were amplified using RT-PCR and cloned to pLAS2w. The primer sequences for cloning the full length of Gal-3 and CXCR2 are listed in Table S1. To knockdown Gal-3 in RCC spheres, A-498-derived primary tumour spheres were dissociated into single cells, re-seeded in a 10% FBS-RPMI medium, infected with shGal-3 or shLuc lentivirus, and then cultured to form secondary spheres (2S).
| RT-qPCR
RT-qPCR was performed as described in a previous study. 15 The specific primers used in the RT-qPCR are presented in Table S2.
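Relative expression from RT-qPCR is typically computed with the 2^(-ΔΔCt) method; whether the cited protocol uses exactly this normalization is not stated here, so the sketch below is only an illustrative assumption, with made-up Ct values and GAPDH as a hypothetical reference gene.

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt relative expression of a target gene vs a reference gene."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** (-(d_ct_sample - d_ct_ctrl))

# Made-up Ct values: Gal-3 in tumour spheres vs parental cells, GAPDH reference
print(fold_change(22.1, 18.0, 27.0, 18.0))  # ~29.9-fold, illustrative only
```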
| Cell migration and invasion assays
We evaluated tumour cell migration and invasion assays using transwell assay (Costar, 8-μm pore; Corning, NY) as described in a previous study. 15
| Colony formation assay
RCC cells (1 × 10 3 ) were suspended in 0.33% Bacto-agar (Sigma-Aldrich) and then layered over 0.5% Bacto-agar in six-well plates. On day 30, we counted the colonies after fixing them with methanol and staining them with Giemsa.
| Sphere formation
RCC cells were cultured in a tumour sphere medium (Gibco, BRL, Life Technologies) that contained serum-free DMEM/F12 (1:1) medium, 1X B27 supplement, 20 ng/mL human recombinant basic fibroblast growth factor (bFGF) and 20 ng/mL epidermal growth factor (EGF). RCC cells were seeded at 500 cells per 96-well (or 8 × 10^4 per 6-well), and the tumour sphere medium was replaced with fresh medium every 3-4 days. ACHN cells were cultured for 14 days, while A-498 and Caki-1 cells were cultured for 21 days. We used a microscope to count the spheres. Primary spheres (1° spheres) were dissociated to single cells and re-seeded to yield the second generation (2° spheres).
| In vivo tumour growth
We purchased male NOD/SCID mice (NOD.CB17-Prkdcscid/IcrCrlBltw) from BioLASCO Taiwan Co., Ltd. and maintained them under specific pathogen-free conditions at the Animal Center of National Yang-Ming University, as approved by the university's Institutional Animal Care and Use Committee. To establish a xenograft tumour model, empty vector (ev)- or Gal-3-infected Caki-1 (3 × 10^3-10^5/100 μL) monolayer or sphere cells were subcutaneously implanted into the abdominal flanks of six- to eight-week-old male NOD/SCID mice. We measured tumour size with a calliper every week and calculated the volume as length × width × height (in mm^3).
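As a minimal illustration of the volume formula stated above, the sketch below converts caliper readings into mm^3; the weekly measurements are invented for the example.

```python
def tumour_volume_mm3(length, width, height):
    """Volume as length x width x height (mm^3), per the formula used here."""
    return length * width * height

# Invented weekly caliper readings (mm): (length, width, height)
weekly = [(3.0, 2.5, 2.0), (4.2, 3.1, 2.6), (5.5, 4.0, 3.3)]
print([round(tumour_volume_mm3(*m), 1) for m in weekly])
```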
| Tissue microarray (TMA) and immunohistochemistry (IHC)
We purchased tissue microarray slides from Biomax (US Biomax Inc., Rockville, MD) and performed IHC as described in a previous study. 15 TMA slides were incubated with the Gal-3 or CXCR2 antibodies.
| Statistical analysis
Data are expressed as the mean ± SD. Differences between two groups were determined using Student's t test. We adopted the SurvExpress 16 web-based tool to analyse the gene expression of Gal-3 and CXCR2 in ccRCC (accession no. KIRC-TCGA). Survival durations were analysed using the Kaplan-Meier method and compared between patient groups with the log-rank test. Using Cox survival analysis, we classified a population of ccRCC patients into high-risk and low-risk groups in accordance with their prognostic index. Statistical significance was set at P < 0.05.
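The survival comparisons described here (Kaplan-Meier curves, log-rank test, Cox regression) can be reproduced with the Python lifelines package; this is an illustrative sketch on synthetic data, not the authors' pipeline (which used the SurvExpress web tool).

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "high_coexpr": rng.integers(0, 2, n),   # 1 = Gal-3/CXCR2 high co-expression
    "time": rng.exponential(60.0, n),       # follow-up in months (synthetic)
    "event": rng.integers(0, 2, n),         # 1 = death observed
})
df.loc[df.high_coexpr == 1, "time"] *= 0.6  # make the high group fare worse

hi, lo = df[df.high_coexpr == 1], df[df.high_coexpr == 0]

kmf = KaplanMeierFitter().fit(hi["time"], event_observed=hi["event"], label="high")
print("median survival (high):", kmf.median_survival_time_)

res = logrank_test(hi["time"], lo["time"],
                   event_observed_A=hi["event"], event_observed_B=lo["event"])
print("log-rank p-value:", res.p_value)

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)
```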
| Enrichment of renal CSCs
To determine whether cultured human RCC cell lines contained a population of CSCs, RCC cells were cultured in a defined serum-free selection tumour sphere medium for a few days. The morphology of the RCC cell spheres is shown in Figure 1A. We observed only 9% sphere formation in A-498, 7% in Caki-1 and 11% in ACHN cells (Figure 1B). The stemness-associated genes were analysed using RT-qPCR, and the results showed that the mRNA levels of Nanog, Sox2, Oct4, CD44, CD133, ABCB1, ABCC1, ABCG2 and Notch1 were significantly increased in RCC tumour spheres compared with parental cells (Figure 1C). Furthermore, we adopted Western blotting to confirm the protein levels of Nanog, Sox2 and Oct4 in the three RCC tumour spheres.
| Galectin-3 was highly expressed in the tumour spheres of RCC cells
Galectins have been reported to promote cancer cells' chemoresistance and CSC formation. 12,17 Therefore, we analysed the galectin levels in renal CSCs using RT-qPCR. Regarding the galectin family, the expression of Gal-2, Gal-3, Gal-4 and Gal-7 was significantly increased in A-498 CSCs compared with parental cells. Of those, Gal-3 demonstrated a more than 30-fold increase in RCC tumour spheres (Figure 1E). We then utilized other RCC cells to verify whether Gal-3 was also up-regulated in these renal tumour spheres and found that Gal-3 mRNA expression demonstrated a significant sevenfold increase in the tumour spheres of Caki-1 and ACHN cells (Figure 1F). Western blotting was further adopted to confirm the Gal-3 expression in RCC cells. Compared with parental cells, tumour spheres expressed levels of galectin-3 protein that were twice as high (Figure 1F).
| Knockdown of galectin-3 in parental RCC cells decreased self-renewal capacity and drug resistance
To determine the role of Gal-3 in cell motility and the sphere-forming ability of RCC cells, we silenced Gal-3 in parental cells using the shGal-3 lentivirus; the knockdown decreased invasion, colony formation, sphere-forming ability, drug resistance and the expression of stemness-related genes.
| Galectin-3 maintained the stemness properties of renal CSCs
To further examine the role of Gal-3 in maintaining CSCs, we infected A-498 primary tumour spheres with shGal-3 lentivirus and then cultured them to form secondary spheres. First, we used Western blot to confirm galectin-3 knockdown in the secondary spheres (Figure 3A). Furthermore, silencing Gal-3 significantly reduced the sphere formation (Figure 3B), anchorage-independent growth (Figure 3C), and migration and invasion (Figure 3D) abilities of the secondary sphere cells. Thus, Gal-3 also participates in the maintenance of the stemness features of renal CSCs.
| Down-regulation of chemokine/cytokine expression in Gal-3-knockdown RCC tumour spheres
To study the molecular mechanism of Gal-3 in the CSCs, we analysed chemokine and chemokine receptor levels using RT-qPCR. As shown in Figure 4A, CXCL6, CXCL7 and CXCR2 were down-regulated in the Gal-3-knockdown tumour spheres.
| Suppression of CXCR2 led to decreased sphere-forming ability in RCC cells
We silenced CXCR2 in parental A-498 cells to investigate the role of CXCR2 in CSC formation. Compared to cells infected with the control virus expressing shLuc, cells infected with the shCXCR2 virus expressed lower levels of this particular chemokine receptor (Figure 4C). Furthermore, sphere-forming ability was significantly downregulated by 50%-60% in the shCXCR2-infected RCC cells ( Figure 4D).
| Overexpression of CXCR2 in shGal-3-infected RCC cells restored cell motility, colony formation and self-renewal capacity
To explore the role of CXCR2 in the formation of Gal-3-mediated renal CSCs, we overexpressed CXCR2 in shGal-3-infected RCC cells; this restored cell motility, colony formation and self-renewal capacity.
| Overexpression of galectin-3 in RCC cells promoted sphere-forming capacity and in vitro and in vivo tumorigenicity
We used lentivirus-mediated Gal-3 overexpression in Caki-1 cells to further confirm the role of Gal-3 in renal CSC formation. First, Gal-3 overexpression in RCC cells was confirmed by RT-qPCR and Western blot (Figure 6A). Compared to parental RCC cells, sphere cells with an empty vector expressed higher levels of galectin-3, Oct4, Nanog, Sox2 and CD44 (Figure 6B). Overexpression of Gal-3 considerably promoted these stemness genes and CXCR2 expression in RCC sphere cells (Figure 6B). Furthermore, migration, invasion (Figure 6C), colony formation (Figure 6D) and sphere-forming ability (Figure 6E) were all significantly up-regulated in Gal-3-infected RCC sphere cells.
To assess the tumour growth capacity of renal cancer stem cells, we subcutaneously implanted empty vector (ev)- or Gal-3-infected Caki-1 sphere cells into NOD/SCID mice (data not shown). Furthermore, tumours generated by the Gal-3-overexpressing RCC spheres were larger than the control RCC sphere-derived tumours (Figure 6F). These results suggest that Gal-3 overexpression in RCC sphere cells promotes in vivo tumour growth.
| Galectin-3 expression correlated with CXCR2, tumour progression and prognosis in RCC tissues
To investigate the expression of Gal-3 and CXCR2 in human RCC tissues, we performed immunohistochemistry staining on tissue microarrays that contained samples from 75 patients with ccRCC.
We observed higher Gal-3 expression in advanced-stage (III+IV) and high-grade (poorly differentiated) RCC tissues (Figure 7A and B).
CXCR2 expression was significantly correlated with tumour differentiation (Figure 7C) but not RCC stage (data not shown). Furthermore, Gal-3 expression was significantly higher in the CXCR2 high-expression group than in the low-expression group (Figure 7D). To further study the correlation between the expression levels of Gal-3/CXCR2 and patient prognosis, we used the online tool SurvExpress 16 to analyse 415 patients with various stages of ccRCC. Using Cox survival analysis, we found that patients with higher co-expressions of Gal-3 and CXCR2 had a significantly worse survival rate (Figure 7E), and that Gal-3 expression correlated with CXCR2 expression, tumour progression and prognosis in RCC.
| DISCUSSION
The overexpression of Gal-3 is associated with the increased invasiveness of many kinds of tumours. Higher levels of Gal-3 are found in the sera of cancer patients with metastasis. 18 Gal-3 promotes cancer progression through intra-and extra-cellular mechanisms in the tumour microenvironment. Intracellular Gal-3 interacts with RAS and β-catenin to enhance cell transformation and proliferation. [8][9][10] Furthermore, Gal-3 augments tumour stem cell property and drug resistance through its interaction with β-catenin. 12 Several chemokine and chemokine receptor genes, such as CXCR4, CXCR7 and CCL5, 19,20 are the downstream genes of β-catenin. In this study, we found that Gal-3 overexpression may promote CXCR2 to augment the stemness property of RCC. In our previous study, cancer spheres secreted higher levels of Gal-3, while Gal-3 knockdown reduced secretion levels. Recombinant Gal-3 promotes cancer sphere formation. 12 Furthermore, previous studies have demonstrated Gal-3 to interact with epidermal growth factor receptor (EGFR) and transforming growth factor-β receptor (TGFβR). 9 Therefore, extracellular Gal-3 may also stimulate sphere formation in collaboration with EGF or bFGF signalling in the tumour sphere medium.
Existing evidence indicates that drug resistance regulation by Gal-3 may result from intracellular effects on the apoptotic pathways. 11 The anti-apoptotic mechanisms of Gal-3 include: (1) the phosphorylation status of Gal-3, 21 (2) the Gal-3 translocation from the nucleus to the cytoplasm, 22 (3) the regulation of mitochondrial membrane potential, 23 (4) the modulation of survival signalling pathways, 24 and (5) the regulation of the caspase pathway. 25 Furthermore, Gal-3 plays a crucial role in regulating the Wnt/β-catenin signalling pathway. The best evidence so far of the importance of the Wnt pathway to CSC biology has been reported in myeloid leukaemia, but its contribution has also been reported in the maintenance of the CSCs of melanoma, breast, colon, and lung cancers. 12,26 Therefore, with regard to inhibiting the common upstream regulator of Wnt/β-catenin signalling, Gal-3 may be an effective target for cancer stem cell therapy.
Clear evidence has shown that CXCR2 and its associated ligands play important roles in various types of cancer. Most of the ELR+ CXC chemokines have been described as promoters of tumour growth, and blocking antibodies reduce lung tumour growth in mice. 30 CXCL7 is a prognostic factor for the overall survival of RCC; it promotes RCC cell proliferation both in vitro and in vivo, and a CXCL7/CXCR2 blockade by antibody or inhibitor reduces tumour growth in mice. 34 Altogether, these findings indicate the importance of CXCR2 in the progression and targeted therapy of RCC. 43 In summary, we have demonstrated that highly expressed Gal-3 can up-regulate CXCR2 to augment the stemness property of RCC.
Gal-3 and CXCR2 expressions were correlated with RCC tumour progression, and Gal-3 expression correlated with CXCR2 expression in RCC tissues. As we found that higher co-expressions of Gal-3 and CXCR2 were associated with a worse survival rate, Gal-3 may serve as a prognostic marker and an innovative target of combined therapy for treating RCC.

[Figure 7 legend (D, E): Patients were divided into two groups (low and high) based on whether their CXCR2 levels were below or above the median value. Galectin-3 expression levels were compared between low and high levels of CXCR2 in ccRCC tissues. *P < 0.05. (E) Kaplan-Meier curves according to galectin-3 and CXCR2 expression levels in patients with ccRCC were conducted using SurvExpress web-based analysis. Censoring samples are shown as "+" marks. Data set, concordance index (CI) and P-value of the log-rank test are shown. Red and green curves denote high-risk (high expression levels) and low-risk (low expression levels) groups, respectively.] | 2018-09-26T06:26:18.294Z | 2018-09-24T00:00:00.000 | {
"year": 2018,
"sha1": "4b1ba159a979e516ebfd2313dd45e50bb7b18556",
"oa_license": "CCBY",
"oa_url": "https://www.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jcmm.13860",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4b1ba159a979e516ebfd2313dd45e50bb7b18556",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256900957 | pes2o/s2orc | v3-fos-license | Natural candidates for the underlying symmetry of the Higgs field
As a small step toward understanding the nature of the Higgs field, the results of a study on its symmetry structure are announced. First, a toy model for the vacuum of a finite Higgs field (as the underlying field of the Steinberg group SU 2 ( q 2 ) ) is developed with the aim of estimating its boson mass scale. Next, to find the actual underlying symmetry of the Higgs field, natural candidates beyond the Standard Model are explored. As a result of the exploration, the Weyl group of E 8 in 8 DOF, as well as the Conway group Co 0 in 24 DOF, are introduced as the "natural candidates" and the Fischer-Griess monster group in 24 DOF (that also matches the symmetry group of the FLM CFT model with the central charge of 24) is introduced as the "most natural candidate". Then, by applying the latter’s specifications (order and the number of DOF) to the toy model, the corresponding boson mass is calculated at 125 . 4 GeV, which validated the "most natural candidate". Afterward, replacing SU 2 ( q 2 ) by SU 5 ( q 2 ) in the toy model, for the GUT scale, a prediction of 6 . 6 × 10 17 GeV (comparable to its conventional scale of 10 16 GeV) is reported. In the last prediction, first, it is observed that to reach a higher scale, the model delicately asks for a candidate with a smaller symmetry. Then, the other two natural candidates’ specifications are applied to the toy model and the same value of 1 . 9 × 10 19 GeV (close to the Planck scale 1 . 22 19 GeV) is produced by both. In sum, the results suggest that the post-GUT age having a Fischer-Griess monster structure and the pre-GUT era having a Co 0 (or Weyl ( E 8 ) ) structure are imaginable, where the latter evolves to the former in the GUT epoch.
I. INTRODUCTION
In the progress of the quantum field theory, the continuum assumption of the infinite number of microstates or elements (that is also referred to as points) for a physical field, was a source of undesirable divergences and unphysical results. "These problems have stopped the development of quantum field theory for almost twenty years and were resolved with the introduction of Renormalization. The interpretation given to this procedure has evolved over the decades. In recent years, it has become customary to regard continuum field theories as approximations to more fundamental theories. This justifies the use of a cut-off, a lattice spacing, or some other kind of regularization that effectively suppresses the degrees of freedom associated with very small distances" [1].
Let us look at one of these problems as the initial motivation for this letter. Based on the Standard Model, the Higgs potential has a φ^4 form with the Mexican-hat shape, where its vacuum microstates at the minimum potential form a circle, Φ_0 e^{iθ} [2]. The system is symmetric with respect to a continuous U(1) rotation and is capable of spontaneous symmetry breaking during its evolution. Once this symmetry breaks, the system falls into one of these microstates. All these microstates have exactly the same energy and, therefore, the same probability of being picked. This probability is simply 1/(number of microstates). Since the magnitude of the corresponding mass-energy is proportional to this probability, assuming an absolutely infinite number of microstates implies an absolute zero value for the mass-energy, which is incorrect (based on the experimental observation of the vacuum expectation value). Further, an absolute zero probability obviously does not let any of the microstates be chosen.
On the other hand, it turns out that instead of an absolutely infinite number of microstates, an extremely large but finite number of elements leads to a physical mass-energy. Such a finite number of elements for a field inspires a finite symmetry group for the theory. So, the candidate symmetry group should be a finite one. According to the Standard Model, the Higgs field is a complex scalar (infinite) field of the Lie group SU(2). Therefore, a genuine candidate is the finite version of SU(2), which is the Steinberg group SU_2(q^2) of Lie type.
II. TOY MODEL
A. Introducing the finite field
Let us consider the underlying field of the Steinberg group SU_2(q^2) of Lie type, F_q, as the candidate for a finite Higgs field in the vacuum. The elements of F_q, which are the roots of unity, all lie on the unit circle and are given by F_q = {exp(2πi n/q)}, where n = 0, 1, 2, ..., q - 1 [3]. This form is comparable with the continuous form e^{iθ} in the infinite fields. A comparison between the two fields is shown in Table 1. Based on the argument in the introduction, we assume that q is extremely large. Therefore, the corresponding set of points (elements) is extremely dense, such that from the perspective of physics (at a given scale) it can be considered a smooth manifold. This may justify the use of some related mathematical properties if required. However, we emphasize that the only purpose of introducing this toy model is the derivation of a simple mass relation to estimate its mass scale, and not a rigorous mathematical approach.
B. Introducing the number of degrees of freedom
Let us assume that the number of degrees of freedom (DOF) of the model is c. Hence, the number of elements for each degree of freedom will be q/c.
C. Boson mass
One can obtain the mass as the eigenvalue of the Klein-Gordon equation (i.e., the formal equation for scalar fields), (□ + m_q^2) F_q = 0, with a discrete d'Alembert operator, in Planck units. The eigenvalues of an operator form its discrete spectrum, that is, its allowed frequencies (defined as the number of occurrences of a repeating event per unit of time). Among q identical microstates in c DOF, the expected number of times for a single state to occur (be picked) is q/c, indicating an expected value of c/q for the frequency. Thus, m_q = c/q. Another approach is to calculate the magnitude of the field and multiply it by the corresponding number of DOF (as an equivalent charge). Since all the microstates are equiprobable, the magnitude of the field becomes the probability of each microstate, that is, 1/q. Hence, the corresponding mass becomes m_q = c × 1/q. Note that this is the same as 1/(q/c), where q/c is the number of microstates per degree of freedom.
So far, we have obtained the mass m_q corresponding to the real (holomorphic) sector of the model. By the complexification of the symmetry, m̄_q is introduced as the mass corresponding to the complex conjugate (antiholomorphic) components (with the same DOF). The total mass is then the sum of the two sectors.
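For reference, the toy-model mass relation can be collected into a single display (a restatement of the definitions above; the factor of 2 counts the holomorphic and antiholomorphic sectors):

```latex
\[
(\Box + m_q^2)\, F_q = 0, \qquad
m_q = \bar{m}_q = \frac{c}{q}, \qquad
m = m_q + \bar{m}_q = \frac{2c}{q} \quad \text{(Planck units)}.
\]
```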
III. RESULTS
So far, in the previous section, a simple model is developed as a tool to find an estimate of the boson mass. In this section, we search beyond the Standard Model for the genuine symmetry structures that can describe the Higgs field. Then, we use the toy model to assess the promising candidates.
A. Introducing the candidates
In the study of fundamental forces, the Standard Model offers the strong, weak, and electromagnetic interactions in a gauge QFT with the symmetry product group G_SM = SU(3) × SU(2) × U(1). Based on the Grand Unified Theory (GUT), at the high energies of the early-stage universe, namely the unification epoch, these three interactions were merged into a single unified interaction. Several theories have guessed the symmetry of the GUT era [4]. Regardless of their details, all of them should contain G_SM as a subgroup. The smallest simple Lie group with this property is SU(5), which has 5^2 - 1 = 24 generators, implying 24 DOF (or massless bosons) prior to symmetry breaking. Accordingly, the Higgs field would be a Hermitian traceless 5 × 5 matrix field with 24 DOF.
Thus, we introduce 24 as the first favorable value, as well as the least upper bound, for c (since SU(5) is the smallest simple Lie group that contains G_SM). Hence, the other options for c are the positive integers less than 24. Here, the list of kissing numbers (KN) helps to find the next favorable value for c. In that list, the exact values of KN are only solved for dimensions 1, 2, 3, 4, 8, and 24 [5][6]. For other dimensions, it has been impossible, so far, to exactly solve the problem. Hence, computational methods are developed to calculate upper and lower bounds. The phenomenon that the KN problem is exceptionally solved for dimensions 8 and 24 is incredibly valuable. Apparently, this is due to the fact that highly symmetrical lattices, namely the E_8 lattice [7][8][9] and the Leech lattice [10][11], exist in these dimensions. These exceptions are exactly the favorable values of c that we are searching for. Indeed, the assumption that q has to be extremely large means that whatever the underlying structure is, it has to be highly symmetrical (which also excludes the relatively simple dimensions 1, 2, 3, and 4).
The E_8 and Leech lattices produce optimal sphere packing in 8 and 24 dimensions, respectively [12][13]. Additionally, it has recently been proved that they are universally optimal, meaning that among all point configurations of the same density, they provide the minimum possible Gaussian energy [14]. The close relation between them is explained by the fact that the Leech lattice can be constructed via 3 copies of its cousin, the E_8 lattice. Between these two extraordinarily symmetrical structures, the Leech lattice is more exceptional and, therefore, more desirable because it produces an excellent sphere packing density of 0.00193 [13], while the E_8 lattice has a sphere packing density of 0.2536 [12], wherein a lot of space is left over (i.e., it is less efficient). This means that the Leech lattice is more efficient and well-ordered.
Thus, so far, we have found a couple of natural playgrounds. Next, we should find the eligible players (symmetry groups) able to play on such symmetrical playgrounds, and then select the most natural (most symmetrical) ones. Regarding the first, the symmetry group (automorphism group) of the E8 lattice is the Weyl group of the E8 root system, which has an order of about $6.96 \times 10^8$ [15]. We introduce this as the "first natural candidate". Regarding the second, the symmetry group of the Leech lattice is the Conway group Co0, which has an order of about $8.31 \times 10^{18}$ [16][17]. We introduce this as the "second natural candidate". However, Co0 is not the only eligible player on the Leech lattice. Many other sporadic groups can be constructed from the Leech lattice, e.g., the Mathieu groups (M11, M12, M22, M23, M24) [15], the other members of the Conway family (Co1, Co2, Co3) [15], and the Fischer-Griess monster group [5][15][18][19]. Among these groups, with an extraordinarily large order of about $8.08 \times 10^{53}$ [15], the Fischer-Griess monster group (i.e., the largest sporadic group) is the most symmetrical and, therefore, the "most natural candidate" in 24 DOF.
Notably, a well-known physical model already exists whose attributes match those of the "most natural candidate". In 1984, Frenkel, Lepowsky, and Meurman (FLM) constructed a holomorphic Conformal Field Theory (CFT) with the Fischer-Griess monster symmetry and central charge 24 [20][21]. Note that the number of DOF is often interpreted as the central charge in theories containing a central extension of the symmetry group [1].
B. Most natural candidate validation: Higgs scale
Now, we select the "most natural candidate" and evaluate it using the toy model developed in the previous section. If this candidate is the right one, applying its specific attributes (i.e., order and dimension) to the toy model should reproduce the correct Higgs mass. Therefore: (1) applying the number of DOF of the "most natural candidate" to the toy model gives c = 24; (2) applying the order of the "most natural candidate" to the toy model implies [3]

$q^3 - q \cong 8.08 \times 10^{53}$

Solving for q provides a pair of complex roots (which are meaningless since q is a positive integer) and a real root of $q \cong 9.314 \times 10^{17}$. Now, the boson mass can be calculated as

$m = \frac{2c}{q}\,\bar{M}_P = \frac{2 \times 24}{9.314 \times 10^{17}} \times 2.435 \times 10^{18}\ \mathrm{GeV} \cong 125.5\ \mathrm{GeV}$

where $\bar{M}_P \cong 2.435 \times 10^{18}$ GeV is the reduced Planck mass (see the Discussion). This value is almost identical to the experimentally measured mass of the Higgs boson (125.1 GeV) at the Large Hadron Collider (LHC) in 2012 [22]. Thus, the "most natural candidate" passed the designed test and is validated. Before jumping to the predictions of the GUT and Planck scales, the following argument sheds more light on this validation. In the process of exploring the candidates, we stated that larger groups are more natural than smaller ones. However, this is not the case in general: the property more important than size is structural complexity. Although the size of the Fischer-Griess monster group is maximal among the sporadic groups, it is small compared to some simple groups like $SL_{25}(3)$ or $A_{166}$, which have orders around $10^{297}$. What makes the Fischer-Griess monster group exceptional (and therefore natural) among the simple groups is its structural complexity: its elements cannot be represented as easily as the linear representation of $SL_{25}(3)$, the permutation representation of $A_{166}$, or those of other simple groups whose elements have simple representations.
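As a numerical cross-check of this computation, the sketch below solves the cubic and dimensionalizes the result; the reduced Planck mass value $2.435 \times 10^{18}$ GeV is an assumed standard figure rather than one quoted in the text.

```python
import numpy as np

# q^3 - q = 8.08e53: order of SU_2(q^2) matched to the monster's order
roots = np.roots([1.0, 0.0, -1.0, -8.08e53])
q = roots[np.argmin(np.abs(roots.imag))].real       # unique real root, ~9.314e17

c = 24                                              # DOF of the Higgs field
m_planck_red = 2.435e18                             # reduced Planck mass in GeV (assumed)
m_higgs = (2 * c / q) * m_planck_red                # ~125.5 GeV
print(q, m_higgs)
```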
Thus, in a complex nonlinear nature, the Fischer-Griess monster group seems the most natural candidate for the underlying symmetry of the Higgs field, due to its high structural complexity, which is also reflected in its huge order. Here, it is helpful to explain why the simple estimation of the toy model (which has a simple linear representation) can produce such an accurate estimate for a highly complex structure like the Fischer-Griess monster group. The Renormalization Group theory of phase transitions answers that the properties of a system near a phase transition depend only on a few general features, such as the number of dimensions (c in the toy model), the symmetry (q in the toy model), and the range of the interactions [23].
C. GUT scale prediction
In the same manner, one can calculate the GUT scale. To this aim, noting the mentioned role of SU(5) in the GUT, we need to replace the $SU_2(q^2)$ group with $SU_5(q^2)$ as the finite version of SU(5). Following the same steps, the equality of the orders suggests

$|SU_5(q^2)| = |\text{Fischer-Griess monster}|$ (7)

which yields [3]

$q^{10}(q^2-1)(q^3+1)(q^4-1)(q^5+1) \cong 8.08 \times 10^{53}$ (8)

Solving for q provides 24 roots, of which the only real positive one is $q_{GUT} \cong 176.25$. Now, the dimensionless scale can be calculated as

$m_{GUT} = \frac{2 \times 24}{176.25} \cong 0.2723$ (9)

Then, the dimensional scale becomes

$m_{GUT} \cong 0.2723 \times 2.435 \times 10^{18}\ \mathrm{GeV} \cong 6.64 \times 10^{17}\ \mathrm{GeV}$ (10)

This scale is comparable to the customary anticipation of $10^{16}$ GeV for the GUT scale [24], and is of the order of the GUT scale reported in [25] for an E8 hidden sector in the supersymmetry context.
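The same calculation can be scripted; the helper below solves the quoted $SU_5(q^2)$ order equation by bracketed root finding (the bracket endpoints are arbitrary choices wide enough to contain the root).

```python
from scipy.optimize import brentq

def su5_order(q):
    """Order function of SU_5(q^2) as quoted in the text."""
    return q**10 * (q**2 - 1) * (q**3 + 1) * (q**4 - 1) * (q**5 + 1)

def toy_scale(order, c, m_planck_red=2.435e18):
    """Solve |SU_5(q^2)| = order for q and return (q, mass 2c/q in GeV)."""
    q = brentq(lambda x: su5_order(x) - order, 1.1, 1e4)
    return q, 2 * c / q * m_planck_red

q_gut, m_gut = toy_scale(8.08e53, c=24)   # q ~ 176.25, m ~ 6.6e17 GeV
print(q_gut, m_gut)
```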
D. Planck scale prediction
Assuming a symmetry group $SU_N(q^2)$ in the toy model, the mass scale is a function of c, q, and N. The largest power of q in the order function of $SU_N(q^2)$ is $N^2 - 1$ [3]. So, when the order of a candidate is large, the $q^{N^2-1}$ term dominates the order function, meaning that $q^{N^2-1} \cong \mathrm{order}$ and therefore $m \cong 2c \cdot (\mathrm{order})^{-1/(N^2-1)}$. When a candidate is selected, c and the order are known; N, however, depends on the toy model. Indeed, N = 5 is the maximum N for which $N^2 - 1$ reaches the number of DOF of the candidate. Thus, the toy model delicately suggests that the calculated GUT scale ($6.64 \times 10^{17}$ GeV) is the highest scale that the Fischer-Griess monster symmetry can produce. Consequently, to reach a higher scale (the Planck scale), a smaller symmetry is required.
On the other hand, it is known that a drastic change happens at the GUT epoch, and that the nature of the system before and after such a transformation is fundamentally different [4], meaning that different underlying structures should govern the pre- and post-GUT realms. This makes sense physically: a young universe with unified interactions should be structurally simpler than the evolved older one, and a simpler structure should be required to describe it mathematically. Accordingly, to find such a simpler and smaller structure, let us return to the "first natural candidate" and the "second natural candidate". As computed next, applying the properties of each of them to the toy model generates a scale of order $10^{19}$ GeV, remarkably close to the Planck scale (i.e., $1.22 \times 10^{19}$ GeV).
As discussed earlier, just like in the GUT epoch, in the Planck epoch, whatever the actual order function is, its largest power of q is 24. Therefore, let us consider the same $SU_5(q^2)$ order function (one can select other alternatives whose largest power of q is 24 in their order function and obtain almost the same scales). Thus, for the Co0 group,

$q^{10}(q^2-1)(q^3+1)(q^4-1)(q^5+1) = 8.31 \times 10^{18}$ (11)

Solving for q gives the real positive value $q \cong 6.148$. Therefore,

$m \cong \frac{2 \times 24}{6.148} \cong 7.81$ (12)

The dimensional scale becomes

$m \cong 7.81 \times 2.435 \times 10^{18}\ \mathrm{GeV} \cong 1.9 \times 10^{19}\ \mathrm{GeV}$ (13)

For the Weyl group of E8,

$q^{10}(q^2-1)(q^3+1)(q^4-1)(q^5+1) = 6.96 \times 10^{8}$ (14)

Solving for q gives the real positive value $q \cong 2.35$. Therefore, with c = 8 for this case (the E8 lattice has 8 DOF),

$m \cong \frac{2 \times 8}{2.35} \cong 6.81$ (15)

The dimensional scale becomes

$m \cong 6.81 \times 2.435 \times 10^{18}\ \mathrm{GeV} \cong 1.7 \times 10^{19}\ \mathrm{GeV}$ (16)

It is known that before and after a phase transition with spontaneous symmetry breaking, the number of DOF should be conserved [2]. Similarly, regardless of the type of transition at the GUT epoch, one can also expect conservation of the number of DOF there. Hence, Co0 seems the more natural alternative, first because it preserves the 24 DOF of the post-GUT period. The second reason is that the Fischer-Griess monster group has the genes of Co0 in its DNA, making an evolution from Co0 conceivable. In sum, in this case, the game is played on a 24-DOF Leech lattice playground, and the less complex Co0 emerges into the larger, more complex structure of the Fischer-Griess monster. However, the E8 case is also imaginable: the GUT transition could be so fundamental that it even changed the number of DOF of the system.
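Assuming the same $SU_5$ order function, with c = 24 read off the Leech lattice for Co0 and c = 8 read off the E8 lattice for its Weyl group (the latter is an assumption, since the text fixes c for this case only implicitly), a self-contained numerical check might look like:

```python
from scipy.optimize import brentq

su5 = lambda q: q**10 * (q**2 - 1) * (q**3 + 1) * (q**4 - 1) * (q**5 + 1)
mp_red = 2.435e18                                     # reduced Planck mass in GeV (assumed)

for name, order, c in [("Co0", 8.31e18, 24), ("Weyl(E8)", 6.96e8, 8)]:
    q = brentq(lambda x, o=order: su5(x) - o, 1.1, 1e4)
    print(name, round(q, 3), 2 * c / q * mp_red)      # ~6.148 -> ~1.9e19; ~2.35 -> ~1.7e19
```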
IV. DISCUSSION
The results reported in this letter suggest that a Fischer-Griess monster structure corresponding to the post-GUT age, and a Co0 (or Weyl group of E8) structure corresponding to the pre-GUT era, are conceivable. As this is a reporting letter, conclusive judgments about them are avoided. Here, we elaborate on some points that were postponed to this section.
• A couple of mathematical points need to be clarified. First, for a field $F_q$ to be unique, q must be a power of a prime [3]. The closest prime power to the real root found above is $965094959^2$. Second, some authors denote the Steinberg groups by ${}^2A_n(q^2)$, while others denote them by ${}^2A_n(q)$. This is because two fields are involved: a quadratic one of order $q^2$, and its fixed field of order q. In particular, if $F_q$ is a (unique up to isomorphism) finite field of size q, there is a unique quadratic separable extension of $F_q$, and the extension field is a finite field of order $q^2$ [3].
• We have mentioned that the FLM CFT model has the same specifications as the "most natural candidate". While such a match is impressive, the appearance of a CFT is not a surprise, because statistical and thermodynamic systems are often conformally invariant at the critical points of their phase transitions [1].
• Besides the FLM model, two seminal studies should be mentioned in recognition of the physical importance of the Fischer-Griess monster group. First, in [26], the FLM CFT model is proposed as a possible holographic dual of pure gravity in AdS3. Second, in [27], it is shown that the FLM CFT is indeed an exceptional CFT at c = 24 DOF. In particular, the results of [27] suggest the existence of 71 holomorphic CFTs at c = 24, among which the FLM model is the only one that does not contain a Kac-Moody subalgebra.
• The reduced Planck mass, as the characteristic mass of the Planck units, is used to dimensionalize the dimensionless mass. The reason is that the reduced Planck mass $\sqrt{\hbar c/(8\pi G)}$ is more fundamental than the Planck mass $\sqrt{\hbar c/G}$, since the coupling of gravity is also manifested in it. In fact, the difference is the $8\pi$ factor that comes from the Einstein field equation $G_{\mu\nu} = 8\pi G_N T_{\mu\nu}$, and it is believed that the scale at which quantum gravity becomes relevant is the reduced Planck scale [28][29].
• As the toy model introduced a highly dense collection of points (elements) on the unit circle, it is informative to also address the mechanism of phase transitions of the Lee-Yang type. According to the Lee-Yang theory, in the infinite-size limit of a finite-size system, a phase transition can be triggered when the complex zeros of the partition function become numerous and condense along a certain arc. In particular, in the original Ising model considered by Lee and Yang, with the change of variable $\rho = e^{\pi z}$, the Lee-Yang theorem implies that all complex zeros $\rho$ lie on the unit circle [30].
• Such a phase transition, which in this letter is also related to the Higgs field and electroweak symmetry breaking, offers deep connections to mathematics (especially number theory and group theory). It is also informative to mention the number-theoretic phase transition with spontaneous symmetry breaking studied in [31]. Finally, as a tempting subject for future study, let us point to a possible connection between the partition function introduced in [31] and the above-mentioned Lee-Yang theorem on the complex zeros of the partition function. Here, the model proposed in this letter (and particularly the Fischer-Griess monster group) could serve as a connecting link.
[2] M. E. Peskin and D. V. Schroeder, An Introduction to Quantum Field Theory.
"year": 2023,
"sha1": "fa3027c9b6a13eb489fbc15be1c5663de2a756cb",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "fa3027c9b6a13eb489fbc15be1c5663de2a756cb",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Does Bitcoin provide a hedge to Islamic stock markets pre- and during the COVID-19 outbreak? A comparative analysis with gold
This paper applies the DCC-FIGARCH model to investigate the role of Bitcoin as a hedge and safe haven for Islamic stock markets, in comparison with gold. We use daily data for the period January 2011–May 2020, which covers the recent COVID-19 pandemic. Empirical results show that the dynamic correlation between Bitcoin and Islamic stock markets is low and usually negative during major economic and political events, suggesting that Bitcoin qualifies as a safe haven against Islamic stock market downturns. Extending our analysis to portfolio management, the findings reveal that the diversification benefits of Bitcoin are stable most of the time and increase significantly during turbulent periods. Thereby, adding Bitcoin to a portfolio of Islamic stocks reduces portfolio risk. Finally, as regards the COVID-19 outbreak, we find that the hedging strategy involving Bitcoin leads to a higher cost during the crisis. These results provide substantial recommendations for Islamic investors and portfolio managers.
Introduction
Since the emergence of cryptocurrencies in 2009, investors, professionals, and academics have raised the issue of their ability to hedge traditional assets. In addition, the last decade has witnessed several crises, including the recent COVID-19 outbreak. This pandemic has raised uncertainty, and thus investment risk, on international financial markets. In these circumstances, investors seek to reduce the risk of their investments and to accomplish optimal portfolio diversification by involving new financial assets such as Bitcoin. This is due to the failure of gold to preserve its traditional role as a strong safe haven in the post-Global Financial Crisis period. Bekiros et al. (2017) and Shahzad et al. (2019) attribute the weakened hedging ability of gold to the accelerating financialization of the commodity market and to the reaction of gold prices to the various events of recent years.
Despite the diversity of cryptocurrencies, Bitcoin is arguably the most widely used cryptocurrency in the world. Compared to other cryptocurrencies, Bitcoin has retained first place for ten years, accounting for 51% of the capitalization of all crypto-assets. In addition, the Bitcoin market has experienced swift growth and substantial development over the last decade. More precisely, its capitalization increased significantly between 2014 Q3 and 2019 Q3, going from around USD 30 million to almost USD 20 billion, with a growth rate of almost 1000%. Fig. 1 illustrates the evolution of the prices of Bitcoin and gold (in logarithmic scale) between 2011 and 2020. Prices are plotted in logarithmic scale in order to visualize periods of extreme increase in the two markets. Some divergences appear between the two assets: the highest value of gold prices is observed during 2012, while Bitcoin prices reach their maximum value during 2018. This difference may hide dissimilar hedging opportunities for stock markets. Indeed, the hedging opportunity between assets depends on the extent and the sign of the correlation between them. The divergences in the price dynamics of gold and Bitcoin lead to differences in their dynamic correlations with Islamic stock indices, which can in turn produce disparities in their ability to hedge Islamic stocks.
Therefore, several previous studies have attempted to compare the roles that the two assets can play as a hedge and safe haven for various commodity and financial assets. Klein et al. (2018) compare the volatility, correlation, and portfolio performance of Bitcoin and gold for the period July 2011-December 2017. They conclude that the correlations of Bitcoin with the other markets are most of the time opposite to those of gold. They also find that Bitcoin's volatility dynamics share some aspects with gold, but that Bitcoin cannot serve as a safe haven during market downturns, which is the principal feature of gold. Finally, they affirm that Bitcoin cannot be considered a new gold. Shahzad et al. (2019) test whether Bitcoin exhibits the safe haven property for extreme stock market movements and compare this property to that of gold and a general commodity index. They use data for several stock market indices of developed and emerging economies over the period July 2010-February 2018. The main conclusion of the study is that the three assets (Bitcoin/gold/commodity index) display an overall weak safe haven property for most markets. Further, using a rolling window analysis, their findings show that the safe haven feature is time-varying and differs across the considered stock markets. Even for Islamic stock markets, the role that Bitcoin can play as a hedge and/or safe haven is not clear. For instance, Mensi et al. (2020) investigate the co-movements between Bitcoin and some world and regional Islamic stock indices, as well as Sukuk markets, using a wavelet-based approach. The noteworthy finding of the paper is that the dynamic correlation and the benefits of portfolio diversification between the considered assets vary across time and frequencies. More precisely, the co-movement between Bitcoin and Islamic stock markets is stronger and in the same direction at low frequencies, indicating that the gains from diversification are less important for long-run investments than for short-run investments. However, the opposite directions of co-movement found at high frequencies suggest that relevant hedging benefits could be achieved in the short term through diversification between Bitcoin and Islamic equity assets.
More recently, some research has shifted to the safe haven properties of Bitcoin during the COVID-19 outbreak. These studies have attempted to verify the superiority of Bitcoin over gold in terms of portfolio diversification. This is due to the high volatility observed in international stock markets during the pandemic, which led to a substantial increase in investment risk. Consequently, investors lean towards alternative assets such as Bitcoin to reduce the risk of their portfolios. Mariana et al. (2021) suggest that Bitcoin exhibits short-term safe haven features before and during the pandemic, despite being more volatile than gold and the S&P 500. Pho et al. (2021) reveal that the choice of diversifier depends on the investor's degree of risk aversion. More precisely, they conclude that for China, gold is the preferred portfolio diversifier for risk-averse investors, whereas Bitcoin is preferred by risk-seeking investors.
This paper studies the role of Bitcoin as a hedge and/or safe haven for Islamic stock markets while comparing it with gold. We thus proceed with a number of research issues. First, we examine the stylized facts of Bitcoin returns in order to specify the best fitting model. Second, we explore the dynamic and nonlinear co-movement of Bitcoin with Islamic equity indices, comparing the results with gold price dynamics. Third, we pursue managerial implications regarding portfolio designs and hedging strategies. Note that the economic and theoretical logic of this aim is developed for an international investor who seeks a successful investment strategy, leaving aside the question of why an Islamic investor would prefer a speculative asset for profit and hedging.
The economic and theoretical nexus between cryptocurrencies and Islamic stock markets has recently been documented by Narayan et al. (2019), Mensi et al. (2020) and Rehman et al. (2020). Mensi et al. (2020) suggest that this relationship is more indirect than direct. In fact, Bitcoin influences the monetary system and overall stock markets through three channels, namely monetary aggregates, foreign exchange rates, and inflation. According to the authors, the adoption of Bitcoin can replace conventional money or inherit one or more of the roles of money. This phenomenon leads to a reduction in the circulation of money and, consequently, to the demise of the quantity theory of money. Narayan et al. (2019) point out that Bitcoin can influence marginal cost and inflation, given that Bitcoin is not only used as an investment asset but is also perceived as a store of value. Thus, an increase in value and wealth held in Bitcoin may stimulate the demand for goods and services and exert upward pressure on the prices of the inputs of these goods and services. As Bitcoin is not controlled by monetary policy, any action by a central bank or policy makers, other than regulation to preclude Bitcoin, to defuse increasing inflation would fail. Tschorsch and Scheuermann (2016), Mensi et al. (2020) and Rehman et al. (2020) suggest that Bitcoin has its own features and dynamic behavior. More precisely, Bitcoin differs from fiat currencies in terms of operating system, liquidity, maturity, valuation, underlying assets, and speculation. Rehman et al. (2020) report that Bitcoin offers an effective diversification opportunity due to its low correlation with conventional and Islamic equity markets. Some existing literature finds that the inclusion of Bitcoin in a diversified portfolio enhances its expected profit despite the volatile behavior of Bitcoin (Mariana et al., 2021). However, others highlight that even if Bitcoin can improve the return of portfolios, its high volatility makes the performance improvement hard to achieve (Smales, 2019; Chemkha et al., 2021).
The paper contributes to the existing literature on asset management in three ways. First, while a few recent papers examine the role of Bitcoin as a hedge and safe haven for conventional stock markets (Chkili, 2016; Mensi et al., 2018), none has addressed the dynamic relationship between Bitcoin and Islamic stock markets except Mensi et al. (2020) and Rehman et al. (2020). However, those studies did not clarify the role that this new asset can play as a safe haven for Islamic stock markets, since they do not consider the time-varying co-movement between the two assets. Such an analysis requires specifying the best approach incorporating all time series properties, such as volatility clustering, asymmetry, and long memory. The latter is of particular importance given that volatility shocks decay slowly, which can influence the dynamic relationship between the variables. Second, we examine the co-movement between Bitcoin and the Islamic equity indices using the FIGARCH model, which considers the long memory in the conditional volatility process, a feature omitted in the previous literature. Such a model allows us, on the one hand, to analyze the evolution of the correlation between the two markets and, on the other hand, to build the optimal portfolio design that Islamic investors can adopt in order to reduce the risk of their investments. Finally, we compare the ability of Bitcoin to hedge Islamic investments with that of gold, which is considered a sturdy haven in previous studies. A further analysis is devoted to the COVID-19 period to determine the optimal portfolio management during a stock market downturn. Indeed, the existing literature (Mensi et al., 2020; Rehman et al., 2020) is limited to the pre-crisis period and, to the best of our knowledge, ours is the first study that examines the role of Bitcoin as a hedge and safe haven for Islamic stock market risks before and during the COVID-19 outbreak. This task is essential for international investors given the instability of conventional and Islamic markets during the recent pandemic crisis, which can lead to great uncertainty about future investments.
The remainder of the paper is organized as follows. Section 2 presents a short literature review. Section 3 describes the data and empirical methodology. The empirical results are discussed in Section 4. Section 5 concludes the paper.
Literature and theoretical background
The development of Bitcoin as a new asset has raised the question of its ability to compete with gold in its traditional role as a haven. The analysis of the ability of gold to hedge stock markets dates back to the works of Baur and McDermott (2010) and Baur and Lucey (2010). These papers reveal that gold is a hedge against stock market uncertainties and can be considered a safe haven under extreme market conditions. However, recent papers provide mixed results depending on the empirical methodology and the sample countries under study. For example, Chkili (2016) and Mensi et al. (2018) suggest that gold plays an important role for conventional and Islamic market investments, providing an alternative opportunity for less risky investments. Raza et al. (2016) reveal that the impact of gold prices varies across markets: it is positive on the BRICS stock markets and negative on the stock markets of Mexico, Malaysia, Thailand, Chile and Indonesia. In the same vein, Akkoc and Civcir (2019) point out the presence of time-varying co-movement between gold and the Turkish stock market when volatility is high, suggesting that gold cannot be considered a safe haven against volatility risk.
In addition, the emergence of the COVID-19 pandemic has encouraged research on the ability of gold to maintain its traditional role as a safe haven during crises (Akhtaruzzaman et al., 2021; Salisu et al., 2021; Drake, 2021). Akhtaruzzaman et al. (2021) reveal that the dynamic correlation between gold and some equity returns is not stable over the whole COVID-19 period. Their findings show that the correlation is negative during Phase I of the pandemic, suggesting that gold acted as a safe haven asset for stock market indices during this period. However, the safe haven property of gold weakened during Phase II, when governments proceeded with fiscal and monetary stimulus packages. Salisu et al. (2021) point out the ability of the gold market to secure investments during the pandemic better than other assets.
The fast development of Islamic stock markets has encouraged researchers to verify the ability of gold as a hedging instrument for Islamic financial assets. Most studies confirm the role of gold as a safe haven during extreme Islamic stock market conditions (Chkili, 2017; Maghyereh et al., 2019). Furthermore, according to Chkili (2017), the value of gold tends to increase over time, even during financial slumps, which allows gold to play a fundamental role in managing Islamic assets. Maghyereh et al. (2019) investigate the dynamic relationship between gold, sukuk, and Islamic equity markets. They report, on the one hand, that gold hedges sukuk risk at short and medium horizons and, on the other hand, that gold serves as a perfect diversifier and hedging instrument across all investment horizons.
The emergence of Bitcoin over the last decade has attracted national as well as international investors. As a new asset, Bitcoin offers a potential diversification opportunity. In this framework, the question of whether Bitcoin can provide a hedge for other assets has been raised by several research studies, most of which verify whether Bitcoin can substitute for gold in the role of a hedge or safe haven for other financial markets. Using different GARCH models, Guesmi et al. (2018) examine the properties of Bitcoin in some financial markets. They reveal some interaction effects between Bitcoin and financial variables; moreover, the Bitcoin market allows hedging of investment risk for financial assets. Shahzad et al. (2019) compare the safe haven properties of Bitcoin and gold during extreme market conditions, verifying whether such properties are similar or different for the two assets. From a sample of developed and emerging economies, they conclude that both Bitcoin and gold can be regarded as weak safe haven assets in most cases. More precisely, the safe haven roles are not stable but vary over time and differ across markets. Thus, gold is the best safe haven during extreme downward movements of stock markets in developed economies, where Bitcoin fails to offer such a property; Chinese investors, however, can resort to Bitcoin to reduce the risk of their investments. Using the wavelet VaR approach, Bouri et al. (2020) conclude that Bitcoin is superior to gold and commodities in terms of diversification benefits. Their findings also show that Bitcoin can be classified as the most suitable safe haven, followed by gold. Mizerka et al. (2020) suggest that Bitcoin acts differently across developed and emerging countries: it serves as a strong hedge in emerging stock markets, whereas it acts as a weak hedge in developed markets.
In the same vein, Shahzad et al. (2020) find that Bitcoin and gold present substantial differences in their safe haven and hedging properties for the G7 countries. Gold exhibits an indisputable safe haven ability for most G7 stock indices. Moreover, gold is regarded as the most effective hedge for the stock indices of France, Germany, Italy, Japan, the United Kingdom, the United States, and the MSCI G7 index, whereas Bitcoin is an effective hedge for the Canadian stock index. Będowska-Sójka and Kliber (2021) compare the safe haven properties of gold, Bitcoin, and Ether, finding that only gold serves as a strong safe haven against the stock market indices. A related study looks at the safe haven properties of three cryptocurrencies, namely Bitcoin, Ethereum, and Tether, and finds that Bitcoin and Ethereum cannot act as safe havens for international equity markets, whereas Tether acts as a safe haven asset for all international indices.
The analysis has also been extended to Islamic equity markets. In this context, two papers have recently been developed (Mensi et al., 2020; Rehman et al., 2020). Mensi et al. (2020) conclude that some diversification benefits exist when Islamic investors introduce Bitcoin into their investments. They employ the wavelet technique to investigate the co-movement between the two assets and to test whether Bitcoin can act as a hedge and safe haven for major Islamic market indices. Their findings show that the co-movement is stronger and in the same direction at low frequencies, indicating that the benefits of diversification are lower for long-term investors than for short-term investors. Besides, they state that the co-movement in the opposite direction at high frequencies represents a favorable situation for investors to reduce the risk of their investments in the short run through a diversification strategy involving Bitcoin and Islamic stocks. Using the time-varying copulas approach, Rehman et al. (2020) examine the risk dependence between Bitcoin and major Islamic indices. They find that investment in Islamic equity indices serves as an effective hedge in a portfolio along with Bitcoin. Table A1 summarizes previous studies on the role of gold and Bitcoin in hedging both conventional and Islamic stock markets.
Data
The dataset consists of the daily closing prices of six Dow Jones Islamic Market (DJIM) indices (World, USA, Europe, Asia/Pacific, GCC and Developed), Bitcoin, and gold for the period January 2011 to May 2020. The considered stock markets allow us to verify how Bitcoin and gold can serve as a hedge and safe haven for various global and subregional Islamic equity indices. Following Bouri et al. (2017) and Mensi et al. (2019), we use the CoinDesk price index (https://www.coindesk.com) as a measure of Bitcoin prices. The gold and Islamic stock market indices are extracted from Datastream. We calculate the daily return as the difference between the natural logarithms of two consecutive prices (Mensi et al., 2020; Guesmi et al., 2018; Chkili et al., 2014): $r_t = \log P_t - \log P_{t-1}$. Table 1 reports some descriptive statistics and preliminary tests. As shown, Bitcoin exhibits the highest mean return (39.28%) while the DJIM Developed market has the lowest mean return (0.26%). The Bitcoin market is also the most volatile market (as measured by the standard deviation), followed by the DJI Europe market, while the DJI World market appears the most stable. The Jarque-Bera test statistics are significant in all cases, rejecting the null hypothesis of normality for all series. The Ljung-Box and ARCH test statistics are all significant at 1%, indicating the presence of serial correlation and ARCH effects and confirming the use of GARCH models.
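As an illustration of this preprocessing step, the short sketch below computes log returns and the normality, serial correlation, and ARCH diagnostics from a hypothetical CSV of closing prices (the file name and column labels are placeholders, not the authors' actual data files):

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.diagnostic import acorr_ljungbox, het_arch

# Hypothetical CSV of daily closing prices, columns such as "BTC", "Gold", "DJIM_World"
prices = pd.read_csv("prices.csv", index_col=0, parse_dates=True)
returns = np.log(prices).diff().dropna()             # r_t = log P_t - log P_{t-1}

for col in returns:
    r = returns[col].values
    jb_stat, jb_p = stats.jarque_bera(r)             # normality test
    lb_p = acorr_ljungbox(r, lags=[20])["lb_pvalue"].iloc[0]   # serial correlation
    arch_stat, arch_p, _, _ = het_arch(r)            # ARCH effect (LM test)
    print(col, jb_p, lb_p, arch_p)
```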
Panel B presents the results of unit root tests. From the ADF and PP tests, we reject the null hypothesis of a unit root, given that all calculated statistics are lower than the critical values. This result is confirmed by the two structural-break unit root tests of Perron (2006) and Lee and Strazicich (2003). We can then conclude that all the return series are stationary and can be used for the empirical analysis.
Panel C reports the pairwise correlations between Bitcoin, gold, and each Islamic stock market index. All the unconditional correlations are low. This result motivates us, on the one hand, to analyze the role that Bitcoin can play as a hedge and safe haven for Islamic investments and, on the other hand, to compare its characteristics with gold. In other words, we verify whether Bitcoin can substitute for gold in its traditional role as an effective hedge and safe haven. Fig. 2 displays the evolution of prices for Bitcoin, gold, and the six considered Islamic equity indices. Bitcoin prices are stable until 2016. From 2017, they experience a bullish phase, reaching their highest value in 2018. Thereafter, prices remain unstable until the end of the reporting period. Gold shows a first period of expansion between 2012 and 2014; prices then decrease to reach a minimum value in 2014 and afterwards increase gradually from 2015. The Islamic markets experience a continuously increasing trend with some slight swings within the study period. Fig. 3 plots the return series. We observe that periods of high (low) volatility tend to be followed by periods of high (low) volatility. This characteristic corresponds to volatility clustering and justifies the use of GARCH models for an appropriate description of the return volatility dynamics.
Empirical methodology
The main objective is to verify whether Bitcoin can serve as a hedge and/or safe haven for Islamic stock markets while comparing its dynamics to gold. More precisely, we inspect whether Bitcoin can play the role of a safe haven under extreme stock market conditions. Such an analysis requires an investigation of the time-varying relationship between Bitcoin, gold, and Islamic markets using a suitable model. For this purpose, we employ the multivariate DCC-FIGARCH model, which incorporates the long memory feature in the volatility dynamics, a feature omitted in the related literature. This framework is based on the Dynamic Conditional Correlation (DCC) process of Tse and Tsui (2002) augmented by the FIGARCH volatility model developed by Baillie et al. (1996).
Let $y_t$ be a vector of return series. The conditional mean equation is defined as follows:

$y_t = \mu + \theta y_{t-1} + \varepsilon_t, \qquad \varepsilon_t = H_t^{1/2} z_t$

where $y_t$ is the vector of returns on gold, Bitcoin, and Islamic stock markets, $\theta$ is the vector of estimated coefficients, $\varepsilon_t$ is the vector of error terms, $z_t$ is an i.i.d. standardized innovation vector, and $H_t$ is the conditional variance-covariance matrix. Following Tse and Tsui (2002), this matrix is defined as:

$H_t = D_t R_t D_t$

where $R_t$ is the $(N \times N)$ symmetric matrix of conditional correlations and $D_t$ denotes the $(N \times N)$ diagonal matrix of conditional standard deviations:

$D_t = \mathrm{diag}\big(h_{1,t}^{1/2}, \ldots, h_{N,t}^{1/2}\big)$

Each conditional variance $h_t$ is assumed to follow a univariate FIGARCH process:

$h_t = \frac{\omega}{1 - \beta(L)} + \left[1 - \frac{\phi(L)(1-L)^d}{1 - \beta(L)}\right] \varepsilon_t^2$

where $d$ is the fractional differencing parameter, which should satisfy the stationarity condition $0 \le d \le 1$ to guarantee the existence of the variance, $(1-L)^d$ is the fractional differencing operator, and $\beta(L)$ and $\phi(L)$ are polynomials in the lag operator of orders p and q, respectively.
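As a minimal numerical sketch of this variance recursion, assuming a FIGARCH(1, d, 1) specification and a finite truncation of its ARCH(∞) expansion (the parameter values in the usage comment are illustrative, not the paper's estimates):

```python
import numpy as np

def figarch_variance(eps, omega, d, phi, beta, trunc=1000):
    """Conditional variance of a FIGARCH(1, d, 1) via a truncated ARCH(inf) expansion."""
    # Coefficients of the fractional differencing operator (1 - L)^d
    delta = np.zeros(trunc + 1)
    delta[0] = 1.0
    for k in range(1, trunc + 1):
        delta[k] = delta[k - 1] * (k - 1 - d) / k
    # psi(L) = (1 - phi L)(1 - L)^d, then g(L) = psi(L) / (1 - beta L)
    psi = delta.copy()
    psi[1:] -= phi * delta[:-1]
    g = np.zeros(trunc + 1)
    g[0] = psi[0]
    for k in range(1, trunc + 1):
        g[k] = psi[k] + beta * g[k - 1]
    lam = -g[1:]                       # lambda(L) = 1 - g(L); lambda_0 = 0
    base = omega / (1.0 - beta)
    e2 = eps ** 2
    h = np.full(len(eps), base)
    for t in range(1, len(eps)):
        k = min(t, trunc)
        h[t] = base + lam[:k] @ e2[t - 1 :: -1][:k]
    return h

# Illustrative parameters only (not estimates from the paper):
# h = figarch_variance(r_btc, omega=0.05, d=0.4, phi=0.2, beta=0.5)
```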
The matrix $R_t$ is specified as:

$R_t = Q_t^{*-1/2} Q_t Q_t^{*-1/2}$

where $Q_t$ is a symmetric positive definite conditional variance-covariance matrix of the standardized errors:

$Q_t = (1 - \theta_1 - \theta_2)\bar{Q} + \theta_1 z_{t-1} z_{t-1}' + \theta_2 Q_{t-1}$

and $Q_t^*$ is a diagonal matrix containing the diagonal elements of $Q_t$. Note that $\bar{Q}$ is the unconditional covariance of the standardized errors of the univariate FIGARCH models, determined as $\bar{Q} = E[z_t z_t']$. Finally, the dynamic conditional correlation between markets i and j is expressed as follows:

$\rho_{ij,t} = \frac{q_{ij,t}}{\sqrt{q_{ii,t}\, q_{jj,t}}}$
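A compact sketch of the correlation recursion above, assuming standardized residuals z from the fitted univariate models and illustrative smoothing parameters $\theta_1$ and $\theta_2$:

```python
import numpy as np

def dcc_correlations(z, theta1, theta2):
    """Dynamic conditional correlation matrices from standardized residuals z (T x N)."""
    T, N = z.shape
    Q_bar = z.T @ z / T                              # unconditional covariance of z
    Q = Q_bar.copy()
    R = np.empty((T, N, N))
    for t in range(T):
        if t > 0:
            Q = ((1 - theta1 - theta2) * Q_bar
                 + theta1 * np.outer(z[t - 1], z[t - 1])
                 + theta2 * Q)
        s = np.sqrt(np.diag(Q))
        R[t] = Q / np.outer(s, s)                    # rho_ij = q_ij / sqrt(q_ii q_jj)
    return R

# Example with illustrative parameters: R = dcc_correlations(z, theta1=0.02, theta2=0.97)
```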
Long memory results
The choice of the appropriate model is based on its ability to capture the stylized facts of the time series, such as volatility clustering, persistence, and long memory. The latter has become, in recent years, an important feature of the conditional variance process of both commodity and financial time series. Given that, we start our analysis by testing for the presence of the long memory property in the considered markets using two tests, namely the GPH and Robinson tests. As suggested by several previous studies (Mabrouk and Saadi, 2012; Chkili et al., 2012; Mensi et al., 2014; Mabrouk, 2016), these tests should be applied to squared returns and absolute returns. Table 2 reports the calculated values of the two tests for various bandwidths. From the reported results, we observe that the statistics of the two tests are significant at the 1% level for all considered markets and all selected bandwidths. We thus reject the null hypothesis of no long-range memory in favor of the alternative hypothesis of the presence of long memory in the variance process of the commodity and financial markets. Similar results are highlighted by Chkili et al. (2014) and Aloui and Mabrouk (2010) for commodity markets and by El Mehdi and Mghaieth (2017) for Islamic stock markets. Moreover, Mensi et al. (2019) point out the existence of long memory in both the mean and the volatility of the cryptocurrency market. We therefore choose the multivariate DCC-FIGARCH model to incorporate this attribute in our analysis and describe the time-varying relationship between Bitcoin, gold, and Islamic stock indices.
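For illustration, a minimal sketch of the GPH log-periodogram regression as it might be applied to squared returns; the $n^{0.5}$ bandwidth is a common convention and not necessarily the set used in Table 2:

```python
import numpy as np

def gph_d(x, power=0.5):
    """GPH log-periodogram estimate of the long memory parameter d."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    m = int(n ** power)                                   # bandwidth
    w = 2 * np.pi * np.arange(1, m + 1) / n               # Fourier frequencies
    I = np.abs(np.fft.fft(x)[1 : m + 1]) ** 2 / (2 * np.pi * n)   # periodogram
    X = np.column_stack([np.ones(m), -np.log(4 * np.sin(w / 2) ** 2)])
    coef, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return coef[1]                                        # estimate of d

# Example: d_hat = gph_d(r_btc ** 2)
```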
FIGARCH estimation results

Table 3 presents the estimation results for the relationship between Bitcoin, gold, and the Islamic stock markets. Referring to the Akaike and Hannan-Quinn information criteria, an AR(1) specification is appropriate for the conditional mean equation. As shown, the AR(1) coefficients are positive and statistically significant for most Islamic equity indices, suggesting that returns are affected by their own past values.
With respect to the conditional variance equation, we observe that the ARCH parameters are significant for Bitcoin, gold, and the Asia and GCC Islamic market indices, indicating that the volatility of these markets is affected by its own past shocks. The GARCH parameters are significant for all markets, indicating that conditional volatility depends significantly on its past values. In addition, the fractional differencing parameters (d) obtained from the FIGARCH model are highly significant in all cases. This finding confirms the results of the long memory tests and proves the presence of the long memory property in the conditional volatility dynamics of all examined markets. Our findings are thus in line with several papers in the previous literature. For instance, El Mehdi and Mghaieth (2017) apply GARCH-family models to some conventional and Islamic stock markets and conclude that a long memory process is present in all the stock market volatility dynamics. Chkili (2021) attempts to specify the appropriate model for the volatility dynamics of Bitcoin, employing two types of frameworks, namely long memory and Markov switching models; his empirical results show strong evidence of a long memory process in the conditional variance. The average conditional correlations ($\rho_{31}$) between Bitcoin and the Islamic markets are negative but non-significant, indicating that Bitcoin can offer a new investment opportunity when combined with Islamic stocks. Such an analysis requires a thorough investigation of the evolution of the dynamic conditional correlation between markets. Fig. 4 traces the time-varying conditional volatility for all considered markets. Some common high-volatility phases can be detected for the Islamic stock markets. More precisely, all Islamic stock markets except the GCC exhibit a first period of high volatility between 2011 and 2012, which coincides with the Euro-zone debt crisis (EZDC). This crisis appeared in Greece and affected most conventional and Islamic markets worldwide. The plots also show that the conditional variance of the Islamic markets increased significantly between 2015 and 2016, in response to the oil price shock. Finally, the drop in financial markets during the last year of the sample, followed by a turbulent period, can be explained by the effects of the COVID-19 pandemic. Several studies conclude that during the current pandemic, the indices of developed and emerging markets plummeted, followed by a spike in volatility (Izzeldin et al., 2021; Uddin et al., 2021; Liu et al., 2021; Hasan et al., 2021). Hasan et al. (2021) evaluate the effect of the COVID-19 outbreak on both conventional and Islamic equity market indices. Specifically, they examine differences in the volatility dynamics of Islamic and conventional markets during the COVID-19 crisis. Their findings point out that both markets switched to the volatile regime once COVID-19 was declared a global pandemic at the beginning of March 2020.
Time-varying dynamic correlation
Fig. 5 shows the evolution of the time-varying correlation between Bitcoin and each Islamic stock market. We also report the dynamic relationship between gold and the different Islamic markets for comparative purposes. The dynamic correlation for the Bitcoin/DJIM pairs is low in all plots. More interestingly, the correlations switch between positive and negative values over the period under investigation. We can see a significant drop in the conditional correlation during 2011 and over the period 2013-2014. During these two episodes, the correlation is often negative, attesting that the two markets evolve in opposite directions. Not surprisingly, most conventional and Islamic stock markets experienced a fall in prices in response to the EZDC of 2011-2012 and the falling crude oil prices of 2014. This last event was transmitted to the financial markets given the intense financialization of commodity markets, in particular the oil market. Some previous studies point out similar results (Creti et al., 2013; Zhang et al., 2017; Ding et al., 2021). Fig. 5 also depicts a low correlation phase spanning 2015 to 2017. This period is characterized by a decrease in the Islamic stock indices; the most important decrease is observed for the GCC Islamic market, which reaches its minimum value for the period under analysis. Indeed, the roughly 43% fall of the SSE Composite Index in mid-2015, in just over two months, was followed by a turbulent phase in the Chinese stock market, a devaluation of the Yuan, a slowing of Chinese economic growth, and a fall in oil prices. As a result, investors sold shares globally, which in turn culminated in a decline in the value of conventional and Islamic stock prices worldwide.
The low or negative correlations between the two markets, observed during periods of falling DJIM indices, suggest that Bitcoin can serve as a safe haven during extreme Islamic stock market conditions. Comparing this finding with the gold results, we notice that the dynamic correlation for the gold/Islamic market pairs is usually higher than that for the Bitcoin/Islamic market pairs. This suggests that gold has ceded its role as a haven to new assets such as Bitcoin, which appears to have been a strong refuge during the past decade. We also notice that, unlike Bitcoin, the price of gold decreased during the period 2015-2016, which explains the positive correlation between gold and the Islamic market indices. This result is in line with Bouri et al. (2020), who reveal that the overall connectedness between gold/commodities/Bitcoin and the stock markets is not very strong over time; moreover, their wavelet coherency results show that Bitcoin displays the weakest dependence with equity markets. Shahzad et al. (2019), for their part, find that Bitcoin, gold, and commodities exhibit a weak safe haven property for most stock market indices.
The last period of low correlation emerged at the end of 2019 and coincided with the COVID-19 downturn, which led to a severe global economic recession. The impact of the COVID-19 pandemic became more visible at the beginning of 2020 with the stock market crashes. This finding confirms the role that Bitcoin can play during severe economic, political, and social events. More precisely, Bitcoin can serve as a safe haven during COVID-19 stock market shocks. Huang et al. (2021) assert that Bitcoin can act as a safe haven in Europe, the UK, and the US during the pandemic, suggesting that investors in these countries should hedge their portfolios using Bitcoin. Mariana et al. (2021) find that the daily returns of Bitcoin and Ethereum are negatively correlated with the S&P 500 return during the COVID-19 pandemic, suggesting their safe haven features during extreme stock market downturns. Our results differ, however, from those of Shehzad et al. (2021), who state that for most of the COVID-19 period gold investments proved more beneficial than Bitcoin.
Portfolio designs
It is worth noting that the ambition of international investors is to achieve the optimal portfolio allocation. In this respect, financial analysts and portfolio managers attempt to specify the suitable model describing the volatility dynamics of financial and commodity markets and then to propose the optimal portfolio design. Using the estimation results of our long memory model, we determine the optimal weights of a portfolio composed of Bitcoin and Islamic stocks. The objective is to build the portfolio that minimizes risk without lowering expected returns. Following Kroner and Ng (1998), Chkili et al. (2014) and Chkili (2016), the optimal holding weight of Bitcoin in a one-dollar Bitcoin/Islamic stocks portfolio is calculated as follows:

$w_t^{SB} = \frac{h_t^S - h_t^{SB}}{h_t^B - 2 h_t^{SB} + h_t^S}$

Following Kroner and Ng (1998), the optimal weight of Bitcoin is constrained between 0 and 1, so that:

$w_t^{SB} = \begin{cases} 0, & \text{if } w_t^{SB} < 0 \\ w_t^{SB}, & \text{if } 0 \le w_t^{SB} \le 1 \\ 1, & \text{if } w_t^{SB} > 1 \end{cases}$

where $h_t^S$ and $h_t^B$ designate, respectively, the conditional variances of the Islamic stock market and Bitcoin derived from the estimation of the FIGARCH model, and $h_t^{SB}$ measures the covariance between the Islamic stock market index and Bitcoin at time t. The weight of the Islamic asset in the one-dollar Bitcoin/Islamic stocks portfolio is the difference between one and the weight of Bitcoin, $(1 - w_t^{SB})$. Finally, we note that this methodology is applied likewise to the portfolio composed of gold and Islamic stocks.
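A direct numerical transcription of this weight rule, assuming the conditional variance and covariance series from the fitted model:

```python
import numpy as np

def optimal_weight(h_s, h_b, h_sb):
    """Kroner-Ng weight of Bitcoin in a one-dollar Bitcoin/Islamic-stocks portfolio."""
    w = (h_s - h_sb) / (h_b - 2.0 * h_sb + h_s)
    return np.clip(w, 0.0, 1.0)          # enforce 0 <= w <= 1

# Example: w_btc = optimal_weight(h_stock, h_btc, h_cov); w_stock = 1.0 - w_btc
```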
The summary statistics of the optimal weights for both Bitcoin and gold are reported in Table 4. Starting with Bitcoin, the average value of the portfolio weight ranges between 0.0357 for the Islamic World index and 0.0528 for the Islamic Europe index. This indicates that in a one-dollar Bitcoin/Islamic stocks portfolio, 3.57 cents should be allocated to Bitcoin while the remaining 96.43 cents should be invested in the World Islamic market index. For the GCC Islamic market, the optimal investment weights are 4.46% and 95.54%, respectively. More precisely, to reduce the risk of their portfolio without lowering expected returns, investors in the GCC countries should invest 4.46% of their wealth in the Bitcoin market, while 95.54% should be devoted to Islamic equities. Quite similar results are uncovered by Rehman et al. (2020), who suggest that investors achieve diversification benefits by investing in Bitcoin along with Islamic stocks.
Panel B exhibits the statistics of the optimal weights between gold and the Islamic stock indices. For all considered Islamic markets, the mean values are high compared to Bitcoin. The lowest value is observed for the DJIM World/gold pair, equal to 0.3803, while the highest weight is detected for the Islamic Europe index, reaching 0.5262. This suggests that, to minimize risk without diminishing the profitability of their investments, Islamic investors should invest between 38.03% and 52.62% of their budget in the gold market and the rest in Islamic equity markets. Fig. 6 plots the evolution of the optimal weights of Bitcoin and gold over time. The conditional weight is not stable but varies across periods and markets. More interestingly, the optimal weight for the Bitcoin/Islamic equity pairs is lower and more stable than that for the gold/Islamic equity pairs, suggesting that the diversification opportunities offered by the two assets are entirely different and distinguishable.
For most Islamic stock markets, we witness a common evolution of the Bitcoin weight over the period under investigation. The diversification benefits of Bitcoin are stable most of the time and increase significantly during certain periods. Notably, the periods of increasing values coincide with the major economic and political events that occurred during the period under study. This suggests that Bitcoin provides an outstanding diversification opportunity for Islamic portfolio investments during turbulent periods. For instance, the share of Bitcoin in most considered portfolios increases to 20% during the period 2015-2016, which experienced an important fall in the Islamic market indices. Overall, this finding confirms the role that this crypto-asset can play in terms of diversification and highlights the emergence of a new hedging instrument. Several previous studies (Guesmi et al., 2018; Kliber et al., 2019; Symitsi and Chalvatzis, 2019) reach the same conclusion, indicating the existence of substantial benefits from the inclusion of Bitcoin in the investment portfolio.
Hedge and risk reduction
Some previous studies suggest that Bitcoin can be used to hedge financial portfolios and to reduce risk; the question is then to determine the appropriate positions in the different markets. For this purpose, we calculate the optimal hedge ratio $\beta_t^{SB}$ using the FIGARCH estimation results. A short position (selling) in the Islamic stock market index should be hedged by a long position (buying) of $\beta_t^{SB}$ dollars in the Bitcoin market. The optimal hedge ratio, developed by Kroner and Sultan (1993), is given by:

$\beta_t^{SB} = \frac{h_t^{SB}}{h_t^B}$

This methodology has also been applied in the previous literature to check the hedging ability among several commodities and financial assets (Mensi et al., 2018; Chkili et al., 2014; Hammoudeh et al., 2010) and between Bitcoin and different asset types (Guesmi et al., 2018). Given our objective of comparison, we calculate the optimal hedge ratio for gold in the same way.
To refine our analysis, we also calculate the hedging effectiveness (HE) index, an indicator of the performance of the applied hedging strategy. Practically, this index measures the gain or loss in the variance of the hedged portfolio compared to the unhedged portfolio:

$HE = \frac{Var_{unhedged} - Var_{hedged}}{Var_{unhedged}}$

Note that $Var_{unhedged}$ represents the risk of the portfolio composed only of Islamic stocks, and $Var_{hedged}$ measures the variance of the Islamic stocks/Bitcoin or Islamic stocks/gold portfolio. The HE index rises with the effectiveness of the hedging strategy. From a portfolio perspective, an increase in the HE index means a reduction in portfolio risk and indicates that the hedging strategy is effective.
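The hedge ratio and the HE index above translate into a few lines of code; the series passed in are assumed to come from the fitted model and the data, respectively:

```python
import numpy as np

def hedge_ratio(h_sb, h_b):
    """Kroner-Sultan ratio: dollars of Bitcoin bought per dollar of Islamic stocks sold short."""
    return h_sb / h_b

def hedging_effectiveness(r_stock, r_hedge, beta):
    """Variance reduction of the hedged portfolio relative to the unhedged one."""
    hedged = r_stock - beta * r_hedge
    return 1.0 - np.var(hedged) / np.var(r_stock)

# Example: beta = hedge_ratio(h_cov, h_btc); he = hedging_effectiveness(r_djim, r_btc, beta)
```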
The results for the two assets are displayed in Table 5, which reports the hedge ratio, the HE index, and the variances of the hedged and unhedged portfolios. Regarding the Bitcoin results, the hedge ratio is negative for all Islamic market indices except the Islamic Europe equity index, varying between −0.0048 for the Asia-Pacific Islamic market and 0.0043 for the Europe Islamic market. The negative values of the hedge ratio indicate that hedging benefits can be achieved by taking the same position (short or long) in both considered assets (Bitcoin and Islamic stocks). For example, a USD 1,000 short position in the Asia-Pacific Islamic market should be hedged by another short position of USD 4.8 in the Bitcoin market. Conversely, the positive values suggest that investors should take two inverse positions: a USD 1,000 short position in the Europe Islamic market is hedged by taking a long position of USD 4.3 in the Bitcoin market. The ability of Bitcoin to hedge financial assets is supported by the previous literature (Guesmi et al., 2018; Shahzad et al., 2019; Garcia-Jorcano and Benito, 2020; Wang et al., 2021).
Looking at the gold results, we notice that all the hedge ratios are higher than those of Bitcoin. This suggests that Islamic investors need a much smaller budget in Bitcoin to hedge Islamic equity investments. More precisely, the average values of the hedge ratio range between 0.0036 for the GCC Islamic market and 0.0792 for the Europe Islamic market. The positive average value of 0.0792 for the DJI Europe/gold pair indicates that a USD 1,000 short position in Europe Islamic stocks can be hedged by a long position of USD 79.2 in the gold market; this amount is reduced to USD 3.6 for the GCC Islamic market. The evolution over time of the hedge ratio between Bitcoin (gold) and each Islamic market is displayed in Fig. 7 (Fig. 8). The most important observation is that the hedge ratio is not stable but varies over time. Finally, we calculate the hedging effectiveness from the variances of the hedged (Bitcoin or gold plus Islamic stocks) and unhedged (Islamic stocks only) portfolios. A high HE value means an effective hedging strategy. Comparing the HE of the two assets, the results show that Bitcoin exhibits higher HE values than gold for all Islamic equity markets except the Europe Islamic market. This suggests that Bitcoin possesses more substantial hedging benefits than the traditional hedge asset. This result confirms previous research arguing that Bitcoin is isolated from financial assets and hence offers risk diversification possibilities for portfolio investment (Corbet et al., 2018; Bouri et al., 2020).
The COVID-19 outbreak effects
The COVID-19 pandemic has increased the uncertainty and volatility of stock markets worldwide. Consequently, the risk associated with conventional and Islamic stock investments has risen significantly. In these circumstances, international investors seek to reach the optimal portfolio management and to determine effective hedging strategies. This period therefore requires a further analysis of the potential diversification opportunities involving Islamic stocks and Bitcoin or gold. We also verify the ability of Bitcoin and gold to hedge Islamic investments during this pandemic turmoil. To this end, we compute the optimal portfolio weights, the hedge ratio, and the hedging effectiveness index. Finally, to check the success of the hedging instruments, we compare the hedging effectiveness during the crisis with that of the full sample through the change in the index, ΔHE; positive values of ΔHE suggest a beneficial hedging strategy during the COVID-19 crisis. The period of crisis spans from January 1, 2020 to May 20, 2020, which covers the first wave of the pandemic. We choose January 1, 2020 as the start of the COVID-19 period in accordance with the coronavirus timeline: the first cases were reported to the World Health Organization (WHO) by China on December 31, 2019, and on January 1, 2020 the WHO moved onto an emergency footing for dealing with a major disease outbreak (https://www.weforum.org/agenda/2020/04/coronavirus-spread-covid19-pandemic-timeline-milestones/). The results are reported in Table 6. As shown, the optimal weights for Bitcoin are lower than those for gold in all cases, indicating that Islamic investors require less Bitcoin than gold to reduce the risk of their portfolio during the COVID-19 outbreak. In addition, the optimal weight values are slightly higher during the pandemic, suggesting that investors should invest more in the Bitcoin or gold market during the recent crisis. For example, regarding the Bitcoin findings, the optimal portfolio weight jumps from 0.0357 (DJIM World/Bitcoin) and 0.0496 (DJIM USA/Bitcoin) for the whole period to 0.0457 and 0.0709 during the COVID-19 outbreak, respectively. This finding confirms the role that Bitcoin can play for Islamic investments. As regards the hedge ratio, the table displays an increase in absolute value compared to the full sample; the hedging strategy thus leads to higher costs during the crisis. However, the hedging strategy shows greater performance during the COVID-19 outbreak, as the ΔHE values are positive in all cases except DJIM Asia-Pacific. This result confirms that Bitcoin provides a beneficial reduction of investment risk during the turmoil period. Wang et al. (2021), Rubbaniy et al. (2021) and Kumar (2020) find evidence of benefits from diversification with Bitcoin during the COVID-19 pandemic.
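As an illustration of this comparison, the fragment below reuses the hedging_effectiveness function and the returns DataFrame from the earlier sketches; the column names and the beta values are placeholders rather than the paper's estimates:

```python
# Placeholder hedge ratios (in practice, averages of the conditional ratios per period)
beta_full, beta_covid = 0.004, 0.006

covid = returns.loc["2020-01-01":"2020-05-20"]       # first wave of the pandemic
he_full = hedging_effectiveness(returns["DJIM_World"], returns["BTC"], beta_full)
he_covid = hedging_effectiveness(covid["DJIM_World"], covid["BTC"], beta_covid)
delta_he = he_covid - he_full          # positive => hedge more beneficial during COVID-19
```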
Discussion and conclusion
The instability of financial markets owing to different financial crises has amplified the uncertainty of international investments. This uncertainty persisted over the last two years with the rising trade conflict between the USA and China and the economic turmoil associated with the COVID-19 pandemic. Given these circumstances, it becomes challenging for investors to accomplish diversification benefits. Even the diversification opportunity offered by commodity markets has weakened with the process of financialization of these markets. All these events encouraged international investors to look for new financial markets, such as Bitcoin, that could provide a potential hedging opportunity.
Fig. 8. Timeline of the dynamic hedge ratio for gold obtained from the estimation of the FIGARCH model.
² We choose the start of the COVID-19 period as January 1, 2020, in accordance with the coronavirus timeline. The first case of COVID-19 was reported to the World Health Organisation (WHO) by China on December 31, 2019; thus, on January 1, 2020, the WHO went on an emergency footing for dealing with a major disease outbreak. https://www.weforum.org/agenda/2020/04/coronavirus-spread-covid19-pandemic-timeline-milestones/
In this study, we have verified whether Bitcoin can serve as a hedge for Islamic equity markets. Notably, we have investigated the benefits of hedging through diversification between Bitcoin and Islamic equity indices. Such an analysis allows us to compare Bitcoin's hedging ability to that of gold. We have used the DCC-FIGARCH model, which accounts for several stylized facts of return time series, such as volatility clustering, conditional heavy tails, and long memory. The findings show that all the time series exhibit the long memory property in the conditional volatility dynamics of the considered equity and commodity markets. More interestingly, volatility shocks persist and do not decay swiftly. The analysis of the dynamic correlation shows some divergences between the pair Bitcoin/Islamic index and the pair gold/Islamic index. Interestingly, the correlation is time-varying, and its evolution varies across markets. In addition, this correlation is lower for the pair Bitcoin/Islamic index than for the pair gold/Islamic index. The connection of Bitcoin to Islamic markets also varies across periods and is most of the time close to zero or negative. This suggests that some diversification benefits exist between the two assets.
The results provide important implications for Islamic investors and portfolio managers. In fact, risk management requires a better understanding of the type of links between assets. Thus, the composition of the optimal portfolio depends on market conditions. On the whole, Islamic investors should invest more in the Bitcoin market during bearish market conditions than during normal periods. This allows investors to achieve the benefits of diversification by computing optimal weights for each asset, which lowers portfolio risk without decreasing expected returns. Urom et al. (2020) point out that international investors should consider Bitcoin as part of their portfolio diversification. Bouri et al. (2017) also underline that Bitcoin can serve as an effective diversifier for most financial and commodity assets.
Our results for the COVID-19 period show that Bitcoin offers a better diversification opportunity to reduce the risks of major Islamic equity markets. More interestingly, investors should place a lower amount in the Bitcoin market during COVID-19 compared to the pre-COVID-19 period in order to minimize their portfolio risk. Fund managers and financial analysts can rely on these findings to help investors reach a well-diversified portfolio during the COVID-19 crisis and achieve an optimal risk reduction.
Finally, our results highlight the ability of Bitcoin to hedge Islamic market risk. However, its hedging property differs according to market conditions. Mensi et al. (2020) and Sensoy (2016) argue that the systematic risk of Islamic equity markets varies over time, which makes the hedge ratio non-constant. Nevertheless, on average, a low amount of Bitcoin provides high hedging effectiveness for Islamic investments. This amount has substantially increased during the recent COVID-19 crisis, suggesting that hedging strategies require higher costs during crisis periods.
Declaration of Competing Interest
We declare that there are no conflicts of interest.
Note: w is the optimal portfolio weight. HR and HE are the optimal hedge ratio and the hedging effectiveness index, respectively. ΔHE measures the difference between the HE values during the COVID-19 pandemic and the HE values in the full sample period. | 2021-10-11T13:07:38.806Z | 2021-10-01T00:00:00.000 | {
"year": 2021,
"sha1": "6fcdfbb0253f81cff7a749c994f038650596034d",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.resourpol.2021.102407",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "697de7f7933a70bc159fce192b537c75ca17eb76",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
} |
203994641 | pes2o/s2orc | v3-fos-license | Several Problems with Froude-Number Based Scale Modeling of Fires in Small Compartments
The Froude-number based reduced-scale modeling is a technique commonly used to investigate the flow of heat and mass in building fires. The root of the method is the thermodynamic model of a flow in a compartment and several non-dimensional flow numbers based on the proportionalities of the Navier-Stokes and heat transfer equations. The ratio of inertial forces to buoyancy forces, known as the Froude number, plays a pivotal role within these proportionalities. This paper is an attempt to define the range of credible scale modeling using the Froude number. We verify the credibility of the modeling with a small fire (approximately 150 kW) in a small compartment, comparing data from physical tests (scales 1:1 and 1:4) and numerical models (Fire Dynamics Simulator, scales 1:1, 1:2, 1:4, 1:10, 1:20, and 1:50). The scope of the research covers a wide range of fires, with an observed change of the flow from turbulent to laminar. The results show that the applicability of Froude-number reduced-scale modeling has limitations related to the scale. Therefore, it should be applied with care, following a sensitivity analysis. We propose a method for sensitivity analysis using Computational Fluid Dynamics (CFD) modeling.
Introduction
Computer models continue to improve in their representation of fire phenomena [1], and there is a growing number of applications of computer software to complex fire problems [2]. Computer models are continually gaining accuracy, as is the scope of the modeled fire phenomena. However, due to limitations in the representation of many fire-related phenomena, computational costs, and uncertainties related to numerical investigation, physical experimentation in reduced-scale modeling is still a popular tool. Since the first studies on the smoke control of factories performed by Thomas et al. [3], reduced-scale research has allowed for the development of better techniques and solutions to prevent fires and limit their consequences to occupants, buildings, and the environment. Such studies can also be used in forensic applications [4,5] and building design.
There are two governing non-dimensional numbers in low-Mach fire-related flows, namely the Reynolds (Re) and Froude (Fr) numbers. However, Fr scaling conflicts with Re preservation, as the two yield different required velocities in the scale model. There are analyses [6] that prove that in fire-related flows one should preserve the Froude number, not the Reynolds number, when scaling is applied. In this case, so-called partial scaling is adopted, which favors the Fr over the Re [7]. Quintiere et al. mentioned that maintaining a sufficient height of the scale model compartment (>0.3 m) may be adequate to maintain flow turbulence [5,8].
The approach of Fr scaling must be used with care, despite its popularity. Spalding emphasized that partial scaling is "an art" and not a science, due to problems in properly relating the behavior of the full-scale system to that of the model [9]. Among these problems, the most common are with scaling the chemistry of combustion and the flow turbulence [9]. Nevertheless, Froude-number scale modeling of fires has proven useful in understanding various fire phenomena and their consequences [3,10-12].
In this research, we focus on some issues identified with the practical use of Froude-number scale modeling of a small fire (HRR of approximately 150 kW) in a small-sized compartment (approximately 10 × 10 × 4 m³). We have observed a discrepancy in the temperature measurements between the full- and reduced-scale experiments that matches observations previously reported in the literature. Furthermore, the experiment was recreated with the use of CFD modeling in a larger number of scales, as an attempt to confirm the experimental observations, investigate the turbulent flow structure in more detail, and form generalised conclusions. The temperature of smoke measured in the reduced-scale analyses was lower than in the full-scale experiment, despite conforming to the Froude-number scaling principles. The discrepancy was bigger when the scale used was smaller. Moreover, in very small scales we have observed problems with maintaining a sufficiently high Reynolds number, and the fire behavior changed from turbulent to laminar, leading to significant errors in the modeled smoke temperatures and invalidating the scientific value of these experiments.
Froude-Number Based Fire Modeling
In the early 1960s, mass fires were considered an extremely complex phenomenon that presented formidable scaling problems. A body of literature existed (reviewed by Williams [9]) related to the scaling of specific, simple types of fires. It was in the review paper by Williams [9] where the foundation for future scaling of fires was laid, based on the combination of several different models built on the principal laws, mathematical operations, and empirical observations. The paper introduced 29 dimensionless groups that describe the scaling principles for fire phenomena. In this modeling technique, some of the phenomena are simplified, and others are omitted on purpose. However, the overall image of the reduced-scale fire may be useful and provide a reasonably good picture of the characteristics of the full-scale system that is of interest. The concept of reduced-scale modeling was summarized by Quintiere in reference [13] and more recently in references [5,14].
In the technique of scale modeling using dimensionless numbers of similarity, the starting point is the Buckingham theorem, also known as the "Pi Theorem" [14]. The theorem states that if an equation involves a certain number n of physical variables a_i, and these variables are expressible in terms of k independent fundamental physical quantities, then the original expression is equivalent to an equation involving a set of p = n - k dimensionless parameters Π constructed from the original variables. If all dimensionless parameters Π are identical, the phenomena will be identical despite different values of the variables a_i. The statement above is based on the principle of dimensional homogeneity: if an equation expresses a proper relation between variables in physics, the process shall be dimensionally homogeneous, and its terms will have the same dimensions [15].
Thus, a dimensionally homogeneous equation containing n mathematical variables, F(a_1, a_2, ..., a_n) = 0, can be recorded in the dimensionless form Φ(Π_1, Π_2, ..., Π_{n-k}) = 0. It was already identified by Williams [9] that creating a model that conserves all 29 dimensionless groups would be formidable. Thus, subsets were created, of which the basic subset is presented below. The geometrical scale of the model is the ratio of the characteristic dimensions of the scaled-down model (index "m") to the corresponding dimensions in the full scale (index "f"):
s = x_m / x_f.
Froude-number similarity (4), the convection and radiation groups, and the gas-phase heat release group were grouped together with the fuel gasification and loading groups and the ambient atmospheric condition groups [9].
These were the foundations for future developments of the scaling principles. The heat release rate in small scale is determined by the scaling of the Zukoski number (5), also known as dimensionless group Π_9, i.e., the dimensionless heat release rate Q* = Q̇ / (ρ∞ c_p T∞ √g D^(5/2)).
In modern fire modeling, it may be assumed that two fires are similar if the following requirements are met [8]:
• the Froude numbers of both fires are equal;
• all geometrical features related to the fire and the environment are scaled with the same scale;
• the fires occur in well-ventilated conditions, i.e., the combustion is not significantly influenced by the reduced scale, and the combustion efficiency in full and reduced scale is similar;
• the flow in the buoyant plume is turbulent.
If the Froude similarity criterion is met and the Reynolds criterion is satisfied, the other relevant parameters that describe the flow of mass and heat in the compartment will scale as summarized in reference [16] and listed below.
• The heat release rate of the fire [kW]: Q̇_m = Q̇_f · s^(5/2) (Equation (6));
• the flow time [s] and velocity [m/s]: t_m = t_f · s^(1/2), u_m = u_f · s^(1/2) (Equation (7));
• the gas temperature [°C], which is preserved between the scales: T_m = T_f.
For a more thorough introduction to the Fr scaling of fires, the reader is kindly referred to reference [14]. The Fr scaling approach can potentially be used in the investigation of other phenomena relevant to fire safety, where buoyancy is the dominant force in the model. An example is flammable gas dispersion; a recent example of full-scale research can be found in reference [17]. However, in the scaling of such phenomena a different set of relations may apply, as the change of density in Equation (4) will not be related to heat release, as it is in Equations (5) and (6).
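The scaling relations listed above can be collected into a small helper. The sketch below maps full-scale quantities to a reduced-scale model under Froude scaling (lengths ~ s, HRR ~ s^(5/2), time and velocity ~ s^(1/2), temperatures preserved); the function name and interface are ours.

```python
import math

def froude_scale(s, hrr_kw=None, time_s=None, velocity_ms=None, length_m=None):
    # map full-scale quantities to the reduced scale s = x_m / x_f
    out = {}
    if length_m is not None:
        out["length_m"] = length_m * s            # lengths ~ s
    if hrr_kw is not None:
        out["hrr_kw"] = hrr_kw * s ** 2.5         # HRR ~ s**(5/2)
    if time_s is not None:
        out["time_s"] = time_s * math.sqrt(s)     # time ~ s**(1/2)
    if velocity_ms is not None:
        out["velocity_ms"] = velocity_ms * math.sqrt(s)
    return out                                    # temperatures are unchanged

# the ~150 kW fire of this study in a 1:4 model:
print(froude_scale(1 / 4, hrr_kw=150.0, time_s=300.0))
# {'hrr_kw': 4.6875, 'time_s': 150.0} - a ~4.7 kW fire, run for half the time
```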
The abovementioned set of similarities has been used widely in fire safety science, across a vast array of scales. Fire sizes in the reviewed literature range from 0.31 kW to 100 MW, and the scales from full scale (1:1, [18]) to 1:48 [19]. A summary of the reviewed reduced-scale research is shown in Table 1.
Experimental Setup
The Froude-number based scaling technique was used in a series of full-scale (1:1) and reduced-scale (1:4) physical experiments on the development of a hot smoke layer produced by a small fire in a small, non-ventilated compartment. n-Heptane was used as the fuel in all of the experiments, and the size of the fire was scaled by reducing the size of the pan and the amount of fuel. A secondary goal of the experiment was to verify the concept of a novel optical densitometer, which is thoroughly described in reference [8].
The physical experiments in scale 1:1 were performed in the smoke detector testing chamber of the Building Research Institute (Figures 2a and 3). The dimensions of the chamber are 9.60 × 9.80 × 4.00 m. The chamber was sealed, but not airtight (the leakages are not known to the authors, although no smoke leakage into the building was observed). All doors and ventilation openings of the chamber were sealed during the experiments. The walls of the chamber are built of gypsum plasterboard (2 × 12 mm) on an aluminum structure.
Two experiments, each consisting of three repeats, were performed. In the first series, a fuel tray with dimensions of 0.33 × 0.33 m² was used (further referred to as series A), and in the second series, a fuel tray with dimensions of 0.50 × 0.50 m² was used (series B). In both full-scale experiments, the fuel was 1 l of n-Heptane. The Heat Release Rate (HRR) was determined through mass-loss-rate measurements of the fuel tray, with an assumed Heat of Combustion value Hc = 44,400 kJ/kg. No ventilation was used in the experiment. The reduced-scale experiment was performed in a scaled-down (1:4) model of the test chamber (Figures 2b and 3), with dimensions of 2.40 × 2.45 × 1.00 m³. All physical features of the compartment were scaled down accordingly in the reduced-scale experiments, except the fuel tray. The size of the fuel tray was first determined through geometrical scaling and then refined based on mass-loss measurements of the combustion of n-Heptane, so that the similarity of (5) and (9) is explicitly met. The correction to the size of the tray was within 10% of the geometrical size. The experiment overview is given in Table 2. Each of the experiments was repeated three times, and the conclusions are formed based on the averaged values.
The sketch presenting both of the testing chambers and the location of the measurement equipment is shown in Figure 3. Each of the testing chambers was equipped with:
- 4 type K 1 mm Ni-Cr thermocouples (T1-T4) placed in the corners of the compartment, used to measure the average temperature of the smoke layer; the extended uncertainty of the measurement was estimated at 0.3 °C;
- 1 type K 1 mm Ni-Cr thermocouple placed in the plume centerline, underneath the ceiling (T5), used to measure the peak temperature of the smoke upon entering the smoke layer; the extended uncertainty of the measurement was estimated at 0.3 °C;
- a load cell with a resolution of 0.01 g, used to measure the mass-loss rate of fuel in the tray in both experiments, with an extended uncertainty of 0.02 g.
Figure 3. The overview of the test chambers, with the localization of the measuring equipment (T1-T4 - four thermocouples placed symmetrically in the four corners of the compartment, T5 - thermocouple placed above the fuel tray).
Numerical Simulations
The fire experiments (scale 1:1 and 1:4) described in Section 3.1 were recreated with the Fire Dynamics Simulator (FDS) code (version 6.7.0). FDS is a Computational Fluid Dynamics (CFD) code developed for modeling of low Mach number fluid flows, with an emphasis on smoke and heat transport as a result of fires. The FDS software can analyse the transport of heat and combustion products of a fire, as well as heat transfer through radiation and convection and conduction (1-D Fourier's equation). The radiative heat transfer is solved using the finite volume method (FVM). The program solves the Navier-Stokes equations using a second-order finite differences scheme. The modeling covers relatively low-velocity flows (Ma < 0.1) and incompressible gases, i.e., gases for which density variation at low flow velocities can be disregarded. The FDS code primarily employs the Large Eddy Simulation (LES) method for turbulence modeling, with the dynamic Smagorinsky model being the default model for the turbulent viscosity. Log-law wall functions are used to characterize the flow at the boundaries. The LES modeling assumes that only the largest flow scales (large eddies) are calculated directly from the transport equations, while below the so-called Smagorinsky filter length the turbulence is solved with a sub-scale model. The size of the Smagorinsky filter is determined as the cube root of the smallest cell in the model. The fire is introduced as a single-step, mixing-controlled combustion reaction, for which the molar fractions of products are defined, as well as the yields of relevant products (especially CO, CO2, and soot). The space discretization is performed with a uniform Cartesian type mesh, without any wall mesh boundary layers. For more information relevant to the FDS model used in this analysis and the complete definition of the underlying physical models, the reader is kindly referred to reference [34], and for information on the validation of the model to reference [35]. The scope of the application of FDS makes it especially fit to simulate the heat and mass transfer phenomena in small fire experiments, where the buoyant forces within the smoke plume can be considered the dominant forces.
In order to determine the scope of Fr scaling applicability to small fires, the experiments were recreated numerically in six geometrical scales. Figure 4 shows the 1:1 scale 3-D computer model used in the numerical calculations. To prepare the CFD model geometry, we used the GUI software PyroSim, developed by Thunderhead Engineering [36]. The dimensions of the model (full scale) were 9.6 m × 9.6 m × 4.2 m. 2-D slices were used to illustrate the temperature, velocity and pressure fields. Moreover, a matrix of point measurement devices was prepared to measure the flow velocity and the temperature distribution, forming an array in two planes, Y = 5.0 and Z = 4.0, with a 0.2 m interval. In the reduced-scale simulations, the locations and the interval were scaled following the geometrical scale of the model. A total of 12 computer simulations were performed, i.e., six simulations per series (following the same notation as the full-scale experiments described in Section 3.1). Following the geometrical scale, the HRR of the test fire was reduced (Equation (6)), as well as the simulation time (Equation (7)). Before starting the calculations, the time step for recording the results in each simulation was determined so that, regardless of the length of the calculations, 200 records are obtained. This allows us to compare the results of the numerical analyses in dimensionless time, regardless of the scale of the analysis. However, for clarity, all results are shown in scaled-up time, as in the scale 1:1 (Equation (7)). Table 4 shows the scaled-down calculation time and the record time step for each simulation.
Mesh Sensitivity Analysis for CFD Simulations
The mesh size may be an important factor in CFD analyses. In this case, a regular cubic grid was used in the CFD simulations. The grid size must be small enough to model the turbulent effects properly. For the LES method used, a spatial resolution of 1/16 < R < 1/4 is recommended [37]. This spatial resolution is defined as R = ∆/D*, where ∆ is the element size and D* is the characteristic diameter of the plume, obtained from the Froude number and calculated as [38]
D* = (Q̇ / (ρ∞ c_p T∞ √g))^(2/5).
It can be noted that for the averaged temperatures, the differences between the 0.10 m and 0.05 m meshes are below 10 °C and smaller than the differences between the 0.20 m and 0.10 m meshes. Moreover, the differences in the maximum centerline plume temperatures between 0.10 m and 0.05 m are much lower and significantly smaller than between 0.20 m and 0.10 m. Based on these findings, the 5 cm mesh was chosen for further simulations. For the simulations in reduced scale, the spatial resolution R = ∆/D* was maintained, and the mesh size was scaled accordingly.
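A quick way to reproduce this resolution check is sketched below: it evaluates D* for the approximately 150 kW fire studied here, using standard ambient-air properties, and prints the resolution D*/∆ for the three candidate cell sizes (the recommended range corresponds to D*/∆ between 4 and 16). The ambient values are our assumptions, so the numbers are illustrative.

```python
def characteristic_fire_diameter(q_kw, rho=1.204, cp=1.005, t_inf=293.0,
                                 g=9.81):
    # D* = (Q / (rho * cp * T_inf * sqrt(g)))**(2/5); Q in kW and
    # cp in kJ/(kg K), so the units cancel consistently
    return (q_kw / (rho * cp * t_inf * g ** 0.5)) ** 0.4

d_star = characteristic_fire_diameter(150.0)   # ~0.45 m for this fire
for dx in (0.20, 0.10, 0.05):                  # candidate cell sizes [m]
    print(f"dx={dx:.2f} m  D*/dx={d_star / dx:.1f}")  # aim for 4 to 16
```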
Experimental Research
The HRR value for each experiment was plotted based on the moving-average (5 s averaging time) mass-loss-rate measurements and the assumed Heat of Combustion of n-Heptane, as shown in Figure 6. In the case of Series A (1:4 scale), the representation of the HRR in the early part of the experiment was not detailed enough, although the duration of the combustion was close to the predictions of the analytical model. The average HRR value was similar between the reduced- and full-scale experiments (less than 5% difference after scaling up the time).
The mean temperature measured (mean value of thermocouples T1-T4) showed a good fit in terms of the shape of the temperature profile and the peak value timing (see Figures 7 and 8). Average temperatures in reduced scale were up to 30% lower than in the full scale (Figure 8a,b), which can be considered significant and is in line with previous findings in the literature [4,39]. The plume centerline temperatures were in good fit (Figure 8c,d), with the exception of the maximum temperatures during the HRR peak. The temperatures in the middle of the compartment during the cooling-down period were also in good agreement. It should be noted that the temperatures expected in reduced and full scale should be similar if the conditions for Froude-number similarity are met. To identify the source of this discrepancy, a series of numerical simulations was performed (Section 3.2), and the results are shown in Section 4.2. Additional results, related to the smoke obscuration measurements performed during the abovementioned experiments, can be found in reference [8]. These will not be discussed here, as the result analysis of the experiment was performed primarily through the analysis of the smoke plume and smoke layer temperatures.
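A comparison of this kind requires putting the reduced-scale record on the full-scale time axis before differencing. The sketch below shows one way to do this; the arrays are placeholders for the logged thermocouple data, and the function name is ours.

```python
import numpy as np

def scaled_up_difference(t_m, temp_m, t_f, temp_f, s):
    # rescale a reduced-scale temperature history to full-scale time,
    # t = t_m / sqrt(s), and return the relative difference on the
    # full-scale time grid; temperatures are compared directly, since
    # Froude similarity predicts equal temperatures in both scales
    t_rescaled = np.asarray(t_m) / np.sqrt(s)
    temp_on_grid = np.interp(t_f, t_rescaled, temp_m)
    return (temp_on_grid - np.asarray(temp_f)) / np.asarray(temp_f)

# illustrative call for a 1:4 model (placeholder heating curves)
t_f = np.linspace(0.0, 300.0, 100)
temp_f = 20.0 + 60.0 * t_f / 300.0
t_m = np.linspace(0.0, 150.0, 100)
temp_m = 20.0 + 45.0 * t_m / 150.0      # a model heating ~25% more slowly
print(scaled_up_difference(t_m, temp_m, t_f, temp_f, 0.25).min())
```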
Numerical Modeling
The mean temperatures obtained in the experimental research were compared with the 1:1 and 1:4 CFD model predictions (see Figure 9). In the first 150 s of the simulations, the results for the 1:1 scale were in good agreement between the CFD and the scale model. However, further into the experiment, some discrepancies occurred. The temperatures in the numerical analysis were lower than in the experiment, with a maximum observed difference of 14 °C (series B, 1:1 scale). For scale 1:4, the CFD gave a higher temperature than the scale model in the initial part of the experiment (which may be related to the lower initial HRR in the physical experiment, see Figure 7). However, in the latter part of the simulation, the agreement was very good (less than 10% difference). The differences in measured temperatures between the CFD models in scales 1:1 and 1:4 were slightly smaller than those observed between scales 1:1 and 1:4 in the physical experiment.
Figure 9. Comparison of the mean layer temperature in the experiments and CFD: (a) Series A; (b) Series B.
Figure 10 presents the mean smoke layer temperature (measured 20 cm underneath the ceiling) for all scales investigated in the CFD analyses. As observed in the experimental part, also in the numerical calculations the mean layer temperature decreases with the scale. For scales 1:1, 1:2 and 1:4, the temperature differences are within the 10% (1:1 vs. 1:2) and 20% (1:1 vs. 1:4) limits. The difference between the 1:1 and 1:10 scales is significant, not only in the value of the temperature but also in the temperature increment. For scale 1:1 the temperature increases for the duration of the fire, while for scale 1:10 it stabilises around the 75th second of the experiment. Similar differences were also observed for the maximum centerline plume temperature, as shown in Figure 11.
Modeling the Temperature of the Smoke
The data from the CFD analyses was post-processed in the form of spatio-temporal graphs. The x-axis presents the position on a line drawn through the middle of the room at a height of 4.00 m above the floor (with the middle of the line being the plume), Figure 12. The y-axis presents the time of the experiment, scaled up following Equation (7), and the colour represents the temperature, Figures 13 and 14. The measurement resolution is 20 cm. From the plots, the differences in the modeled temperature in different scales are evident. For series A, the differences between scales 1:1 and 1:2 are smaller than 10%, and for series B, the results for scales 1:1, 1:2, and 1:4 show a similar degree of agreement. In the case of scales 1:10, 1:20, and 1:50, the results can be considered wrong and to have no scientific value. This is interesting, as scales 1:10-1:25 are commonly used in research (see Table 1). It should be noted that, in the case of this study, the modeled fire can be considered "small", and the findings may not be relevant to the modeling of large fires (such as fires of tunnel vehicles or large compartment fires). Nevertheless, the observed discrepancies indicate that the applicability of the Froude-number scaling theory should not be assumed for every research project, and a scale sensitivity analysis should precede such efforts.
The authors cannot identify a sole reason for the observed discrepancies, although a careful examination of the assumptions and results of the modeling indicates some possible problems with maintaining the correct flow turbulence (further discussed in Section 5.2) and with modeling the thermal inertia of the model boundaries (further discussed in Section 5.3).
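The spatio-temporal maps described above can be reproduced with a few lines of post-processing. The sketch below assumes the device readings have already been exported to a records-by-positions array; the placeholder data merely mimics a warming plume and is not from the simulations.

```python
import numpy as np
import matplotlib.pyplot as plt

# temps[i, j]: temperature at record i and position j along the line of
# devices under the ceiling; placeholder data stands in for the exported
# FDS device matrix
x = np.arange(0.0, 9.6 + 0.2, 0.2)             # devices every 20 cm
t = np.linspace(0.0, 300.0, 200)               # 200 records, scaled-up time
temps = (20.0
         + 80.0 * np.exp(-((x - 4.8) ** 2) / 4.0)[None, :]
         * (t / 300.0)[:, None])

fig, ax = plt.subplots()
mesh = ax.pcolormesh(x, t, temps, shading="auto")
ax.set_xlabel("position along the measurement line [m]")
ax.set_ylabel("scaled-up time [s]")
fig.colorbar(mesh, label="temperature [°C]")
plt.show()
```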
Authors cannot identify the sole reason for the observed discrepancies, although a careful examination of the assumptions and results of the modeling indicate some possible problems with maintaining the correct flow turbulence (further discussed in Section 5.2) and modeling the thermal inertia of model boundaries (further discussed in Section 5.3). (a) Series A, plume centreline temperature (b) Series B, plume centreline temperature Figure 11. Comparison of the centerline plume temperature measurements in numerical experiments with different scales. The time value is scaled following Equation (7).
Problems with Maintaining the Reynolds Number and Laminar Flames
In Section 2, we mentioned a rule of thumb that the flow turbulence should be maintained, which is closely related to the Reynolds number of the modeled flow. Quintiere et al. mentioned that this is usually achieved in model compartments with a height > 0.3 m [5]. A simplification in this aspect is necessary, as the conservation of the Froude and Reynolds numbers in the same model may be difficult. From the definition of the Reynolds number, Re = u·l/ν, it can be noted that scaling of the velocity or density of the fluid would invalidate the Froude similarity. Thus, if one needed to conserve the Reynolds number while following the Froude relationship, it would require scaling the kinematic viscosity of the medium, which is not practical. However, if the flow is mainly driven by buoyancy and is highly turbulent (we propose a rule-of-thumb value of Re > 10,000), a further increase of the Re number will have a limited effect on the fluid dynamics of the smoke plume or layer. In such a case, the omission of the Reynolds similarity is justified.
In practice, the flows in full-scale fires are turbulent. However, once scaled down to a small scale, the flow turbulence may be insufficient to justify the omission of the Reynolds scaling. In such a case, the buoyant plume will not mix with the surrounding air, and the entrainment will not represent the behavior of the large-scale plume. Such a problem was observed in the numerical experiments for scales 1:20 and 1:50, where laminar plumes were observed. The approximated values (for velocity averaged over 30 s) of the Re number for the plume flow are shown in Table 5. The approximated Reynolds numbers for the small scales (1:10, 1:20 and 1:50) indicate that the flow structure in these scenarios was laminar (significantly below Re = 2100). In the case of Series A, the Re number for scale 1:4 was most likely in the transition range between laminar and turbulent flow (Re = 2982), while in Series B the flow was turbulent (Re = 5562). This observation is coherent with the temperature differences between the scales observed between series A and B. For scale 1:2 in series B, which was in good agreement with the full-scale research, the Re number was above 10,000. A series of snapshots presenting the structure of the plume in the CFD simulations (coloured by temperature) is shown in Figure 15. The temperature scale for each picture was individually exaggerated to highlight the plume shape (and the flow structure), illustrating the visible differences in the plume shape between the large- and small-scale simulations.
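The scale dependence of the plume Reynolds number follows directly from the Froude relations: with velocity scaling as s^(1/2) and length as s, Re falls off as s^(3/2). The sketch below illustrates this, using illustrative full-scale velocity and plume-diameter values (not taken from Table 5) and the thresholds discussed in the text.

```python
NU_AIR = 1.5e-5  # kinematic viscosity of ambient air [m^2/s]

def plume_reynolds(u_full, d_full, s, nu=NU_AIR):
    # in a Froude-scaled model u ~ s**0.5 and the length scale ~ s,
    # so Re drops as s**1.5 relative to the full scale
    return (u_full * s ** 0.5) * (d_full * s) / nu

def flow_regime(re):
    if re < 2100:
        return "laminar"
    if re < 10000:
        return "transitional"
    return "turbulent (rule-of-thumb Re > 10,000)"

for s in (1.0, 1 / 2, 1 / 4, 1 / 10, 1 / 20, 1 / 50):
    re = plume_reynolds(u_full=2.0, d_full=0.5, s=s)  # illustrative u and d
    print(f"1:{round(1 / s):<3d} Re ~ {re:>9,.0f}  {flow_regime(re)}")
```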
The Influence of the Thermal Inertia of the Boundaries
Scaling of the thermal inertia (k·ρ·c) of the physical model boundaries is often disregarded in reduced-scale research. In short-duration experiments (shorter than a few minutes), such as the one shown in this paper, this may not have a significant impact on the measured temperatures, as the heat transfer to the boundaries will be insignificant compared to radiation. However, in the case of long experiments or very large fires, the differences in heat transfer between full and reduced scale may be important. It should be noted that in much of the research available in the literature, full-scale materials such as concrete or masonry are represented in the reduced scale with steel and glass. This is due to the ease of the model building process and the ability to observe the experiments, at the cost of completely different heat transfer at the model boundaries.
The problems with the scaling of heat transfer were thoroughly described by Quintiere [14], where dimensionless groups Π_3, Π_5, Π_6, Π_7, and Π_8 should be preserved. However, this leads to inconsistencies with Equation (11) and in the scaling of the convective heat transfer coefficient (and the Nusselt number). To cope with that, several strategies may be used to maintain partial scaling. One is to keep dimensionless groups Π_5 and Π_8 constant, to preserve wall conduction effects [14].
Additional considerations must be made for the radiative heat flux, especially for large fires, in which the radiation flux is significantly larger than the convection heat flux [14]. In some cases in which radiation is believed to dominate, Equation (14) can be ignored, while maintaining Equation (16) instead. In practice, the thermal parameters describing the wall (k, ρ, c) and the thickness of the walls (δ) can be chosen according to the scale. The sensitivity of the scale model to these values may be verified in a numerical CFD analysis, where usually these parameters can easily be defined by the user. It should be noted that the convective heat transfer will also depend on the Nusselt number, which is related to flow velocity and turbulence. This means that the considerations related to flow turbulence from Section 5.2 are even more relevant. Nevertheless, scaling of the thermal inertia and/or thickness of the model boundaries is possible, and in research where convection and conduction are important to the modeled phenomena, a sensitivity analysis should be performed.
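The practical effect of the boundary material can be estimated with the classical semi-infinite-wall result. The sketch below compares the heat absorbed per unit area by gypsum-board and steel boundaries over a short experiment; the property values and temperature step are illustrative, not taken from the paper.

```python
import math

def absorbed_heat(k, rho, c, dT, duration):
    # heat absorbed per unit area [J/m^2] by a semi-infinite wall whose
    # surface is held dT [K] above the initial temperature: the integral of
    # q(t) = dT * sqrt(k*rho*c / (pi*t)) gives 2*dT*sqrt(k*rho*c*t/pi)
    return 2.0 * dT * math.sqrt(k * rho * c * duration / math.pi)

# illustrative property values: k [W/(m K)], rho [kg/m^3], c [J/(kg K)]
walls = {"gypsum board": (0.25, 900.0, 1090.0), "steel": (45.0, 7850.0, 460.0)}
for name, (k, rho, c) in walls.items():
    q = absorbed_heat(k, rho, c, dT=50.0, duration=180.0)
    print(f"{name}: {q:,.0f} J/m^2 over a 3-minute experiment")
```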
The scaling of the thermal inertia of the boundaries, together with the introduction of heat sinks in the gas phase of a model tunnel, can be used to increase the effective length of a pseudo-tunnel model [40]. In this approach, the heat sinks simulate the effects of heat transfer along a significantly longer section of a tunnel, which may be useful for studying flow dynamics in long tunnels without the need to build excessively large reduced-scale models. However, this approach has only been tested in linear (close to 1-D) buildings and may require further validation [40].
Conclusions
This paper has shown the observed discrepancies and problems with the application of Froude-number scaling to the modeling of compartment fires. The experiments were performed within a wide range of Reynolds numbers, showing the essential role that turbulent flow has on the temperatures in the plume and the compartment. The previously mentioned rule-of-thumb value of Re = 10,000 was confirmed as sufficient to minimize the error of the method related to the flow turbulence.
In the case of small scales (1:2 and 1:4), the average temperatures measured were up to 30% lower than in the full-scale experiment; however, for most of the experiment duration this difference was up to 10% (which, in the opinion of the authors, can be considered an acceptable value). The temporal change in the temperature was well represented in small scale. These results indicate that the scaling method can be useful for the investigation of the flow of smoke in buildings. For smaller scales (1:10 and smaller), the differences in the measured temperatures were significant, and in the case of very small scales (1:20 and 1:50), the results have no scientific value due to the change of the flow from turbulent to laminar.
CFD modeling with the FDS software sufficiently represented the full- and reduced-scale experiments and was used to analyze a wide array of scaled fires. A similar approach can be used before future experiments to verify the sensitivity of the experiment to the scale and to estimate the Reynolds number of the flow. Furthermore, numerical modeling may help with investigating the effects of the materials used in the reduced-scale model on the heat transfer to the model boundaries.
As first discussed by Spalding, and mentioned by Williams [9] and Quintiere [14], reduced-scale modeling is an art that requires the user to choose which dimensionless relations are conserved and which are omitted. To maintain the high scientific value of scaled-down experiments, the user should make informed decisions and use modern tools (such as CFD modeling) to assess the model sensitivity to the changes introduced in the reduced scale. As shown in this paper, merely following the basic scaling theory is not sufficient to guarantee that the results of a reduced-scale experiment are valid. Also, the temperature should not be considered the only variable to be assessed. Other parameters, such as flow velocity, mass concentration of pollutants, or heat fluxes, may be useful for evaluating the validity of the model.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. | 2019-09-26T09:05:43.642Z | 2019-09-23T00:00:00.000 | {
"year": 2019,
"sha1": "8408fbed918c6bd08a4eab49262b232a639330a6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/12/19/3625/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "18653fd50463c129684aea6ef5c4f32fdf7812ba",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
29403979 | pes2o/s2orc | v3-fos-license | Emergence of quasi-units in the one dimensional Zhang model
We study the Zhang model of sandpile on a one dimensional chain of length $L$, where a random amount of energy is added at a randomly chosen site at each time step. We show that in spite of this randomness in the input energy, the probability distribution function of energy at a site in the steady state is sharply peaked, and the width of the peak decreases as $L^{-1/2}$ for large $L$. We discuss how the energy added at one time is distributed among different sites by topplings with time. We relate this distribution to the time-dependent probability distribution of the position of a marked grain in the one dimensional Abelian model with discrete heights. We argue that in the large $L$ limit, the variance of energy at site $x$ has a scaling form $L^{-1}g(x/L)$, where $g(\xi)$ varies as $\log(1/\xi)$ for small $\xi$, which agrees very well with the results from numerical simulations.
I. INTRODUCTION
After the pioneering work of Bak, Tang and Wiesenfeld (BTW) in 1987 [1], many different models of self-organized criticality have been studied in different contexts; for reviews see [2-5]. Of these, models in the general class known as Abelian distributed processors have been studied extensively, as they share an Abelian property that makes their theoretical study simpler [2]. The original sandpile model of Bak et al. [1], the Eulerian walkers model [8], and the Manna model [6] are all members of this class. Models which do not have the Abelian property have been studied mostly by numerical simulations. In this paper, we discuss the Zhang model [7], which does not have the Abelian property.
In the Zhang model, the amount of energy added at a randomly chosen site at each time step is not fixed, but random. In spite of this, the model in one dimension has the remarkable property that the energy at a site in the steady state has a very sharply peaked distribution, in which the width of the peak is much less than the spread in the input amount per time step, and the width decreases with increasing system size L. This behavior was noticed by Zhang using numerical simulations in one and two dimensions [7], and he called it the 'emergence of quasi-units' in the steady state of the model. He argued that for large systems, the behavior would be the same as in the discrete model. Recently, A. Fey et al. [9] proved that in one dimension the variance of energy does go to zero as the length of the chain L goes to infinity, but they did not study how fast it decreases with L.
In this paper, we study this emergence of 'quasi-units' in the one dimensional Zhang sandpile by looking at how the added energy is redistributed among different sites in the avalanche process. We show that the distribution function of the fraction of added energy at a site x′ reaching a site x after t time steps following the addition is exactly equal to the probability distribution that a marked grain in the one-dimensional height type BTW model added at site x′ reaches site x in time t. The latter problem has been studied recently [10]. We use this to show that the variance of energy asymptotically vanishes as 1/L. We also discuss the spatial dependence of the variance along the system length. In the large L limit, the variance at site x has a scaling form L^{-1} g(x/L). We determine an approximate form of the scaling function g(ξ), which agrees very well with the results of our numerical simulations.
There have been other studies of the Zhang model earlier. Blanchard et al. [11] have studied the steady state of the model, and found that the distribution of energies even for the two site problem is very complicated and has a multi-fractal character. In two dimensions, the distribution of energy seems to sharpen for larger L, but the rate of decrease of the width is very slow [12]. Most other studies have dealt with the question of whether the critical exponents of the avalanche distribution in this model are the same as in the discrete Abelian model [13,14]. A. Fey et al.'s results imply that the asymptotic behavior of the avalanche distribution in one dimension is indeed the same as in the discrete case, but the situation in higher dimensions remains unclear [15,16].
The plan of the paper is as follows. In Section II, we define the model precisely. In Section III, we show that the calculation of the way the energy added at a site is distributed among different sites by toppling is the same as the calculation of the time-dependent probability distribution of the position of a marked grain in the discrete Abelian sandpile model. This correspondence is used in Section IV to determine the qualitative dependence of the variance of the energy variable at a site on its position x and on the system size L. We propose a simple interpolation form that incorporates this dependence. We check our theoretical arguments with numerical simulations in Section V. Section VI contains a summary and concluding remarks. A detailed calculation of the solution of an equation, required in Section IV, is added as an Appendix.
II. DEFINITION AND PRELIMINARIES
We consider our model on a linear chain of size L. The sites are labelled by integers 1 to L, and a real continuous energy variable is assigned to each site. Let E(x, t) be the energy variable at site x at the end of time-step t. We define a threshold energy value E_c, the same for each site, such that sites with E(x, t) ≥ E_c are called unstable, and those with E(x, t) < E_c are called stable. Starting from a configuration where all sites are stable, the dynamics is defined as follows.
(i) The system is driven by adding a random amount of energy at the beginning of every time-step at a randomly chosen site. Let the amount of energy added at time t be Δ_t. We will assume that all Δ's are independent, identically distributed random variables, each picked randomly from the uniform interval 1 − ε ≤ Δ_t ≤ 1 + ε. Let the site of addition chosen at time t be denoted by a_t.
(ii) We make a list of all sites whose energy exceeds or equals the critical value E_c. All these sites are relaxed in parallel by topplings. In a toppling, the energy of the site is equally distributed to its two neighbors and the energy at that site is reset to zero. If there is a toppling at a boundary site, half of the energy at that site before toppling is lost.
(iii) We iterate Step (ii) until all topplings stop. This completes one time step. This is the slow driving limit, and we assume that all avalanche activity stops before the next addition event. In this limit, the model is characterized by two parameters, ε and E_c. In the limit ε = 0, with 1 < E_c ≤ 2, the model reduces to the discrete case, where the behavior is well understood [17]. For non-zero but small ε, the behavior does not depend on the precise value of E_c. In fact, starting with a recurrent configuration of the pile, and adding energy at some chosen site, we get exactly the same sequence of topplings for a range of values of E_c [9]. To be precise, for any fixed initial configuration and fixed driving sequence (of sites chosen for addition of energy), whether a site x topples at time t or not is independent of E_c, so long as we have 1 + ε < E_c ≤ 2 − 2ε. In the following, we assume for simplicity that E_c = 3/2 and 0 ≤ ε ≤ 1/4.
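To make the driven dynamics concrete, here is a minimal Python sketch of one time step of the model as defined in (i)-(iii); it is illustrative only (the function name and in-place update are not from the original), using the default values E_c = 3/2 and a small ε assumed above.

```python
import random

def zhang_step(E, eps=0.2, Ec=1.5, rng=random):
    """One driven time step of the 1D Zhang model: add a random amount of
    energy, uniform in [1 - eps, 1 + eps], at a random site, then relax all
    unstable sites (E >= Ec) in parallel until every site is stable.
    Energy sent past an open boundary is lost. E is modified in place."""
    L = len(E)
    E[rng.randrange(L)] += 1.0 + rng.uniform(-eps, eps)
    while True:
        unstable = [x for x in range(L) if E[x] >= Ec]
        if not unstable:
            break
        moved = [0.0] * L
        for x in unstable:            # parallel update: read pre-toppling values
            half = E[x] / 2.0
            if x > 0:
                moved[x - 1] += half
            if x < L - 1:
                moved[x + 1] += half  # at a boundary, this half is lost
            E[x] = 0.0
        for x in range(L):
            E[x] += moved[x]
    return E
```

Iterating `zhang_step` on an initially stable configuration and recording the energies after each step gives an estimate of the steady-state distribution P_L(E) studied in Section V.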
It was shown in [9] that in this case the stationary state has at most one site with energy E(x, t) = 0, and all other sites have energy in the range 1 − ε ≤ E(x, t) ≤ 1 + ε. The position of the empty site is equally distributed among all the lattice points. There are also some recurrent configurations in which all sites have energy E(x, t) ≥ 1 − ε. In such cases, we shall say that the site with zero energy is the site L + 1. Then, in the steady state, there is exactly one site with energy equal to 0, and the L + 1 different positions of this site are equally likely.
If E_c does not satisfy the inequality 1 + ε < E_c ≤ 2 − 2ε, this simple characterization of the steady state is no longer valid. However, our treatment can be easily extended to those cases. Since the qualitative behavior of the model is the same in all cases, we restrict ourselves to the simplest case here.
It is easy to see that the toppling rules are in general not Abelian. For example, start a two-site model in the configuration (1.6, 2.0) with E_c = 1.5. The final configuration would be (1.4, 0) or (0, 1.3), depending on whether the first or the second site is toppled initially. In our model, using the parallel update rule, the final configuration would be (1.0, 0.8). A. Fey et al. [9] have shown that, only in one dimension and for E_c > 1 + ε, the Zhang model has a restricted Abelian character, namely, that the final state does not depend on the order of topplings within an avalanche. However, topplings in two different avalanches do not commute.
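The two-site example can be checked directly; this small script (an illustration, not from the original) reproduces the three final configurations quoted above.

```python
def topple(E, x):
    """Topple site x of a two-site chain: half of its energy goes to the
    other site, the other half is lost at the open boundary."""
    half = E[x] / 2.0
    E[x] = 0.0
    E[1 - x] += half
    return E

# sequential, first site toppled first: (1.6, 2.0) -> (0, 2.8) -> (1.4, 0)
print(topple(topple([1.6, 2.0], 0), 1))   # [1.4, 0.0]
# sequential, second site toppled first: (1.6, 2.0) -> (2.6, 0) -> (0, 1.3)
print(topple(topple([1.6, 2.0], 1), 0))   # [0.0, 1.3]
# parallel update: both unstable sites topple simultaneously
E = [1.6, 2.0]
print([E[1] / 2.0, E[0] / 2.0])           # [1.0, 0.8]
```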
III. THE PROPAGATOR, AND ITS RELATION TO THE DISCRETE ABELIAN MODEL
It is useful to look at the Zhang model as a perturbation about the ε = 0 limit. For sufficiently small ε, given the site of addition and the initial configuration, the toppling sequence is independent of ε. It is also independent of the amount of added energy Δ_t, and is the same as in the model with ε = 0, which is the 1-dimensional Abelian sandpile model with integer heights (hereafter referred to simply as ASM, without further qualifiers). We decompose the energy variables as

E(x, t) = Nint[E(x, t)] + ε η(x, t),    (1)

where Nint refers to the nearest integer value. Then the integer part of the energy evolves as in the ASM. We write

Δ_t = 1 + ε u_t.    (2)

Here u_t is uniformly distributed in the interval [−1, +1]. The linearity of energy transfer in toppling implies that the evolution of the variables η(x, t) is independent of ε. Thus, η(x, t) is a linear function of the u_t; the precise function depends on the sequence of topplings that took place. These are determined by the sequence of addition sites {a_t} up to the time t, and the initial configuration C_0. These together will be called the evolution history of the system up to time t, and denoted by H_t. We assume that at the starting time t = 0, the variables η(x, t = 0) are zero for all x, and the initial configuration is a recurrent configuration C_0 of the ASM. Then, from the linearity of the toppling rules, we can write η(x, t) as a linear function of {u_{t′}} for 1 ≤ t′ ≤ t; for a given history H_t,

η(x, t) = Σ_{t′=1}^{t} G(x, t | a_{t′}, t′, H_t) u_{t′}.    (3)

This defines the matrix elements G(x, t|a_{t′}, t′, H_t). These can be understood in terms of the probability distribution of the position of a marked grain in the ASM as follows. Consider the motion of a marked grain in the one dimensional height type BTW model. We start with configuration C_0 and add grains at sites according to the sequence {a_t}. All grains are identical except the one added at time t′, which is marked. In each toppling, the marked grain jumps to one of its two neighbors with equal probability. Consider the probability that the marked grain will be found at site x after a sequence of relaxation processes at time t. We denote this probability as Prob(x, t|a_{t′}, t′, H_t). From the toppling rules in both models, it is easy to see that

G(x, t | a_{t′}, t′, H_t) = Prob(x, t | a_{t′}, t′, H_t).    (4)

Averaging over different histories H_t, we get the probability that a marked grain added at x′ = a_{t′} at time t′ is found at a position x at time t ≥ t′ in the steady state of the ASM. Denoting the latter probability by Prob_ASM(x, t|x′, t′), we have

Prob_ASM(x, t | x′, t′) = \overline{Prob(x, t | a_{t′} = x′, t′, H_t)},    (5)

where the over bar denotes averaging over different histories H_t, consistent with the specified constraints. Here, the constraint is that H_t must satisfy a_{t′} = x′. At other places, the constraints may be different, and will be specified if not clear from the context. We shall denote the variance of a random variable ξ by Var[ξ]. From the definition in Eq. (1), it is easy to show that

Var[E(x, t)] = Var[Nint E(x, t)] + ε² Var[η(x, t)].    (6)

Different u_t are independent random variables, also independent of H_t, and have zero mean. Let Var[u_t] = σ². For the case when u_t has a uniform distribution between −1 and +1, we have σ² = 1/3. Then, from Eq. (3), we get

Var[η(x, t)] = σ² Σ_{t′=1}^{t} \overline{G²(x, t | a_{t′}, t′, H_t)}.    (7)

As t → ∞, the system tends to a steady state, and the average in the right hand side of Eq. (7) becomes a function of t − t′. Also, for a given t′, all values of a_{t′} are equally likely. We define

F(x, τ) = (1/L) Σ_{x′=1}^{L} \overline{G²(x, t′ + τ | x′, t′, H_t)}.    (8)

Then, for large L, in the steady state (t large), the variance of energy at site x is 1/L + ε² Σ²(x), where

Σ²(x) = σ² Σ_{τ=0}^{∞} F(x, τ).    (9)

We define Σ̄² to be the average of Σ²(x) over x.
Evaluation of G(x, t|x′, t′, H_t) for a given history H_t, and averaging over H_t, is quite tedious for t > 1 or 2. For \overline{G}, the problem has been studied in the context of residence times of grains in sandpiles, and some exact results are known in specific cases [10]. For \overline{G²}, the calculations are much more difficult. However, some simplifications occur in the large-L limit. We discuss these in the next section.
IV. CALCULATION OF Σ 2 (x) IN LARGE-L LIMIT
In order to find the quantity F(x, τ) in Eq. (8), we have to average G²(x, t|x′, t′, H_t) over all possible histories H_t, which is quite difficult to do exactly. However, we can determine the leading behavior of F(x, τ) in the large-L limit.
We use the fact that the path of a marked grain in the ASM is a random walk [10]. Consider a particle that starts away from the boundaries, at x′ = ξL, with L large and 0 < ξ < 1. If it undergoes r(H_t) topplings between the time t′ and t = t′ + τ under some particular history H_t, then its probability distribution is approximately a Gaussian, centered at x′ with width √r. Then, we have

G(x, t′ + τ | x′, t′, H_t) ≈ (2πr)^{−1/2} exp[−(x − x′)²/(2r)].

Using this approximation for G and summing over x′, we get

F(x, τ) ≈ (1/(2L√π)) \overline{1/√(r(H_t))}.

Thus, we have to calculate the average of 1/√(r(H_t)) over different histories. Here r(H_t) was defined as the number of topplings undergone by the marked grain. Different possible trajectories of a marked grain, for a given history, do not have the same number of topplings. However, if the typical displacement of the grain is much smaller than its distance from the end, the differences between these are small, and can be neglected. There are typically O(L) topplings per grain per avalanche in the model, and a grain moves a typical distance of O(√L) in one avalanche. Then, we can approximate r(H_t) by N(x′), the number of topplings at x′.
Let the number of topplings at x′ at time steps τ = 0, 1, 2, ... be denoted by N_0, N_1, N_2, .... Then, N(x′) = N_0 + N_1 + N_2 + ···. It can be shown that the numbers of topplings in different avalanches in the one dimensional ASM are nearly uncorrelated (in fact, the correlation function between N_i and N_j varies as (1/L)^{|i−j|}). By the central limit theorem for sums of weakly correlated random variables, the mean value of N grows linearly with τ, but the standard deviation increases only as √τ.
Then, for τ ≫ 0, the distribution is sharply peaked about the mean, and \overline{1/√N} ≃ 1/√\overline{N}. Clearly, for τ ≫ 0, \overline{N} = τ n̄(x′), where n̄(x′) is the mean number of topplings per avalanche at x′ in the ASM, given by

n̄(x′) ≈ x′(L − x′)/(2L).

The upper limit on τ for the validity of the above argument comes from the requirement that the width of the Gaussian be much less than the distance from the boundary (without any loss of generality, we can assume that ξ < 1/2, so that it is the left boundary), else we cannot neglect events where the marked grain leaves the pile. This gives τ n̄(x) ≪ (ξL)², or equivalently, τ ≪ ξL. Thus we get

F(x, τ) ≈ C_1 / (L √(τ n̄(x))), for 1 ≪ τ ≪ ξL,    (12)

where C_1 is some constant. Also, we know that for τ ≫ L, the probability that the grain stays in the pile decays exponentially as exp(−τ/L) [10]. Thus G, and also \overline{G²}, will decay exponentially with τ, for τ ≫ L. Thus, we have, for some constants C_2 and a,

F(x, τ) ≈ C_2 exp(−aτ/L), for τ ≫ L.

It only remains to determine the behavior of F(x, τ) for ξL ≪ τ ≪ L. In this case, in the ASM, there is a significant probability that the marked grain leaves the pile from the end. This results in a faster decay of G, and hence of F, with time. We argue below that the behavior of the function F(x, τ) is given by

F(x, τ) ≈ C_3 / (Lτ), for ξL ≪ τ ≪ L,

where C_3 is some constant. This can be seen as follows. Let us consider the special case when the particle starts at a site close to the boundary. Then n̄(x) is approximately a linear function of x for small x. Its spatial variation cannot be neglected, and Eq. (12) is no longer valid. We will now argue that in this case F(x, τ) varies as 1/(Lτ). The time evolution of Prob_ASM(x, t|x′, t′) in Eq. (5) is well described as a diffusion with diffusion coefficient proportional to n̄(x), which is the mean number of topplings per avalanche at x in the ASM [10]. For understanding the long-time survival probability in this problem, we can equivalently consider the problem in a continuous-time version: consider a random walk on a half line where sites are labelled by positive integers, and the jump rate out of a site x is proportional to x. A particle starts at site x = x_0 at time t = 0. If P_j(t) is the probability that the particle is at j at time t, then the equations for the time-evolution of P_j(t) are, for all j > 0,

dP_j(t)/dt = (1/2)(j − 1) P_{j−1}(t) + (1/2)(j + 1) P_{j+1}(t) − j P_j(t).

The long time solution starting with P_j(0) = δ_{j,x_0} is

P_j(t) ≈ x_0 t^{−2} exp(−j/t),

for t ≫ x_0 and large j. The probability that the particle survives till time t decreases as 1/t for large t. We have discussed the calculation in the Appendix. Using Eq. (5), we see that G(j, t′ + τ|x_0, t′) scales as x_0/τ². It seems reasonable to assume that \overline{G²} will scale as \overline{G}². Then, each term in the summation for F(x, τ) in Eq. (8) scales as x_0²/τ⁴, and there are τ such terms, as the sum over x_0 has an upper cutoff proportional to τ; thus F(x, τ) varies as 1/τ for L ≫ τ ≫ x_0. This concludes the argument.
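The claimed 1/t decay of the survival probability of this continuous-time walk can be checked by a quick Gillespie-type simulation; the sketch below (an illustration, with site 0 treated as absorbing and all names chosen here, not taken from the original) estimates P(survival > t).

```python
import random

def absorption_times(x0=5, n_walkers=20000, seed=1):
    """Simulate a walker on {0, 1, 2, ...} whose total jump rate out of
    site j is j, jumping to j - 1 or j + 1 with equal probability.
    Site 0 is absorbing (the grain has left the pile)."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_walkers):
        j, t = x0, 0.0
        while j > 0 and t < 1e4:
            t += rng.expovariate(j)                 # waiting time ~ Exp(rate j)
            j += 1 if rng.random() < 0.5 else -1
        times.append(t)
    return times

ts = absorption_times()
for t in (10.0, 20.0, 40.0, 80.0):
    frac = sum(1 for x in ts if x > t) / len(ts)    # survival probability
    print(t, frac, frac * t)                        # frac * t roughly constant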
We can put these three limiting behaviors into a single functional form that interpolates between them, as

F(x, τ) ≈ (K/L) exp(−aτ/L) [τ n̄(x) + B τ²]^{−1/2},

where K, a and B are some constants. In Section V, we will see that results from numerical simulation are consistent with this phenomenological expression. Using this interpolation form in Eq. (9), and converting the sum over τ to an integration over a variable u = τ/L, we can write

Σ²(x) ≈ (σ² K/L) ∫_0^∞ du exp(−au) [u n̄(x)/L + B u²]^{−1/2}.

This integral can be simplified by a change of variable au = z², giving

Σ²(x) ≈ (σ² K″/L) I(y), with y² ∝ ξ(1 − ξ),

where K″ and B′ are constants, and I(y) is a function defined by

I(y) = ∫_0^∞ dz exp(−z²) (z² + y²)^{−1/2}.

It is easy to verify that I(y) diverges as log(1/y) for small y. In particular, we note that the exponential term in the integral expression for I(y) has a significant contribution only for z near 1. We may approximate this by dropping the exponential factor and changing the upper limit of the integral to 1. The resulting integral is easily done, giving

Σ²(x) ≈ (σ² K′/L) log[(1 + √(1 + y²))/y], with y = B′ √(ξ(1 − ξ)),    (24)

where K′ is some constant. Averaging Σ²(x) over x, we get the behavior Σ̄² ≃ 1/L. Of course, the answer is not exact, and one could have constructed other interpolation forms that have the same asymptotic behavior.
We will see in the next Section that results from numerical simulations for Σ 2 (x) can be fitted very well to the phenomenological expression in Eq. (24).
V. NUMERICAL RESULTS
We have tested our non-rigorous theoretical arguments against results obtained from numerical simulations. In Fig. 1, we have plotted the probability distribution P_L(E) of energy at a site, averaged over all sites. We used L = 200, 500 and 1000, and averaged over 10⁸ different configurations in the steady state. We plot the scaled distribution function P_L(E)/√L versus the scaled energy (E − Ē)√L. A good collapse is seen, which verifies that the width of the peak varies as L^{−1/2}.
The dependence of the variance of E(x, t) on x is plotted in Fig. 2 for systems of length 200, 300 and 400. The data were obtained by averaging over 10⁸ avalanches. We plot (L + λ)Σ²(x)/σ² versus x_eff/L_eff, where x_eff differs from x by an amount δ to take into account corrections due to end effects. Then, for consistency, L is replaced by L_eff = L + 2δ. For the specific choice of λ = 5 ± 1 and δ = 1.0 ± 0.2, we get a good collapse of the curves for different L. We also show a fit to the proposed interpolation form in Eq. (24), with K′ = 1.00 ± 0.01 and B′ = 1.5 ± 0.2. We see that the fit is very good.
In order to check the logarithmic dependence of Σ²(x) on x for small x, we re-plot the data in Fig. 3 using a logarithmic scale for x. We get a good collapse of the data for different L, supporting our proposed dependence in Eq. (24).
VI. CONCLUDING REMARKS
To summarize, we have studied the emergence of quasi-units in the one-dimensional Zhang sandpile model. The variance of the energy variables in the steady state is governed by the balance between two competing processes. The randomness in the drive, i.e., in the energy of addition, tends to increase the variance in time. On the other hand, the topplings of the energy variables tend to equalize the excess energy by distributing it to the nearby sites. There are on average O(L²) topplings per avalanche. Hence, in one dimension there are, on average, O(L) topplings per site per avalanche. For large system size, the second process dominates over the first and the variance becomes low. We have shown that the variance vanishes as 1/L with increasing system size and that the probability distribution of energy concentrates around a non-random value which depends on the energy of addition. We have also proposed a functional form for the spatial dependence of the variance of energy which incorporates the correct limiting behaviors and matches very well with the numerical data.
An interesting question is whether one can extend these arguments to the two-dimensional Zhang model. In this case, there are several peaks in the distribution of energies at a site, but there is some numerical evidence for the sharpening of the peaks as the system size is increased. However, as the number of topplings per site varies only as log L, the width is expected to decrease much more slowly with L, and the fluctuation effects can be much stronger. This remains an open question for further study.

Similarly, to solve the equation for b_t, we use the form of e^{−a_t} given in Eq. (A4) and get

db_t/dt = b_t [1 − 2(t + A)] / [(t + A)(t + A − 1)].
It can be shown that for large t and j, the solution asymptotically becomes P_j(t) = n t^{−2} exp(−j/t).
"year": 2008,
"sha1": "cc13934db298ca9c4ccb47d42a98c748e1cb7ef1",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0711.3021",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "cc13934db298ca9c4ccb47d42a98c748e1cb7ef1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics",
"Medicine"
]
} |
The Charm of the Proton and the $\Lambda _c^{+}$ Production
We propose a two-component model for charmed baryon production in $pp$ collisions, consisting of the conventional parton fusion mechanism plus fragmentation, and quark recombination in which a $ud$ valence diquark from the proton recombines with a $c$-sea quark to produce a $\Lambda_c^+$. Our two-component model is compared with the intrinsic charm two-component model and experimental data.
INTRODUCTION
The production mechanism of hadrons containing heavy quarks is not well understood. Although the fusion reactions gg → QQ̄ and qq̄ → QQ̄ are supposed to be the dominant processes, they fail to explain important features of heavy quark hadro-production, like the leading particle effects observed in D^± produced in π⁻p collisions [1], Λ_c^+ production in pp interactions [2] [3] and in other baryons containing heavy quarks [4], the J/Ψ cross section at large x_F observed in πp collisions [5], etc.
The above mentioned effects have been explained using a two-component model [6] consisting of the parton fusion mechanism calculable in perturbative QCD plus the coalescence of intrinsic charm [7].
In hadron-hadron collisions, the recombination of valence spectator quarks with c-quarks present in the sea of the initial hadron is a possible mechanism for charmed hadron production. Here we explore that possibility for Λ_c^+ production in pp interactions. We will assume that, in addition to the usual parton fusion processes, a ud diquark recombines with a c-sea quark, both from the incident proton.
We compare our results with those of the intrinsic charm two-component model and the experimental data available.
In the parton fusion mechanism the Λ_c^+ is produced via the subprocesses qq̄(gg) → cc̄ with the subsequent fragmentation of the c quark. The inclusive x_F distribution of the Λ_c^+ in pp collisions is given by the standard convolution formula, Eq. 1, with the partonic luminosity of Eq. 2 [8] [9], where x_a and x_b are the parton momentum fractions, q(x, Q²) and g(x, Q²) the quark and gluon distributions in the proton, E the energy of the produced c-quark and D_{Λc/c}(z) the fragmentation function. In Eq. 1, p_T² is the squared transverse momentum of the produced c-quark, y is the rapidity of the c̄ quark and z = x_F/x_c is the momentum fraction of the charm quark carried by the Λ_c^+. The sum in Eq. 2 runs over a, b = u, ū, d, d̄, s, s̄.
We use the LO results for the elementary cross-sections dσ/dt̂|_{qq̄} and dσ/dt̂|_{gg} [8].
Here Δy is the rapidity gap between the produced c and c̄ quarks and m̂_c² = m_c² + p_T². In order to be consistent with the LO calculation of the elementary cross sections, we use the GRV-LO parton distribution functions [10], allowing for a global factor K ∼ 2−3 in Eq. 1 to take into account NLO contributions [6].
We take m_c = 1.5 GeV for the c-quark mass and fix the scale of the interaction at Q² = 2m_c² [8]. Following [6], we use two fragmentation functions to describe the hadronization of the charm quark: the delta function

D_{Λc/c}(z) = δ(1 − z),    (5)

and the Peterson fragmentation function [11],

D_{H/c}(z) = N / [z (1 − 1/z − ε_c/(1 − z))²],    (6)

with ε_c = 0.06 and the normalization N defined by Σ_H ∫ D_{H/c}(z) dz = 1.
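As a small numerical illustration of the Peterson form with ε_c = 0.06 (a sketch; the function name and grid normalization are chosen here, not taken from the original), the following snippet normalizes the distribution on a grid and locates its peak, showing the hard fragmentation spectrum peaked well above z = 0.5.

```python
import numpy as np

def peterson(z, eps_c=0.06):
    """Unnormalized Peterson fragmentation function, Eq. 6."""
    return 1.0 / (z * (1.0 - 1.0 / z - eps_c / (1.0 - z)) ** 2)

z = np.linspace(1e-3, 1.0 - 1e-3, 100_000)
dz = z[1] - z[0]
vals = peterson(z)
D = vals / (vals.sum() * dz)          # normalize so the integral over z is 1
print("peak of D(z) at z =", z[np.argmax(D)])
```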
The production of leading mesons at low p_T by recombination of quarks was proposed a long time ago [12]. The method introduced by Das and Hwa for mesons was extended by Ranft [13] to describe single particle distributions of leading baryons in pp collisions.
In recombination models one assumes that the outgoing hadron is produced in the beam fragmentation region through the recombination of the maximum number of valence quarks and the minimum number of sea quarks coming from the projectile, according to the flavor content of the final hadron. Thus, e.g., Λ_c^+'s produced in pp collisions are formed by the ud valence diquark and a c-quark from the sea of the incident proton. One ignores other types of contributions involving more than one sea flavor in the recombination. The invariant inclusive x_F distribution for leading baryons is given by

x_F dσ^{rec}/dx_F = ∫_0^{x_F} (dx_1/x_1)(dx_2/x_2)(dx_3/x_3) F_3(x_1, x_2, x_3) R_3(x_1, x_2, x_3, x_F),    (7)

where x_i, i = 1, 2, 3, is the momentum fraction of the i-th quark, F_3(x_1, x_2, x_3) is the three-quark distribution function in the incident hadron and R_3(x_1, x_2, x_3, x_F) is the three-quark recombination function. We use a parametrization containing explicitly the single quark distributions for the three-quark distribution function,

F_3(x_1, x_2, x_3) = β F_u(x_1) F_d(x_2) F_c(x_3) (1 − x_1 − x_2 − x_3)^γ,    (8)

with F_q(x_i) = x_i q(x_i) and F_u normalized to one valence u quark. The parameters β and γ are constants fixed by the consistency condition

F_q(x_i) = ∫ dx_j dx_k F_3(x_1, x_2, x_3), i, j, k = 1, 2, 3,    (9)

for the valence quarks of the incoming proton, as in ref. [13]. We use the GRV-LO parametrization for the single quark distributions in Eq. 8. It must be noted that since the GRV-LO distributions are functions of x and Q², our F_3(x_1, x_2, x_3) also depends on Q².
In contrast with the parton fusion calculation, in which the scale Q² of the interaction is fixed at the vertices of the appropriate Feynman diagrams, in recombination there is no clear way to fix the value of the parameter Q², which in this case is not properly a scale parameter and should be used to give adequately the content of the recombining quarks in the initial hadron.
Since the charm content of the proton sea increases rapidly as Q² grows from m_c² to values of the order of a few m_c², where it becomes approximately constant, we take Q² = 4m_c², a conservative value, but sufficiently far from the charm threshold to avoid a highly depressed charm sea, which surely would not represent the real charm content of the proton. At this value of Q² we found that the condition of Eq. 9 is fulfilled approximately with γ = −0.1 and β = 75. We have verified that the recombination cross section does not change appreciably at higher values of Q².
For the three-quark recombination function for Λ_c^+ production we take the simple form [13]

R_3(x_1, x_2, x_3, x_F) = α (x_1 x_2 x_3 / x_F³) δ(x_1 + x_2 + x_3 − x_F),    (10)

with α fixed by the condition ∫_0^1 dx_F (1/σ) dσ^{rec}/dx_F = 1, where σ is the cross section for Λ_c^+'s inclusively produced in pp collisions. From Eqs. 7 and 8, the recombination contribution becomes

dσ^{rec}/dx_F = (α/x_F⁴) ∫_0^{x_F} dx_1 dx_2 F_3(x_1, x_2, x_F − x_1 − x_2),    (11)

where we have already integrated over x_3. The parameter σ will be fixed with experimental data.
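To illustrate how Eq. 11 can be evaluated in practice, here is a rough numerical sketch. The single-quark distributions below are hypothetical toy shapes standing in for the GRV-LO parametrization, and the equation forms follow the reconstruction above, so the output illustrates only the procedure, not the physical normalization.

```python
import numpy as np

def F_u(x): return np.sqrt(x) * (1.0 - x) ** 3        # toy valence u (hypothetical)
def F_d(x): return 0.9 * np.sqrt(x) * (1.0 - x) ** 3  # toy valence d (hypothetical)
def F_c(x): return 0.02 * (1.0 - x) ** 7              # toy charm sea (hypothetical)

def dsigma_rec(xF, beta=75.0, gamma=-0.1, n=400):
    """dsigma^rec/dx_F up to the overall constant alpha: the delta function
    in R3 removes the x3 integral, leaving x3 = xF - x1 - x2."""
    eps = 1e-4
    x1 = np.linspace(eps, xF - 2 * eps, n)[:, None]
    x2 = np.linspace(eps, xF - 2 * eps, n)[None, :]
    x3 = xF - x1 - x2
    good = x3 > eps
    f3 = beta * F_u(x1) * F_d(x2) * F_c(np.clip(x3, eps, 1.0)) \
         * (1.0 - xF) ** gamma                          # (1 - x1 - x2 - x3)^gamma
    dx = (xF - 3 * eps) / (n - 1)
    return np.where(good, f3, 0.0).sum() * dx * dx / xF ** 4

for xF in (0.2, 0.4, 0.6, 0.8):
    print(xF, dsigma_rec(xF))
```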
The inclusive production cross section of the Λ_c^+ is obtained by adding the contribution of recombination, Eq. 11, to the QCD processes of Eq. 1, so that

dσ^{tot}/dx_F = dσ^{pf}/dx_F + dσ^{rec}/dx_F.    (12)

The resulting inclusive Λ_c^+ production cross section dσ^{tot}/dx_F is plotted in Fig. 1 using the two fragmentation functions of Eqs. 5 and 6 and compared with experimental data in pp collisions from the ISR [3]. As we can see, the shape of the experimental data is very well described by our model. We use a factor σ = 0.92 (0.72) µb for Peterson (delta) fragmentation, respectively. In a similar approach, R. Vogt et al. [6] calculated the Λ_c^+ production in pp and πp collisions. The two-component model used by them consists of a parton fusion mechanism plus coalescence of the intrinsic charm in the proton. Their results are shown in Fig. 1; the normalization, however, has been modified to make a proper comparison to our result.
CONCLUSIONS
We have studied Λ_c^+ production in pp collisions with a two-component model. We show that both the intrinsic charm model and the conventional recombination of quarks can describe the shape of the x_F distribution for Λ_c^+'s produced in pp collisions. Neither of them, however, can describe the abnormally high normalization of the ISR data quoted in ref. [3]. This discrepancy between theory and experiment does not exist for charmed meson production, which is well described both in shape and normalization with the parton fusion mechanism plus intrinsic charm coalescence [9] and with the conventional recombination as proposed here [14]. An interesting test to rule out one of the two models would come from a
"year": 1997,
"sha1": "2756b0aca0c2bffed5d5ae5c7090db0c0ad39799",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/9702257",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2756b0aca0c2bffed5d5ae5c7090db0c0ad39799",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
A Critical Escape Probability Formulation for Enhancing the Transient Stability of Power Systems with System Parameter Design
For the enhancement of the transient stability of power systems, the key is to define a quantitative optimization formulation with system parameters as decision variables. In this paper, we model the disturbances by Gaussian noise and define a metric named Critical Escape Probability (CREP) based on the invariant probability measure of a linearised stochastic process. CREP characterizes the probability of the state escaping from a critical set. CREP involves all the system parameters and reflects the size of the basin of attraction of the nonlinear system. An optimization framework that minimizes CREP with the system parameters as decision variables is presented. Simulations show that the mean first hitting time at which the state hits the boundary of the critical set, which is often used to describe the stability of nonlinear systems, is dramatically increased by minimizing CREP. This indicates that the transient stability of the system is effectively enhanced. It is also shown that suppressing the state fluctuations alone is insufficient for enhancing the transient stability. In addition, the famous Braess' paradox, which also exists in power systems, is revisited. Surprisingly, it turns out that paradoxes identified by the traditional metric may not exist according to CREP. This new metric opens a new avenue for the transient stability analysis of future power systems integrated with large amounts of renewable energy.
INTRODUCTION
In the synchronous state of a power system, the frequencies of all synchronous machines must be at or near the nominal frequency (e.g., 50 Hz or 60 Hz). The frequency is the derivative of the rotational phase angle and is equal to the rotational speed of the synchronous machine expressed in units of rad/s. Synchronization of the frequency is essential for the proper functioning of a power system. Severe interference can cause desynchronization, which can lead to widespread power outages. Current energy systems are moving towards more distributed generation by renewables, which tends to be inherently more uncertain and to provide less inertia, posing an even greater threat to synchrony.
Synchronization stability, which is also called transient stability in power engineering, is the ability to maintain synchronization when the system is subjected to disturbances [9]. In this paper, we use synchronization stability and transient stability interchangeably. The synchronous state and its stability are determined by the system parameters, which include the power generation and loads, the inertia and the damping of the synchronous machines, the capacities of the lines and the network topology. For deterministic systems, significant insights on the role of these parameters have been obtained from investigations on the existence condition of a synchronous state [5,4], the linear or nonlinear stability [14], the synchronization coherence [7] and the basin of attraction [13,3]. The system parameters may be assigned to optimize the synchrony, which can be achieved by load frequency control, the placement of virtual inertia, configuration of the damping coefficients, deletion or addition of lines, or changes of the line capacities. In particular, regarding the transient stability, the local convergence to the synchronous state and the basin of attraction of the synchronous state have been investigated [1,28].
In practice, the synchronous state of the power system is a set point for control, towards which control actions drive the state after disturbances. Thus, with frequently occurring disturbances, e.g., the uncertainties from wind energy and power demand and unpredictable faults in power generation, the frequency and the phase usually fluctuate around the synchronous state. If the fluctuations of both the frequency and the phase differences between the synchronous machines are so large that the state of the system cannot return to the synchronous state, then synchronization is lost. Hence, the risk of losing synchronization is actually determined by two factors, i.e., the size of the basin of attraction of the synchronous state and the fluctuations of the state caused by the disturbances. To increase the transient stability, it is important to find a synchronous state that has a large basin of attraction and around which the fluctuations of the state are also small. It is insufficient to analyze the transient stability in a deterministic system without considering the state fluctuations caused by the disturbances.
Regarding the state fluctuations, various investigations have been made into the impacts of the system parameters, from which insights have been obtained on the propagation of the disturbances and the parameter assignment for suppressing the fluctuations. With perturbations added to the system parameters, the disturbance arrival time is estimated in [30]. The amplitude of perturbation responses of the nodes is used to study the emergent complex response patterns across the network in [29]. By modelling the disturbances as inputs to an associated linearized system, the fluctuations are evaluated by the H_2 norm of the input-output linear system [6,15,19]. By minimizing it, the fluctuations can also be effectively suppressed by system parameter assignment, such as the optimal placement of virtual inertia [15]. To precisely characterize the fluctuations, the variances of the frequency at each node and the phase difference at each line in the invariant probability distribution are investigated [21,?], with the disturbances modelled by Gaussian noise. It is found that the impacts of the disturbances at the nodes can be described by the Superposition Principle [24]. Under the assumption of uniform disturbance-damping ratios among the nodes, explicit formulas for the variance have been deduced. From these formulas, it is found that the fluctuations are related to the cycle space of graphs [21]. In control theory, robust control methods are applicable to suppress the fluctuations by controlling the power generation in load frequency control. However, for enhancing the transient stability, it is insufficient to suppress the fluctuations only, because the stability also depends on the basin of attraction.
For enhancing the transient stability, the most difficult problem is to define a quantitative optimization formulation with the system parameters as the decision variables. The mean of the first hitting time at which the state hits the boundary of the basin of attraction is often used to study the survival time of a system, and is also used to study the stability of nonlinear systems [11,8]. The longer the mean first hitting time is, the more stable the system is under stochastic disturbances. Both the size of the basin of attraction and the severity of the state fluctuations are reflected in this value, which makes it a potential metric for the transient stability. However, it can hardly be maximized directly, because it is difficult to obtain the probability distribution of the first hitting time and the boundary of the basin of attraction. For coupled phase oscillators, the probability that the state exits a secure domain, which also involves the basin of attraction and the state fluctuations, is investigated in order to enhance the synchronization stability of the system in [23]. However, the dynamics of the frequencies at the nodes are not considered in that system.
In this paper, for power systems with stochastic disturbances, we model the disturbances by Gaussian noises and focus on the invariant probability distribution of the frequency and the phase differences in a linearized stochastic process. We define a metric named Critical Escape Probability (CREP), which describes the probability of the state escaping from a critical set, to assess the transient stability. It is related to the mean first hitting time of the state to the boundary of the critical set: the smaller CREP is, the longer the mean first hitting time is. We analyze the trends of CREP as the system parameters change and its relationship to the size of the basin of attraction. In addition, we revisit the famous Braess' paradox [22,2] with CREP. It is found that this paradox can also be identified by CREP. In particular, it is surprisingly found that adding a new line may increase the stability according to CREP while decreasing the stability according to other existing metrics. This is because the influences of all the system parameters are included in CREP, while in the other metrics, e.g., the linear stability measured by the spectrum of the Jacobian matrix and the order parameter defined by Kuramoto to study the level of synchronization, not all system parameters' influences are fully considered. We formulate an optimization framework that minimizes CREP with the system parameters as decision variables. The mean first hitting time is used to verify the performance of CREP in identifying the Braess' paradox and of the optimization framework in enhancing the transient stability. The optimization framework can be applied in optimal power flow calculation, the placement of virtual inertia, tuning the gain of droop control and the design of the network topology. It also provides a new avenue for the stability analysis of complex systems in which synchronization plays an important role in the proper functioning of the system [5].
The contributions of this paper include: (1) the CREP formulation for assessing the transient stability, which involves the roles of all the system parameters and can be minimized to enhance the transient stability; (2) an optimization framework that minimizes CREP, by which the system parameters can be optimally configured to increase the first hitting time, thus enhancing the transient stability of the system under stochastic disturbances; (3) a new finding on the identification of the Braess' paradox by CREP.
This paper is organized as follows. We formulate the problem by introducing the mathematical model of power system and the concept of the mean first hitting time in Section 2. The invariant probability distribution of a linear stochastic process and the definition of CREP are described in Section 3 and the optimization framework for improving the transient stability is presented in Section 4. We analyze the dependence of CREP on the system parameters and evaluate performance of the proposed optimization framework on improving the transient stability through case studies in Section 5 and conclude with remarks in Section 6.
Problem formulation
In this section, we present the scientific problem of this paper, introducing the model and the mean first hitting time of a stochastic process.
The model
The network of the power system can be modelled by a graph G = (V, E) with nodes in set V and lines in set E ⊂ V × V, where a node denotes a bus and a line denotes a transmission line connecting two buses. We focus on the transmission network and assume the lines are lossless. The dynamics of the power system are described by the swing equations [28,12,1],

δ̇_i(t) = ω_i(t),
m_i ω̇_i(t) = p_i − d_i ω_i(t) − Σ_{j:(i,j)∈E} l_{ij} sin(δ_i(t) − δ_j(t)), i = 1, ..., n,    (1)

where δ_i(t) and ω_i(t) denote the phase angle and the frequency deviation from the nominal frequency of the synchronous machine at node i; m_i > 0 describes the inertia of the synchronous generator; d_i > 0 represents the damping coefficient with droop control; p_i denotes power generation if p_i > 0 and denotes power load otherwise; l_{ij} = b̂_{ij} V_i V_j is the effective susceptance, where b̂_{ij} is the susceptance of the line (i, j) and V_i is the voltage at node i. In this paper, l_{ij} is also referred to as the line capacity. We assume that the voltage at each node is constant, because the dynamics of the voltage and those of the frequency can be decoupled in the stability analysis.
It is assumed that the graph is connected; thus, the number of lines m satisfies m ≥ n − 1, where n is the number of nodes.
When the power generations and loads are time invariant, the frequencies at the nodes synchronize at an equilibrium state, called the synchronous state, that satisfies, for i = 1, 2, ..., n,

p_i − d_i ω_syn − Σ_{j:(i,j)∈E} l_{ij} sin(δ*_i − δ*_j) = 0.

Without loss of generality, we assume Σ_{i=1}^{n} p_i = 0, which means that the power generations and loads are balanced. In practice, this balance is achieved by secondary frequency control [26]. Hence, at the synchronous state, it holds that ω_syn = 0, and the phases δ*_i at the nodes satisfy

p_i − Σ_{j:(i,j)∈E} l_{ij} sin(δ*_i − δ*_j) = 0.    (2)

Clearly, the existence of a synchronous state depends on the topology structure, the distribution of power generations and loads at the nodes and the line capacities [5,4]. Denote the synchronous state by ((δ*)⊤, 0)⊤ ∈ R^{2n} with δ* = col(δ*_i) ∈ R^n. For practical reasons, we restrict our attention to synchronous states with the phase in the following domain,

Θ = {δ ∈ R^n : |δ_i − δ_j| < π/2 for all (i, j) ∈ E}.    (3)

It has been proven that there exists at most one synchronous state in this domain and that, when it exists, it is asymptotically stable [17]. The stability region of the synchronous state has been analyzed by [1] and independently by [28].
In real networks, the state of the power system always fluctuates around the synchronous state due to various disturbances. When the fluctuations are very large, the state may exit the stability region of the synchronous state and become unstable. Desynchronization means that both the fluctuations of the frequency and of the phase angle differences are so large that the system cannot return to the synchronous state. The fluctuations depend on many factors, which include the line capacities, the inertia and damping of the synchronous machines, the network topology and the strength of the disturbances. The sources of the disturbances are also various, e.g., renewable power generation, faults of devices in the network, etc. We focus on the following problem.
Problem 2.1 How to improve the transient stability of the system under stochastic disturbances by changing the system parameters?
To address this problem, we model the disturbances by Gaussian noise and focus on the following stochastic process,

dδ_i(t) = ω_i(t) dt,
m_i dω_i(t) = (p_i − d_i ω_i(t) − Σ_{j:(i,j)∈E} l_{ij} sin(δ_{ij}(t))) dt + b_i dw_i(t),    (4)

where δ_{ij}(t) = δ_i(t) − δ_j(t), b_i is used to describe the strength of the noise, and w_i(t) is a Brownian motion process, which has increments with a Gaussian probability distribution. Here, we have assumed that for any two distinct nodes i and j the stochastic processes w_i(t) and w_j(t) are independent. This is reasonable because the locations of the renewable power generators with serious power generation uncertainties are usually far from each other.
Denote e_k = (i, j) ∈ E for k = 1, ..., m. To obtain the information of the frequency and the phase differences, we define the output of the system (1) as the frequencies at the nodes and the phase differences in the lines as follows,

y(t) = C̃ (δ(t)⊤, ω(t)⊤)⊤, with C̃ = diag(C⊤, I_n),    (5)

where C ∈ R^{n×m} is the incidence matrix of the graph, in which the direction of each line is specified arbitrarily without influence on the study below, and I_n ∈ R^{n×n} is an identity matrix. Note that the first m elements of y are the phase differences in the lines and the next n elements are the frequencies at the nodes. At the synchronous state x* = ((δ*)⊤, 0)⊤, the output becomes

y* = C̃ x*.    (6)

To address Problem 2.1, a metric that fully reflects the transient stability and can be minimized or maximized as an objective function in an optimization problem is sought. The mean first hitting time, often used to describe the stability of a nonlinear system, is introduced below.
Definition 2.2 Consider a stochastic process {x(t) ∈ X, t ∈ T} with initial state x(0) = x_0 and a boundary set B of a set A, in which A ⊂ X and T = [0, +∞). Assume that the initial value x_0 of the process lies inside A but outside B; then the first hitting time is defined by the random variable

τ := inf{t ∈ T : x(t) ∈ B},

where τ is the first time at which the sample path of the stochastic process reaches the boundary set B.
The first hitting time is also called the first exit time of the set A with boundary set B. It is a random variable and a stopping time with respect to the filtration generated by the process x(t). We denote the mean of τ by τ̄. It is obvious that the first hitting time depends on the initial state x(0), the probability distribution of x(t) and the boundary set B.
In reality, the stability depends on how the system reacts to a series of small fluctuations. If we set B = ∂A, with the set A being the basin of attraction, and the initial state as the synchronous state, the expectation of the first hitting time of the process (δ(t)⊤, ω(t)⊤)⊤ in (4) can fully reflect the transient stability, i.e., this expectation depends on the size of the basin of attraction and the strength of the disturbances. This makes it a potential candidate for the metric. However, the distribution of the first hitting time can hardly be derived analytically or even approximated by the Monte-Carlo method, because of the difficulty of describing the boundary of the basin of attraction. Due to this difficulty, we set A = Θ̄ with boundary set B = ∂Θ̄, where

Θ̄ = {(δ⊤, ω⊤)⊤ ∈ R^{2n} : δ ∈ Θ, |ω_i| ≤ ω̄ for i = 1, ..., n},    (7)

with Θ defined in (3), and ω̄ ∈ R is a small real number corresponding to the quality of power supply according to the requirement of governments, i.e., the frequency fluctuation should be sufficiently small to guarantee the system stability. If the state goes out of this set, the synchronization may be lost. The set Θ̄ is critical for monitoring the transient stability [27]. Hence, we call it a critical set for the transient stability of the system in this paper.
Let us reconsider the synchronization of the system (1) under the disturbances. With the definition of τ and A = Θ̄, the state x(t) of the system (4) remains in the set Θ̄ in the period [0, τ], i.e., x(t) ∈ Θ̄ for any t ∈ [0, τ]; thus the synchronization is maintained in the period [0, τ]. Once the state exits the set Θ̄, the synchronization may be lost. If τ̄ → +∞, desynchronization will almost never happen under the disturbances. Thus, the larger τ̄ is, the longer the synchronization is maintained, which means the synchronization is more stable under the disturbances. This makes the mean first hitting time a suitable metric for the transient stability. Clearly, the mean τ̄ can be approximated by the Monte-Carlo method with simulations of (4), and can thus be used to assess the transient stability. However, to maximize τ̄ in an optimization problem, one has to know the analytical expression of its probability distribution, which can hardly be derived because of the high dimension and nonlinearity of the system (4). Thus, an alternative metric for enhancing the transient stability by an optimization framework has to be designed.
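As a concrete illustration of how τ̄ can be approximated by the Monte-Carlo method, the following sketch integrates the stochastic swing dynamics (4) by Euler-Maruyama and records the first exit from the critical set. The three-node network and all parameter values are toy placeholders chosen here, not the test system of this paper.

```python
import numpy as np

def first_hitting_time(A_inc, l, m, d, p, b, w_bar=0.5, dt=1e-3,
                       t_max=100.0, rng=None):
    """Euler-Maruyama simulation of (4); returns the first time the state
    leaves the critical set (|phase difference| > pi/2 or |omega| > w_bar),
    censored at t_max. A_inc is the n x m incidence matrix."""
    rng = rng or np.random.default_rng(0)
    n = A_inc.shape[0]
    delta, omega, t = np.zeros(n), np.zeros(n), 0.0
    while t < t_max:
        flow = l * np.sin(A_inc.T @ delta)               # flows l_ij sin(delta_i - delta_j)
        omega += ((p - d * omega - A_inc @ flow) / m) * dt \
                 + (b / m) * np.sqrt(dt) * rng.standard_normal(n)
        delta += omega * dt
        t += dt
        if np.any(np.abs(A_inc.T @ delta) > np.pi / 2) or np.any(np.abs(omega) > w_bar):
            return t
    return t_max

# three-node ring, toy parameters
A = np.array([[1.0, 0.0, -1.0], [-1.0, 1.0, 0.0], [0.0, -1.0, 1.0]])
taus = [first_hitting_time(A, l=2.0 * np.ones(3), m=np.ones(3),
                           d=0.5 * np.ones(3), p=np.array([1.0, -0.5, -0.5]),
                           b=0.3, rng=np.random.default_rng(s)) for s in range(20)]
print("estimated mean first hitting time:", np.mean(taus))
```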
CREP for transient stability
In this section, we define a metric that can be minimized, in an optimization problem with the system parameters as decision variables, for enhancing the transient stability.
An intuitive way to increase τ̄ is to increase the probability of the state x(t) staying in the critical set Θ̄. This is equivalent to increasing the probability of the output y(t) staying in the set

Θ̄_y = {y = C̃ x : x ∈ Θ̄}.    (8)

Due to the nonlinearity and the high dimension of the state of the system (4), the probability distribution of the process y(t) can hardly be obtained analytically. Because the state x(t) always fluctuates around the synchronous state x* = ((δ*)⊤, 0)⊤, the output y(t) = C̃ x(t) fluctuates around the value y* = C̃ x*, which is seen as the expectation of the output. To investigate the fluctuations, the system (4) is linearised around the synchronous state x*,

dx̃(t) = A x̃(t) dt + B̃ dw(t), ỹ(t) = C̃ x̃(t),    (9)

with the state variable, output vector, system matrix and input matrix

x̃(t) = x(t) − x*, ỹ(t) = y(t) − y*, A = [[0, I_n], [−M^{−1}L_c, −M^{−1}D]], B̃ = [[0], [M^{−1}B]],

where x̃ represents the deviation of the state x(t) from the synchronous state x*, M = diag(m_i), D = diag(d_i), B = diag(b_i), and L_c is a singular Laplacian matrix whose elements satisfy

(L_c)_{ij} = −l_{ij} cos(δ*_i − δ*_j) for i ≠ j, (L_c)_{ii} = Σ_{j≠i} l_{ij} cos(δ*_i − δ*_j).

The matrix A is also called the Jacobian matrix of the system (1) at the synchronous state ((δ*)⊤, 0)⊤. Because of the Gaussian distribution of w(t), the processes x̃(t) and ỹ(t) are also Gaussian. In real networks, the state always fluctuates around the synchronous state under various disturbances. Thus, it is reasonable to use the variance of ỹ(t) in the invariant probability distribution, regardless of the initial probability distribution of ỹ(t), to measure the fluctuations [21].
Because the system matrix A is singular, the invariant probability distribution of x̃(t) does not exist. However, the invariant probability distribution of ỹ(t) exists [21], i.e., ỹ(t) converges in distribution to a Gaussian with zero mean and variance matrix Q as t → ∞. With this Gaussian process, we further define a new process to approximate the process y(t).
Definition 3.1 Given the output y* in (6) and the Gaussian process ỹ(t) in (9), the stochastic process y̌(t) is defined as

y̌(t) = y* + ỹ(t).    (10)

We denote y̌(t) = (δ̌(t)⊤, ω̌(t)⊤)⊤. The process y̌(t) is a Gaussian process, and has an invariant probability distribution with expectation y* and variance matrix Q. Clearly, y̌(t) also fluctuates around y*. It actually approximates y(t) in the neighborhood of y* because of the linearisation of the system (4) at the synchronous state. With y* as the expectation, solved from (2), and the variance matrix Q, the probability that the process y̌(t) is in the set Θ̄_y in the invariant probability distribution can be calculated. However, this probability still can hardly be computed, because an integral over a supercube of dimension m + n is needed, which involves immense computational complexity. Thus, we focus on the marginal probability distributions of the components of y̌(t) in the invariant probability distribution. Denote the vector formed by the diagonal elements of the matrix Q by σ² = ((σ_δ²)⊤, (σ_ω²)⊤)⊤ ∈ R^{m+n}, where σ_δ² = col(σ_{δ,k}²) ∈ R^m and σ_ω² = col(σ_{ω,i}²) ∈ R^n are the variances of the phase differences in the lines and the frequencies at the nodes. It is noticed that σ_{δ,k} and σ_{ω,i} are the standard deviations of the phase difference in line e_k and the frequency at node i, respectively. The logic behind enhancing the transient stability is to increase the probability of y̌(t) being in the domain Θ̄_y. With this logic, we define a critical escape probability based on the invariant marginal probability distribution of the components of y̌(t) as below.
Definition 3.2 Consider the stochastic process y̌(t) in (10). Denote p = (p_δ⊤, p_ω⊤)⊤ ∈ R^{m+n}, with p_δ = col(p_{δ,k}) ∈ R^m and p_ω = col(p_{ω,i}) ∈ R^n, where p_{δ,k} and p_{ω,i} are defined as the probabilities of the absolute values of δ̌_k and ω̌_i exceeding π/2 and ω̄, respectively, in the invariant probability distribution, such that

p_{δ,k} = Prob(|δ̌_k| > π/2),    (11)
p_{ω,i} = Prob(|ω̌_i| > ω̄).    (12)

The Critical Escape Probability (CREP) assessing the probability of y̌(t) escaping from the set Θ̄_y in the invariant probability distribution is defined as

Φ = ∥p∥_∞.    (13)

Analogously, the CREP assessing the probability of δ̌(t) escaping from the set Θ̄_δ and the probability of ω̌(t) escaping from the set Θ̄_ω (the projections of Θ̄_y onto the phase differences and the frequencies, respectively) are respectively defined as

Φ_δ = ∥p_δ∥_∞, Φ_ω = ∥p_ω∥_∞.    (14)

Because y̌(t) approximates y(t) in the neighborhood of y*, by minimizing Φ, the probability of y(t) escaping from the critical set decreases, which leads to an enhancement of the transient stability. Naturally, Φ is a metric effectively assessing the transient stability of the system (1) with stochastic disturbances. Similarly, when the rotor angle stability, which focuses on the ability of the system to maintain the cohesiveness of the phase angles, and the frequency stability, which considers the severity of the frequency fluctuations, need to be enhanced separately, Φ_δ and Φ_ω can be minimized respectively. Clearly, the performance of these minimizations can be evaluated by the mean first hitting time of x(t) to the boundary ∂Θ̄, which is approximated by the Monte-Carlo method with simulations of the nonlinear stochastic system (4).
Obviously, if ω̄ in (12), which is a tolerance of the frequency fluctuations, is very large such that ∥p_δ∥_∞ > ∥p_ω∥_∞, then Φ = Φ_δ. On the other hand, if ω̄ is very small such that ∥p_δ∥_∞ ≤ ∥p_ω∥_∞, then Φ = Φ_ω. We next present the procedure for calculating Φ and then introduce the characteristics of Φ.
The phase differences in the vector y* are solved from the power flow equation (2). For the calculation of p, we need to solve the variance matrix Q for the system (9), which is presented below.
Theorem 3.4 Consider the stochastic system (9) and the notations of matrices in Lemma 3.3. Define the matrices Ā and B̄ obtained from A and B̃ by the coordinate transformation of Lemma 3.3, reformulated in blocks accordingly, and let U_2 be the matrix obtained by removing the first column of the orthogonal matrix U, so that U_2 = [u_2 u_3 ⋯ u_n] ∈ R^{n×(n−1)}. The variance matrix Q of the output of the system (4) in the invariant probability distribution satisfies

Q = C̄ X̄ C̄⊤,    (18)

where C̄ is the output matrix in the transformed coordinates and X̄ is the unique solution of the following Lyapunov equation,

Ā X̄ + X̄ Ā⊤ + B̄ B̄⊤ = 0.    (19)

See [21] for the proof of this theorem. Based on Theorem 3.4, we present the procedure for the calculation of Φ. Clearly, Φ_δ and Φ_ω can be calculated at the same time as Φ by this procedure. An important observation is that, using the values of p_{δ,k} and p_{ω,i}, the line where the system loses synchronization the most easily and the node where the frequency fluctuation is the most severe can be identified. These identifications allow for the discovery of weak parts in the network that may lead to network desynchronization.
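The following sketch illustrates the calculation of Φ along the lines of Procedure 3.5: build the linearisation at the synchronous state, solve a Lyapunov equation for the covariance, and evaluate the Gaussian tail probabilities of Definition 3.2. Because the exact grounded-coordinate construction of Theorem 3.4 is given in [21], the singular phase mode is regularized here with a tiny uniform leak instead, an assumption of this sketch rather than the paper's construction.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.stats import norm

def crep(delta_star, A_inc, l, m, d, b, w_bar=0.5, mu=1e-6):
    """Return Phi = ||p||_inf and the component-wise escape probabilities.
    mu adds a tiny leak to the phase dynamics to make A Hurwitz (a stand-in
    for the coordinate transformation of Theorem 3.4)."""
    n, mlines = A_inc.shape
    Lc = A_inc @ np.diag(l * np.cos(A_inc.T @ delta_star)) @ A_inc.T
    A = np.block([[-mu * np.eye(n), np.eye(n)],
                  [-np.diag(1.0 / m) @ Lc, -np.diag(d / m)]])
    B = np.vstack([np.zeros((n, n)), np.diag(b / m)])
    X = solve_continuous_lyapunov(A, -B @ B.T)          # A X + X A^T + B B^T = 0
    C = np.block([[A_inc.T, np.zeros((mlines, n))],
                  [np.zeros((n, n)), np.eye(n)]])
    sd = np.sqrt(np.diag(C @ X @ C.T))                  # marginal standard deviations
    mean = np.concatenate([A_inc.T @ delta_star, np.zeros(n)])
    bound = np.concatenate([np.full(mlines, np.pi / 2), np.full(n, w_bar)])
    # p_k = P(|y_k| > bound_k) for a Gaussian with this mean and sd
    p = norm.sf((bound - mean) / sd) + norm.cdf((-bound - mean) / sd)
    return p.max(), p
```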
CREP Φ has the following characteristics.
(i) CREP includes the influences of all the system parameters: the inertia and the damping of the synchronous machines, the distribution of the power loads and generation, the line capacities, the network topology and the strength of the disturbances. (ii) CREP also reflects the size of the basin of attraction and characterizes the phenomenon that the size of the basin shrinks if either the power flows in the lines increase or a line capacity decreases.

The first characteristic is concluded directly from the calculation of p and Φ in Procedure 3.5. Before explaining the second characteristic in a proposition, we first introduce a lemma for the bounds of the matrix Q, which is needed in the proof of the proposition. For matrices A, B ∈ R^{n×n}, we say that A ⪯ B if the matrix A − B is semi-negative-definite. To emphasize the variance matrices of the frequencies and the phase differences, we write the matrix Q in the following form,

Q = [[Q_δ, Q_{δω}], [Q_{ωδ}, Q_ω]].    (20)

Lemma 3.6 Consider the stochastic process (9). Define c̄ = max{c_i, i = 1, ..., n} and c̲ = min{c_i, i = 1, ..., n} with c_i = b_i²/d_i. The variance matrix Q satisfies

Q̲ ⪯ Q ⪯ Q̄,    (21)

where Q̲ and Q̄ are given by the same explicit formulas in which each disturbance-damping ratio c_i is replaced by c̲ and c̄ respectively, and Λ_{n−1} = diag(λ_i, i = 2, ..., n) ∈ R^{(n−1)×(n−1)} collects the nonzero eigenvalues from Lemma 3.3.
Proof: Define the matrices B̲ = c̲^{1/2} D^{1/2} and B̄ = c̄^{1/2} D^{1/2}. From the definitions of c̲ and c̄ and c̲ d_i ≤ b_i² = c_i d_i ≤ c̄ d_i for all the nodes, it yields B̲ B̲⊤ ⪯ B B⊤ ⪯ B̄ B̄⊤, which leads to corresponding bounds on the solution of the Lyapunov equation (19). By Theorem 3.4, we further obtain explicit formulas for the bounding variance matrices. With these explicit formulas, (18) and the form in (20), we obtain (21). □

Based on this Lemma, we have the following theorem.

Theorem 3.7 Consider the stochastic process (9) and the metric Φ: (1) the value of Φ lies in the interval [0, 1]; (2) if the second smallest eigenvalue λ_2 of the matrix L_c at the synchronous state decreases to zero, then the metric Φ increases to one.
Proof: (1) At a synchronous state, when the strength of the disturbances varies from zero to infinity, the variances σ_ω² of the frequencies at the nodes and σ_δ² of the phase differences in the lines vary from zero to infinity. It follows from Definition 3.2 for p_{δ,k} and p_{ω,i} that the value of Φ lies in the interval [0, 1].
(2) By the definition in (11), p_{δ,k} increases to one as the variance σ_{δ,k}² increases to infinity. With the bounds of the matrix Q in Lemma 3.6, we only need to prove that, as the second smallest eigenvalue λ_2 decreases to zero, there is at least one diagonal element of the matrix Q_δ that increases to infinity. The incidence matrix of the graph is written as C = [C_1 C_2 ⋯ C_m], where C_k describes the indices of the two nodes connected by line e_k. Without loss of generality, assume the line e_k connects nodes i and j and the direction of this line is from node i to node j. Then, the i-th and j-th elements of the vector C_k are C_{ik} = 1 and C_{jk} = −1, respectively, and the other elements all equal zero. From the definitions of the matrices in Lemma 3.6, we obtain the diagonal elements of Q_δ as sums of terms proportional to λ_{q+1}^{−1}(u_{i,q+1} − u_{j,q+1})², where u_{i,q+1} and u_{j,q+1} are the i-th and j-th elements of the vector u_{q+1}. Here u_{q+1} is the (q + 1)-th column of the matrix U defined in Lemma 3.3. Because u_2 is a column of the orthogonal matrix U, there exist i, j with i ≠ j such that u_{i,2} ≠ u_{j,2}; thus a diagonal element of Q_δ increases to infinity as the second smallest eigenvalue λ_2 decreases to zero. □

This theorem indicates that CREP Φ fully reflects the size of the basin of attraction of a stable synchronous state. In fact, it is known that as the power loads increase or the line capacities decrease, the synchronous state ((δ*)⊤, 0)⊤ moves to the boundary of the domain Θ, and both the second smallest eigenvalue λ_2 of L_c and the size of the basin of attraction decrease. For the system (1), the number of eigenvalues of its system matrix with positive real part equals the number of negative eigenvalues of L_c [27]. Thus, when the second smallest eigenvalue of L_c decreases to zero, the stable synchronous state gradually disappears, which means the basin of attraction disappears. Clearly, this is captured by CREP Φ, which increases to one in this case. This theorem also demonstrates the phenomenon that, if the synchronous state is close to the boundary, a very small disturbance may lead to desynchronization.
To illustrate the procedure for calculating CREP and its characteristics, we apply it to the Single Machine Infinite Bus (SMIB) model with Gaussian disturbances,

dδ(t) = ω(t) dt,
m dω(t) = (p − d ω(t) − l sin δ(t)) dt + b dw(t).

Assume p ≤ l; obviously, an equilibrium point is (δ*, ω*) = (arcsin(p/l), 0). Linearising the system at the equilibrium point, we obtain a Gaussian stochastic process with system matrix and input matrix

A = [[0, 1], [−(l/m) cos δ*, −d/m]], B̃ = [0, b/m]⊤.

By solving the following Lyapunov equation,

A Q + Q A⊤ + B̃ B̃⊤ = 0,

we obtain the variance matrix in the invariant probability distribution of the stochastic process,

Q = [[b²/(2 d l cos δ*), 0], [0, b²/(2 m d)]].

With this variance matrix and the equilibrium point as the expectation, the critical probabilities p_δ and p_ω are calculated according to Definition 3.2. Clearly, p_ω depends on the inertia and the damping of the synchronous machine and the strength of the disturbance, while it is independent of the line capacity l and the load p. However, the dependence of p_δ on the system parameters is relatively complex.
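The closed-form variance matrix above can be cross-checked numerically by solving the 2x2 Lyapunov equation directly (the parameter values below are arbitrary).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

m, d, l, p, b = 2.0, 0.5, 1.5, 1.0, 0.1
ds = np.arcsin(p / l)                             # equilibrium phase delta*
A = np.array([[0.0, 1.0], [-l * np.cos(ds) / m, -d / m]])
B = np.array([[0.0], [b / m]])
Q = solve_continuous_lyapunov(A, -B @ B.T)        # A Q + Q A^T + B B^T = 0
print(Q[0, 0], b**2 / (2 * d * l * np.cos(ds)))   # Var(delta), analytic value
print(Q[1, 1], b**2 / (2 * m * d))                # Var(omega), analytic value
```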
It is clear, however, that the phase component is independent of the inertia. Fig. 1 shows the trend of the metric as the system parameters change. In particular, as the load increases to the line capacity, the basin of attraction of the equilibrium (δ*, 0) gradually disappears. This is fully confirmed by the metric, which increases to one. In Section 5, the Braess' paradox will be revisited with the proposed metric, where new findings will be presented.
The optimization framework
The proposed metric CREP quantifies the risk that the state escapes from the critical set Θ. By minimizing this metric with the system parameters as decision variables, we obtain an optimization problem with objective (23) subject to constraints (24) and (25), where the decision variables are selected system parameters and the inequality constraints bound them. If only the rotor angle stability is considered, the objective function is replaced by Φ_δ. Similarly, if suppressing the frequency fluctuations is the main purpose, the objective function is replaced by Φ_ω. We remark that the constraints (24) cannot be neglected, because there may be many equilibrium points of the system (1) that are not in the set Θ [25], and an unexpected unstable synchronous state may be obtained if these constraints are ignored.
If the power generations are selected as the decision variables, as is usual in tertiary frequency control, the constraints (25) are replaced by (26), where the total power generation over the set of generator nodes must balance the total power load and each generation is bounded below and above. Note that the power load at a node may also be selected as a decision variable; the form of the constraints is then the same as (26).
If the inertia coefficients of the synchronous machines are the decision variables, the constraints (25) are replaced by (27), where the inertia coefficients sum to a total amount of inertia M and the inertia coefficient at each node is bounded below and above.
Similarly, if the damping coefficients are selected as decision variables, the constraints (25) are replaced by (28), where the damping coefficients sum to a total amount of damping D and the damping coefficient at each node is bounded below and above.
If the line capacities are selected as decision variables, the constraints (25) are replaced by (29), where the capacities sum to the total available line capacity L and the capacity of each line (i, j) ∈ E is bounded below and above. A sketch of such a budget-constrained minimization is given below.
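The following is a hedged sketch of this framework under the line-capacity budget: it minimizes a stand-in objective over line capacities subject to the equality and bound constraints. The function `crep` is a hypothetical placeholder; in the paper the optimization is solved with a Genetic Algorithm in Matlab, whereas scipy's SLSQP is used here purely for illustration.

```python
# Minimal sketch: minimize a CREP-like objective over line capacities under
# a total-capacity budget with per-line bounds (all values illustrative).
import numpy as np
from scipy.optimize import minimize

n_lines, L_total = 4, 8.0
lb, ub = 0.5, 5.0

def crep(x):
    # Hypothetical stand-in objective; a real implementation would rebuild
    # the system matrices for capacities x and evaluate Phi from the
    # invariant probability distribution.
    return float(np.sum(1.0 / x))

constraints = [{"type": "eq", "fun": lambda x: np.sum(x) - L_total}]
x0 = np.full(n_lines, L_total / n_lines)      # start from uniform capacities
res = minimize(crep, x0, method="SLSQP",
               bounds=[(lb, ub)] * n_lines, constraints=constraints)
print(res.x, res.fun)
```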
When the objective function in (23) is replaced by Φ_ω = ∥σ_ω∥∞, the frequency fluctuations are minimized by tuning the system parameters. For this minimization problem, we have the following proposition.
Proposition 4.1. The value Φ_ω is monotonically increasing with respect to the standard deviation σ_ω; thus, minimizing ∥σ_ω∥∞ is equivalent to minimizing ∥σ_ω²∥∞. □ With this proposition, the objective function can be further replaced by ∥σ_ω²∥∞. We remark that this differs from minimizing the H2 norm as in the optimization framework (B.1), where the objective function is the sum of the frequency variances at all the nodes.
Note that in the constraints (26), (27), (28) and (29), the upper bound may equal the lower bound, in which case the corresponding decision variables become constants. The constraints (24) restrict the synchronous state to the domain (7). Because the synchronous state may not exist, the optimization problem may have no solution in that case.
Case study
In this section, we evaluate the performance of the proposed metric in assessing transient stability, and of the optimization framework in enhancing the transient stability, of a system with the network topology shown in Fig. 2. In this model, all the buses are assumed to be synchronous machines. There are 39 nodes and 46 lines. The nodes with even numbers are connected to power generators and the other nodes are connected to power loads, denoted by blank nodes and grey nodes respectively in Fig. 2.
The solution of the optimization problem with objective (23) is sensitive to the parameter defining the critical set: if it is too large, the phase differences often hit the boundary first, while if it is too small, the frequency components often hit the boundary first. Due to this difficulty in configuration, we first study the metric Φ while disregarding the frequency fluctuations, which leads to Φ = Φ_δ, and then study it without the boundary being triggered by the phase differences, which leads to Φ = Φ_ω. For the former case, we only need to evaluate the performance of the metric Φ_δ = ∥σ_δ∥∞ and the corresponding optimization framework; for the latter, we evaluate Φ_ω = ∥σ_ω∥∞ and its corresponding optimization framework. In particular, we revisit the Braess' paradox, in which, according to earlier studies using different metrics, stability may decrease when a new line is added or the capacity of an existing line is increased.
The phase cohesiveness, measured either by ∥δ*∥∞ or by the order parameter at the synchronous state, and the H2 norm of the system with stochastic input may be considered as metrics for optimal network design [7,18]; the corresponding optimization frameworks are introduced in the Appendix. Here, we compare the performance of these optimization frameworks to that of the framework proposed in this paper. The corresponding optimization problems are solved by the Genetic Algorithm method using Matlab. The bound constraints (26b-29b) of the decision variables are not considered.
To show the results intuitively, the mean first hitting time of the state to the boundary of Θ is used to indicate the enhancement of the transient stability in these evaluations; it is calculated statistically by the Monte-Carlo method for the nonlinear system (4). The Euler-Maruyama method is applied to the system (4) with the initial condition (δ(0), ω(0)) = (δ*, 0) and simulation time 10^5. The total number of samples for calculating the mean first hitting time is 10^5 and the time step for the simulation is 10^-3. In subsection 5.1, we investigate the dependence of Φ_δ on the system parameters and the relationship between Φ_δ and the mean first hitting time. The performance of minimizing Φ_δ and the revisit of the Braess' paradox are also described in this subsection. In subsection 5.2, we present the dependence of Φ_ω on the system parameters and the performance of minimizing Φ_ω.
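The following is a minimal sketch (assumed, not the authors' code) of this Monte-Carlo estimate for the SMIB case: Euler-Maruyama integration of the stochastic swing equation, stopping when the phase leaves an assumed critical interval |δ| < π/2. The horizon and sample count are kept far smaller than the paper's settings so the sketch runs quickly.

```python
# Euler--Maruyama estimate of the mean first hitting time for the SMIB model.
import numpy as np

m, d, K, p, b = 1.0, 0.5, 2.0, 1.0, 0.6   # illustrative parameters
delta_star = np.arcsin(p / K)
dt, t_max, n_samples = 1e-3, 100.0, 50    # much smaller than the paper's settings

rng = np.random.default_rng(0)
hits = []
for _ in range(n_samples):
    delta, omega, t = delta_star, 0.0, 0.0
    while t < t_max:
        noise = b * np.sqrt(dt) * rng.standard_normal()
        delta += omega * dt
        omega += ((p - K * np.sin(delta) - d * omega) * dt + noise) / m
        t += dt
        if abs(delta) > np.pi / 2:        # assumed boundary of the critical set
            hits.append(t)
            break

print("hit fraction:", len(hits) / n_samples,
      "mean first hitting time:", np.mean(hits) if hits else float("nan"))
```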
The metric Φ_δ
To understand the dependence of the metric Φ_δ on the system parameters and the performance of the proposed optimization framework, we focus on systems with the following five configurations of system parameters, where the decision variables may be the power generations, the line capacities of all the lines, or the inertia or damping coefficients of the synchronous machines at all the nodes.
(1) The parameters selected as decision variables are set identically. For example, when the power generations are selected as decision variables, we set all the power generations identically with total power supply P, i.e., each generation equals P divided by the number of generators, which is also the dimension of the decision variables. We refer to the models with this parameter configuration as initial models for simplicity. (2) The parameters selected as decision variables are set to the solution of the optimization problem minimizing Φ_δ. For example, when the power generations are selected as decision variables, their values are set to the solution of the optimization problem with objective Φ_δ. The strength of the disturbances is set by the rule 0.5√(·) + 1. If a parameter is not selected as a decision variable, it is set as in Table 1 (node-wise values of 20, 0.08 and 0.2 × (42 − ·) for the respective parameters listed there); for example, if the power generations are selected as decision variables, the other parameters are set as shown in Table 1. Specifically, when studying the dependence of the metric Φ_δ and the mean first hitting time on the power generations and loads, we set the power loads identically with total amount of power P. In the Monte-Carlo simulations of the system (4), the first hitting time is recorded when there are lines in which the phase differences exit the set Θ, regardless of the deviations of the frequencies. The initial models are used for comparing the performance of the four optimization frameworks.
The dependence of Φ_δ on the system parameters
The dependence of the mean first hitting time and of Φ_δ on the parameters P, L, M and D is shown in Fig. 3. The findings from these figures are summarized below.
First, by comparing the trends of Φ_δ = ∥σ_δ∥∞ as these parameters change in Fig. 3(a-d) with those of the mean first hitting time in Fig. 3(e-h), it can be observed that when the metric ∥σ_δ∥∞ increases, the mean first hitting time decreases. This demonstrates that CREP fully reflects the trends of the mean first hitting time, and shows its ability to effectively assess transient stability in terms of the mean first hitting time.
Second, it is found from Fig. 3(a,b,d) that Φ_δ = ∥σ_δ∥∞ decreases as L and D increase, while it increases as P increases, in all five models. This is intuitive. In particular, it is shown in Fig. 3(c) that when the inertia increases, ∥σ_δ∥∞ decreases significantly. This demonstrates that increasing the inertia is also beneficial to rotor angle stability, which is consistent with the findings from the explicit formula of the variance matrix of the phase differences for star networks in [24]. This is because a large inertia accelerates the propagation of the disturbances from a node to the other nodes. It is remarked that, under the assumption of a uniform disturbance-damping ratio at all the nodes, the variance of the phase differences is independent of the inertia [21].
We remark that the objectives ∥δ*∥∞ in (B.2) and the order parameter in (B.4) are independent of the inertia and damping coefficients of the synchronous machines. Thus, when the inertia and the damping coefficients are selected as decision variables, the values of these objectives do not change. This is shown in Fig. 3(g,h), where the curves of the mean first hitting time in the initial model and in the models with system parameters set to the solutions of maximizing the order parameter and minimizing ∥δ*∥∞ coincide.
Performance of minimizing Φ_δ
We compare the performance of the optimization frameworks, i.e., their ability to increase the mean first hitting time. Fig. 3(e-h) clearly shows that, with the system parameters optimized by the proposed optimization framework (denoted by red dotted lines), the mean first hitting time is much larger than with all the others. This demonstrates that minimizing Φ_δ is more effective at increasing transient stability than optimizing any of the other metrics. This also confirms that, for enhancing transient stability, it is insufficient to suppress the fluctuations only. Obviously, when the strength of the disturbance decreases to zero, which drives the variances to zero, the effectiveness gradually reduces to that of the optimization framework (B.2).
Revisit of the Braess' paradox with Φ_δ
If a new line is added or the capacity of a line increases, its influence can be evaluated from the changes of the linear stability, measured by the absolute values of the real parts of the non-zero eigenvalues of the system matrix in (9), or of the order parameter defined in (B.3). We denote the smallest absolute value of the real parts of the non-zero eigenvalues by min{|Re(μ)|}, where μ ranges over the non-zero eigenvalues. A Braess' paradox occurs if min{|Re(μ)|} or the order parameter decreases when a new line is added or the capacity of a line increases. Here, on the network shown in Fig. 2, we study the performance of Φ_δ in identifying a Braess' paradox and compare it with those of the linear stability and the order parameter. We set the disturbance strength to 0.09 and set the other parameters as in Table 1. We show in Table 2 the values of ∥σ_δ∥∞, min{|Re(μ)|}, the order parameter and the mean first hitting time, where the 95% confidence intervals of the mean first hitting time have half-widths of at most 2 s in all 4 cases.
Let us first focus on the changes of these metrics after a new line is added, i.e., either line (19, 23) in case 2 or line (24, 38) in case 3. It is shown in Table 2 that after adding line (19, 23), min{|Re(μ)|} and the order parameter increase from 0.2833 to 0.3179 and from 0.9663 to 0.9666 respectively, which indicates that the stability is increased. In contrast, ∥σ_δ∥∞ increases from 4.383 × 10⁻⁶ to 1.419 × 10⁻⁵ and the mean first hitting time decreases from 195.14 s to 148.94 s, both of which indicate that the stability is decreased. Clearly, this conflicts with the result given by min{|Re(μ)|} and the order parameter. Conversely, in the case of adding line (24, 38), a Braess' paradox is identified with respect to min{|Re(μ)|} and the order parameter, which decrease from 0.2833 to 0.2832 and from 0.9663 to 0.9652 respectively. In contrast, the metric ∥σ_δ∥∞ decreases from 4.383 × 10⁻⁶ to 3.393 × 10⁻⁶ and the mean first hitting time increases from 195.14 s to 241.51 s, by which the newly added line is identified as increasing the stability.
We next study the changes of these metrics after increasing the capacity of line (22, 35), by comparing the results of case 1 and case 4 in Table 2. After increasing the line capacity, min{|Re(μ)|} and the order parameter increase from 0.2833 to 0.3103 and from 0.9663 to 0.9666 respectively, both indicating that increasing the capacity of line (22, 35) is beneficial to the stability. However, the metric ∥σ_δ∥∞ increases from 4.383 × 10⁻⁶ to 4.967 × 10⁻⁶ and the mean first hitting time decreases from 195.14 s to 188.11 s, which indicate that the stability decreases; thus a Braess' paradox occurs.
In short, whether a Braess' paradox occurs depends on the metric used for stability. The proposed metric, which involves the roles of all the system parameters and the strength of the disturbances, provides a more practical tool for identifying a Braess' paradox.
The metric Φ_ω
In this subsection, we study the dependence of the metric Φ_ω = ∥σ_ω∥∞ on the system parameters and the performance of the corresponding optimization framework. We focus on the mean first hitting time at which the frequencies exit their critical range, and on ∥σ_ω∥∞, in systems with the following three configurations of system parameters.
(1) The parameters selected as decision variables are set identically. The models with this parameter configuration are again called initial models and are used for comparison as in the previous subsection. We set the disturbance strength to 5 × 10⁻⁴, which is much smaller than the setting in Subsection 5.1. The parameters that are not selected as decision variables are set to the values in Table 1. Note that minimizing ∥σ_ω²∥∞ is equivalent to minimizing ∥σ_ω∥∞ by Proposition 4.1. As in the previous subsection, when studying the impact of the power generations and loads on the metric ∥σ_ω∥∞, we select all the power generations as decision variables and set the power loads identically with the total amount P. We set the bound of the frequency range to 0.02 for calculating ∥σ_ω∥∞ and the mean first hitting time in the simulations of (4). In the Monte-Carlo simulations of the system (4), the first hitting time is recorded when there are nodes at which the frequencies exit the critical range, regardless of the fluctuations of the phase differences. Note that in all the simulations, because the strengths of the disturbances are much smaller than those in Subsection 5.1, the phase differences in all the lines remain within their admissible range; in other words, the frequencies always hit the boundary of Θ first. The simulation results are shown in Fig. 4.
The dependence of Φ_ω on the system parameters
It is observed from Fig. 4 that ∥σ_ω∥∞ increases as P increases and decreases as L increases. This is because either increasing P or decreasing L decreases the effective coupling weight of each line, given by the product of the line capacity and the cosine of the phase difference at the synchronous state, which decelerates the propagation of disturbances from a node to the others. Note that accelerating the propagation of the disturbances in a network with heterogeneous disturbance strengths is beneficial for decreasing ∥σ_ω²∥∞, which further decreases ∥σ_ω∥∞. This is consistent with the theoretical analysis based on explicit formulas of the variance matrix in special networks, including star networks and complete networks, in [24].
From Fig. 4(c-d), it is seen that ∥σ_ω∥∞ decreases as M and D increase. This is consistent with the analyses in [21] and [24] of the dependence of the frequency variances on the inertia and damping coefficients.
Comparing the plots of the mean first hitting time and of ∥σ_ω∥∞ in Fig. 4(a-d) and (e-h), we find that the trend of ∥σ_ω∥∞ fully reflects the dependence of the mean first hitting time on the system parameters. Thus, CREP characterizes the mean first hitting time and consequently assesses the transient stability of power systems.
Performance of minimizing Φ_ω
Comparing the curves of the mean first hitting time in Fig. 4(e-h), the mean first hitting time when ∥σ_ω∥∞ is minimized is the largest among the three metrics. This demonstrates that the proposed optimization framework is the most effective at increasing transient stability.
In contrast, it is surprisingly found from Fig. 4(c-d) and (g-h) that the curves of ∥σ_ω∥∞ and of the mean first hitting time in the initial model and in the model where the trace of the variance matrix is minimized almost overlap. This indicates that, by minimizing the trace with either the inertia or the damping coefficients as decision variables, the stability can hardly be improved.
Conclusion
Based on the theory of the invariant probability distribution of a stochastic process driven by Brownian motion, we have proposed a metric named CREP, which involves all the system parameters and reflects the size of the basin of attraction, to assess transient stability. An optimization framework minimizing CREP with the system parameters as decision variables was formulated. The mean first hitting time of the state to the boundary of a critical set can be significantly increased by this approach, which intuitively shows the strong potential of our approach in enhancing transient stability.
Future study will focus on efficient algorithms for solving the corresponding optimization problems and on theoretical analysis of the transient stability enhancement of power systems with non-Gaussian noise [16]. Extensions of the method for robustness improvement of other nonlinear systems with continuously occurring disturbances will also be investigated.
A The H2 norm and the invariant probability distribution
Consider a linear stochastic system (A.1) with state x ∈ R^n, a Hurwitz system matrix A ∈ R^{n×n}, input matrix B ∈ R^{n×m} and output matrix C ∈ R^{p×n}, where the input is denoted by w ∈ R^m and the output of the system is denoted by y ∈ R^p. The squared H2 norm of the transfer matrix of the mapping (A, B, C) from the input to the output is defined as ∥G∥₂² = tr(BᵀQ_oB) = tr(CQ_cCᵀ), (A.2a) where tr(·) denotes the trace of a matrix and Q_o, Q_c ∈ R^{n×n} are the observability Gramian of (C, A) and the controllability Gramian of (A, B) respectively [6,20]. When the input is modelled by Gaussian white noise, the distributions of the state and the output are also Gaussian at all times. Because the matrix A is Hurwitz, there exists an invariant probability distribution of this linear stochastic system; its covariance matrix is the unique solution of the Lyapunov matrix equation (A.2c).
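As a small, hedged illustration of (A.2a), the following computes the squared H2 norm of a toy system by solving the Lyapunov equation for the controllability Gramian; the matrices are arbitrary examples, not taken from the paper.

```python
# Squared H2 norm via the controllability Gramian: solve A P + P A^T + B B^T = 0,
# then ||G||_2^2 = tr(C P C^T).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -0.5]])    # Hurwitz example matrix
B = np.array([[0.0], [1.0]])
C = np.eye(2)

P = solve_continuous_lyapunov(A, -B @ B.T)  # controllability Gramian
print("squared H2 norm:", np.trace(C @ P @ C.T))
```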
B The traditional optimization frameworks
In this section, we present the traditional metrics for the optimal configuration of the system parameters: the H2 norm of the system (9), and the phase cohesiveness and the order parameter as measures of the level of synchronization.
If the maximum of the variances of the phase-angle differences over the edges is minimized, the objective function in (B.1) is replaced by ∥σ_δ²∥∞. The decision variables can be either the power generations, the inertia, the damping coefficients or the line capacities, and the corresponding constraints (25) can be replaced by the ones in (26), (27), (28) and (29) respectively. The order parameter of coupled phase oscillators is defined as r e^{iψ} = (1/n) Σ_j e^{iδ_j}, where i² = −1, δ_j is the phase at node j, and r e^{iψ} is the phases' centroid on the complex unit circle, with the magnitude r ranging from 0 to 1 [10]. In Section 5, the order parameter is maximized by solving the following optimization problem [18]: max 1 − ∥δ*∥₂²/(2n), s.t. (2), (24), (25). (B.4)
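A tiny sketch of this definition follows: the order parameter is the magnitude of the mean of the unit phasors, approaching 1 when the phases are tightly clustered (phase values are arbitrary illustrations).

```python
# Kuramoto-type order parameter: magnitude of the phases' centroid on the
# complex unit circle.
import numpy as np

def order_parameter(phases):
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))

print(order_parameter([0.0, 0.1, -0.05]))   # close to 1: nearly synchronized
print(order_parameter([0.0, 2.1, 4.2]))     # widely spread phases: close to 0
```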
For (B.4), the decision variables can be either the power generations or the line capacities, and the corresponding constraints (25) can be replaced by the ones in (26) and (29) respectively. In (B.2) and (B.4), because the inertia and the damping of the synchronous machines have no impact on the synchronous state, these parameters cannot be configured in an optimal way by these frameworks. | 2023-09-14T06:42:58.423Z | 2023-09-13T00:00:00.000 | {
"year": 2023,
"sha1": "1a7037174b165a97971c33fa92cfa8ffe0dbc845",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1a7037174b165a97971c33fa92cfa8ffe0dbc845",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science"
]
} |
81870132 | pes2o/s2orc | v3-fos-license | Breakbone Fever- Sonological Manifestations of Dengue
Background: Dengue fever is a major cause of illness and death worldwide. The disease is caused by the dengue virus, which is transmitted to humans by the bites of infected mosquitoes, Aedes (Ae.) aegypti and Ae. albopictus [1]. The disease represents a global health issue as it is endemic in around 100 countries, most of which are in tropical and sub-tropical areas. Objectives: To determine the importance of ultrasonography in the early diagnosis, severity grading and prognostic management of patients suffering from dengue; to evaluate the typical sonographic features seen in patients suffering from dengue fever; and to correlate ultrasound findings with platelet counts and predict the severity of dengue fever based on these findings. Materials and Methods: This is a retrospective study done in KIMS hospital Bangalore from June 2015 to January 2016 involving 140 patients with clinical suspicion of dengue fever. Ultrasound of the abdomen, pelvis and thorax was done using GE Voluson Pro750 and Philips HD7 machines. The typical edematous gall bladder wall thickening resembling an onion peel was an important criterion in our ultrasound study (Sachar and sunders sign). The serology markers were NS1Ag, IgG and IgM. Platelet count values were graded as severe (less than 50,000) and mild (50,000-100,000). Results: Sonography was conducted on 140 patients, of whom 110 were seropositive for dengue. With the exclusion of 30 seronegative cases, the total number of patients included in our study was 110, ranging in age from 1 to 90 years, of whom 64 (58%) were male and 46 (42%) were female. The majority of patients in the study were aged 20 to 40 years. All 110 (100%) patients presented with fever, 93 (85%) had generalized body ache, 68 (62%) had nausea and vomiting and 25 (23%) had a generalized skin rash. Sonographic correlation in the 55 patients with platelet counts less than 50,000 demonstrated gall bladder wall thickening in 52 patients (94%), ascites in 48 (87%), pleural effusion in 39 (71%), splenomegaly in 14 (25%) and hepatomegaly in 11 (20%). The 36 patients with platelet counts of 50,000 to 100,000/mL demonstrated gall bladder wall thickening in 29 (81%), ascites in 22 (61%) and pleural effusion in 15 (42%) on sonography. In this study, platelet counts were reduced in all 140 cases and dengue serology was positive in 110. Platelet counts below 50,000/mL were noted in 55 (50%) and 50,000 to 100,000/mL in 36 (33%) patients. Conclusion: In a dengue epidemic, the ultrasound findings of gall bladder edema, with or without pleural effusion and ascites, should definitely suggest a provisional diagnosis of dengue fever prior to confirmatory serology reports. This helps in the early management of patients, thereby reducing the morbidity and mortality associated with dengue fever.
Introduction
Dengue fever is a major cause of illness and death worldwide. The disease is caused by the dengue virus, which is transmitted to humans by the bites of infected mosquitoes, Aedes (Ae.) aegypti and Ae. albopictus [1]. The disease represents a global health issue as it is endemic in around 100 countries, most of which are in tropical and sub-tropical areas. Over the last decades, the incidence rate and the geographic distribution of dengue have rapidly increased (almost 30-fold). Data from the World Health Organization (WHO) estimate up to 100 million cases of dengue fever each year. Changes in dengue epidemiology and the increase in incidence rates (with and without co-morbidities) have led the WHO to propose a new dengue classification system according to disease severity [2].
Objectives
The purpose of our study is to determine the importance of ultrasonography in the early diagnosis, severity grading and management of patients suffering from dengue, as serology takes approximately 7 to 10 days to give a positive result.
Material and Methods
This is a retrospective study done in KIMS hospital Bangalore from June 2015 to January 2016 involving 140 patients with clinical suspicion of dengue fever. Patients of any age and either sex presenting with clinical suspicion of dengue fever were included in the study. Patients with negative dengue serology and those with other medical conditions such as chronic heart disease and chronic renal disease were excluded. A detailed history of all patients included in the study was taken, along with a thorough clinical examination, and laboratory investigation findings were recorded as per the proforma. Based on the platelet count, patients were split into two groups: one group with counts below 50,000/mL and another with counts of 50,000/mL and above. Ultrasound of the abdomen and thorax was performed in all 140 cases with ultrasound machines (GE Voluson Pro750 and Philips HD7) using 3.5 and 5 MHz probes. Ultrasound findings of gall bladder wall edema, ascites, pleural effusion, splenomegaly and hepatomegaly were recorded in all patients. Gall bladder wall edema was measured between the two layers of the anterior wall of the gall bladder. Both pleural spaces were evaluated through an intercostal approach. A liver measuring more than 15 cm was taken as hepatomegaly, and a spleen with a long axis of more than 12 cm and a short axis of more than 5 cm was taken as splenomegaly. Serological testing for dengue fever was performed in all patients using a dengue card test, which includes NS1Ag (non-structural antigen), IgM and IgG; IgM and IgG are antibodies against the dengue virus in human plasma.
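For illustration only (not part of the study's methods), the grading cut-offs described above can be expressed as a simple decision rule; the thresholds are taken from the text and the function names are hypothetical.

```python
# Illustrative encoding of the platelet grading and organomegaly cut-offs.
def platelet_grade(count_per_uL):
    if count_per_uL < 50_000:
        return "severe"
    if count_per_uL <= 100_000:
        return "mild"
    return "not graded"

def organomegaly(liver_cm, spleen_long_cm, spleen_short_cm):
    return {
        "hepatomegaly": liver_cm > 15,
        "splenomegaly": spleen_long_cm > 12 and spleen_short_cm > 5,
    }

print(platelet_grade(42_000), organomegaly(16.2, 12.5, 5.3))
```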
Results
Ultrasonography was conducted on 140 patients, of whom 110 were seropositive for dengue; the 30 seronegative cases were excluded. The total number of patients included in our study was 110, of whom 64 (58%) were male and 46 (42%) were female. The age of patients ranged from 1 to 90 years, and the majority of our cases were in the age group of 20 to 40 years. All 110 (100%) patients presented with fever for 3 to 5 days, 93 (85%) had generalized body ache, 68 (62%) had nausea and vomiting and 25 (23%) had a skin rash. Ultrasound findings in the 110 seropositive cases included edematous gall bladder wall thickening in 86 patients (78%), ascites in 73 (66%), pleural effusion in 56 (51%), splenomegaly in 20 (20%) and hepatomegaly in 18 (16%).
In our study, platelet counts were reduced in all 140 cases, but dengue serology was positive in 110 cases. Platelet counts were below 50,000 in 55 (50%) and between 50,000 and 100,000/mL in 36 (33%) patients.
Discussion
Dengue fever is caused by infection with the dengue virus (DENV). DENV is a vector-borne virus transmitted to humans primarily by two mosquito species, Ae. aegypti and Ae. albopictus. DENV is a single positive-stranded RNA virus belonging to the Flavivirus genus of the Flaviviridae family and has 4 major serotypes (DENV 1-4) that are antigenically distinct from each other. Each DENV serotype is phylogenetically distinct, suggesting that each serotype could be considered a separate virus [3]. Mosquitoes acquire the virus by feeding on the blood of infected persons. At first, the virus infects and replicates in the mid-gut epithelium of the mosquito and then spreads to other organs until it reaches the salivary glands after 10-14 days, where it can be inoculated into another person during a subsequent blood meal. Vertical transmission of DENV in mosquitoes, i.e., from mosquito to larvae, has been reported by a number of research groups. Clinically, dengue infection has a broad spectrum of features. The vast majority of cases are asymptomatic and pass unnoticed. Typically, the symptoms become prominent after an incubation period of 3-10 days [4]. The severity of the clinical manifestations varies from mild symptoms to severe, life-threatening symptoms in the case of dengue hemorrhagic fever (DHF) and dengue shock syndrome (DSS) [5]. Dengue fever mostly occurs in children and young adults [6]. Clinical features vary with the age of the patient, although clinically occult infection occurs in about 80% [7]. There are four presentations: (1) non-specific febrile illness; (2) classical dengue fever; (3) dengue haemorrhagic fever; and (4) dengue haemorrhagic fever with dengue shock syndrome, encephalopathy and liver failure. Non-specific febrile illness: a maculo-papular rash occurs mostly in young children. Upper respiratory features, especially pharyngitis, are common [8].
Classical dengue fever (DF) is primarily a disease of older children and adults. It begins abruptly and is followed by three phases: febrile, critical and recovery. The fever may be biphasic, lasting 3 to 7 days, and is accompanied by a variety of symptoms including severe headache, retro-orbital pain, fatigue, nausea, vomiting, generalised aches, arthralgia and myalgia, hence the term "break bone fever" [7]. Dengue haemorrhagic fever (DHF) is primarily a disease of children under 15 years in hyperendemic areas. It usually follows a secondary dengue infection and is characterized by high fever, haemorrhages, circulatory failure and hepatomegaly [9].
Dengue shock syndrome (DSS) is associated with almost 50% mortality. Warning signs include sustained abdominal pain, vomiting, irritability or somnolence, a fall in body temperature and decrease in platelet count [8] .
Typically, leucopenia and thrombocytopenia occur as early as the second day of fever. After the onset of illness, the virus can be detected in serum, plasma, circulating blood cells and other tissues for 4-5 days. During the early stages of the disease, virus isolation, nucleic acid detection or antigen detection can be used to diagnose the infection. At the end of the acute phase of infection, serology is the method of choice for diagnosis. These antibodies are detectable in 50% of patients by days 3-5 after the onset of illness, increasing to 80% by day 5 and 99% by day 10 [10]. Thrombocytopenia has always been one of the criteria used by WHO guidelines as a potential indicator of clinical severity. In the most recent 2009 WHO guidelines, the definitions generally describe a rapid decline in platelet count or a platelet count less than 150,000 per microliter of blood. Most clinical guidelines recommend that platelet transfusions be given to patients who develop serious hemorrhagic manifestations or have very low platelet counts: counts falling below 10-20 × 10⁹/L without hemorrhage, or below 50 × 10⁹/L with bleeding or hemorrhage [11]. Serology is the mainstay in the diagnosis of dengue fever; hemagglutination inhibition antibodies usually appear at detectable levels by day 5-6 of the febrile illness. Ultrasound findings in the early, milder form of dengue fever include gall bladder wall thickening, minimal ascites, pleural effusion and hepatosplenomegaly.
There is no specific therapy. Uncomplicated dengue infection usually resolves spontaneously. Patients with life-threatening complications should be managed in hospital with supportive treatment. Fluid replacement and close monitoring of fluid and electrolyte balance are vital. Isotonic solutions (e.g., 0.9% saline, Ringer's lactate or Hartmann's solution) should be used [12]. Ultrasonography was conducted on 140 patients, of whom 110 were seropositive for dengue; the 30 seronegative cases were excluded. The total number of patients included in our study was 110, of whom 64 (58%) were male and 46 (42%) were female. The age of patients ranged from 1 to 90 years, and the majority of cases fell in the age group of 20 to 40 years, with male predominance. These findings correlate well with the studies by Santosh et al [13] and Shruti et al [14]. The commonest ultrasound findings in seropositive dengue patients with platelet counts less than 50,000 were gall bladder wall edema, ascites and pleural effusion (Figures 1, 2, 3). These findings correlate well with the study by Venkat Sai et al [15], in which the authors concluded that ultrasound of the abdomen is an important adjunct to the clinical profile in diagnosing dengue fever and may help to direct further confirmatory investigations; furthermore, the diagnosis can be made early in the course of the disease compared with other modes of diagnosis. During an epidemic, the ultrasound findings of gall bladder edema with or without polyserositis in febrile patients should suggest the possibility of dengue fever. The study by Santosh et al [13] showed that gall bladder wall edema, pleural effusion, ascites and hepatosplenomegaly should strongly favor the diagnosis of dengue fever in a patient presenting with fever and associated symptoms, particularly during an epidemic. In our study, gall bladder wall edema was present in 94% of cases with platelet counts less than 50,000/mL, suggesting that gall bladder wall edema is an important ultrasound marker of severe dengue; this finding correlates well with the study by Shashidhar et al [16]. In our study, gall bladder wall edema, ascites and pleural effusion were not significant in patients with platelet counts of 100,000 to 150,000/mL (31%). Gall bladder wall edema was nonspecific in the studies by Omprakash Bhangdia et al [17] and Venkat Sai et al [15]. In our study, dengue-seropositive patients with platelet counts less than 50,000/mL showed all the ultrasound features of dengue fever (Table 1). This indicates severe dengue, and the referring/treating physician can plan and tailor treatment at the earliest, before laboratory confirmation arrives.
Conclusion
During dengue epidemics, in a patient presenting with acute-onset high-grade fever and associated symptoms, the ultrasound findings of gall bladder wall edema, ascites and pleural effusion strongly suggest a diagnosis of dengue fever. Ultrasonography is a simple, non-invasive and non-ionizing modality that helps in the early diagnosis of dengue fever prior to serological confirmation, thereby helping in the planning and early initiation of patient management and, ultimately, in reducing the morbidity and mortality associated with dengue fever. If all the mentioned ultrasound findings of dengue are present, the platelet count is likely to be less than 50,000; ultrasound findings thus play an additional role alongside clinical and laboratory parameters. From this study we conclude that when ultrasound is used as the initial investigation of choice for a patient with suspected dengue fever, the findings of gall bladder wall edema, ascites and pleural effusion may aid in early management. | 2019-03-18T14:04:43.709Z | 2018-12-06T00:00:00.000 | {
"year": 2018,
"sha1": "46fdb5f6ef7ecbee343d95d427623efbd1332e13",
"oa_license": null,
"oa_url": "https://doi.org/10.18535/jmscr/v6i12.29",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "3070f3d9391a9860db7ba59e5cf4f276e2dc5ca1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234080644 | pes2o/s2orc | v3-fos-license | Assisted colonization of a regionally native predator impacts benthic invertebrates in fishless mountain lakes
The intentional introduction of native cold‐water trout into high‐elevation fishless lakes has been considered a tool to build resilience to climate change (i.e., assisted colonization); however, ecological impacts on recipient communities are understudied. The purpose of this study was to inform native cold‐water trout recovery managers by assessing potential consequences of translocating a regionally native trout (westslope cutthroat trout; Oncorhynchus clarkii lewisi) into fishless mountain lakes. This study compared littoral benthic invertebrate richness, diversity, community structure and abundance between three groups of lakes (fishless, native trout, nonnative trout) in the Canadian Rocky Mountains. While richness and diversity were preserved across all lake groups, other lines of evidence suggested that the introduction of native westslope cutthroat trout into fishless lakes can alter littoral benthic invertebrate communities in similar ways as nonnative brook trout (Salvelinus fontinalis). The community structure of cutthroat trout lakes resembled brook trout lakes compared to that of fishless lakes. For example, both trout‐lake groups contained a lower density of free‐swimming ameletid mayflies and a higher density of certain burrowing taxa. Risk assessments for trout‐recovery actions should consider the potential for collateral damage to recipient invertebrate communities. Future research should identify possible cascading trophic effects on species subsidized by invertebrate prey.
| INTRODUCTION
Fishes in the Salmonidae family are vulnerable to climate change since they depend on cold water temperatures (Bear, McMahon, & Zale, 2007; Selong, McMahon, Zale, & Barrows, 2001). Climate change has direct effects on salmonids due to their ectothermic physiology. However, climate change also causes indirect effects as other stressors are exacerbated by warming water. For example, native inland salmonids of western North America such as cutthroat trout (Oncorhynchus clarkii) are particularly vulnerable to competition from nonnative salmonids that have been broadly introduced outside of their range (Bear et al., 2007; Schindler, 2000). Cutthroat trout have experienced climate-mediated hybridization with rainbow trout (Oncorhynchus mykiss; Muhlfeld et al., 2014) and range contraction associated with the occurrence of brook trout (Salvelinus fontinalis; Wenger et al., 2011).
The persistence and adaptability of cold-water species may depend on their ability to disperse to colder habitats (Root et al., 2003). However, in mountain environments, a species' range expansion can be limited by topographic barriers. A recovery action for cold-water fishes could be to move individuals upstream of topographic barriers where stream and lake temperatures are more suitable for their persistence (i.e., assisted colonization; Ricciardi & Simberloff, 2009; IUCN/SSC, 2013). These areas include unoccupied fishless lakes within the fishes' native range (e.g., Galloway, Muhlfeld, Guy, Downs, & Fredenberg, 2016; Hayes & Banish, 2017).
Assisted-colonization practitioners are conflicted by the need for both rigorous risk assessment and immediate action on species-at-risk recovery (Ricciardi & Simberloff, 2009). Land-management agencies must address the risk of assisted colonization in terms of their own management policies and regulations (McLachlan, Hellman, & Schwartz, 2007). While the introduction of organisms outside of their native range is generally considered high risk (IUCN/SSC, 2013), there is much less understanding of the potential impacts of introducing species within their native range but outside of their historical local distribution. This is problematic, as there is little evidence to support risk assessments for such recovery actions despite the appetite for translocating native cold-water trout to high-elevation habitats (but see Galloway et al., 2016). The purpose of this study was to contribute to risk assessments for the assisted colonization of native cold-water trout by quantifying the potential consequences of translocating westslope cutthroat trout (Oncorhynchus clarkii lewisi; herein WSCT) into fishless mountain lakes in the Canadian Rocky Mountains.
There is sufficient data that demonstrates the direct and indirect impacts of nonnative trout introductions on native biodiversity, abundance and ecosystem function in mountain habitats (Eby, Roach, Crowder, & Stanford, 2006). As several salmonids are opportunistic feeders, size-selective predation can reduce common, yet vulnerable, organisms such as conspicuous benthic invertebrates (Bradford et al., 1998; Pope & Hannelly, 2013), microcrustaceans (Tiberti, von Hardenberg, & Bogliani, 2014; Weidman, Schindler, & Vinebrooke, 2011) and even terrestrial invertebrates (Baxter, Fausch, & Saunders, 2005; Pope, Piovia-Scott, & Lawler, 2009). Nonnative trout can also disrupt an entire food web via a trophic cascade (Epanchin, Knapp, & Lawler, 2010; Parker & Schindler, 2006). However, there is considerably less literature on the effects of introducing regionally native trout (herein native trout). These are trout that were introduced to a historically fishless lake within their native range but not within their historic local distribution.
Invertebrate prey communities within lakes are presumably structured by the regional invertebrate species pool, as these taxa have strong dispersal mechanisms (Loewen & Vinebrooke, 2016). Therefore, introduced native trout may have a lower impact on recipient invertebrates compared to introduced nonnative trout, as they coevolved with the regional invertebrate species pool (i.e., the distinctiveness hypothesis; Ricciardi & Atkinson, 2004; Cox & Lima, 2006). Indeed, a meta-analysis found that the severity of impacts from the invasion of aquatic species was related to the invaders' and recipients' ecological distinctiveness (Ricciardi & Atkinson, 2004). It is assumed that the ecological distinctiveness of behavioral and life-history traits is correlated with taxonomic distinctiveness, given that genetic convergence increases with taxonomic relatedness (Thorpe, 1982).
The scale of taxonomic distinctiveness required for one type of salmonid to structure invertebrate prey differently than another type of salmonid is unknown. Ricciardi and Atkinson (2004) found that the highest-impact aquatic invaders are more likely to belong to genera not already present in the system. WSCT have been negatively impacted by introduced nonnative brook trout (i.e., Salvelinus) in western North America, but cutthroat trout have also co-occurred with a native Salvelinus species (e.g., bull trout; Salvelinus confluentus) for at least 12,000 years. However, there is some evidence for species-specific differences in how recipient invertebrate prey respond to fish predators (Anderson, 1980; Carlisle & Hawkins, 1998; Hume & Northcote, 1985). For example, Anderson (1980) reported that brook trout in the Canadian Rockies had the largest impact on zooplankton communities compared to bull trout and cutthroat trout.
Regardless of the taxonomic scale, if the distinctiveness hypothesis applies in the context of introduced salmonids, assisted colonization of a native trout may exert less adverse effects on fishless mountain-lake ecosystems compared to introduced nonnative trout. However, ecosystem function may be disrupted by any species that possess novel traits (Seddon, 2010). It cannot be assumed that an introduced animal will not impact recipient communities solely because of its apparent similarity to a resident species. For this reason, the potential impacts of introducing a native trout to a waterbody that was not historically occupied by trout should not be precluded.
The objective of this study was to compare littoral benthic invertebrate (herein littoral invertebrates) communities between three groups of lakes (fishless, native trout, and nonnative trout). Several response variables (littoral invertebrate richness, diversity, community structure and density) were measured to ensure a comprehensive assessment of the relative impacts of native versus nonnative trout on littoral invertebrate communities in fishless mountain lakes.
| Study-area description
Banff National Park (BNP) in the Rocky Mountains of Alberta, Canada, encompasses 6,641 km² of mountainous terrain, with numerous glaciers and icefields, dense coniferous forest and alpine landscapes. On the eastern side of the Continental Divide, the park has a subarctic climate with cold, snowy winters and mild summers. One lake in this study was on the Continental Divide next to BNP. Kootenay National Park consists of 1,406 km² of montane, sub-alpine and alpine habitats (Figure 1). The hydrology in both parks is dominated by snowmelt runoff in the spring, which produces high flows from April to June that recede to base flows in the fall and over winter.
| Brief history on stocking
Between the turn of the century and the 1970s, trout were introduced to 25% of an estimated 486 lakes in BNP.
FIGURE 1. Map of the 36 alpine and sub-alpine lakes sampled for littoral benthic invertebrates in Kootenay National Park, British Columbia, and Banff National Park, Alberta, illustrated by lake group (fishless, native trout, nonnative trout). See Table S2 for more information about individual lakes by corresponding lake number.
Of those, 84% were historically fishless (Schindler, 2000). In the beginning, settlers and Canadian Pacific Railway (CPR) workmen moved fish species about in considerable quantities (Ward, 1974). By 1915, a hatchery was built in Banff, and early stocking was from locally sourced WSCT eggs (Ward, 1974). WSCT are native to the montane and foothill streams of southern Alberta within the Oldman and Bow watersheds and to parts of the Missouri and Columbia basins in the intermountain western United States (USA). Egg collection from the wild proved to be difficult and the hatchery did not meet production capacity. In 1928 and 1941, additional hatcheries were built in Waterton and Jasper, respectively. To bolster production, eggs were imported from British Columbia and the USA. Cutthroat trout subspecies, such as Yellowstone cutthroat trout (Oncorhynchus clarkii bouvieri) and coastal cutthroat trout (Oncorhynchus clarkii clarkii), were stocked in combination with some local WSCT. The stocking records often do not distinguish between subspecies. In some instances, stocking records suggested one species or sub-species of trout was introduced, but contemporary sampling indicated a different species or sub-species now occurs, despite a lack of records for its introduction. In other cases, no records existed, but the contemporary presence of a given species suggested it was a stocked source, as natural colonization was improbable given such steep geography.
Brook trout are of the genus Salvelinus, which is also endemic to North America but separated from Oncorhynchus 30-40 million years ago (Behnke, 1992). The historic distribution of brook trout extended from the Atlantic seaboard south to Cape Cod, through the Appalachian Mountains south to Georgia, and across the upper Mississippi and Great Lakes drainages north to Hudson Bay (Scott & Crossman, 1973). Brook trout have been widely introduced in many parts of the world because of their appeal as a sport fish (Scott & Crossman, 1973). Records for brook trout introductions in BNP started in 1910; however, brook trout were apparently introduced to the Bow River prior to 1900 by CPR workmen (Ward, 1974). The Banff, and later the Jasper and Waterton, hatcheries obtained eggs from large-scale suppliers in Eastern Canada and the USA. Brook trout were also chosen because of the exceptional growth and catch rates obtained in mountain lakes in the 1930s (Rawson, 1940). Brook trout were indiscriminately stocked on top of wild and previously stocked cutthroat trout; they usually out-competed other native and stocked trout and seldom co-existed with other trout (Table S1). By the 1980s, the Canadian mountain national parks discontinued the stocking program based on an increased recognition of the value of native fauna; however, many of these populations have persisted (Donald, 1987).
| Lakes
Thirty-six mountain lakes were identified and categorized into three groups based on contemporary occurrence of trout (Table S1): lakes not occupied by fish (fishless; n = 13); lakes occupied by WSCT (native trout; n = 10); and lakes occupied by nonnative brook trout (nonnative trout; n = 13).
Fishless lakes are lakes that were historically fishless and were never stocked (n = 10) or were stocked decades ago, but only one generation of fish survived due to a lack of spawning habitat or winter kill (n = 2; Table S1). It is likely that the littoral invertebrate communities had returned to pre-stocking conditions, given the life span of trout and the resilience of mountain lake ecosystems (Donald, Vinebrooke, Anderson, Syrgiannis, & Graham, 2001; Knapp et al., 2001).
Native trout lakes were lakes that were stocked with native WSCT. These lakes may have been stocked with other species more than four decades ago, but contemporary sampling confirmed the sole occurrence of pure WSCT (n = 4; Table S1). Given that few historically fishless lakes were stocked with WSCT, lakes that were historically occupied by WSCT were also included in this study group. These lakes were either never stocked, or were stocked but, based on contemporary sampling, the native genotype prevailed (n = 6). Lakes that were historically occupied by WSCT were grouped with lakes stocked with WSCT. This decision assumed that stocking happened long enough ago (the mean number of years since last stocking was 68 years) that stocked lakes with self-sustaining populations of WSCT had littoral invertebrate communities that had converged with those of historically occupied WSCT lakes (Donald et al., 2001; Knapp et al., 2001).
Nonnative trout lakes are sites that were historically fishless but were stocked at least once with nonnative brook trout (n = 10). Nonnative trout lakes also included two lakes, one with both brook trout and WSCT, and the other with brook trout and rainbow trout. Anderson (1980) showed brook trout had a disproportionately larger effect on zooplankton communities compared to cutthroat trout, so these mixed lakes were considered nonnative trout lakes in the analyses.
| Spatial sampling design
Study lakes were systematically selected to minimize differences in environmental gradients between lake groups (i.e., fishless, native trout, nonnative trout). The study lakes were situated at sub-alpine and alpine elevations (1,981-2,453 m), ranged in size from 2 to 35 ha and had maximum depths that varied from 3 to 71 m, with catchment areas ranging from 39 to 3,743 ha (Table S2). Water temperature and dissolved oxygen at the time of sampling ranged from 4.2 to 14.9 °C and 8.2 to 12.4 mg/L, respectively, but the time of day was not standardized (Table S3). Conductivity ranged from 32 to 336 μS/cm and pH ranged from 6.6 to 9.3. The substrate was most often equal parts clay, cobble and boulder, with some outliers. A marginal amount of woody debris was found in most lakes. Macrophytes were rare in sub-alpine and alpine lakes in the Canadian Rockies. With the exception of catchment area (perANOVA, 999 permutations; p = .01), there were no significant differences in lake morphometry, water chemistry, littoral substrate or habitat between the lake groups (perANOVA, p > .05; Table S3).
| Sampling sites
Three to seven littoral invertebrate sample sites were allocated per lake depending on lake size: three samples for 0-2 ha lakes; four samples for 2.1-4 ha lakes; five samples for 4.1-8 ha lakes; six samples for 8.1-16 ha lakes; and seven samples for 16.1-35 ha lakes. Within lakes, sites were proportionally allocated between different substrate types (Knapp et al., 2001). For example, if a lake required six sample sites and there were three equally common substrate types, then two sites were allocated per substrate type. Lakes were visited once following ice-off, from 15 June to 1 July 2015, or 13 June to 29 June 2016.
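For illustration, the size-based allocation rule just described can be written as a small lookup (a hypothetical helper; the thresholds are taken from the text):

```python
# Allocation of littoral sample sites by lake surface area (ha), per the rule above.
def n_sites(lake_area_ha):
    for upper_ha, n in [(2, 3), (4, 4), (8, 5), (16, 6), (35, 7)]:
        if lake_area_ha <= upper_ha:
            return n
    raise ValueError("lake larger than the 35 ha study range")

print(n_sites(5.2))   # -> 5 sample sites
```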
| Environmental predictor variables
At each sample site, measurements were taken of water temperature, dissolved oxygen (DO), pH and conductivity with a YSI 650 multiparameter meter (YSI Incorporated, Yellow Springs, OH). Calibrations were made before each trip. All measurements were taken from a depth of 1 m prior to sampling littoral invertebrates. To determine substrate composition, we visually estimated percent cover of each substrate category then standardized to sample area. Substrates were categorized as clay (<0.1 cm), sand/gravel (0.1-1.6 cm), pebble (1.7-6.4 cm), cobble (6.5-25.6) and boulder (>25.6 cm). Woody debris and aquatic macrophytes were categorized as absent or present. A third category of abundant was also in the protocol, but woody debris and aquatic macrophytes were never abundant in the study lakes.
| Littoral invertebrate collection
Littoral invertebrates were collected from each site with the travelling kick-and-sweep method (400 μm mesh), similar to Jones, Somers, Craig, and Reynoldson (2007). At each site, three transects were established perpendicular to shore, spaced 2 m apart and extending into the lake to a depth of 1 m. Sampling was standardized over a 10-min period (David, Somers, Reid, Hall, & Girard, 1998;Jones et al., 2007). Each transect was sampled for 3 min and the length of the transect was recorded, as distance varied depending on the slope of the lake bottom. For an additional minute, missed littoral invertebrates were collected within the sampling area by sweeping the water column and searching under stones/ logs (O'Hare et al., 2007). This collection method estimates taxon abundance per unit area (density).
Samples were rinsed over a 400-μm mesh-sized sieve to remove organic debris and fine sediment, and fixed with formalin. Samples were transferred to 70% ethanol for long-term preservation and transport. A certified benthic invertebrate taxonomist processed and sorted the samples. The taxonomist enumerated 300 organisms from a subsample of cells using a Marchant box (Marchant, 1989). The data were then extrapolated to the total number of cells. The taxonomist identified the sample of littoral invertebrates to the genus level, but the analysis was done at the family level (see below-Statistical Analyses). One exception was Nematoda, which were identified at the phylum level (Table S4). Taxa were checked against the reference collection of Environment and Climate Change Canada, held at the Canada Centre for Inland Waters in Burlington, Ontario, Canada. For quality assurance and control, randomly chosen samples were verified by a secondary taxonomist to achieve 95% sorting and identification efficiency.
| Statistical analyses
Littoral invertebrates were standardized by the sample area (individuals per square metre), then averaged by the number of sample sites per lake to obtain a composite mean density per taxon per lake. All analyses were performed at the family level. Many studies have shown that taxonomic detail has little influence on multivariate descriptions of benthic communities, and the family level provides sufficient resolution for bioassessments (e.g., Bowman & Bailey, 1997). Secondly, collapsing the data at the family level reduced the number of zeros and helped normalize residual variances (Norris & Georges, 1993). With the exception of the littoral invertebrate richness and diversity analysis, rare taxa were excluded at the genus level for each analysis if the taxa were found in fewer than four lakes (<10%; Table S4) to minimize the large influence that rare taxa have on analyses. Once rare species were removed, 71% of the taxa were limited to one genus per family. Zooplankton were also removed because sampling methods were not designed to collect a full representation of the zooplankton community.
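As a hedged sketch of this standardization (the column names, counts and taxon usage are illustrative assumptions, not study data; the study itself used R), the steps map naturally onto a small pandas pipeline:

```python
# Density standardization, per-lake averaging and rare-taxon filtering.
import pandas as pd

counts = pd.DataFrame({
    "lake":    ["A", "B", "C", "D", "E", "A"],
    "family":  ["Ameletidae"] * 5 + ["Chironomidae"],
    "count":   [12, 8, 2, 5, 0, 40],
    "area_m2": [6.0, 5.0, 6.0, 5.5, 6.0, 6.0],
})
counts["density"] = counts["count"] / counts["area_m2"]      # individuals per m^2

# Composite mean density per taxon per lake (here one site per lake for brevity).
per_lake = (counts.groupby(["lake", "family"])["density"]
                  .mean().unstack(fill_value=0.0))

occupancy = (per_lake > 0).sum(axis=0)        # number of lakes occupied per taxon
common = per_lake.loc[:, occupancy >= 4]      # exclude taxa found in fewer than 4 lakes
print(common)
```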
| Littoral invertebrate richness and diversity between lake groups
Richness and Shannon's diversity were compared between lake groups using a one-way analysis of variance (ANOVA). To consider the entire community, rare species were included in the diversity analysis. Prior to analysis, density values of littoral invertebrates were log10(x + 1) transformed to reduce the asymmetry of the species distributions and the influence of dominant species and outliers in ordination (McGarigal, Cushman, & Stafford, 2000).
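A minimal sketch of these two response variables, with made-up densities (the study's analyses were run in R; numpy is used here only for illustration):

```python
# Richness, Shannon's diversity and the log10(x + 1) transform for one lake.
import numpy as np

densities = np.array([12.0, 3.0, 0.0, 7.5])    # mean densities of four taxa

richness = int(np.sum(densities > 0))
p = densities[densities > 0]
p = p / p.sum()
shannon = float(-np.sum(p * np.log(p)))         # Shannon's H'

log_densities = np.log10(densities + 1.0)       # transform used before ordination
print(richness, round(shannon, 3), log_densities)
```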
| Littoral invertebrate community structure between lake groups
Nonmetric multidimensional scaling (NMDS) was used to visualize multivariate patterns between lake groups in two-dimensional space based on the similarities in littoral invertebrate assemblages. Jaccard's similarity coefficient was used to visualize community composition in the form of presence or absence. Taxa (vectors) were fitted to the ordination by the significance of the correlation of each variable, with a cut-off p-value of <.05 determined by 999 permutations. NMDS scores between lake groups were compared using permutational analysis of variance (perANOVA, 999 permutations) and Tukey's Honest Significant Difference (Tukey HSD) post-hoc comparison test.
Permutational multivariate analysis of variance (PERMANOVA, 999 permutations; Anderson, 2001) was also used, based on Jaccard's similarity coefficient, to examine differences in littoral invertebrate communities between lake groups. Since PERMANOVA is sensitive to differences in within-group dispersion, an analysis for homogeneity of multivariate dispersion was performed using PERMDISP (Anderson, 2006; Anderson, Ellingsen, & McArdle, 2006; Anderson & Walsh, 2013). This test compares within-group variability between groups, using individual points and their average distance to the group centroid. PERMDISP was completed in conjunction with PERMANOVA to ensure that significant differences were the result of different mean values between group centroids (PERMANOVA; multivariate location) rather than dispersion (PERMDISP; within-group variability) from the centroids (Anderson, 2006; Anderson & Walsh, 2013).
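The paired PERMANOVA/PERMDISP test might be run as follows (a sketch under the same assumed objects as above):

```r
library(vegan)

jac <- vegdist(comm_pa, method = "jaccard")

# PERMANOVA: differences in multivariate location between lake groups.
adonis2(jac ~ lake_group, permutations = 999)

# PERMDISP: homogeneity of dispersion around group centroids; a
# non-significant permutest supports interpreting a significant
# PERMANOVA as a difference in location rather than dispersion.
disp <- betadisper(jac, group = lake_group)
permutest(disp, permutations = 999)
```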
Littoral invertebrate densities between lake groups
Taxon-specific densities of common littoral invertebrate taxa were compared between lake groups using perANOVA (999 permutations). To examine which lake-group combinations were most different, post-hoc comparison tests were used, either Tukey's HSD or Games-Howell, depending on whether the assumption of equal variance was met (Levene's test).
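A sketch of this decision rule for a single, hypothetical taxon column:

```r
library(car)      # leveneTest()
library(rstatix)  # games_howell_test()

dat <- data.frame(density = log10(comm[, "Chironomidae"] + 1),
                  group = lake_group)

# Use Tukey's HSD when variances are homogeneous (Levene's p > .05),
# otherwise the Games-Howell test.
if (leveneTest(density ~ group, data = dat)[["Pr(>F)"]][1] > 0.05) {
  TukeyHSD(aov(density ~ group, data = dat))
} else {
  games_howell_test(dat, density ~ group)
}
```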
Environmental predictors of variation in littoral invertebrate communities
Redundancy analysis (RDA) was used to evaluate environmental predictors of taxonomic variation in the littoral invertebrate community between lake groups. A detrended correspondence analysis (DCA; Hill & Gauch, 1980) was initially performed, and based on the gradient lengths of the dominant axes (1.8 standard deviations for axis one and 1.3 for axis two), linear response models were suitable for the analyses (Lepš & Smilauer, 2003).
Environmental variables were scaled and centered to compare gradient lengths. Littoral invertebrate densities were then transformed using the Hellinger transformation to linearize relationships among taxa whose records contained many zeroes (Legendre & Gallagher, 2001). Collinearity among variables was evaluated using the variance inflation factor (VIF > 10) to reduce the risk of overestimating the significance of correlated variables. Correlated environmental variables (Spearman's rank; r > .6) were removed prior to analysis.
Measured environmental variables used for the initial model included elevation, maximum depth, lake area, water temperature, dissolved oxygen (DO), conductivity, pH, clay, sand/gravel, pebble, boulder, habitat, woody debris, and aquatic macrophytes. The significance of the global model was tested with the complete set of predictors before proceeding with forward selection. A double-stopping criterion (Blanchet, Legendre, & Borcard, 2008) was used to select the most parsimonious set of predictors. The significance of each environmental predictor variable was determined using Monte Carlo permutation tests with 4,999 permutations. Subsequent permutation tests determined the significance of individual axes and of the overall ordination of the reduced model (Borcard, Gillet, & Legendre, 2011).
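A sketch of this RDA workflow in vegan, assuming env is a data frame of the scaled, pre-screened predictors (one row per lake); vegan's ordiR2step implements the double-stopping criterion of Blanchet et al. (2008):

```r
library(vegan)

# Hellinger transformation of the density matrix.
comm_hel <- decostand(comm, method = "hellinger")

# Global and null models for forward selection.
rda_full <- rda(comm_hel ~ ., data = env)
rda_null <- rda(comm_hel ~ 1, data = env)

# Forward selection stops when a candidate predictor is non-significant
# or when the adjusted R2 of the global model would be exceeded.
sel <- ordiR2step(rda_null, scope = formula(rda_full),
                  R2scope = TRUE, permutations = 4999)

# Permutation tests for the reduced model and its individual axes.
anova(sel, permutations = 999)
anova(sel, by = "axis", permutations = 999)
RsquareAdj(sel)
```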
All statistical analyses were performed in R version 4.0.1 (R Core Team, 2020), using, among other packages, lmperm (Wheeler & Torchiano, 2016) for perANOVA and stats for ANOVA and pairwise tests. The significance criterion used in all data analyses was p < .05. The R code used for this analysis is available from https://doi.org/10.5061/dryad.ffbg79csg.
Littoral invertebrate community structure between lake groups
The NMDS plot representing the degree of similarity in taxonomic composition (Jaccard's similarity coefficient) showed that native and nonnative trout lakes resembled each other more closely than either resembled fishless lakes (Figure 2; stress = 0.20). NMDS scores along axis one separated the lake groups significantly in two of the three pairwise comparisons (perANOVA, p < .01; Figure 2; Tukey HSD: nonnative vs. fishless, p = .004; native vs. fishless, p = .005; nonnative vs. native, p = .99). NMDS scores along axis two were not significantly different between lake groups (perANOVA, p = .66).
Consistent with the perANOVA test results on NMDS scores, PERMANOVA based on Jaccard's coefficient confirmed significant differences between lake groups (Table 1). Pairwise comparisons revealed native and nonnative trout lakes were significantly different from fishless lakes, but not from each other (Table 1). There was no difference in dispersion between lake groups, which suggests that the significant PERMANOVA results were attributable to a difference in location (average community composition) rather than within-group variability (PERMDISP; Table 1).
Littoral invertebrate densities between lake groups
Native and nonnative trout lakes contained higher total densities of littoral invertebrates than fishless lakes (264, 232, and 157 individuals per square metre, respectively; Table 2; p = .04 for both comparisons). However, total density did not differ between native and nonnative trout lakes (Table 2; p = .97).
Figure 2. NMDS ordination of littoral benthic invertebrate community data, based on Jaccard's similarity coefficient, colored by lake group (fishless, native trout, nonnative trout). Ellipses enclosing 60% of lakes for each lake group are presented in corresponding colors (stress = 0.20). Red vectors represent intrinsic taxa variables after correlation analysis with a cut-off p-value of .05.

Table 1. PERMANOVA and PERMDISP results of location and dispersion differences in common littoral benthic invertebrate communities between and within lake groups (fishless, native trout, nonnative trout).

The density of individual littoral invertebrate taxa in native trout lakes was more likely to be similar to nonnative trout lakes than to fishless lakes. For example, both trout-lake groups had reduced densities of the free-swimming and conspicuous Ameletidae compared to fishless lakes (Table 2 and Figure 3). Furthermore, Chironomidae were found at significantly higher densities in both trout-lake groups compared to fishless lakes.
There were some key differences between native and nonnative trout lakes. Native trout lakes contained higher densities of certain taxa compared to both fishless and nonnative trout lakes. For example, Lebertiidae and Sphaeriidae were found at the highest densities in native trout lakes compared to both other lake groups. Nonnative trout and fishless lakes did not differ for these taxa (Table 2 and Figure 3). Native trout lakes also had the highest densities of Tipulidae and Limnephilidae, followed by nonnative trout and fishless lakes (Table 2 and Figure 3). Native trout exerted an intermediate effect on certain littoral invertebrates compared to nonnative trout. For example, Naididae and Nematoda were at the highest densities in nonnative trout, followed by native trout and fishless lakes (Table 2 and Figure 3). Lumbriculidae also followed this pattern, but post-hoc comparisons were not statistically significant (Table 2 and Figure 3). Gammaridae were almost completely absent from nonnative trout lakes, but at intermediate densities in native trout lakes compared to fishless lakes (Table 2 and Figure 3).
Environmental predictors of variation in the littoral invertebrates
The RDA model axes one and two accounted for 6.4% and 3.0% of the total taxonomic variance, respectively. Two environmental variables, elevation and water temperature, were statistically significant predictors of the invertebrate communities. Compared to the global model (R²adj = 12.3%), forward selection reduced the model while capturing most of the explained variance with only these two variables (R²adj = 9.4%; Table S5; Figure S1a). The first RDA axis was significant (F(1,33) = 3.81, p = .004), as was the overall ordination (F(1,33) = 2.81, p < .001). There was no pattern of separation between the lakes based on lake groups (Figure S1b). Elevation and water temperature showed very little correlation. Warmer lakes contained greater densities of Sphaeriidae and Naididae, whereas Ameletidae and Enchytraeidae were better represented in colder lakes (Figure S1a). Higher densities of Chironomidae were associated with lower montane lakes, whereas Capniidae were associated with alpine and subalpine lakes (Figure S1a).
DISCUSSION
This study revealed several lines of evidence suggesting that stocking native WSCT into fishless lakes can alter littoral invertebrates in ways similar to nonnative brook trout. For example, littoral invertebrate community structure was indistinguishable between native and nonnative trout lakes when compared to fishless lakes. Both trout-lake groups contained a lower density of free-swimming ameletid mayflies and higher densities of burrowing taxa such as Naididae, Nematoda and Chironomidae. Aggregate properties of the invertebrate community, such as species richness and diversity, were similar between all lake groups. Nevertheless, the finer-scale taxonomic differences between the lake groups highlight the potential ecological consequences of cold-water trout-conservation efforts for mountain-lake ecosystems.
The direct effect of predation is the most likely explanation for the observed differences in littoral invertebrate communities between fishless lakes and lakes occupied by trout (Carlisle & Hawkins, 1998; Knapp et al., 2001). Conspicuous, free-swimming invertebrates lack the behavioral mechanisms needed to persist when they co-occur with fish predators (Luecke, 1986). For example, Luecke (1986) demonstrated how the addition of native cutthroat trout caused Hyalella azteca (Amphipoda) to burrow into the sediment to avoid predation, while Callibaetis sp. (Ephemeroptera) did not, resulting in higher predation rates on Callibaetis sp. A similar physical inability to burrow into soft sediments may explain why Ameletidae were at such low densities in all trout lakes in this study.
Higher densities of certain burrowing taxa in the presence of either nonnative or native trout may be attributed to the indirect effects of predation. Weidman et al. (2011) suggested that trout predation on gammarids releases sediment-dwelling invertebrates from competition or predation by these amphipods. Another hypothesized indirect positive effect of introduced trout on burrowing invertebrates was based on an expected increase in nutrient recycling from fish fecal matter (Carlisle & Hawkins, 1998;Leavitt, Schindler, Paul, Hardie, & Schindler, 1994).
The effects of trout introductions on the community structure and abundance of benthic invertebrates in fishless lakes have been well documented. However, studies have only examined the effects of nonnative trout (e.g., Tiberti et al., 2014) or a mix of native and nonnative trout without distinguishing between the two (e.g., Bradford et al., 1998; Knapp et al., 2001). For example, Bradford et al. (1998) found that stocked native and nonnative trout reduced or eliminated large, mobile epibenthic or limnetic taxa, such as Baetidae, Siphlonuridae, Notonectidae, Corixidae, Limnephilidae and Dytiscidae, compared to fishless lakes. Knapp et al. (2001) found that five of six clinger/swimmer taxa occurred less frequently, or at reduced densities, in stocked trout lakes compared to fishless lakes. Furthermore, clinger/swimmer taxa were virtually absent from mountain lakes stocked with nonnative brook trout in the Italian Alps (Tiberti et al., 2014).
Functional groups (e.g., scrapers, collectors, shredders and predators) and habitat groups (e.g., swimmers, burrowers, sprawlers, and clingers) were compared between lake groups to provide a broader understanding of how native trout may impact the functioning of the littoral invertebrate community. These analyses provided less clarity, since some taxa within the same grouping responded in opposite directions. For example, Ameletidae and Gammaridae were both grouped as swimmers but were not similarly affected by native and nonnative trout. One explanation is that some taxa within the same functional group have different secondary functions (Hooper et al., 2002). Secondary functions may contribute to a taxon's response to trout stocking. In this study, Gammarus lacustris was classified as a swimmer but has also been observed burrowing in sediment, presumably to resist trout predation (Luecke, 1986; McNaught et al., 1999). A second difficulty with comparing habitat and functional groups across environmental gradients is the high diversity of organism functions that exists within a taxonomic rank, especially at the family level (e.g., Chironomidae; Cummins, 1973).
Despite evidence for similar impacts of native and nonnative trout on littoral invertebrate communities, certain taxon-specific differences emerged between native and nonnative trout lakes. Compared to fishless lakes, the densities of Lebertiidae and Sphaeriidae were significantly higher in native but not in nonnative trout lakes. While the behavioral adaptation of burrowing could explain why Lebertiidae and Sphaeriidae are able to persist in the presence of native trout, it does not explain why densities of Lebertiidae and Sphaeriidae were not as high in nonnative trout lakes as in native trout lakes. This differential effect of native versus nonnative trout on invertebrate prey has been demonstrated previously (Anderson, 1980; Cox & Lima, 2006; Paolucci, MacIsaac, & Ricciardi, 2013) without a clear understanding of the causal mechanisms. For example, in the mountain lakes of the Canadian Rockies, brook trout had the greatest effect on zooplankton assemblages, followed by rainbow trout, Dolly Varden (Salvelinus malma) and cutthroat trout (Anderson, 1980). Prey naiveté may be another interpretation of the observed differences between the effects of native and nonnative trout on littoral invertebrate communities (e.g., Cox & Lima, 2006). For example, brook trout may be behaviorally more efficient at exploiting benthic prey than the native trout species (e.g., cutthroat trout; Hume & Northcote, 1985; Carlisle & Hawkins, 1998). While invertebrates would presumably be naïve to any species of trout introduced into a historically fishless lake, these results suggest that littoral invertebrates may be more naïve towards predators that are not native to the region. Indeed, stocked brook trout have only been exerting selection pressure on the regional species pool of littoral invertebrates for approximately 100 years, compared to at least 12,000 years of selection pressure from WSCT (Behnke, 1992).
Another potential explanation of the observed difference between the effects of native and nonnative trout on littoral invertebrates is the density dependence of predation pressure. Nonnative trout may be found at higher abundances than native trout, causing greater prey consumption at nonnative trout lakes (e.g., Benjamin, Fausch, & Baxter, 2011;Simon & Townsend, 2003). While brook trout were selected for their excellent growth rates in mountain lakes (Donald, 1987), data on relative densities of native versus nonnative trout were not available.
A final consideration is that the observed differences in littoral invertebrate densities may have been partially influenced by the physical habitat of each lake group. Lakes in each group were selected to reduce potential confounding effects of the physical environment. Most environmental variables, such as mean elevation, water depth and lake area, were successfully controlled. However, the mean catchment of fishless lakes was more than twice as large as that of the other groups. Although not statistically different, native trout lakes contained more woody debris than the other two groups. Shoreline structure has been shown to moderate the effects of stocked fish (Nasmith, Tonn, Paszkowski, & Scrimgeour, 2012). However, very little woody debris was found in any given lake, and woody debris was not a significant contributor to the RDA analysis.
Conservation implications
Natural resource agencies are tasked with preserving biodiversity and species at risk. As stewards of publicly owned land, agencies are responsible for developing plans to accomplish these goals (e.g., Fisheries and Oceans Canada, 2014). However, these plans often lack policy about new species that might be introduced for conservation purposes (McLachlan et al., 2007). Given the scarcity of literature on the effects of native cold-water trout on recipient ecosystems, this study argues that conservation practitioners do not yet have the evidence needed to evaluate the risk of conservation introductions (but see Galloway et al., 2016). A liberal policy on assisted colonization could therefore cause broad, irreversible damage.
Risk assessments would help guide policy development for assisted colonization. Such assessments should require evidence of imminent threat to the donor species and a quantitative model of predicted outcomes for all recipient taxa (see Galloway et al., 2016; Hayes & Banish, 2017). This study shows that the assisted colonization of native cutthroat trout would result in a recipient littoral invertebrate community structured similarly to that of nonnative trout lakes. These data can directly inform local risk assessments and policy. However, the effect of introducing native cold-water trout on other recipient species should also be evaluated. For example, impacts on aquatic invertebrates, such as Ameletidae, might reduce prey for a passerine bird (Epanchin et al., 2010). Given that natural resource agencies have an ethical obligation to avoid collateral harm to other species or ecosystems (IUCN/SSC, 2016), a cautious approach is needed for the assisted colonization of native trout.
ACKNOWLEDGMENTS
We are thankful to Craig Logan for sharing his expertise in invertebrate life history and taxonomy, Dr. Laura Gray-Steinhauer and Dr. Andreas Hamann for providing statistical advice. A special thank you to the following field assistants for their hard work in field data collection: Hedin Nelson-Chorney, Troy Malish, Fonya Irvine, Brian Merry, Sean O'Donovan, Madeleine Wrazej, Colby Whelan, Kayla Eykelboom, Sarah Fassina and Brenna Stanford. This research was funded by Parks Canada and by Alberta Conservation Association Grants in Biodiversity.
CONFLICT OF INTEREST
The authors declare no conflicts of interest.
AUTHOR CONTRIBUTIONS
The concept and study design of the project was conceived by Mark Taylor, Mark Poesch, and Rolf Vinebrooke. Allison Banting and Mark Taylor developed and evaluated field collection methods. Chris Carli arranged field logistics, trained and led the field crews. The analysis and interpretation was performed by Allison Banting with guidance from Mark Poesch and Rolf Vinebrooke. The manuscript was written by Allison Banting and Mark Taylor. All authors reviewed, edited and approved the final manuscript.
DATA AVAILABILITY STATEMENT
All data and code required to repeat analyses are available at: https://doi.org/10.5061/dryad.ffbg79csg.
Scaling dimensions in QED$_3$ from the $\epsilon$-expansion
We study the fixed point that controls the IR dynamics of QED in $d = 4 - 2\epsilon$. We derive the scaling dimensions of four-fermion and bilinear operators beyond leading order in $\epsilon$-expansion. For the four-fermion operators, this requires the computation of a two-loop mixing that was not known before. We then extrapolate these scaling dimensions to $d = 3$ to estimate their value at the IR fixed point of QED$_3$ as function of the number of fermions $N_f$. The next-to-leading order result for the four-fermion operators corrects significantly the leading one. Our best estimate at this order indicates that they do not cross marginality for any value of $N_f$, which would imply that they cannot trigger a departure from the conformal phase. For the scaling dimensions of bilinear operators, we observe better convergence as we increase the order. In particular, $\epsilon$-expansion provides a convincing estimate for the dimension of the flavor-singlet scalar in the full range of $N_f$.
Introduction
Quantum Electrodynamics (QED) in 3d is an asymptotically free gauge theory, which becomes strongly interacting in the IR. When the U(1) gauge field is coupled to an even number, 2N_f, of complex two-component fermions, and the Chern-Simons level is zero, the theory is parity invariant and has an SU(2N_f) × U(1) global symmetry. For large N_f the theory flows in the IR to an interacting conformal field theory (CFT) that enjoys the same parity and global symmetry.
The CFT observables are then amenable to perturbation theory in 1/N_f; this has been done for scaling dimensions [1-11], two-point functions of conserved currents [12-14], and the free energy [15]. The IR fixed point is expected to persist beyond this large-N_f regime, but not much is known about it. Ref. [16] employed the conformal bootstrap approach to derive bounds on the scaling dimensions of some monopole operators. Another method to study the small-N_f CFT is the ε-expansion, which exploits the existence of a fixed point of Wilson-Fisher type [17] in QED continued to d = 4 − 2ε dimensions. When ε ≪ 1 we can access observables via a perturbative expansion in ε and subsequently attempt an extrapolation to ε = 1/2. The ε-expansion of QED was employed to estimate some scaling dimensions [18, 19], the free energy F [20], and the coefficients C_T and C_J [14]. In particular, ref. [18] considered operators made out of gauge-invariant products of either four or two fermion fields.
Four-fermion operators are interesting because of the dynamical role they can play in the transition from the conformal to a symmetry-breaking phase, which is conjectured to exist if N_f is smaller than a certain critical number N_f^c [21-24]. In fact, the operators with the lowest UV dimension that are singlets under the symmetries of the theory are four-fermion operators.
If for small N_f they are dangerously irrelevant, i.e., their anomalous dimension is large enough for them to flow to relevant operators in the IR, they may trigger the aforementioned transition [7, 25, 26]. The one-loop result of ref. [18] led to the estimate N_f^c ≤ 2. Bilinear operators, i.e., operators with two fermion fields, are interesting because they are presumably among the operators with lowest dimension. For instance, when continued to d = 3, the two-form operators Ψ̄ γ^[μ γ^ν] Ψ become the additional conserved currents of the SU(2N_f) symmetry, of which only an SU(N_f) subgroup is visible in d = 4 − 2ε. This leads to the conjecture that their scaling dimension should approach the value Δ = 2 as ε → 1/2, which was tested at the one-loop level in ref. [18].
In order to assess the reliability of the ε-expansion in QED, and improve the estimates from the one-loop extrapolations, it is desirable to extend the calculation of these anomalous dimensions beyond leading order in ε. This is the purpose of the present paper. Let us describe the computations we perform and the significance of the results.
We first consider four-fermion operators. In the UV theory in d = 4 − 2ε, there are two such operators that upon continuation to d = 3 match with the singlets of the SU(2N_f) symmetry. We compute their anomalous dimension matrix (ADM) at two-loop level by renormalizing off-shell, amputated Green's functions of elementary fields with a single operator insertion. As we discuss in detail in a companion paper [30], knowing this two-by-two ADM is not sufficient to obtain the O(ε²) scaling dimensions at the IR fixed point. We also need to take into account the full one-loop mixing with a family of infinitely many operators that have the same dimension in the free theory. These operators are of the form

(Ψ̄_a Γ_n^{μ₁...μₙ} Ψ_a)(Ψ̄_b Γ_{n μ₁...μₙ} Ψ_b), (1.1)

where n is an odd integer, and Γ_n^{μ₁...μₙ} ≡ γ^{[μ₁} ... γ^{μₙ]} is an antisymmetrized product of gamma matrices. All the operators in this family except for the first two, i.e., n = 1, 3, vanish for the integer values d = 4 and d = 3, but are non-trivial for intermediate values 3 < d < 4. For this reason they are called evanescent operators. Taking properly into account the contribution of the evanescent operators, via the approach described in ref. [30], we obtain the next-to-leading order (NLO) scaling dimension of the first two operators. We then extrapolate to ε = 1/2 using a Padé approximant, leading to the result presented in subsection 5.2 and summarized in figure 2. The deviation from the leading order (LO) scaling dimension is considerable for small N_f, indicating that at this order we cannot yet obtain a precise estimate for this observable of the three-dimensional CFT. Taking, however, the NLO result at face value, we would conclude that the four-fermion operators are never dangerously irrelevant. This resonates with recent results that suggest that QED_3 is conformal in the IR for any value of N_f. Namely, refs. [31-33] argued, based on 3d bosonization dualities [34-37], that for N_f = 1 the SU(2) × U(1) symmetry is in fact enhanced to O(4) (this is related to the self-duality present in this theory [38]). Also, a recent lattice study [39] found no evidence for a symmetry-breaking condensate (for previous lattice studies see refs. [40-42]). We then consider the bilinear "tensor-current" operators of the form

Ψ̄ Γ_n^{μ₁...μₙ} Ψ, (1.2)

for n = 0, 1, 2, 3. We obtain their IR scaling dimension up to O(ε³) using the three-loop computations from ref. [43]. Having these higher-order results, we are in the position to employ different Padé approximants to estimate errors and test the convergence as we increase the order. As mentioned above, in the limit d → 3 the operators with n = 1, 2 approach conserved currents of the SU(2N_f) symmetry. Indeed, we show in subsection 5.3 (see figure 4) that the extrapolated scaling dimension of the two-form operators approaches the value Δ = 2 as we increase the order. As d → 3, the operators with n = 0, 3 approach scalar bilinears, which are either in the adjoint representation of SU(2N_f) or are singlets. For the singlet scalar, which is continued by a bilinear with n = 3, the results of the various extrapolations we perform are all close to each other (see figure 5), indicating that the ε-expansion provides a good estimate for this scaling dimension in the full range of N_f. For the adjoint scalar, different components are continued by operators with either n = 0 or n = 3, giving two independent extrapolations at each order in ε. As expected, we find that the two independent extrapolations approach each other as we increase the order (see figure 5).
The rest of the paper is organized as follows: in section 2 we set up our notation and describe the fixed point of QED in d = 4 − 2ε; in section 3 we present the computation of the two-loop ADM of the four-fermion operators, and then the result for their scaling dimension at the IR fixed point in d = 4 − 2ε; in section 4 we present the same result for the bilinear operators; in section 5 we extrapolate the scaling dimensions to d = 3, and plot the resulting dimensions as a function of N_f for the various operators we consider; finally, in section 6 we present our conclusions and discuss possible future directions. In the appendices we collect additional material and some useful intermediate results.
QED in d = 4 − 2ε
We consider QED with N_f Dirac fermions Ψ_a, a = 1, ..., N_f, of charge 1. The Lagrangian is

L = −(1/4) F_{μν} F^{μν} + Ψ̄_a i γ^μ D_μ Ψ_a,

with the covariant derivative defined as D_μ = ∂_μ − i e A_μ. Summation over repeated flavor indices is implicit. We work in the R_ξ-gauge, defined by adding the gauge-fixing term −(1/2ξ)(∂_μ A^μ)². We collect the Feynman rules in appendix A. The algebra of the gamma matrices is {γ^μ, γ^ν} = 2η^{μν}, with η^{μν} η_{νρ} = δ^μ_ρ and δ^μ_μ = d. We will employ some useful results on d-dimensional Clifford algebras from ref. [44]. We normalize the traces by Tr[1] = 4, for any d. For d = 3, Ψ_a decomposes as in eq. (2.4), giving 2N_f complex two-component 3d fermions ψ_i, i = 1, ..., 2N_f, all with charge 1. Correspondingly, the gamma matrices decompose as in eq. (2.5), in terms of the two-by-two 3d gamma matrices {γ̃^μ}_{μ=1,2,3}. In d = 4, the global symmetry preserved by the gauge coupling is SU(N_f)_L × SU(N_f)_R. In d = 4 − 2ε, evanescent operators violate the conservation of the nonsinglet axial currents [45], so only the diagonal subgroup SU(N_f) is preserved. In d = 3, this symmetry enhances to SU(2N_f) × U(1).
We define α ≡ e²/(16π²) and denote bare quantities with a subscript "0". The renormalized coupling is given by α₀ = μ^{2ε} Z_α(α, ε) α, where the renormalization constant Z_α(α, ε) absorbs the poles at ε = 0, and μ is the renormalization scale. The beta function is defined via dα/d ln μ = −2εα + β(α). In Minimal Subtraction (MS), β depends only on α and not on ε. The MS QED β function is known up to four-loop order for generic N_f [46, 47]. Using eqs. (2.7) and (2.9) we find that in d = 4 − 2ε the theory has a fixed point at a coupling α* whose ε-expansion involves ζ(n), the Riemann zeta function. Our convention for renormalizing fields is Ψ₀ = Z_Ψ^{1/2} Ψ and A₀ = Z_A^{1/2} A. By the Ward identity, Z_A = Z_α^{-1}. For our computations we need the field renormalization of the fermion up to two-loop order; in MS and generic R_ξ-gauge it is given in eq. (2.12).
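As an orientation, the leading term of the fixed-point coupling follows from the standard one-loop QED beta function rewritten in terms of α = e²/16π²; this is a sketch of the lowest order only, and the higher-order terms of eq. (2.10), which involve ζ(n), are not reproduced here:

```latex
\frac{d\alpha}{d\ln\mu} = -2\epsilon\,\alpha + \frac{8}{3}N_f\,\alpha^2 + O(\alpha^3)
\quad\Longrightarrow\quad
\alpha_* = \frac{3\,\epsilon}{4N_f} + O(\epsilon^2)\,.
```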
Operator mixing
To compute the anomalous dimension of local operators O_i, we add these operators to the Lagrangian and compute their renormalized couplings C_i at linear level in the bare ones, via the mixing renormalization constants Z_i^j of eq. (2.14), from which we obtain the ADM γ. Like β, γ does not depend on ε in the MS scheme. We write γ = Σ_{L,k} γ^{(L,k)} α^L ε^k for the coefficients of its expansion in α and ε. The most direct way to compute the mixing Z_i^j is to renormalize amputated one-particle-irreducible Green's functions with zero-momentum operator insertions and elementary fields as external legs. Alternatively, one can renormalize the two-point functions of the composite operators. The former method has two main advantages. The first is that to extract n-loop poles only n-loop diagrams need to be computed. The second is that we can insert the operators at zero momentum.
This makes higher-loop computations more tractable. The disadvantage is that off-shell Green's functions with elementary fields as external legs are not gauge-invariant, so some results in the intermediate steps of the calculation are ξ-dependent, which is why we need to include the ξ-dependent wave-function renormalization of the external fermions. In addition, operators that vanish under the equations of motion (EOM) enter the renormalization of such off-shell Green's functions. We refer to the latter as EOM-vanishing operators.
In the next section, we consider composite operators given by scalar quadrilinear and bilinear operators in the fermion fields. We first present the computation of the two-loop anomalous dimension of the four-fermion operators and use it to obtain the O(ε²) IR scaling dimension at the fixed point. Next, we employ the already existing results for the three-loop anomalous dimension of bilinear operators [43] to obtain their IR dimension to O(ε³).
Four-fermion operators in d = 4 − 2ε
In this section, we present the computation of the ADM of the four-fermion operators

Q₁ ≡ (Ψ̄_a γ^μ Ψ_a)(Ψ̄_b γ_μ Ψ_b), Q₃ ≡ (Ψ̄_a Γ₃^{μνρ} Ψ_a)(Ψ̄_b Γ₃_{μνρ} Ψ_b), (3.1)

at the two-loop level. Here and in the following, Γ_n^{μ₁...μₙ} ≡ γ^{[μ₁} ... γ^{μₙ]}, with the square brackets denoting antisymmetrization, which includes the conventional normalization factor 1/n!. In d = 4, the operators in eq. (3.1) are the only two operators with scaling dimension 6 at the free fixed point that are singlets under the global symmetry SU(N_f)_L × SU(N_f)_R. We focus on these flavor-singlet operators because, as explained in the introduction, we are interested in understanding whether or not they are relevant at the IR fixed point. The calculation of the ADM for flavor-nonsinglet operators is actually simpler because it involves a subset of the diagrams. We report the result for some nonsinglet operators in appendix C.
In d = 4 − 2ε, insertions of Q₁ and Q₃ in loop diagrams generate additional structures that are linearly independent of the Feynman rules of Q₁ and Q₃. To renormalize the divergences proportional to such structures, we need to enlarge the operator basis. It is most convenient to define the complete basis by adding operators that vanish for ε → 0, and hence are called evanescent operators, as opposed to Q₁ and Q₃, which we refer to as physical operators. There is an infinite set of such evanescent operators. One choice of basis for them is

E_n ≡ Q_n + ε (a_n Q₁ + b_n Q₃), (3.2)

with n an odd integer ≥ 5 and Q_n defined as in eq. (1.1). The terms proportional to the arbitrary constants a_n and b_n are of the form ε × a physical operator; they parametrize different possible choices for the basis of evanescent operators.
For the computation of the ADM we adopt the subtraction scheme introduced in refs. [48, 49]. Since this is the most commonly used scheme for applications in flavor physics, we refer to it as the flavor scheme. We label the indices of the ADM using odd integers n ≥ 1, so that n = 1, 3 correspond to the physical operators, eq. (3.1), and n ≥ 5 to the evanescent operators, eq. (3.2). The ADM up to two-loop order is given in eqs. (3.4)-(3.6). Notice that the invariant (Q₁, Q₃) block of γ^{(2,0)} depends on the coefficients a₅ and b₅, which parametrize our choice of basis. This dependence can be understood as a sign of scheme-dependence [51]. Clearly, this implies that the scaling dimensions at O(ε²) are not simply obtained from the eigenvalues of this invariant block, as its eigenvalues also depend on a₅ and b₅. The additional contribution that cancels this basis-dependence originates from the O(ε) term γ^{(1,-1)} in the one-loop ADM. Such O(ε) terms are indeed induced in every scheme that contains finite renormalizations, such as the flavor scheme. For a thorough discussion of the scheme/basis-dependence and its cancellation we refer to ref. [30].
There are a few non-trivial ways of partially testing the correctness of the two-loop results: i) We performed all computations in general R_ξ gauge. This allowed us to explicitly check that the mixing of gauge-invariant operators indeed does not depend on ξ.
ii) All the two-loop counterterms are local, i.e., the local counterterms from one-loop diagrams subtract all terms proportional to (1/ε) log μ in two-loop diagrams.
iii) The 1/ε² poles of the two-loop mixing constants satisfy a consistency relation with the one-loop quantities, in which β^{(1,0)} is the one-loop coefficient of the beta function. This is equivalent to the ε-independence of the anomalous dimension [52].
In the next two subsections, we discuss the renormalization of the one- and two-loop Green's functions from which we extract the relevant entries of the mixing matrix Z, and ultimately the ADM entries in eqs. (3.4), (3.5), and (3.6), as well as some technical aspects of the two-loop computation. A reader more interested in the results for the scaling dimensions may proceed directly to section 3.4.
Operator basis
As argued in section 2.1, in general we need to consider also EOM-vanishing operators when renormalizing off-shell Green's functions. Moreover, in our computation we adopt an IR regulator that breaks gauge invariance, so we also need to take into account some gauge-variant operators. Below we list all operators that, together with (Q₁, Q₃) and {E_n}_{n≥5}, enter the renormalization of the two-loop Green's functions we consider:
EOM-vanishing operators
There is a single EOM-vanishing operator, N₁, that affects the ADM at the one-loop level and another one, N₂, that affects it at the two-loop level. They are given in eq. (3.8). Additionally, there are EOM-vanishing operators that are only necessary to close the basis of independent Lorentz structures for certain Green's functions. For completeness, we list them as well; in their definitions, /D ≡ γ^μ D_μ and the arrow indicates on which field the derivative acts.
Gauge-variant operators

Renormalization constants subtract UV poles of Green's functions. It is thus essential to ensure that no IR poles are mistakenly included in the renormalization constants. In practice, this means that an energy scale must be present in dimensionally regularized integrals. Otherwise, UV and IR contributions cancel each other and the result of the loop integral is zero in dimensional regularization [45].
One possibility to introduce a scale is to keep the external momentum in the loop integral. However, i) such loop integrals are more involved than integrals obtained by expanding in powers of external momenta over loop momenta, and ii) keeping external momenta does not necessarily cure all the IR divergences, e.g., diagrams with gluonic snails in non-abelian gauge theories. Another possibility for QED would be to introduce a mass for the Dirac fermions. The drawback in this case is that we would have to consider many more EOM-vanishing operators.

Table 1: A summary of the Green's functions we consider. The loop order (L-loop) refers to the α^L contribution to the corresponding Green's function (second column). The third column contains the mixing renormalization constants that the given Green's function depends on. The last column contains the ones we extract in each case.
Instead, we apply the method of "Infrared Rearrangement" [53, 54]. This method consists in rewriting the massless propagators as a sum of a term with a reduced degree of divergence and a term depending on an artificial mass, m_IRA. Section 3.3 contains some more details about the method. The caveat is that the method violates gauge invariance in intermediate steps of the computation. All breaking of gauge invariance is proportional to m²_IRA and explicitly cancels in physical quantities. However, to restore gauge invariance, gauge-variant operators proportional to m²_IRA also need to be consistently included in the computation. Fortunately, due to the factor of m²_IRA, at each dimension there are only a few of them. At the dimension-four level, there is a single operator generated, i.e., the photon-mass operator m²_IRA A_μ A^μ. At the dimension-six level, there are more operators, but only one, P, enters our ADM computation, because Q₁ and Q₃ mix into it at one loop.
Renormalizing Green's functions
In this subsection, we highlight the relevant aspects of the computation of the renormalization constants Z_i^j, from which we extracted the ADM presented above, via the renormalization of amputated one-particle-irreducible Green's functions.
For each Green's function we need to specify the operator we insert and the elementary fields on the external legs. In our case, the external legs are either four elementary fermions, or two fermions and a photon, or two photons. At tree level, a Wick contraction with the elementary fields defines a vertex structure for each operator. We denote the ΨΨΨΨ structures with S, the ΨΨA_μ ones with S̃, and the A_μA_ν one with Ŝ. An additional subscript indicates the operator associated to a given structure. Their representation in terms of Feynman diagrams, together with all the structures that enter the computation, is collected in appendix A.
In what follows, we refer to ⟨O⟩_S^(L) as a sum over a specific subset of Feynman diagrams: i) All these diagrams have a single insertion of the operator O. ii) They are dressed with interactions such that they contribute at O(α^L). In particular, we include all counterterm diagrams proportional to field and charge renormalization constants, but we do not include diagrams that contain mixing constants. We keep those separate to demonstrate how we extract them. iii) The subscript S indicates that out of this sum of diagrams we only take the part proportional to the structure S. In short, the notation of eq. (3.14) denotes the L-loop insertion of O projected on S, including contributions from field and charge renormalization constants.
As an illustration of the notation, we show in figure 1 a small subset of the Feynman diagrams for the non-trivial case of ⟨N₁⟩^(2)_S̃, with S̃ any of the structures in eq. (A.10). Notice that since N₁ is a linear combination of terms with different fields, see eq. (3.8), its field and charge renormalizations depend on the part we insert. Next we derive the conditions on the Green's functions that determine the mixing constants. For transparency we frame the constant(s) that we extract from a given condition. In table 1 we summarize which Green's functions we consider, on which mixing renormalization constants they depend, and which ones we extract in each case. We collect the results for the renormalization constants in appendix B.

A_μA_ν at one-loop

At one loop there is no insertion of any four-fermion operator that contributes to the Green's function with only two external photons. Contrarily, one-loop insertions of four-fermion operators do contribute to the ΨΨA_μ Green's function. By expanding the diagrams in the basis of S̃ structures, we determine the mixing into operators with a tree-level projection onto ΨΨA_μ, namely N₁ and P. Notice that in this case the mixing constants subtract finite terms, as required by the flavor scheme we adopt.
ΨΨΨΨ at one-loop
Next, we compute the one-loop insertions in the ΨΨΨΨ Green's function. Firstly, we insert the physical operators Q; the only previously determined mixing constant that enters is that of Q into N₁, which we extracted above via the ΨΨA_μ Green's function. Next, we insert the evanescent operators. Again, the only difference here is that their mixing constants into physical operators subtract finite pieces. This completes the computation of all one-loop constants required to determine the mixing of physical operators at the two-loop level. Next, we renormalize the same Green's functions at the two-loop level.
A_μA_ν at two-loop

At two-loop order, Q₁ and Q₃ insertions do contribute to the A_μA_ν Green's function. They can thus mix into the operator N₂. Even though N₂ itself does not have a tree-level projection onto physical operators, we need this mixing to extract the two-loop mixing of Q₁ and Q₃ into N₁ in the next step. The projection onto the Ŝ structure results in the condition of eq. (3.23).
ΨΨA_μ at two-loop

Next we renormalize the ΨΨA_μ Green's function at the two-loop level. We only need the two-loop mixing of physical operators into N₁, because only N₁ has a tree-level projection onto Q₁. To unambiguously determine the projection on the structure S̃_{N₁}, we have to fix a basis of linearly independent structures, which correspond to linearly independent operators. At this loop order, we find that apart from N₁ we also need to include the operators N₃ and N₄ to project all generated structures. This projection is the only point at which these operators enter our computation. The finiteness of the two-loop ΨΨA_μ Green's function determines the two-loop mixing of physical operators into N₁.

ΨΨΨΨ at two-loop

Finally, we have collected all results necessary to renormalize the two-loop ΨΨΨΨ Green's function. The renormalization conditions for the mixing in the physical sector then follow. We see here explicitly that, because N₁ has a tree-level projection onto Q₁, we need the mixing constant into N₁ determined above.
Evaluation of Feynman diagrams
Already at the two-loop level the number of Feynman diagrams entering the Green's functions is quite large. The present computation is thus performed in an automated setup. Firstly, the program QGRAF [55] generates all diagrams, creating a symbolic output for each diagram. This output is converted to the algebraic structure of a loop diagram and subsequently computed using self-written routines in FORM [56]. The methods for the computation and extraction of the UV poles of two-loop diagrams are not novel and are widely used throughout the literature. Here, we shall only sketch the steps and mention parts specific to our computation.
One major simplification of the computation comes from the fact that we can always expand the integrand in powers of external momenta over loop momenta and drop terms beyond the order we are interested in. For instance, for the ΨΨΨΨ Green's function all external momenta can be directly set to zero, while for the ΨΨA_μ one we need to keep the external momenta up to second order to obtain the mixing into N₁ (see S̃_{N₁} in eq. (A.10)).
After the expansion, all propagators are massless, so the resulting loop integrals vanish in dimensional regularization. To regularize the IR poles and perform the expansion in external momenta we implement the "Infrared Rearrangement" (IRA) procedure introduced in refs. [53, 54]. In IRA, a propagator (in our case massless) is replaced using the identity

1/(p + q)² = 1/(p² − m²_IRA) − (q² + 2 p·q + m²_IRA) / [(p² − m²_IRA)(p + q)²],

where p is the loop momentum, q is a linear combination of external momenta, and m_IRA is an artificial, unphysical mass. We see that the first term in the decomposition contains the scale m_IRA and carries no dependence on external momenta in its denominator. In the second term, the original propagator reappears, but thanks to the additional factor the overall degree of divergence of the diagram is reduced by one. When we apply the decomposition multiple times, we obtain a sum of terms with only loop momenta and m_IRA in the denominators, plus a term proportional to 1/(p + q)². This last term, however, can be made to have an arbitrarily small degree of divergence. Therefore, in a given diagram we can always perform the decomposition as many times as necessary until the terms proportional to 1/(p + q)² are finite and can thus be dropped if we are interested in UV poles.
When applying IRA to photon propagators, the resulting coefficients of the poles are not gauge-invariant, because we drop the finite terms in the expansion of the propagators. This is why some gauge-variant operators/counterterms enter in intermediate stages of the computation, for instance the operator P. Such operators are always proportional to m²_IRA, and so only a small number of them enters at each dimension. For more details on the prescription we refer to the original work [54].
The IRA procedure results in integrals with denominators that i) are independent of external momenta, and ii) contain the artificial mass m_IRA. We can always reduce these integrals to scalar "vacuum" diagrams by contracting them with metric tensors and solving the resulting system of linear equations, see, e.g., ref. [54]. This tensor reduction reduces all integrals to one- and two-loop scalar integrals of the form

∫ d^d p₁ d^d p₂ / [(p₁² − m₁²)^{n₁} (p₂² − m₂²)^{n₂} ((p₁ + p₂)²)^{n₃}],

with the integers n₁, n₂, n₃ ≥ 1 and the third propagator massless. The one-loop integral can be directly evaluated, whereas all two-loop integrals can be reduced to a few master integrals using the recursion relation of ref. [57]. In fact, in our case m₁ = m₂ = m_IRA and the use of recursion relations is not required.
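For instance, the one-loop member of this family is elementary; in Euclidean conventions (a standard textbook result, not necessarily the paper's exact normalization) it reads:

```latex
\int \frac{d^d p}{(2\pi)^d}\,\frac{1}{(p^2+m^2)^{n}}
= \frac{\Gamma\!\left(n-\frac{d}{2}\right)}{(4\pi)^{d/2}\,\Gamma(n)}\,
  (m^2)^{\frac{d}{2}-n}\,.
```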
In the evaluation of the Feynman diagrams, we use the Clifford algebra in d dimensions for i) the evaluation of traces of gamma matrices when the diagram in question has closed fermion loops, and ii) the reduction of the Dirac structures to the operator structures S or S̃ listed in appendix A.
Anomalous dimensions at the fixed point
At the fixed point, the ADM evaluates to γ* ≡ γ(α*), given in eq. (3.31). Note that the physical-physical block is not invariant at order ε², because there are non-zero entries (γ*)_{n1} and (γ*)_{n3} for all n ≥ 5.
We are interested in finding the first two eigenvalues of γ* up to order ε². They determine the scaling dimensions of the corresponding eigenoperators at the IR fixed point. We denote these scaling dimensions by

(Δ)_i = Δ_UV(ε) + (Δ₁)_i ε + (Δ₂)_i ε² + O(ε³),

with i = 1, 2 and Δ_UV(ε) = 6 − 4ε. To compute the first two eigenvalues we have truncated the problem to include a large but finite number of evanescent operators. Taking a sufficiently large truncation, the scheme/basis-dependence of the approximated result can be made negligible at the level of precision we are interested in (for details see ref. [30]). In table 2, we list the values of (Δ₁)_i and (Δ₂)_i for N_f = 1, ..., 10, after we included enough evanescent operators such that the three significant digits listed remain unchanged. The table is the main result of this section. In section 5, we will use these results as a starting point to extrapolate the scaling dimensions to d = 3.
Bilinear operators in d = 4 − 2ε
In this section we consider operators that are bilinear in the fermionic fields. The most generic bilinear operators without derivatives are

Ψ̄ Γ_n^{μ₁...μₙ} Ψ and Ψ̄ Γ_n^{μ₁...μₙ} γ₅ Ψ. (4.1)

Table 2: The values of the one-loop (Δ₁)_i and the two-loop (Δ₂)_i coefficients defined in the expansion of the scaling dimensions (Δ)_i, for N_f = 1, ..., 10.

In d = 4 − 2ε, the conservation of the nonsinglet axial currents is violated by evanescent operators [45], and thus only the diagonal SU(N_f) is a symmetry. On the other hand, the CFT in d = 3 is expected to enjoy the full SU(N_f)_L × SU(N_f)_R symmetry, which is actually enhanced to SU(2N_f) × U(1). Therefore, in continuing the operators of eq. (4.1) to d = 3, we find that the ones with γ₅ are in the same multiplets of the flavor symmetry as those without. So even though their scaling dimensions can differ as a function of ε, the enhanced symmetry entails that they should agree when ε = 1/2. Since the operators with γ₅ do not provide new information about the 3d CFT, and the 't Hooft-Veltman prescription makes computations technically more involved, we restrict our discussion here to operators without γ₅. As a future direction, it would be interesting to test this prediction of the enhanced symmetry by comparing the scaling dimensions of operators with γ₅ after extrapolating to d = 3 at sufficiently high order. We also restrict the discussion to operators with n ≤ 3, because the others are evanescent in d = 3.
The anomalous dimension of bilinear operators without γ₅ has been computed for a generic gauge group at three-loop accuracy in ref. [43]. For our U(1) gauge theory we substitute C_A = 0 and C_F = T_F = 1. Moreover, there is a difference in the normalization convention for the anomalous dimension, so that γ_here = 2 γ_there. Under SU(N_f) each operator decomposes into a singlet and an adjoint component. A priori, the two components can have different anomalous dimensions. The difference between the singlet and the adjoint originates from diagrams in which the operator is inserted in a closed fermion loop. When the operator has an even number of gamma matrices, the closed loop gives a trace with an odd total number of gamma matrices, which vanishes. So for even n there is no difference between the singlet and the adjoint, i.e., they have the same anomalous dimension.
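The vanishing of these closed-loop diagrams follows from the elementary trace identity for an odd number of gamma matrices, which holds for any d in the conventions of section 2:

```latex
\mathrm{Tr}\!\left[\gamma^{\mu_1}\gamma^{\mu_2}\cdots\gamma^{\mu_{2k+1}}\right] = 0\,.
```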
Below we collect the results for n ≤ 3.

Two-form:
Scalar:
Three-form: In d = 4 these three-form operators are Hodge-dual to axial currents. Actually, the fact that they do not get an anomalous dimension at one loop, as seen from the equations above, is related to this. However, Hodge duality cannot be defined in d = 4 − 2ε, and the anomalous dimensions start to differ from those of the axial current at the two-loop level.
This exhausts the list of bilinears without γ₅ that flow to physical operators as d → 3. In section 5.3 we discuss which operators of the CFT in d = 3 are continued by the operators above, and extrapolate the above results to obtain estimates for their scaling dimensions.
Padé approximants
A computation to a certain order in ε provides an approximation to the observable, e.g. the scaling dimension Δ, in terms of a polynomial

Δ(ε) = Σ_{n=0}^{k+l} Δ_n ε^n. (5.1)

Taking ε → 1/2 in this polynomial gives the "fixed order" d = 3 prediction of the ε-expansion. Typically, the fixed-order results show poor convergence as the order is increased. A standard resummation technique adopted for this kind of extrapolation is to replace the polynomial with a Padé approximant. The Padé approximant of order (k, l) is defined as

Padé_(k,l)(ε) = (c₀ + c₁ ε + ... + c_k ε^k) / (1 + d₁ ε + ... + d_l ε^l). (5.2)

The coefficients c_i and d_i are determined by matching the expansion of eq. (5.2) with eq. (5.1). k + l must equal the order at which we are computing. Another condition comes from the fact that we are interested in the result for ε → 1/2. In order for the ε-expansion to smoothly interpolate from ε = 0 to ε = 1/2, an employable Padé approximant should not have poles for ε ∈ [0, 1/2] for the values of N_f that we consider. In what follows, we show the predictions from a Padé approximation only if it does not contain any pole on the positive ε axis for any value of N_f = 1, ..., 10.

Table 3 (fragment): Padé (1,1): 6.86, 6.52, 6.35, 6.25, 6.19, 6.15, 6.12, 6.10, 6.08, 6.07.
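A minimal R sketch of the (1,1) approximant used for the NLO extrapolations, obtained by matching eq. (5.2) to the series of eq. (5.1); the numerical coefficients in the usage example are placeholders, not values from the paper's tables:

```r
# Pade (1,1) approximant of f(eps) = c0 + c1*eps + c2*eps^2,
# written as (c0 + p1*eps) / (1 + q1*eps) and matched order by order.
pade11 <- function(c0, c1, c2) {
  q1 <- -c2 / c1        # fixes the order-eps^2 coefficient
  p1 <- c1 + c0 * q1    # fixes the order-eps coefficient
  function(eps) (c0 + p1 * eps) / (1 + q1 * eps)
}

f <- pade11(c0 = 6, c1 = -2.8, c2 = -0.8)  # placeholder coefficients
f(0.5)  # extrapolation to d = 3; the pole sits at eps = c1/c2, which
        # must lie outside [0, 1/2] for the approximant to be usable
```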
Four-fermion operators as d → 3
In d = 3, the two four-fermion operators in the UV can be rewritten in terms of the two-component fermions ψ_i, where i = 1, ..., 2N_f. In this rewriting we see explicitly that these operators are singlets of SU(2N_f). We now evaluate the scaling dimensions (Δ)₁ and (Δ)₂ of the two corresponding IR eigenoperators at NLO. For the NLO prediction we employ the Padé approximation of order (1,1). We list the values of the LO and NLO Padé (1,1) predictions for N_f = 1, ..., 10 in table 3.
We visualize the results in figure 2. The dashed lines are the result of the one-loop ε-expansion computation. Indeed, as discussed in ref. [18], the one-loop approximation predicts that the lowest eigenvalue becomes relevant for N_f < 3. The two-loop computation presented here changes this prediction. The two solid lines represent the NLO Padé (1,1) approximation to the two scaling dimensions. We observe that for no value of N_f does the lowest eigenvalue reach marginality. We also see that the corrections to the LO result are significant, especially for small N_f, i.e., N_f = 1, 2. This means that for such small values of N_f, NLO accuracy is not sufficient to obtain a precise estimate for this scaling dimension. Nevertheless, at face value, the result of the two-loop ε-expansion suggests that QED_3 is conformal in the IR for any value of N_f. Next, we comment on the relation of our result to the 1/N_f-expansion in d = 3. At large N_f, the gauge current ψ̄_i γ̃^μ ψ_i is set to zero by the EOM of the gauge field, hence the operator Q₁ is an EOM-vanishing operator. However, besides Q₃, there still is another flavor-singlet scalar operator of dimension 4 at N_f = ∞, namely F²_μν. Q₃ and F²_μν mix at order 1/N_f [11]. Looking at the ε-expansion result in figure 2, we see that indeed only the lowest eigenvalue (Δ)₁ (black lines) approaches 4 for large N_f. The other scaling dimension (red lines) approaches 6 as N_f → ∞, implying that the two eigenoperators cannot mix at large N_f. This is consistent precisely because there is only one non-trivial singlet four-fermion operator at large N_f. Its mixing with F²_μν cannot be captured within the ε-expansion, because the UV dimension of F²_μν differs from that of a four-fermion operator in d = 4 − 2ε. We can, however, test whether for any value of ε ∈ [0, 1/2] the lowest eigenvalue (Δ)₁, which starts off larger at ε = 0, crosses the dimension of F²_μν. Such a level-crossing would require us to revisit the extrapolation to ε = 1/2 and could possibly affect the estimate. The scaling dimension of F²_μν in the ε-expansion is determined by α*, which is given in eq. (2.10) up to O(ε⁴). At three- and four-loop order, the only Padé approximations without poles on the positive real ε axis are those of order (2,1) and (2,2), respectively. In figure 3 we plot (Δ)₁,₂ and Δ(F²) as a function of d for the representative cases of N_f = 1, 2, and 10.

Figure 3: The scaling dimensions of the four-fermion operators (black and red lines) and of F²_μν (blue lines) as a function of the dimension d, i.e., for ε ∈ [0, 1/2]. The left, center, and right panels show the result for the representative cases of N_f = 1, 2, and 10, respectively. We observe that the N³LO Padé (2,2) prediction of Δ(F²) never crosses the NLO Padé (1,1) prediction of (Δ)₁ in the extrapolation region.

We observe that the only case in which (Δ)₁ crosses Δ(F²) before d = 3 is when N_f = 1 and when we employ the N²LO Padé (2,1) to predict Δ(F²). The N³LO Padé (2,2) prediction for N_f = 1 does not cross (Δ)₁, and the same holds for larger values of N_f. Therefore, at least at this order, F²_μν should not play a significant role in obtaining the four-fermion scaling dimension.
Bilinears as d → 3
Next we consider bilinear operators in d = 3. In the UV, restricting to the ones without derivatives, the possibilities are the following.

Scalar: (ψ̄ψ)_sing and (ψ̄ψ)_adj. The subscript refers to the representation of SU(2N_f). The singlet is parity-odd. We can combine parity with an element of the Cartan of SU(2N_f), in such a way that one component of the adjoint scalar is parity-even. Since parity squares to the identity, this Cartan element can only have +1 and −1 along the diagonal, which up to permutations we can take to be the first N_f and the second N_f diagonal entries, respectively. With this choice, the parity-even bilinear is Σ_{a=1}^{N_f} (ψ̄_a ψ_a − ψ̄_{a+N_f} ψ_{a+N_f}). This is the candidate to be the "chiral condensate" in QED_3 [22].

Vector: (ψ̄γ̃^μψ)_sing and (ψ̄γ̃^μψ)_adj. The singlet is the current of the gauged U(1). When the interaction is turned on, it recombines with the field strength and does not flow to any primary operator of the IR CFT.
The adjoint is the current that generates the SU(2N_f) global symmetry. Therefore, we expect it to remain conserved along the RG flow and to flow to a conserved current of dimension Δ = 2 in the IR.
We now identify which d = 4 − 2ε bilinears from section 4 approach the d = 3 bilinears above. Substituting the decomposition of eqs. (2.4) and (2.5), and also using 3d Hodge duality, we find how each d = 4 − 2ε bilinear maps onto the d = 3 bilinears above. In figure 4 we plot the extrapolations for the scaling dimension of the conserved flavor-nonsinglet current B^(1)_adj as a function of N_f. We observe that both N²LO Padé approximants are closer to 2 than the LO and NLO ones, and they remain close to 2 even for small values of N_f. We consider this to be a successful test of the ε-expansion, which supports its viability as a tool to study QED_3. For the scalar-singlet bilinear we find good convergence behaviour between the NLO Padé (1,1) and the two N²LO Padé approximations. Therefore, for this observable we are able to provide a rather convincing estimate. We do stress, however, that the comparison of the various approximations does not provide rigorous error estimates, since the error due to the extrapolation is not under control. For B^(0)_adj we have two different operators that provide a continuation to d = 4 − 2ε. It is encouraging that as the order increases, the two resulting estimates approach each other. Even so, we find that for small N_f the N²LO Padé approximations are spread, so the ε-expansion at this order does not provide a definite prediction. As N_f increases the situation improves, namely all NLO and N²LO approximations begin to converge.
In table 4 we list the numerical values for the various estimates of the bilinear scaling dimensions for N_f = 1, . . . , 10.
Next, we compare to the large-N_f predictions for the scaling dimensions of the bilinears. Expanding at large N_f the Padé approximants used to estimate the dimensions of the bilinears, and comparing the coefficient c^(k,l) with its exact value obtained from the large-N_f expansion, suggests that the extrapolation of the three-form may provide a better estimate for the scaling dimension of the adjoint scalar at this order.
Conclusions and future directions
We employed the ε-expansion to compute scaling dimensions of four-fermion and bilinear operators at the IR fixed point of QED in d = 4 − 2ε. We estimated the corresponding value for the physically interesting case of d = 3. The results seem to confirm the expectations from the enhancement of the global symmetry as d → 3 (see figures 4 and 5). Therefore, going beyond the leading order gave us more confidence that the continuation is sensible. At the same time, it appears that, with the exception of the scalar-singlet bilinear, obtaining precise estimates for the scaling dimensions for small values of N_f requires even higher-order computations and perhaps more sophisticated resummation techniques (see for instance chapter 16 of ref. [60] and references therein). The computation of such higher orders in ε via the standard techniques used in the present work would require hard Feynman-diagram calculations.
On a different note, ref. [84] recently argued that QCD_3 with massless quarks undergoes a transition from a conformal IR phase, which exists for a sufficiently large number of flavors, to a symmetry-breaking phase when N_f ≤ N_f^c. This is analogous to the long-standing conjecture for QED_3, and so four-fermion operators may play the same role.
Therefore, at least for the case of zero Chern-Simons level, the ε-expansion can be employed in a similar manner to estimate N_f^c. A LO estimate appeared in ref. [85]. In light of our results for QED_3, it would be worth studying how this estimate is modified at NLO.
Acknowledgements: we thank Joachim Brod, Martin Gorbahn, John Gracey, Igor Klebanov, Zohar Komargodski, and David Stone for their interest and the many helpful discussions. We are also indebted to the Weizmann Institute of Science, in which this research began. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research & Innovation.
A Feynman rules
From the QED Lagrangian in R_ξ-gauge, we obtain the standard Feynman rules. There is one additional counterterm coupling that we need to specify. It is a relic of the procedure with which we regulate IR divergences (see section 3.3), which essentially breaks gauge invariance. For this reason, to consistently renormalize Green's functions we need to include a counterterm analogous to a mass for the photon, δm²_IRA. Only its one-loop value enters our computations. To find the EOM-vanishing operators at the non-renormalizable level we apply the EOMs of the fermion and the photon. For brevity we use the shorthand notation γ^μ D_μ ≡ D̸ and use an arrow to indicate the direction in which the derivative in D̸ acts. We consider the Lagrangian with additional couplings proportional to the operators introduced in section 3.1. To compute the Green's functions we need the Feynman rules of the operators we insert, as well as all the structures that we need to project the amplitude. For instance, to renormalize the Green's function of Ψ̄ΨA_μ with one-loop insertions of Q_1 we need not only the Feynman rule of Q_1, but also the Ψ̄ΨA_μ structure of all operators that Q_1 generates at one loop.
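For orientation, the textbook QED equations of motion that the omitted display presumably contains are sketched below; the sign and charge conventions are assumptions, not taken from the source.

```latex
% Sketch of the QED equations of motion used to identify EOM-vanishing
% operators (conventions assumed, with D_mu = partial_mu - i e A_mu):
\[
  i \gamma^{\mu} D_{\mu} \Psi = 0 , \qquad
  \partial^{\nu} F_{\nu\mu} = e\, \overline{\Psi} \gamma_{\mu} \Psi .
\]
```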
B Renormalization constants
In this appendix we list the mixing-renormalization constants of four-fermion operators. First we list the constants we need to compute the ADM of flavor-singlet four-fermion operators, which we discussed in the main text, and subsequently the constants entering the computation of the ADM of flavor-nonsinglet four-fermion operators, which we discuss in appendix C.
B.1 Flavor-singlet four-fermion operators
For the physical operators the one-loop constants follow from the one-loop anomalous dimensions, and for the evanescent operators they are given for generic n, with n an odd integer ≥ 5. To compute these constants for generic n we used Clifford-algebra identities from ref. [44]. As explained in section 3.2, in the computation of the mixing at two-loop level more operators enter. The only one-loop mixings entering the computation, apart from those above, are the mixing of the physical four-fermion operators into the EOM-vanishing operator N_2 and into the gauge-variant operator P. The former vanish, i.e., the corresponding constants are zero for Q = Q_1, Q_3. We do not list the corresponding constants for the evanescent operators because they do not enter the two-loop computation of the mixing of physical operators.
In table 1 we summarised which renormalization constants the Green's functions we computed depend on. We see that to determine the two-loop mixing of the four-fermion operators we first need to determine the two-loop mixing of the physical operators into the two EOM-vanishing operators N_1 and N_2. The corresponding constants Z^(2,2) are listed next, followed by the two-loop mixing constants of the two physical operators.
B.2 Flavor-nonsinglet four-fermion operators
The renormalization of the Green's functions with insertions of flavor-nonsinglet four-fermion operators is analogous to the one with flavor-singlets but less involved. Their flavor-off-diagonal structure forbids them to receive contributions from any EOM-vanishing or gauge-variant operator at two-loop order. Therefore, in this case we only need the mixing constants within the physical and evanescent sectors.
As in the flavor-singlet case, the one-loop mixing is directly related to the one-loop anomalous dimensions of eqs. (C.4) and (C.5), with O, O' any physical or evanescent flavor-nonsinglet four-fermion operator; the one-loop anomalous dimensions are given in appendix C. Finally, we list the two-loop mixing constants.

Table 5: Three significant digits of the one-loop, (Δ_1)_i, and the two-loop, (Δ_2)_i, contributions to the scaling dimension of the flavor-nonsinglet four-fermion operators for various cases of N_f. To obtain the two-loop (Δ_2)_i values we implemented the algorithm to include the effect of evanescent operators [30].
C Flavor-nonsinglet four-fermion operators
In the main part of this work we investigated bilinear and flavor-singlet four-fermion operators. There also exist four-fermion operators that are not singlets under flavor. The ones we consider in this appendix are spanned by the basis

E_n = T^{ac}_{bd} (Ψ̄_a Γ^n_{μ_1…μ_n} Ψ_b)(Ψ̄_c Γ^n_{μ_1…μ_n} Ψ_d) + a_n Q_1 + b_n Q_3 ,   (C.3)

with T^{ac}_{db} = T^{ca}_{bd} and T^{ac}_{ad} = T^{ab}_{bd} = 0. The computation of their ADM at one- and two-loop order entails only a subset of the Feynman diagrams needed for the flavor-singlet case and is actually less involved, as discussed in appendix B. In this appendix we present their ADM and their scaling dimensions at the IR fixed point in d = 4 − 2ε, and use this to estimate the corresponding d = 3 observables.
The part of the one-loop result that does not depend on a_n and b_n was first computed in ref. [48].

Table 6: LO and either NLO Padé (1,1) or fixed-order NLO predictions for the scaling dimension of the two flavor-nonsinglet four-fermion operators at d = 3 for various values of N_f. Only three significant digits are displayed. | 2017-08-12T03:15:39.000Z | 2017-08-11T00:00:00.000 | {
"year": 2017,
"sha1": "aca0ff2b2d955a6d122b5ee2a4423984805d4903",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP12(2017)054.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "aca0ff2b2d955a6d122b5ee2a4423984805d4903",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
256263642 | pes2o/s2orc | v3-fos-license | Flow Experience Is a Key Factor in the Likelihood of Adolescents’ Problematic TikTok Use: The Moderating Role of Active Parental Mediation
TikTok use and overuse have grown rapidly in recent years among adolescents. However, risk factors for problematic TikTok use remain largely unknown. Drawing on flow theory and parental mediation theory, this study examines how adolescents' perceptions of enjoyment, concentration, and time distortion affect their problematic TikTok use, and further tests the moderating effect of active parental mediation. An online survey in China received responses from a sample of 633 adolescents between the ages of 10 and 19 (males: 51.2%; Mage = 15.00; SD = 0.975). Our findings showed that enjoyment was positively associated with concentration and, in turn, with time distortion. We also found significant positive effects of concentration and time distortion on problematic TikTok use. The effect of enjoyment, however, was non-significant, indicating that hedonic mood alone was not associated with problematic TikTok use. Of the three moderated relationships examined in this study, only active parental mediation was found to be a significant moderator of the relationship between concentration and problematic TikTok use. The significant negative moderation showed that as active parental mediation grows, the impact of adolescents' concentration on problematic TikTok use is reduced. Future research directions and implications are discussed.
Introduction
As a result of the rapid development of mobile devices and information technology, users' consumption habits on the Internet are constantly changing. Some social media users are no longer satisfied with text and pictures; they increasingly prefer vivid short videos, especially on user-generated content applications such as TikTok, which are characterized by fragmented content, a low entry threshold, and high shareability [1]. TikTok is a social media platform that allows users to watch, share, and create short videos [2]. It was launched in China in 2016, targeting adolescents and young people [3]. Users under the age of 24 account for more than half of TikTok's total users (61.73%) [4], making TikTok the most popular leisure activity among China's millennials [5,6].
TikTok enables users to create interactive and recreational videos. Its powerful AI algorithms and content-oriented distribution strategy present content tailored to users' preferences; this is the most distinctive feature that sets TikTok apart from its peers [7]. The use of short video applications helps users relax [8,9]. More specifically, the perceived effortlessness, recommendation accuracy, and recommendation serendipity of TikTok can readily give users a "flow" experience [10], the optimal experience of being completely concentrated on an activity, feeling an intense sense of enjoyment and satisfaction, and losing awareness of time [11]. In the context of TikTok use, users simply repeat the swipe-up gesture, which makes them completely immersed, highly amused and curious, and prone to losing track of time [12]. However, the constant use of smart digital devices may result in a flow experience that increases the risk of problematic TikTok use [3,13].
Previous studies have shown that excessive smartphone use can trigger the user's flow experience [14]. As a result, users may be more likely to develop problematic TikTok use [2,13]. This online behavior is associated with a range of physical and psychological problems, including depression, anxiety, stress, memory loss [3], poor sleep quality, dry and blurred eyes, and social isolation [5,15].
Additionally, given TikTok's popularity among Chinese adolescents, the problem of excessive TikTok use has raised concerns, especially among parents of teens. Many parents face the task of actively guiding their children's online activities to increase the possible benefits of media and limit the risks [16]. These parental efforts are called "parental mediation." Early parental mediation research in media and communication examined the effects of television (TV) on children and teenagers [17], and it recognized the important role of parents in managing and regulating children's behavior, especially with traditional media such as TV [18]. As digital media have become commonplace, parents have shifted their attention to online behaviors such as Internet or social media use to shield their children from harmful effects [19]. Parental mediation is considered an effective way to help control children's problematic behaviors [20,21].
In comparison to problematic online gaming and problematic smartphone use, problematic TikTok use is a relatively new phenomenon. It has gradually attracted scholarly attention [3,5,12], but it still receives limited study [2,5,22,23]. Further research is therefore needed to determine how users gradually become problematically involved in TikTok and to clarify how parents may alleviate this behavior among adolescents. We ask: how do adolescents gradually become problematically involved in TikTok, and what are the protective factors? Whether the active parental mediation used for traditional media is effective in the context of TikTok also needs to be explored.
Based on previous literature, this study employed flow theory and parental mediation theory to explore possible risk factors for problematic TikTok use. By examining the influence of the experiential perceptions generated during TikTok use (enjoyment, concentration, and time distortion) and introducing active parental mediation, we explored how the negative side of flow might be broken. This study aimed to provide new insights into the risk and protective factors for problematic TikTok use.
Problematic TikTok Use
Several scholars have used different terms, such as problematic use [12], compulsive or excessive use [24], pathological use [25], or addiction [2], to describe problematic online behaviors [15]. Consistent with most studies [26][27][28], our study uses the term "problematic use." Since problematic online behaviors are a spectrum of related but discrete phenomena [29], problematic TikTok use can also be regarded as an independent construct [14]. Referencing other types of social networking services (e.g., Facebook, Instagram, Weibo, etc.), this study defines problematic TikTok use as the uncontrolled and obsessive use of TikTok, which may have negative physical or psychosocial consequences [30,31]. As a progressive form of overuse, problematic use manifests as loss of control, withdrawal, an inability to reduce use, and negative consequences [12,32].
In addition, given the potentially detrimental consequences of problematic online behaviors, scholars have started to explore their main risk factors. Previous studies have found that users may turn to social media for social interaction, entertainment, and self-presentation, but excessive use leads to problematic behaviors with symptoms such as loss of control, withdrawal, and relapse [27]. It may deeply affect people's mental and physical health and result in a loss of productivity [33]. Studies have found that individuals' personality traits [34,35], attachment styles [36,37], attitudes [38], life satisfaction [39], and self-esteem [40] are related to problematic online behaviors. These studies examined problematic social media use at the individual level [34]. Subsequently, some scholars explored risk factors from the perspective of technology, arguing that the technological environment (such as the quality of the platform) and the online experience are related to problematic social media use [35]. These studies approached the issue from the perspective of users and platforms, focusing on factors that exist before use, such as personality characteristics. However, the influence of the user's ongoing experience during use on problematic use is far from understood.
Flow experiences can make users feel amused and focused, even lose track of time, and, in some cases, foster problematic online behaviors [31]. In other words, social media satisfy users' psychological needs through an intrinsic reward mechanism. When users perceive benefits, they may remain immersed in the process of use, creating a closed loop of pleasure that eventually leads to problematic online behaviors [12]. While past research focused on the link between experiential perception during use and problematic use [12], the present study argues that problematic online behavior does not happen in a vacuum but forms gradually.
Parental Mediation
Parental mediation refers to parents' use of reasonable methods to properly guide their children's behavior [41]. Previous studies have focused on how parents can reduce the negative impact of media on their children [42]. Early parental mediation research drew on Bandura et al.'s (1977) [43] social learning theory, which focused on the negative effects of media, to explore the role of traditional TV in shaping aggression. It emphasized parents' conscious active mediation, that is, how different interpersonal communication strategies can maximize the benefits of media and reduce its negative impact on young people's cognitive development [44].
Additionally, with the increase in problematic short-form video use among adolescents, many solutions have been proposed. Much attention has been paid to the adolescents themselves, easing their problematic media use by improving their mindfulness [45], self-esteem [46], metacognitive beliefs [47], and self-control [48]. However, such strategies can be ineffective at this particular age: adolescents often have a hard time resisting the temptation of new things and usually lack self-control [49]. The family environment is the most permanent and central setting in children's development [50]. Thus, parents play an important role in the occurrence and relief of adolescent Internet addiction [51].
The parental mediation theory suggests that parents can mediate and alleviate the negative influence of media on children's lives, often using different mediation strategies to influence children's media use behavior [52]. Researchers have proposed three mediation strategies: (1) active mediation, (2) restrictive mediation, and (3) co-viewing [53]. These three dimensions appear in parental mediation of children's TV viewing [54], video game playing [55], and Internet usage [56]. However, as Internet usage has become more common among adolescents, some scholars have argued that the existing typology needs to be extended to further address parental strategies regarding children's Internet use [57]. Nikken and Jansz (2014) [58] added supervision (i.e., monitoring) and technical safety restrictions as new strategies.
Although parental mediation includes several strategies, this study examined only active parental mediation in the context of TikTok use. This is because, first, restrictive mediation might have a negative impact, leading adolescents to engage in more risky behaviors [59], such as befriending strangers on social networking sites. Second, mobile phones are relatively personal items, and people usually use them alone [60]; thus, it can be difficult for parents and children to use short-form video apps together. Third, previous studies have found parental supervision or monitoring to be ineffective in reducing these problem behaviors; Hefner et al. (2019) [61] found that parental monitoring did not reduce the adverse effects of problematic phone use in children. Finally, installing filters and monitoring software on electronic devices can be a hard task for parents because it requires a level of computer skill that the average adult does not have [52]. Therefore, we examined only active parental mediation in the context of TikTok use.
Flow Experience and Problematic TikTok Use
The flow experience is a state of intense concentration and immersion in an activity [16]. Flow reflects a person's psychological need for entertainment and pleasure, and it is a continuous state.
Previous studies have applied the concept of flow to online activities [62][63][64]. Scholars considered problematic online behavior to be facilitated by increasing flow in online activity; accordingly, flow experience was found to be a positive predictor of problematic social media use [12]. However, the relationship between flow experience and user behavior has mainly been tested in traditional social media environments; whether flow can still stimulate adolescents' problematic online behaviors in the context of TikTok remains unknown.
The social media flow experience is a multi-dimensional construct comprising enjoyment, concentration, and time distortion [65]. Enjoyment refers to the individual's hedonic mood. Concentration means the user's attention is fully focused on the activity; it is the optimal experience generated by high concentration on a limited stimulus domain and one of the most important manifestations of flow [66]. Time distortion refers to the short perceived time intervals a person typically experiences when engaged in an optimal experience [11]. It has been proposed that individuals may unconsciously fall into problematic use if they pursue and obtain the flow experience over and over again [67]. However, the associations among the dimensions of flow remain unexplored.
Enjoyment, Concentration and Time Distortion
Past research has examined enjoyment as an intrinsic motivation for information system use [68]. It has been regarded as a feeling of pleasure and an escape from unpleasant reality, since users may pull out their smartphones to escape the busyness of life and look for something fun [69]. Therefore, it can be considered a typical hedonic motivation for conducting online activities that is expected to lead users to concentrate on the content. As the time adolescents spend watching a single video is very short, the accumulation of many short videos may cause them to concentrate on watching content continuously. This long-term immersion then makes them forget their surroundings, resulting in a sense of time distortion. This overall feeling of flow may reduce users' perception of psychologically unpleasant experiences, such as fatigue. Hence, users lose track of their self-awareness [65], are fully immersed in their ongoing activities, and ignore changes in their surroundings [70]. In this study, we expect that enjoyment will lead to concentration, and concentration will lead to time distortion. Therefore, we hypothesize: H1. Enjoyment positively predicts concentration. H2. Concentration positively predicts time distortion.
Enjoyment and Problematic TikTok Use
In the media environment, people can derive ample pleasure from the activities they are engaged in. It has been proposed that flow experiences result from repetitive behavior and the desire to maintain positive emotions; this increases the frequency and intensity of media consumption, leading to problematic use [71]. The sense of satisfaction and pleasure derived from a flow experience makes people want to re-experience it over and over again. In addition, engaging in flow-generating activities may come at a great cost, and problematic use behavior is a possible consequence [12]. Given the entertaining nature of TikTok, it seems reasonable that the feeling of enjoyment on TikTok is connected with adolescents' problematic TikTok use. Therefore, we hypothesize: H3. Enjoyment positively predicts problematic TikTok use.
Concentration and Problematic TikTok Use
A key factor in flow is constant concentration on one activity, which can even develop into overuse [72]. Flow as an optimal experience can occur when players perceive a balance between their skill and the challenge within the interaction, accompanied by concentration [13]. Since the flow experience offers users feelings of immersion and pleasure, they are likely to develop an attachment to the medium [13], which has also been found to positively influence gaming addiction [54,63,73]. In addition, for many adolescents, the merging of TikTok's multiple functions has aroused strong interest. They are very likely to be absorbed in online activities and ignore their surroundings. Over time, the feeling of concentration on TikTok can trigger problematic use. Therefore, we hypothesize: H4. Concentration positively predicts problematic TikTok use.
Time Distortion and Problematic TikTok Use
An important manifestation of online flow is that the user is completely immersed in the online world and disconnected from the real world. As a result, when people enter regions of time stasis, they become immersed in the virtual world and develop a distorted sense of time [13,74]. In this situation, people's sense of self is impaired and they lose their sense of time (i.e., their mental clocks run slowly) [67]. Time distortion is thus a warning sign: the greater the perceived distortion, the greater the problematic use.
Additionally, each of TikTok's videos is short and attractive, and adolescents can easily persist in the activity of watching videos. This time-wasting and problematic use have been shown to be an important negative effect of watching TikTok videos [5,12,75,76]. In other words, users may underestimate the time spent in online activities, which leads to problematic use. Therefore, we hypothesize: H5. Time distortion positively predicts problematic TikTok use.
Active Parental Mediation as Moderator
Adolescents and young adults often use social networks for information, entertainment, contact, and self-expression [77]. However, overuse of these platforms can become problematic for some adolescents. In traditional media, parents could use mediation to control their children's TV use [52]. With the evolution of online technology, parents' concerns have shifted to their children's online screen use [78], and they face the challenge of protecting their children from the negative effects of online activities [52]. However, whether parental mediation can be applied in the context of social media, especially TikTok, remains to be determined.
Parental mediation involves refraining, co-viewing, active mediation, and restrictive mediation; among them, active mediation has been shown to be likely to alleviate the problematic use of online screens [21]. Active parental mediation refers to discussion between parents and children about media content and what they watch; this method can mitigate the possibility of adverse consequences, such as aggressive behavior or the formation of a distorted worldview [61]. A previous study confirmed the relationship between problematic social media use, adolescents, and parents [79]. Children who receive active parental mediation can experience more positive media use outcomes; in contrast, without it, problematic online behavior can be aggravated [21].
The family, as the most stable environment for adolescents, has been shown to play a positive role in alleviating problematic use behaviors [80], for example by controlling adolescents' screen use [81][82][83]. Out of pleasure and curiosity, adolescents may freely follow their interests and keep using short video apps. This can lead to constant enjoyment based on intrinsic rewards and to time-distorted problematic use. When adolescents are highly focused on enjoying TikTok and perceive time distortion, active parental mediation seems to be an effective measure to reduce these effects.
Therefore, we hypothesize: H6. Active parental mediation negatively moderates the relationship between enjoyment and problematic TikTok use.
H7. Active parental mediation negatively moderates the relationship between concentration and problematic TikTok use.
H8. Active parental mediation negatively moderates the relationship between time distortion and problematic TikTok use.
Research Design and Measurements
The model of this study was of the reflective-reflective type, in which the items were interchangeable and the removal of an item did not change the essential nature of the underlying construct [84]. This study aimed to explore the factors that predict adolescents' problematic TikTok use.
We collected data using purposive sampling from Chinese adolescents aged 10-19 years old; the World Health Organization defines this age group as adolescents [85]. It is important to note that adolescents comprise the largest portion of TikTok users [4]. In addition, this study used a filter question, "How long do you spend on TikTok in a day?", to screen respondents' TikTok usage. The use of the inclusion criteria of age category and time spent on TikTok suggests that the study sample was appropriate for this research.
This study obtained ethical clearance. Consent to participate was sought from the legal guardians of the respondents prior to data collection; the consent form was attached to the invitation letter for the online survey, and the survey link was given only after the completed consent forms were received from the legal guardians. The purpose of the study and the respondents' right to withdraw at any point were clearly stated on the cover page of the online survey. The questionnaire underwent forward-backward translation from English to Chinese and back from Chinese to English for application in the Chinese context. An online survey was administered as it offers multiple benefits, such as fast response, low cost, and the ability to handle many questions and respondents [86,87].
The data collection involved identifying schoolchildren to participate in the survey, because those aged between 10 and 19 are mostly still in school. We randomly selected one primary school, one secondary school, and one high school from Hebei Province, China, using the lottery balloting method. The educational system in Hebei Province follows the national public education system, and hence the targeted sample represents the Chinese adolescent population to some extent. We contacted the school headmasters via email or official telephone line. Upon agreement, the online questionnaire link was delivered to them, and they helped circulate it to the students' social media groups in their respective schools.
In addition, we adopted validated measures to examine the key variables of the study. All variables were measured using a 5-point Likert scale, from "strongly disagree" (1) to "strongly agree" (5). Enjoyment was measured using six items (α = 0.957) adapted from Cao et al. (2020) [35]; we modified the original scale by replacing the word "WeChat" with "TikTok." For the concentration scale, we adopted 3 items (α = 0.956) from Chen et al. (2017) [88] and replaced the word "smartphone" with "TikTok" to suit the context of this study. The time distortion scale of Kim and Ko (2019) [89] was used to assess time distortion; the scale of three items (α = 0.948) was slightly modified by replacing the word "game" with "TikTok." As for active parental mediation, we used Nikken and Jansz's (2014) [58] active parental mediation scale, which consisted of 9 items (α = 0.940). Finally, we adopted Yu and Fu-min's (2005) [90] Internet Dependence Scale to measure problematic TikTok use; the scale consisted of 19 items (α = 0.900), and the words "online" and "smartphone" were replaced with "TikTok" in all statements. Appendix A presents the measurement items for each construct.
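For readers unfamiliar with the reliability coefficients reported here, the following is a minimal sketch (not the authors' code) of how Cronbach's alpha is computed for a block of Likert items; the item matrix below is hypothetical, generated only to make the example runnable.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) Likert matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 enjoyment items on a 1-5 Likert scale,
# correlated through a shared "base" tendency per respondent.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(633, 1))
enjoyment = np.clip(base + rng.integers(-1, 2, size=(633, 6)), 1, 5)
print(f"alpha = {cronbach_alpha(enjoyment):.3f}")
```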
Pilot Testing
We ran a pilot study before the actual survey to ensure the clarity of the instructions and the reliability of the measurement. A total of 50 respondents were involved in the pilot study; this meets the requirement of 10 percent of the respondents for the actual survey [91]. The outcomes showed that Cronbach's alpha values for all variables were above 0.80, suggesting a high level of internal consistency. No problem of ambiguity or information overload was reported during the pilot testing.
Data Collection
We collected data from January to August 2022, and a total of 735 responses were collected. We found 102 invalid responses because of straight-lining issues. As a result, the remaining 633 valid responses were used for further analysis. This sample size was adequate as it met Green's (1991) [92] criteria for sample calculation for an unknown population. Table 1 showed that 51.2% of respondents were male and 48.8% were female. The respondents were mainly aged between 10 and 19 years, with most of them aged between 15 and 17 (42.51%).
Common Method Variance (CMV)
As the data of this study were collected through self-administered reports from the same respondents [93], CMV was a potential risk that needed to be addressed. We applied the marker variable technique to examine CMV, as recommended for statistical analysis [94]. We adopted a marker (2 items) with no theoretical relationship to our study [95]. Table 2 showed that the R² value for problematic TikTok use changed only slightly, from 0.319 to 0.321, i.e., less than a 10% change, after adding the marker variable to the research model. Table 3 showed no significant change in the estimates with or without the marker. Therefore, CMV was not an issue for this study [96].
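The logic of the marker test is simple: refit the outcome model with the theoretically unrelated marker included and check that R² barely moves. The sketch below illustrates that logic with OLS on hypothetical composite scores; the actual analysis used PLS-SEM, and all variable names and simulated effects here are assumptions for illustration only.

```python
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an OLS fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(1)
n = 633
flow = rng.normal(size=(n, 3))     # enjoyment, concentration, time distortion
marker = rng.normal(size=(n, 1))   # marker with no theoretical relation to the model
ptu = flow @ np.array([0.0, 0.3, 0.37]) + rng.normal(scale=0.8, size=n)

r2_base = r_squared(flow, ptu)
r2_marked = r_squared(np.hstack([flow, marker]), ptu)
change = abs(r2_marked - r2_base) / r2_base
print(f"relative R2 change: {change:.1%} (< 10% suggests CMV is not a concern)")
```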
The Measurement Model
The research model was tested in a two-stage approach involving the examination of the measurement model and the structural model, using PLS-SEM analysis, as suggested by scholars [97,98]. The variables' Cronbach's α values ranged between 0.889 and 0.969, and the composite reliability (CR) values ranged between 0.931 and 0.971; thus, the model had good internal consistency and reliability. Next, all the outer loadings were larger than 0.6, indicating sufficient indicator reliability [99], and the average variance extracted (AVE) of each construct was larger than the 0.5 threshold (ranging from 0.641 to 0.856), which showed satisfactory convergent validity [18]. The results are presented in Table 4. Last, the heterotrait-monotrait (HTMT) ratio technique was adopted to test discriminant validity [56]; Table 5 indicates that all values met the critical threshold (lower than 0.85) [100].
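The CR and AVE statistics reported above follow standard formulas from standardized outer loadings. The sketch below shows the computation with hypothetical loadings (not the values of Table 4), assuming the conventional formulas rather than any package-specific variant.

```python
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    s = loadings.sum()
    return s**2 / (s**2 + (1 - loadings**2).sum())

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean of squared standardized loadings."""
    return (loadings**2).mean()

lam = np.array([0.88, 0.91, 0.85, 0.90, 0.87, 0.89])  # hypothetical outer loadings
print(f"CR  = {composite_reliability(lam):.3f}  (threshold 0.70)")
print(f"AVE = {ave(lam):.3f}  (threshold 0.50)")
```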
Structural Model
After establishing the reliability and validity of the instruments, the path relationships were tested with 1000 bootstrap samples [97]. The results of the structural model are presented in Tables 6 and 7 and Appendix B.
Hypothesis Testing
The results indicated that all the hypotheses were supported except for the direct relationship between enjoyment and problematic TikTok use. Enjoyment was a significant predictor of concentration (β = 0.714, t = 32.676, p < 0.01), which supported H1. Meanwhile, concentration significantly predicted time distortion (β = 0.693, t = 22.050, p < 0.01), which supported H2. Therefore, based on our findings, the relationship between the components of flow (enjoyment, concentration, and time distortion) was sequential: enjoyment was the antecedent of concentration, and time distortion was predicted by concentration.
These positive relationships suggest that enjoyment leads to concentration, and concentration leads to time distortion. Meanwhile, both concentration and time distortion significantly predicted problematic TikTok use (β = 0.305, t = 5.474, p < 0.01; and β = 0.371, t = 7.239, p < 0.01, respectively); thus H4 and H5 were supported. However, enjoyment did not positively predict problematic TikTok use (β = −0.101, t = 2.063): the small negative coefficient ran contrary to the hypothesized positive effect, so H3 was rejected. Thus, we found that feelings of concentration and time distortion were the main antecedents positively associated with problematic TikTok use, whereas enjoyment showed no positive effect on problematic TikTok use.
This study examined the role of active parental mediation as a moderator of the relationship between flow experience (i.e., enjoyment, concentration, and time distortion) and problematic TikTok use. We found that active parental mediation significantly interacted with concentration to influence adolescents' problematic TikTok use (H7: β = −0.145, t = 2.363, p < 0.01). The moderation results showed that a higher level of problematic TikTok use was associated with higher concentration experiences, and the level of problematic use was likely to be severe when adolescents lacked active parental mediation. However, no moderating effect of active parental mediation was found on the relationships of enjoyment and time distortion with problematic TikTok use (H6: β = 0.110, t = 2.013, p > 0.01; H8: β = 0.068, t = 1.612, p > 0.01); thus, H6 and H8 were rejected. As a result, we identified a significant moderating role of active parental mediation only for concentration.
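A moderation test of this kind amounts to adding a product term of the standardized predictor and moderator and inspecting its coefficient and the resulting simple slopes. The sketch below illustrates this with OLS on simulated data; the data, the OLS estimator (the paper used PLS-SEM with bootstrapping), and the seeded effect sizes (borrowed from the reported β values only to make the simulation concrete) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 633
conc = rng.normal(size=n)   # standardized concentration
apm = rng.normal(size=n)    # standardized active parental mediation
# Simulate H7: the concentration effect weakens as mediation grows (beta_int < 0).
ptu = 0.305 * conc - 0.145 * conc * apm + rng.normal(scale=0.9, size=n)

# Regress on intercept, main effects, and the interaction term.
X = np.column_stack([np.ones(n), conc, apm, conc * apm])
beta, *_ = np.linalg.lstsq(X, ptu, rcond=None)
b0, b_conc, b_apm, b_int = beta

# Simple slopes of concentration at low/high mediation (+/- 1 SD).
for level in (-1.0, 1.0):
    print(f"slope of concentration at APM = {level:+.0f} SD: {b_conc + b_int * level:.3f}")
```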
Coefficient of Determination (R²) and Predictive Relevance (Q²)
The overall quality of the model was evaluated by the coefficient of determination (R²) and predictive relevance (Q²) [97]. As shown in Table 8, our model had satisfactory explanatory power. The Q² of problematic TikTok use was significantly different from zero (Q² = 0.194). Overall, approximately 31.9% of the variance in problematic TikTok use was explained by the structural model. Apart from R² and Q², we also computed the goodness of fit (GoF) to show the extent to which the sample data represent the data expected from the actual population. A GoF value greater than 0.36, 0.25, or 0.10 is regarded as large, medium, or small, respectively [101]. Table 9 presents the summary of the AVE and R² values. This study obtained a GoF value of 0.574, which is above the 0.36 cut-off for a large effect size. Therefore, the research model performs well against baseline values.
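The GoF index used here is conventionally computed as the geometric mean of the average AVE and the average R²; the formula below is the standard Tenenhaus-style definition we assume the paper's citation [101] refers to, with the reported value shown against the benchmark.

```latex
% Global goodness of fit, as assumed above:
\[
  \mathrm{GoF} \;=\; \sqrt{\,\overline{\mathrm{AVE}} \times \overline{R^{2}}\,}
\]
% With the averages summarised in Table 9 this evaluates to 0.574,
% above the 0.36 benchmark for a large effect size.
```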
Discussion
In this study, we explored the flow experience dimensions (i.e., enjoyment, concentration, and time distortion) as predictors of Chinese adolescents' problematic TikTok use and the moderating role of active parental mediation in this association. A structural equation modeling approach with moderation analysis was conducted to test the eight hypotheses drawn from empirical studies.
First, we provided evidence that the flow experience (concentration and time distortion) was positively associated with adolescents' problematic TikTok use. These findings were consistent with previous research on problematic use [12]. This suggests that TikTok can be a highly engaging online world for adolescents, which might provide them with a sense of immersion and distract them from offline activities.
In addition, we found that concentration was a strong predictor of problematic TikTok use. This could be because adolescents' attention spans do not last very long; accordingly, the playing time of each video is kept relatively short (only a few minutes at most), which is meant to sustain users' concentration on TikTok. Moreover, the algorithmic recommendation system behind the platform is constantly calculating and rehearsing, so videos that users are interested in are continuously presented on the screen. Further, for adolescents, TikTok's easy-to-use format and interesting videos readily attract interest. Although each video is short, the accumulation of many short videos makes the overall usage time very long. When adolescents are exposed to short videos for a long time, they eventually develop problematic TikTok use.
In addition, we found that time distortion significantly affected problematic TikTok use. When adolescents are deeply immersed in using TikTok, they may lose their ability to perceive time: they are not aware of how long they have been immersed in TikTok, forget about their surroundings, and gradually develop problematic TikTok use. Moreover, adolescents may lack self-control, which can leave them hooked on TikTok [49]. When too much time is spent on TikTok, they tend to concentrate heavily on the content and lose track of time, which makes them more prone to developing problematic TikTok use.
In contrast, we found no significant positive effect of enjoyment on problematic TikTok use. One possible explanation lies in the formation process of flow. Immersed individuals tend to experience cognitive absorption as they concentrate [64] on the content, which distorts their perception of time. Previous studies found that experiencing flow online was particularly important to users' subsequent behaviors, such as problematic social media use [12] and gaming disorder [63]. These studies considered flow as a holistic construct but did not account for the cumulative nature of the flow experience. This study asserts that the flow experience accumulates from hedonic feeling to cognitive absorption, or from concentration to time distortion. Based on our findings, we argue that experiencing enjoyment is the first stage of flow, which by itself cannot yet predict problematic TikTok use; it is the subsequent stages of flow, concentration and time distortion, that lead to problematic TikTok use.
In addition, our findings were consistent with past research on the positive influence that active parental mediation has on children's behavior [21,82,102]. Out of the three moderated relationships tested in this study, we found that active parental mediation negatively moderated the relationship between concentration and problematic TikTok use. As a result, the effect of concentration on problematic TikTok use was reduced when active parental mediation increased. For adolescents who overuse TikTok, active parental mediation can help to reduce their concentration and, hence, alleviate their problematic use of TikTok. Thus, we consider that discussion between adolescents and parents about TikTok short videos could reduce the probability of problematic use. This study asserts that adolescents' concentration can be reduced by increasing parental mediation, which in turn will alleviate problematic TikTok use. Active parental mediation, however, had no interaction effect with enjoyment or time distortion in reducing problematic TikTok use. In other words, the parental role was ineffective in mitigating the feelings of enjoyment and time distortion that adolescents experienced when they used TikTok.
Implications
This study has several theoretical and practical implications. First, this study is an initial exploration of the antecedents of adolescents' problematic TikTok use. Several researchers have conducted studies to explore risk factors for problematic TikTok use, such as stress [12], the features of short videos [30], socio-technical factors, and attachment [2]. However, very few studies have explored the influence of users' experiences on problematic use of TikTok. Thus, this study enhances our understanding of problematic use behavior by focusing on ongoing experiences during TikTok use and identifies flow experience as a critical factor influencing problematic use behavior. We used flow theory to guide our understanding and found that concentration and time distortion were significant predictors of problematic TikTok use. Our findings reveal the potential impact of flow on problematic TikTok use.
Secondly, this study extends its theoretical contribution by combining the application of flow theory with parental mediation theory. Many studies have applied parental mediation in different contexts, such as mobile phones [82], Internet use [91], and digital media [92], and have mostly treated it as a direct factor in specific behaviors, such as mobile phone dependency [82], problematic online game use [80], and excessive Internet use [103], while ignoring its moderating effect. We treated active parental mediation as a moderator to uncover the extent to which it can reduce problematic TikTok use. This study found that active parental mediation and concentration had a significant interaction effect that can help reduce problematic TikTok use. No interaction effect between active parental mediation and the other dimensions of flow (i.e., enjoyment and time distortion) was found. Hence, this study asserts the importance of parents' involvement in addressing adolescents' problematic TikTok use. Our study found support for the application of the parental mediation theory.
Thirdly, our findings have important practical implications for the role of parental mediation as an intervention strategy to help address problematic TikTok use among adolescents. One way is to divert adolescents' attention from TikTok to reduce their concentration on the platform, which in turn could reduce problematic TikTok use.
In doing so, parents should create an effective discussion space with their teens to discuss TikTok and its addictive features. This is because adolescents may not be aware of the advanced algorithm systems embedded in TikTok that can make them unconsciously and continuously addicted to it. In the process of communicating with their parents, adolescents could recognize the shallow entertainment value of the content and comprehend what lies behind the screen, such as the purpose, profit, and process of video production. Parents play an important role in educating their teens about the consequences of problematic social media use, especially on TikTok. Adolescents should be regularly reminded that TikTok is among the most addictive media compared with other social media platforms.
In our study, we find support for these assertions and confirm that active parental mediation is effective in reducing adolescents' problematic TikTok use in the Chinese context as well. Education is crucial. To educate adolescents, parents should be knowledgeable about the advantages and disadvantages of TikTok and competent in guiding their children's behavior and interactions in digital spaces.
Limitations and Future Research
Although the contributions of the study are evident, it still has limitations for future research to address. First, as the problem of excessive TikTok use worsens, it is valuable to further explore its antecedents. This study mainly used flow theory to explore how enjoyment, concentration, and time distortion affect adolescents' problematic TikTok use, leaving space for examining other factors that might influence such behavior. Future studies can examine other variables (e.g., attachment, childhood traumatic experiences, psychopathology) to provide a more comprehensive understanding of this phenomenon. Further, this study only considered active parental mediation in alleviating problematic use; other mediation methods, such as co-viewing and parental control, were not examined. It is important for future research to test different mediation methods to identify their effectiveness in reducing similar problems. In addition, this is a cross-sectional study, which makes it impossible to test causal relationships. It is also important to note that the use of purposive sampling, a non-probability technique, may have introduced bias in the selection of respondents and hence limits the generalizability of the findings to the overall population. Future research may address this limitation with a more representative sample.
Furthermore, this study was conducted in China. As TikTok is a worldwide application and other countries, especially in the West, seem to face the same issue of problematic TikTok use, it would be valuable for future research to conduct cross-cultural studies to understand the contextual factors that may predict problematic TikTok use. Finally, we used survey data to examine the proposed hypotheses. Although the survey is regarded as an effective way to access respondents' perceptions and behaviors, the data are usually collected in uncontrollable environments. Thus, we suggest future studies explore a between-subject experiment comparing a treatment condition (in which participants are exposed to active parental mediation) with a control group (in which active parental mediation is absent). Another possible method is focus group discussions to gain a more in-depth understanding of this alarming phenomenon of problematic TikTok use among adolescents.
Conclusions
TikTok is a great innovation. It has many unique advantages, such as concise content, an easy-to-use format, and efficient playback, which have attracted a growing number of users worldwide and led some to indulge in it to the point of problematic use. The negative effects are pertinent, especially for adolescents. This calls for research to understand the phenomenon and offer solutions to address the problem. This study responds to that call and finds that flow experiences (i.e., concentration and time distortion) lead to obsessive use of TikTok and that active parental mediation reduces the effect of concentration on problematic TikTok use. Although the relationships were tested in a Chinese context, we believe our findings could apply to other contexts as well, because problematic TikTok use among adolescents has become a universal phenomenon across the globe. Our results have theoretical and practical significance. However, there is still a long way to go in this direction of research, and we call for future work to extend our current knowledge.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the respondents to publish this paper, if applicable.

Appendix A. Measurement Items (original wording and the TikTok adaptation used in this study)

Enjoyment
Original: I think that using WeChat is enjoyable. Adapted: I think that using TikTok is enjoyable.
Original: I think that using WeChat is interesting. Adapted: I think that watching TikTok videos is interesting.
Original: I think that using WeChat is pleasurable. Adapted: I think that using TikTok is pleasurable.
Original: The actual process of using the system is pleasant. Adapted: The actual process of watching videos on TikTok is pleasant.
Original: Cyber-games provide me endless surprising experiences. Adapted: TikTok provides me endless surprising experiences.
Original: I burned with curiosity when playing cyber-games. Adapted: I feel curious when I watch videos.

Problematic TikTok Use
Adapted: My life would be joyless without using TikTok.
Original: I feel distressed or down once I cease surfing online for a certain period. Adapted: I feel distressed or down once I cease using TikTok for a certain period.
Original: I feel very vigorous upon smartphone use regardless of the fatigues experienced. Adapted: I feel very vigorous upon watching videos regardless of the fatigues experienced.
Original: I fail to control the impulse to use smartphone. Adapted: I can't help turning on TikTok even when I'm not planning on using it.
Original: I find that I have been hooking on smartphone longer and longer. Adapted: I find that I have been hooking on TikTok longer and longer.
Original: I surf online for a longer period and spend more time than I had intended. Adapted: I watch TikTok videos for a longer period and spend more time than I had intended.
Original: I need to spend an increasing amount of time online to achieve same satisfaction as before. Adapted: I need to spend an increasing amount of time on TikTok to achieve same satisfaction as before.
Original: I try to spend less time on smartphone, but the efforts were in vain. Adapted: I try to spend less time on TikTok, but the efforts were in vain.
Original: I make it a habit to use smartphone and the sleep quality and total sleep time decreased. Adapted: I make it a habit to watch TikTok videos and the sleep quality and total sleep time decreased.
Original: My recreational activities are reduced due to smartphone use. Adapted: My recreational activities are reduced due to watching TikTok videos.
Original: Surfing online has exercised certain negative effects on my schoolwork or job performance. Adapted: Watching TikTok videos has exercised certain negative effects on my schoolwork or job performance.
Original: I find myself indulged online at the cost of hanging out with family members and friends. Adapted: I find myself indulged on TikTok at the cost of hanging out with family members and friends.
Original: I feel aches and soreness in the back or eye discomforts due to excessive smartphone use. Adapted: I feel aches and soreness in the back or eye discomforts due to excessive TikTok use.
Original: I have slept less than four hours due to using smartphone more than once. Adapted: I have slept less than 4 h due to watching TikTok videos more than once.
Original: I use smartphone for a longer period of time than I had intended. Adapted: I watch TikTok videos for a longer period of time than I had intended.
Original: I feel tired on daytime due to late-night use of smartphone. Adapted: I feel tired in the daytime due to late-night use of TikTok.
Original: I was told more than once that I spent too much time on smartphone. Adapted: I was told more than once that I spent too much time on TikTok.
Original: I feel missing something after stopping smartphone for a certain period of time. Adapted: I feel missing something after stopping using TikTok for a certain period of time. | 2023-01-26T16:09:49.247Z | 2023-01-23T00:00:00.000 | {
"year": 2023,
"sha1": "dbccf7c437b55d8e5a95f5d86766e88ecfe56aae",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/20/3/2089/pdf?version=1674489687",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "126e84401c524aa8aa4349bac8625f34cb1aca65",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |