| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
81787258 | pes2o/s2orc | v3-fos-license | A Case Presentation of Sarcoidosis Larynx
Sarcoidosis is a multisystem disease that rarely involves the larynx; laryngeal involvement is mostly limited to the supraglottis, namely the epiglottis, aryepiglottic folds, and false vocal cords. Diagnosis is by exclusion of other granulomatous infections and diseases, with histopathological confirmation. In this case report, the patient presented with change in voice and snoring. The diagnosis was made after excluding other differential diagnoses and was confirmed by histopathology.
Introduction
Sarcoidosis is a multisystem disease of unknown aetiology, usually affecting the respiratory tract and other organs, and is characterized by the formation of nonnecrotizing epithelioid granulomas [1]. The incidence rate in the US population is 10 people per 100,000 per year [2]. In India, the reported prevalence of sarcoidosis is 10-12 cases per 100,000 new registrations annually at a respiratory unit in Kolkata and 61.2 per 100,000 new cases registered at the respiratory unit of the Vallabhbhai Patel Chest Institute (VPCI), Delhi [3].
The central part of a granuloma is composed of macrophages, modified macrophages, epithelioid cells, and giant cells, with scattered, predominantly CD4+ T lymphocytes between them [4]. Caseous necrosis is absent, while central necrosis, as a granular acidophilic focus without nuclear detritus, may be found [5]. The larynx is a rare extrapulmonary site of involvement. Common presentations of laryngeal sarcoidosis are dysphonia, dysphagia, and dry cough; rarely, it can cause total laryngeal obstruction leading to stridor [6]. We present a case report of a patient who presented to us with complaints of change of voice, snoring, and frequent arousal from sleep.
Case Description
A 25-year-old female was referred by a chest physician with complaints of change in voice, snoring, and frequent arousal from sleep at night for 1 year. The patient had no complaints of fever, generalized weakness, or palpable neck swelling. The patient's general profile was: height 161 cm, weight 39 kg, BMI 15 kg/m2. For these complaints, polysomnography and CECT chest were done by the chest physician. The AHI was 14.5, average heart rate during sleep 71 bpm, highest heart rate during sleep 173 bpm, average O2 saturation during non-REM sleep 98%, approximate minimum O2 saturation 87%, and desaturation index 2.1. CECT chest showed centrilobular nodules in the right upper lobe, mainly in the anterior and apical segments, with associated right hilar lymphadenopathy. After receiving the patient, endoscopic examination of the larynx was done, which showed unilateral thickening of the uvula on the left side; thickening of the epiglottis, giving it a turban shape; and thickening of the aryepiglottic folds and false vocal cords. Both true vocal cords were mobile, and diagnostic nasal endoscopy was unremarkable. CECT of the neck and upper mediastinum was suggestive of prominent faucial tonsils, a thick uvula and epiglottis, and thickened bilateral aryepiglottic folds and false vocal cords. On the basis of the clinical picture, differential diagnoses of tuberculosis, amyloidosis, and sarcoidosis were made. A biopsy was taken from the uvula, it being very accessible (Figures 1-3). Histopathology showed tissue lined by stratified squamous epithelium, with subepithelial tissue showing edema and moderate mixed inflammation, predominantly lymphocytes, plasma cells, and polymorphs. There were a few discrete granulomas devoid of caseous necrosis and composed of epithelioid-like histiocytes, with a few granulomas within the germinal centres of lymphoid follicles. Serum calcium was 11.2 mg/dl, serum ACE 68 µg/L, and vitamin D 140 ng/ml. Tuberculosis, amyloidosis, and fungal infections were ruled out. A diagnosis of laryngeal sarcoidosis was made on the basis of the clinical, radiological, and histopathological findings. The patient was started on oral and inhalational steroids and showed subjective and hematological improvement within 2 weeks of starting treatment, in the form of improved voice and sleeping habits, although there was no significant improvement in the endoscopic picture. The improvement may also be partly due to the partial resection of the uvula. The patient is in regular follow-up (Figures 4 & 5).
Discussion
The joint statement of the American Thoracic Society (ATS)/European Respiratory Society (ERS)/World Association for Sarcoidosis and Other Granulomatous Diseases (WASOG) defines sarcoidosis as a multisystem disorder of unknown cause(s) [7]. Laryngeal sarcoidosis is an underdiagnosed disease, whether due to the lack of specific diagnostic tests, making the diagnosis one of exclusion, or due to the diversity of criteria that must be analysed before the diagnostic suspicion can be confirmed [8]. Laryngeal disease presents with hoarseness, dyspnoea, dysphagia, chronic cough, obstructive sleep apnoea, and airway obstruction, which can progress to upper airway obstruction requiring emergency cricothyrotomy.
In laryngeal disease, the supraglottic region, mostly the epiglottis, is typically involved, followed by the arytenoids, aryepiglottic folds, and false vocal folds [9]. This predominant involvement of the supraglottis probably occurs due to the large number of lymphatic vessels in this area, which are stretched as the normal architecture is replaced by sarcoid deposits or subcutaneous foci of the disease [10]. In our case, the patient presented with change in voice, snoring, and frequent arousal from sleep. Diagnostic nasal endoscopy was normal in our patient, but she had an oedematous epiglottis, aryepiglottic folds, arytenoids, and false vocal cords. Our patient also had oedema of the uvula, in contrast to other reports. For a sarcoidosis diagnosis, three criteria must be met: compatible clinical-radiological findings, a histological sample showing non-necrotizing granulomas, and exclusion of other granulomatoses or diseases with similar findings [11].
In our patient, the clinical-radiological findings raised the suspicion of tuberculosis, amyloidosis, fungal infection, or sarcoidosis. Histopathological examination of the biopsy from the uvula was suggestive of granulomatous disease; ZN and PAS staining were negative. Tuberculosis, amyloidosis, and fungal infection were ruled out. The granulomas were noncaseating, composed of epithelioid-like histiocytes, with occasional multinucleate foreign-body and Langhans giant cells. Treatment options include systemic steroids, intralesional steroids for well-localised lesions, laser resection, surgical excision, low-dose radiation, and tracheostomy. Recent literature has suggested improvement with the use of inhaled steroids and clofazimine [12]. We started oral steroids for the patient. The patient has improved subjectively and is in regular follow-up.
Conclusion
Sarcoidosis is a multisystem disorder that rarely involves the larynx as the primary organ. It should be kept in mind as a differential diagnosis because, once diagnosed, the mainstay of management is steroids. | 2019-03-18T13:58:22.813Z | 2017-12-06T00:00:00.000 | {
"year": 2017,
"sha1": "7bc11fd24e56c8984224c2f312475e99065badf7",
"oa_license": "CCBY",
"oa_url": "https://juniperpublishers.com/gjo/pdf/GJO.MS.ID.555829.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "9ce2e743e1e6c31541f07dd398ef3965ffd65051",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269337860 | pes2o/s2orc | v3-fos-license | Analysis on the influence of digital economy on manufacturing transfer
Based on the panel data of 30 provinces in China from 2011 to 2020, the entropy method is used to measure the digital economy, and a fixed-effects model is used to explore the impact of the digital economy on manufacturing transfer. The results show that: (1) the digital economy has an inverted U-shaped impact on manufacturing transfer, that is, the digital economy promotes the transfer of manufacturing within a certain range, but beyond a certain point it inhibits the transfer of manufacturing; (2) an increase in the producer price index and a decline in per capita road area significantly promote the transfer of manufacturing; (3) the digital economy levels of Guangdong, Jiangsu, and Zhejiang provinces are among the highest.
Introduction

Background
In the report of the 20th National Congress, it was proposed to accelerate the development of the digital economy in China. This proposal emphasized the need to integrate the digital economy with the real economy, establish internationally competitive digital industry clusters, and expedite the construction of a digital China. As a result, top-level design documents such as the "Outline of the Digital Economy Development Strategy" and the "14th Five-Year Plan" have been issued to guide the development of the digital economy. These initiatives aim to actively promote the digitization of industries and facilitate the process of digital industrialization. It is evident that the digital economy plays a crucial role in facilitating the high-quality development of China's economy.
Additionally, China attaches great importance to industrial transfer. The optimization of industrial transfer is crucial for fostering coordinated regional development and expediting the establishment of a new development paradigm. In line with the objectives outlined in the 14th Five-Year Plan, China aims to streamline the layout of regional industrial chains, encourage the retention of pivotal segments within domestic borders, and bolster the capabilities of all regions to accommodate industrial transfers [1].
Definition of the digital economy
In 2016, the G20 adopted a comprehensive definition of the digital economy, referring to it as a set of economic activities that rely on digital knowledge and information as fundamental components of production [2]. The digital economy represents a new and distinct form of economic activity that emphasizes the critical role of digital knowledge and information as indispensable factors of production [3]. According to ChinaInfo100, the digital economy encompasses the collective information activities of an entire society: it is an economy with digital information as the primary resource, relying on information networks and closely integrated with other fields through information and communication technology, forming five types of digital economy: basic, convergence, efficiency, new-generation, and welfare [4].
Definition of industrial transfer
Currently, there is no universally agreed-upon definition of industrial transfer in academia. Huang Tao points out that the reason lies in four aspects: industrial transfer is both a macro and a micro concept; it can be either a positive gradient transfer or a reverse gradient transfer; it can be caused by both endogenous and exogenous factors; and it is both a change in industrial scale and a change in competitive advantage [5].
Studies related to the influence effect of the digital economy
According to Chi Mingyuan and Shi Yannan, the digital economy incorporates digital elements into production, operation, and circulation, which can not only optimize traditional industries but also drive industrial structure upgrading [6]. Jiang Song and Sun Yuxin found that the relationship between the digital economy and the real economy follows an inverted U-shape and that the impact on the eastern region exhibits a "crowding out" effect [7]. Shen Yunhong and Huang Zhao divided the digital economy into three dimensions, digital infrastructure construction, development, and innovation and research, and found through empirical research that all three factors can optimize the manufacturing industry structure [8]. Ma Zhongdong and Ning Chaoshan found that the digital economy plays a vital role in facilitating the enhancement of manufacturing quality by alleviating distortions in the allocation of labour and capital factors [9]. Other work has examined the non-linear impact of the digital economy on manufacturing competitiveness and the associated mediating effects [10].
The existing research literature primarily focuses on the impact of the digital economy on industrial transformation and upgrading, as well as on the optimization of industrial structure within the manufacturing sector. However, few studies specifically analyze the impact of the digital economy on manufacturing transfer. Therefore, this study uses the entropy method to calculate a digital economy index from the panel data of 30 provinces in China spanning 2011 to 2020 and adopts a fixed-effects model to explore the influence of the digital economy on manufacturing transfer.
Entropy method
In this study, the entropy method was used to assess the level of digital economy development in each province, following the approach of Huang Qinghua et al. [11]. The data were processed according to the entropy method, and the weights of each indicator were used to compute the comprehensive digital economy index of each province:

$$GEDI_{it} = \sum_{j} w_{j} x'_{itj}$$

where $GEDI_{it}$ is the overall digital economy index of province $i$ in year $t$, $w_{j}$ is the weight of indicator $j$, and $x'_{itj}$ is the standardized value of the indicator. The larger the index, the higher the level of digital economy development, and vice versa.
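To illustrate the computation, a compact Python sketch of entropy weighting is given below; the indicator matrix is synthetic, and the min-max normalization shown is one common choice that the paper does not spell out.

```python
import numpy as np

def minmax(X):
    """Column-wise min-max normalization to [0, 1] (assumed benefit-type indicators)."""
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)

def entropy_weights(X):
    """Entropy weights for an (n_samples, n_indicators) data matrix."""
    Z = minmax(X)
    P = Z / (Z.sum(axis=0) + 1e-12)          # each sample's share per indicator
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(X.shape[0])  # information entropy per indicator
    d = 1.0 - e                                  # divergence: more spread, more weight
    return d / d.sum()                           # weights sum to 1

# Toy example: 30 provinces x 10 years = 300 observations, 5 indicators.
X = np.random.rand(300, 5)
w = entropy_weights(X)
gedi = minmax(X) @ w   # composite index: GEDI_it = sum_j w_j * x'_itj
print(w, gedi[:3])
```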
Fixed-effects model
To empirically assess whether the digital economy affects manufacturing transfer, the following panel data model is constructed:

$$MIT_{it} = \alpha + \beta\,GEDI_{it} + \gamma X_{it} + \mu_{i} + \lambda_{t} + \varepsilon_{it}$$

where $MIT_{it}$ and $GEDI_{it}$ denote the level of manufacturing transfer and of digital economy development in province $i$ in year $t$, respectively; $X_{it}$ represents the series of control variables; $\beta$ and $\gamma$ are the estimated coefficients of the variables; $\mu_{i}$ and $\lambda_{t}$ denote the area and time fixed effects, respectively; and $\varepsilon_{it}$ is the random error term.
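As an illustration of this specification, a minimal two-way within-estimator sketch on synthetic data follows; the quadratic GEDI term is our addition, reflecting the inverted U-shape tested later, and the plain within transform stands in for whatever software the authors actually used.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic balanced panel: 30 provinces x 10 years (2011-2020).
panel = pd.DataFrame({
    "province": np.repeat(np.arange(30), 10),
    "year": np.tile(np.arange(2011, 2021), 30),
})
n = len(panel)
panel["gedi"] = rng.uniform(0.0, 1.0, n)
panel["ppi"] = rng.normal(100.0, 5.0, n)
panel["mit"] = (0.08 * panel["gedi"] - 0.05 * panel["gedi"] ** 2
                + 0.0002 * panel["ppi"] + rng.normal(0.0, 0.01, n))
panel["gedi_sq"] = panel["gedi"] ** 2

def two_way_demean(df, cols):
    """Within transform for a balanced panel: x_it - mean_i - mean_t + overall mean."""
    x = df[cols]
    return (x
            - df.groupby("province")[cols].transform("mean")
            - df.groupby("year")[cols].transform("mean")
            + x.mean())

dm = two_way_demean(panel, ["mit", "gedi", "gedi_sq", "ppi"])
beta, *_ = np.linalg.lstsq(dm[["gedi", "gedi_sq", "ppi"]].to_numpy(),
                           dm["mit"].to_numpy(), rcond=None)
print(dict(zip(["gedi", "gedi_sq", "ppi"], beta.round(4))))
```

A positive coefficient on gedi with a negative coefficient on gedi_sq would reproduce the inverted U-shaped pattern the paper reports.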
Description of variables
(1) Explained variable. Manufacturing industry transfer (MIT) refers to the transformation of the industry and the optimization of industrial structure. It is measured as the ratio of a province's manufacturing value added (MVA) to national MVA, reflecting the proportion of economic value produced by the province's manufacturing industries:

$$MIT_{ij} = \frac{MVA_{ij}}{\sum_{i} MVA_{ij}}$$

where $i$ denotes the province and $j$ the year.
(2) Explanatory variable. The digital economy index (GEDI) is constructed by the entropy method, adopting objective entropy weighting over a three-level indicator system to enhance the reliability of the results. Level-2 indicators include digital infrastructure, digital innovation capability, digital coverage, digital industry development, and government support. The detailed information is shown below in Table 1.

(3) Control variables. Producer price index (PPI): an index that reflects changes in the prices of inputs and final products in the industrial production sector and can measure the stability of national economic activities.
Salary level (WL). It is measured by the average wage of employees in urban units, which is the main indicator reflecting employees' wage levels.
Road area (RA). This is an important economic indicator that reflects urban road provision in the built-up area; roads promote the development of logistics, tourism, commerce, and other sectors, thus improving the economic benefits of the city. This indicator measures the extent of urban development.
Labor level (LL). It refers to the average number of employees, monthly or annually, during the reporting period. It is used to measure the scale and development of an industry or an enterprise and reflects people's social and economic characteristics and living conditions.
Data sources
This paper selects 30 provinces of China from 2011 to 2020 as samples, establishes a panel database, and uses the entropy method to quantitatively measure the digital economy index. The original data are from the China City Statistics Yearbook, China Science and Technology Statistics Yearbook, China Electronic Information Industry Statistics Yearbook, and China Labor Conditions Yearbook, among others.
Basic regression
Using the entropy method, we first calculated the development index of the 30 provinces under consideration. We find that Guangdong's digital economy ranks highest, followed by Jiangsu and Zhejiang. Second, there is a positive correlation between manufacturing transfer and the extent of digital economy development. As the core driving force, digital technology, with digital information as the main economic value-creating factor, is the most important force stimulating the development of enterprises; by enabling various industries and promoting their upgrading and transformation, it achieves the effect of manufacturing transfer [15].
The regression results are shown below in Table 2. The results in column (1) of Table 2 indicate that for every unit added to the digital economy development index, the manufacturing transfer level in the region increases by 0.0301 units, significant at the 1% level. Column (2) examines the non-linear impact of the digital economy on manufacturing transfer; the impact on manufacturing transfer is significant at the 5% level. Column (3) of Table 2 examines in detail the impact of digital economy development on manufacturing transfer with the four control variables included. The results show that the development level of the digital economy is positively correlated with the transfer of the manufacturing industry, with an impact coefficient of 0.0753, significant at the 10% level. Note: *, **, and *** denote significance at the 10%, 5%, and 1% levels, respectively; the standard errors of the coefficients are shown in brackets.
Robustness test
To check the robustness of the results, we shortened the sample period to 2013-2020 and re-ran the regression. The results are shown in column (4) of Table 2. The signs of the regression coefficients and their significance did not change markedly, indicating robust results.
Conclusion
Based on the above regression results, we found the following. 1. When the level of the digital economy is low, it promotes the transfer of manufacturing, for example by promoting the rational allocation of labor and capital; however, when the digital economy becomes larger, it inhibits the transfer of manufacturing. That is, the effect of the digital economy on manufacturing transfer is inverted U-shaped. 2. The coefficients of labor level and the average salary of employed persons were not significant. A rise in the producer price index significantly promotes the transfer of manufacturing: an increase in the producer price index raises the costs of industrial enterprises, so enterprises accelerate technological transformation and innovation to avoid rising costs. Urban roads are a form of traditional infrastructure; a decline in per capita road area means that government investment in this kind of infrastructure has been reduced, that is, the government supports manufacturing transfer. 3. The top three provinces in the digital economy score in 2020 are Guangdong, Jiangsu, and Zhejiang.
According to the analysis results, the following suggestions are put forward. 1. Strengthen the construction of digital economy infrastructure and promote the building of digital infrastructure. 2. Promote the digital transformation of enterprises. Enterprises should strengthen their own digital construction and improve their competitiveness and productivity through digital technology; the government can encourage enterprises to accelerate digital transformation through subsidies and other policies and encourage them to carry out technological innovation. 3. Accelerate the development of digital talent. Talent is the main resource of competition, and the development of the digital economy requires many talented people; enterprises should be supported in cooperating with research institutes, universities, and other bodies to build talent training bases and systematically carry out teaching and practice of the digital economy.
Although some conclusions have been drawn in this article, there are also shortcomings. This paper establishes the effect of the digital economy on manufacturing transfer, but the mechanism through which the digital economy affects manufacturing transfer is not identified, and the specific channels remain unclear. At the same time, some of the selected control variables are not significant and cannot be explained. It is hoped that future research can address these problems.
Table 2. Regression Results of the Digital Economy on Manufacturing Transfer | 2024-04-25T15:04:18.280Z | 2023-10-15T00:00:00.000 | {
"year": 2023,
"sha1": "661f5f3d2b7132d14e5b7150e171d54fb729073a",
"oa_license": "CCBYNC",
"oa_url": "https://drpress.org/ojs/index.php/HBEM/article/download/12790/12444",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "2b8f169b981da607a0ad8886221c5c5561122ed7",
"s2fieldsofstudy": [
"Economics",
"Computer Science"
],
"extfieldsofstudy": []
} |
198368240 | pes2o/s2orc | v3-fos-license | Hexane-1,2,5,6-tetrol as a Versatile and Biobased Building Block for the Synthesis of Sustainable (Chiral) Crystalline Mesoporous Polyboronates
We report on the synthesis and characterization of novel mesoporous chiral polyboronates obtained by condensation of (R,S)/(S,S)-hexane-1,2,5,6-tetrol (HT) with simple aromatic diboronic acids (e.g., 1,3-benzenediboronic acid) (BDB). HT is a cellulose-derived building block comprising two 1,2-diol structures linked by a flexible ethane bridge. It typically consists of two diastereomers one of which [(S,R)-HT] can be made chirally pure. Boronic acids are abundantly available due to their importance in Suzuki–Miyaura coupling reactions. They are generally considered nontoxic and easy to synthesize. Reactive dissolution of generally sparingly soluble HT with BDB, in only a small amount of solvent, yields the mesoporous HT/polyboronate materials by spontaneous precipitation from the reaction mixture. The 3D nature of HT/polyboronate materials results from the entanglement of individual 1D polymeric chains. The obtained BET surface areas (SAs) and pore volumes (PVs) depend strongly on HT’s diastereomeric excess and the meta/para orientation of the boronic acids on the phenyl ring. This suggests a strong influence of the curvature(s) of the 1D polymeric chains on the final materials’ properties. Maximum SA and PV values are respectively 90 m2 g–1 and 0.44 mL g–1. Variably sized mesopores, spanning mainly the 5–50 nm range, are evidenced. The obtained pore volumes rival the ones of some covalent organic frameworks (COFs), yet they are obtained in a less expensive and more benign fashion. Moreover, currently no COFs have been reported with pore diameters in excess of 5 nm. In addition, chiral boron-based COFs have presently not been reported. Scanning electron microscopy reveals the presence of micrometer-sized particles, consisting of aggregates of plates, forming channels and cell-like structures. X-ray diffraction shows the crystalline nature of the material, which depends on the nature of the aromatic diboronic acids and, in the specific case of 1,4-benzenediboronic acid, also on the applied diastereomeric excess in HT.
■ INTRODUCTION
Mesoporous materials have attracted significant attention as their wide pores (2−50 nm) allow for improved diffusion and accessibility, favoring applications as diverse as controlled drug release, chromatography, adsorbents, electrodes, solar cells, and heterogeneous catalysis. 1,2 Exemplary are (non)siliceous mesoporous oxides, periodic mesoporous organic silicas (PMOs), mesoporous carbons, hyperconjugated porous polymers, and some metal−organic frameworks. 3−7 Often the introduction of mesopores in a material is linked to the use of (supra)molecular templates such as surfactants, but other techniques are also known, such as dealumination/desilication, nanoassemblies, and local associations of helices, among others. 1,8 Boronate ester polymers (BEPs) and their tendency to form hierarchical supramolecular structures have been widely reported in the literature. 9 However, few micro/mesoporous materials have been reported based on 1D polymeric boronate esters with backbones not involving heteroatoms (e.g., nitrogen, sulfur). Exemplary are the flowerlike micro/mesoporous microparticles made by the condensation of 1,4-benzenediboronic acid with pentaerythritol (PE). These materials have a specific surface area of 184 m2 g−1 (BET), yet no data on pore volume and pore size have been published. 10,11 Notably, the functionalization of these materials with Au or Pd nanoparticles allowed for (chemoselective) catalytic reduction reactions. 10,11 Mixing a diboronic acid with a chiral indacene-type bis(1,2-diol) gave trimeric microporous cages with surface areas (SAs) of 491 and 582 m2 g−1. 12 Hierarchical structures using dative B−N bonds have also been reported. 9,13 A relatively recent addition are the covalent 2D/3D organic frameworks (COFs). These are micro/mesoporous materials in which rigid building blocks, often with a specific geometry and/or symmetry, are interconnected through covalent bonds, forming 2D/3D rigid crystalline porous structures. 14 Most typically, boron-based COFs employ catechol building blocks. To date, no COF has been reported with a pore diameter larger than 5 nm. In addition, only a few chiral COFs are known, typically using chiral pyrrolidine and tartaric acid derivatives as the linking or pendant groups. 15 Presently, no chiral boron-based COFs have been reported.
Very recently a novel COF class was introduced in which helical polymers are interwoven at regular intervals by means of copper complexes. 16 Demetalation of this structure (named COF-505) was shown to retain its morphology, at the expense of a certain decrease in crystallinity. 16 Most importantly, owing to the large flexibility of the 1D threads in demetalated COF-505, a 10-fold increase of its elasticity was observed. 17 Besides weaving, entanglement in porous frameworks can also be achieved by the interpenetration of 2D/3D frameworks or the interlocking of rings. 17 In spite of the high surface areas and pore volumes, and the related plethora of potential and promising applications, the commercial use of mesoporous materials is still quite a challenge due to their chemical instability, expensive synthesis approaches, and nonstraightforward processability. 18 The cost of the extended linkers, which is especially relevant for COFs and MOFs, is another major concern; for instance, only two commercialized MOFs employ linkers with more than one phenyl ring. 19 Recently, we have reported on the sustainable synthesis of hexane-1,2,5,6-tetrol (HT), in which two 1,2-diol moieties are joined by a flexible ethane bridge. 20 HT is derived from cellulose in 3−4 steps, many of which involve heterogeneous catalysts. In successive order these comprise (Figure 1): (1) conventional/microwave pyrolysis of (waste) cellulosic polysaccharides; 21−23 (2) hydrogenation of levoglucosenone to levoglucosanol via dihydrolevoglucosenone with heterogeneous Pd catalysts (one or two steps); 24,25 it is noteworthy that a joint venture between the Australian Circa company and the Norwegian multinational Norske Skog is presently running a fully operational 50 tonnes/year prototype plant in Tasmania; 26 (3) combined one-pot hydrogenation/hydrolysis of levoglucosanol (one step). 20 HT is typically obtained as a mixture of two diastereomers, notably (S,R)-HT (mesomer) and (S,S)-HT. Boronic acids are abundantly available due to their importance in Suzuki−Miyaura coupling reactions. 27,28 They are generally considered nontoxic and are easy to synthesize. 27,29 Here we report on the synthesis of a novel porous organic polymer obtained by reactive dissolution of sparingly soluble HT with simple benzenediboronic acids (BDB) and concomitant precipitation of the mesoporous HT polyboronate. The reaction is conducted at RT and requires no special reaction conditions such as vacuum, inert atmosphere, or controlled water removal. It is shown that the properties of the resulting mesoporous polymers are largely determined by the chirality of HT and the meta/para orientation of the boronic acids on the phenyl ring. HT's chirality likely induces coiling of the polymer chain, allowing for interweaving of the polymeric chains. These materials are crystalline and, in spite of their lower surface areas (30−90 m2 g−1), display mesopore volumes up to 0.441 mL g−1. Scheme 1A visualizes the potential polymeric chains, indicating their stereocenters and points of potential molecular rotation. Hereafter these materials are denoted as "HT/polyboronates" or, more specifically, using the HT/1,x-BDB/solvent formalism. Table 1 lists the obtained specific surface areas (SAs) and pore volumes (PVs) as a function of the type of diboronic acid used, the applied solvent, and HT's diastereomeric excess.
It is found that the meta orientation of the diboronic acids on the phenyl ring (1,3-BDB) gives higher SA and PV values compared to the para orientation of the diboronic acids (1,4-BDB) (Table 1, entries a/b). This suggests a distinct influence of the curvature of the polymeric chain. Interestingly, the use of 4,4′-biphenyldiboronic acid (4,4′-BPDB) led to quasi-similar results as obtained with 1,3-BDB (Table 1, entry c). This can be rationalized by the nonplanarity of biphenyl units and the consequent absence of π−π stacking of the aromatic rings. The use of acetone or CH2Cl2 as the reaction solvent gives higher SAs and PVs than when THF or AcCN is used (Table 1, entries d−g). Additionally, the SA and PV values benefit from higher reaction volumes (Table 1, entries d, h, and i).
Partial enrichment of (S,R)-HT over (S,S)-HT was described in a previous publication. 20 Specific crystallization of the (S,R)-HT diastereomer was achieved from hot 1,4-dioxane, allowing for the evaluation of the effect of HT's chirality on the properties of the HT/polyboronates. Unfortunately, it has presently not been found possible to crystallize/purify (S,S)-HT. It is noteworthy that (S,R)-HT is a mesomer, meaning that it displays no net chirality on its own but does so effectively in a polymeric setting. Given that (S,R)-HT can build into the polymeric chain as (S,R) or (R,S), substantial variability in the curvature of the polymeric chain is likely present (Scheme 1B/C). Actual and distinct influences of the diastereomeric (S,R)/(S,S) ratio in HT on the properties of the materials are observable: (A) using 38% d.e. (S,R)-HT, higher SA and PV are obtained than when 38% d.e. (S,S)-HT is used (Table 1, entries g vs j). (B) Elevating the d.e. in (S,R)-HT from 38 to 98% is found to increase both the SA and PV by around 30% for HT/1,3-BDB/THF (Table 1, entries k vs d). In the case of HT/1,3-BDB/acetone, only an SA increase of 30% is observed, with an already high PV (Table 1, entries m vs g). (C) Using diastereomerically pure (S,R)-HT magnifies the effect of the meta/para orientation of BDB on the spread in SA and PV (Table 1, entries a, b, k, and l). It is noteworthy that no cooperative stereochemical effect occurs, as hydrolysis of the formed HT/polyboronates does not reveal preferential incorporation of one of the HT diastereomers. Irrespective of the nature of the HT/polyboronates, the obtained SA and PV values correlate to a certain degree (Figure S1_A).
In terms of yield, reactions between HT with 38% d.e. (1 > 2 or 1 < 2) and 1,4-BDB typically give between 65 and 75% yield of the solid HT/1,4-BDB polyboronate product. This is 73−85% of the theoretical maximum of 88.6%, given the loss of two water molecules during the condensation of HT and BDB. Reactions between HT with 38% d.e. (1 > 2 or 1 < 2) and 1,3-BDB typically give between 55 and 65% yield of the solid HT/1,3-BDB polyboronate product, i.e., 62−73% of the theoretical yield. Reactions using pure HT (i.e., >98% d.e.) are, at 26−30%, much lower yielding, irrespective of the HT diastereomer and the nature of the BDB. As shown in Figure 2A, the N2 physisorption isotherms display a type IV isotherm and an H3 hysteresis. The latter is indicative of aggregates of plate-like particles forming slit-like pores. 30 This is reminiscent of the pentaerythritol/1,4-BDB (PE/1,4-BDB) material reported by Fujiwara et al. and Matsushima et al., 10,11 with the difference that the N2 adsorption is fully reversible, strengthening our claim of mesoporous materials. Larger, variably sized mesopores, spanning mainly the 5−50 nm ⌀ range, are evidenced (Figure 2B). The sharp peak at 3.7 nm is an artifact relating to forced closure of the sorption hysteresis. 31 As can be inferred from Figure 2B, the pore size distribution is solvent dependent, with the use of acetone leading to a larger mean pore diameter than when THF is used.
The maximal PV reported here of 0.44 mL g−1 is comparable to the one reported for the [MeOAc]50-H2P COF (0.42 mL g−1), yet it is obtained in a less expensive and more benign fashion than the COF. 32 The marked difference in SA between the HT/polyboronates and the latter COF material (90 vs 754 m2 g−1) relates to the much larger pore diameters of the materials reported here (⌀ 5−50 nm) vis-à-vis that of the above-stated COF (⌀ 1.8 nm). Interestingly, some of the observed plate-like structures seem to have delaminated, which may have contributed positively to the observed enhanced porosity (Figure 3B3). Using acetone as the reaction solvent yields an equally highly porous structure, yet with a substantially different fine structure (on the micrometer level) than observed when THF is used (Figure 3B2/B3 vs Figure 3C2/C3). Overall, the dimensions of the irregular voids between the platelets, as channels or cell-like structures, are in the tens of nanometers and thus consistent with the N2 physisorption results and the appearance of an H3 hysteresis in the N2 physisorption experiments. In spite of the high melting temperatures of the HT/polyboronates (∼600 K), TEM imaging was found impossible due to melting of the materials under the electron beam. XRD revealed a degree of crystallinity in all HT/polyboronates, with the nature of the BDB (length, substitution pattern) markedly affecting the appearance of the different XRD patterns (Figure 4). Given the characteristic H3 hysteresis loop in the N2 physisorption experiments and the absence of order in the SEM images (Figure 3), we assume that the observed crystallinity reflects some ordering in the pore wall structure. The nature of the used solvent has no effect on the crystallinity (Figure S2). The diastereoselectivity of HT affects the crystallinity only in the case of the 1,4-BDB linker (Figure 5).
The (S,R)-HT/1,4-BDB and (S,R)-HT/1,3-BDB polyboronates were characterized by 13C CP-MAS and 11B spin-echo MAS NMR, and the results are shown in Figure 6. Analytical simulations of the 11B spectra confirmed the existence of only one type of boron species, irrespective of the nature of the BDB unit (Figure 6B/D, red curve). FT-IR spectroscopy further confirmed the presence of boronate esters, with characteristic vibrational peaks in the 1320−1300 cm−1 (B−O stretch) and 690−650 cm−1 (characteristic boronate ester peak 33) spectral ranges (Figure S3_A). 11,34 In addition, FT-IR spectroscopy revealed the clear absence of OH groups (3500−3000 cm−1), which would be characteristic of residual HT or incomplete boronate ester formation in the synthesized HT/polyboronates (Figures S3_B/C).
■ CONCLUSIONS
A range of new crystalline (chiral) mesoporous materials was obtained by the facile and benign condensation of (chiral), biobased hexane-1,2,5,6-tetrol (HT) and simple aromatic diboronic acids. The so-formed mesoporous HT/polyboronates are obtained by spontaneous precipitation from the reaction mixture, not requiring an extensive workup procedure. HT is easily accessible from cellulose, and chirally pure (R,S)-HT is straightforward to crystallize. The applied simple diboronic acids are abundantly available due to their commercial use in Suzuki−Miyaura reactions, easy to synthesize, and generally considered nontoxic. 35−37 The low cost of these materials is ultimately considered to benefit potential commercial applications. The reported materials show an appreciable pore volume of up to 0.44 mL g−1, rivalling the pore volumes of some reported COF materials yet obtained in a less expensive way. Importantly, presently no COF has been reported with pore diameters in excess of 5 nm. The (R,S)-HT (98% d.e.)/polyboronate represents a first example of a chiral mesoporous polyboronate. The high pore volume and the larger pore sizes of 5−50 nm offer great potential for the incorporation of enzymes or as supports to immobilize catalysts for the conversion of bulkier substrates. The meta/para orientation of the boronic acids on the phenyl ring was shown to influence the obtainable surface areas and pore volumes. Uniquely, a distinct influence of HT's chirality (its d.e. value) on the materials' properties was evidenced. Future work will focus on testing these new materials in a wide range of different applications, thereby hopefully demonstrating their potential as valuable materials.
Supporting Information
The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acssuschemeng.9b02772. | 2019-07-26T08:41:33.347Z | 2019-07-03T00:00:00.000 | {
"year": 2019,
"sha1": "6f8c697c502428ecec4cf6aea8e6314665c34e05",
"oa_license": "CCBYNCND",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acssuschemeng.9b02772",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "d3273ce8e814a57e3d89c1711b28b4739a7f0b6e",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
14607411 | pes2o/s2orc | v3-fos-license | TAC from Mycobacterium tuberculosis: a paradigm for stress-responsive toxin–antitoxin systems controlled by SecB-like chaperones
Bacterial type II toxin–antitoxins (TAs) are two-component systems that modulate growth in response to specific stress conditions, thus promoting adaptation and persistence. The major human pathogen Mycobacterium tuberculosis potentially encodes 75 TAs and it has been proposed that persistence induced by active toxins might be relevant for its pathogenesis. In this work, we focus on the newly discovered toxin–antitoxin–chaperone (TAC) system of M. tuberculosis, an atypical stress-responsive TA system tightly controlled by a molecular chaperone that shows similarity to the canonical SecB chaperone involved in Sec-dependent protein export in Gram-negative bacteria. We performed a large-scale genome screening to reconstruct the evolutionary history of TAC systems and found that TAC is not restricted to mycobacteria and seems to have disseminated in diverse taxonomic groups by horizontal gene transfer. Our results suggest that TAC chaperones are evolutionary related to the solitary chaperone SecB and have diverged to become specialized toward their cognate antitoxins. Electronic supplementary material The online version of this article (doi:10.1007/s12192-012-0396-5) contains supplementary material, which is available to authorized users.
Taxonomic tree and distribution of solitary SecB versus TAC/AC
The taxonomic tree of bacterial species was generated from the available complete genomes (October 2011) on the basis of the NCBI taxonomic classification (http://www.ncbi.nlm.nih.gov/taxonomy). The tree was then edited with iTol (http://itol.embl.de/index.shtml) and annotated with the solitary SecB chaperones and TAC/AC systems identified. For the sake of clarity, only one strain per species was retained, giving 901 genomes in the tree. Therefore, in some species other strains may not possess the system.
Horizontal Gene Transfer detection
We used the Alien Hunter (AH) software (Vernikos and Parkhill 2006) to identify regions that have unusual sequence composition in terms of k-mers for various values of k (called interpolated variable order). An alien region is one whose AH score is above a genome-dependent and automatically calculated threshold based on the sequence composition of the whole genome (termed background composition). AH was run on all 52 complete genomes possessing a TAC or AC system. Predictions were visualized using Artemis (Rutherford et al. 2000). In some cases, AH predictions could be reinforced when the region encoded other HGT signatures such as tRNA, integrase, transposase, phage, plasmid, or TA genes.
MCL analyses
A graph was built in which nodes correspond to proteins and weighted edges represent the BLASTP log(e-value) obtained between a pair of proteins. Graph partitioning -detection of communities of highly connected nodes -was performed using the Markov clustering program. Sequences used correspond to the proteins whose locus tags are given in table S1.
For SecB chaperones we used the seed of Pfam02556, corresponding to a subset of 114 representative sequences. Two sequences were removed: one from an archaeon, since our analysis focuses on bacteria, and one that our analysis identified as part of a TAC system. A homology relationship was inferred when an e-value less than or equal to 10−5 was observed between two chaperone or toxin sequences, and 10−3 between two antitoxin sequences.
Partitioning was performed using an inflation factor of 1.2.
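As a rough illustration of this clustering step, the sketch below (Python; the file names and the −log10(e-value) weighting are assumptions consistent with the graph description above) converts all-vs-all BLASTP tabular output into the ABC edge-list format read by the mcl program and runs it with inflation 1.2.

```python
import math
import subprocess

# Hypothetical input: all-vs-all BLASTP results in tabular format
# (-outfmt 6: qseqid sseqid pident length ... evalue bitscore).
BLAST_TSV = "all_vs_all_blastp.tsv"   # assumed file name
EDGE_FILE = "graph.abc"               # MCL "ABC" edge list: node node weight
EVALUE_CUTOFF = 1e-5                  # 1e-3 was used for antitoxin pairs

with open(BLAST_TSV) as src, open(EDGE_FILE, "w") as dst:
    for line in src:
        fields = line.rstrip("\n").split("\t")
        query, subject, evalue = fields[0], fields[1], float(fields[10])
        if query == subject or evalue > EVALUE_CUTOFF:
            continue  # keep only significant, non-self hits
        # Weight each edge by -log10(e-value); cap hits reported as e-value 0.
        weight = 200.0 if evalue == 0.0 else -math.log10(evalue)
        dst.write(f"{query}\t{subject}\t{weight:.2f}\n")

# Run Markov clustering with inflation 1.2, as stated above.
subprocess.run(["mcl", EDGE_FILE, "--abc", "-I", "1.2", "-o", "clusters.txt"],
               check=True)
```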
Determination of antitoxin and toxin families
On the basis of the MCL results, we used RPS-BLAST (Altschul et al. 1990) to detect conserved domains describing each community. The five MCL antitoxin communities, namely HigA, HicB, MqsA, MqsA-like, and HTH-like, are described by the conserved domains pfam01381, pfam05534, tigr03830, a partial tigr03830 (no CXXCG N-terminal motif), and a partial and weakly conserved pfam01381, respectively. The MCL partition leads to 7 toxin communities, of which 3 correspond to pfam05973, which characterizes the HigB toxin of M. tuberculosis; these sequences were therefore annotated as HigB toxins. The repartition of HigB toxins in the three communities globally follows taxonomic groups. The MqsR family was named according to the annotation of one of the members of the corresponding community. All toxins associated with HicB antitoxins presented the pfam07927 conserved domain. This domain characterizes sequences homologous to the YcfA protein from E. coli. Since the BLASTP best hits for this protein are annotated as HicA toxins, we named this family HicA. Two toxin communities did not present any conserved domain but were considered as potential new families. In some cases, putative toxin sequences were excluded from the MCL analysis but were either described by conserved domains corresponding to the HigB or HicA families or presented low sequence similarity with proteins annotated as toxins.
Strains and media
In vivo assays were performed in the strain Escherichia coli W3110 ∆secB::Cm R (Ullers et al. 2007). Bacteria were grown at 37°C in LB medium supplemented when necessary with ampicillin (100 µg/mL) or kanamycin (100 µg/mL).
In vivo assays
The in vivo assays were performed as described (Bordes et al. 2011) with minor modifications.
For complementation of Rv1957 function in controlling HigBA, the E. coli W3110 ∆secB strain co-transformed with plasmids pMPMK6-HigBA and pSE380∆NcoI, pSE380-Rv1957 or pSE380-SmegB, were grown to mid-log phase in LB containing 0.2% glucose, kanamycin and ampicillin. Cultures were serially diluted and spotted on LB-ampicillin-kanamycin agar plates with or without arabinose and IPTG inducers as indicated. Plates were incubated at 37°C overnight. For complementation of the SecB chaperone activity at low temperature, fresh transformants of E. coli W3110 ∆secB containing pSE380∆NcoI, pSE380-SecB, pSE380-Rv1957 or pSE380-SmegB were grown to mid-log phase at 37°C in LB containing 0.2% glucose and ampicillin. Cultures were serially diluted and spotted on LB-ampicillin plates with or without IPTG as indicated and incubated at 37°C overnight or at 16°C for 5 days. | 2017-08-03T00:42:52.377Z | 2012-12-22T00:00:00.000 | {
"year": 2012,
"sha1": "26da94731e7a5ae107ee47bebf68e03f0dc7d1b9",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12192-012-0396-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "26da94731e7a5ae107ee47bebf68e03f0dc7d1b9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
256408768 | pes2o/s2orc | v3-fos-license | Role of 5-aminolevulinic acid in the salinity stress response of the seeds and seedlings of the medicinal plant Cassia obtusifolia L.
Soil salinity, one of the major abiotic stresses affecting germination, crop growth, and productivity, is a common adverse environmental factor. The possibility of enhancing the salinity stress tolerance of Cassia obtusifolia L. seeds and seedlings by the exogenous application of 5-aminolevulinic acid (ALA) was investigated. To improve the salinity tolerance of seeds, ALA was applied at various concentrations (5, 10, 15, and 20 mg/L); for seedlings, ALA was applied at 10, 25, 50, and 100 mg/L. After treatment with 10 mg/L ALA, physiological indices of seed germination (i.e., germination vigor, germination rate, germination index, and vigor index) significantly improved. At 25 mg/L ALA, there was significant protection against salinity stress compared with non-ALA-treated seedlings. Chlorophyll, total soluble sugar, free proline, and soluble protein contents were significantly enhanced. ALA treatment also limited the increases in thiobarbituric acid-reactive substances and membrane permeability. With ALA treatment, the chlorophyll fluorescence parameters, i.e., the maximal photochemical efficiency of photosystem II (Fv/Fm), the photochemical efficiency in the light (Fv'/Fm'), the actual photochemical efficiency of PSII (ΦPSII), and the photochemical quenching coefficient (qP), all significantly increased. In contrast, the non-photochemical quenching coefficient (NPQ) decreased. ALA treatment also enhanced the activities of superoxide dismutase, peroxidase, and catalase in seedling leaves. The highest salinity tolerance was obtained with the 25 mg/L ALA treatment. The plant growth regulator ALA could be effectively used to protect C. obtusifolia seeds and seedlings from the damaging effects of salinity stress without adversely affecting plant growth.
Background
Soil salinity has become a global problem. Large areas of land are being degraded by salt, and numerous plants are being subjected to increasing salinity stress. Soil salinity, one of the major abiotic stresses affecting germination, crop growth, and productivity, is a common adverse environmental factor. Soil salinity affects plant growth and the global geographic distribution of vegetation, and it restricts medicinal plant yields (Zhang et al. 2011). Under salinity stress, plants are adversely affected by the generation of harmful oxygen species, leading to oxidative stress (Ahmad et al. 2005; Wahid et al. 2007).
Several protective mechanisms change to different extents with increased amounts of oxygen free radicals. Such mechanisms include those involving free radical and peroxide scavenging enzymes, e.g., superoxide dismutase (SOD), peroxidase (POD), and catalase (CAT) (McDonald 1999; Li et al. 2008). SOD is key to the regulation of the amounts of superoxide radicals and peroxides. Hydrogen peroxide (H2O2) can form hydroxyl radicals via the Haber-Weiss reaction, subsequently causing lipid peroxidation. CAT and POD are implicated in the removal of H2O2 (Zhang et al. 2010). H2O2 removal via a series of reactions is known as the ascorbate-glutathione cycle. In this cycle, ascorbate and glutathione participate in a cyclic transfer of reducing equivalents, resulting in the reduction of H2O2 to H2O using electrons derived from nicotinamide adenine dinucleotide phosphate (Goel and Sheoran 2003). The germination vigor and rate of seeds are also reduced under salt stress. Other symptoms of salinity stress include malondialdehyde accumulation and protein degradation.
Chl fluorescence is widely used in analyzing photosynthetic apparatuses. Chl fluorescence is also employed to understand the mechanism of photosynthesis and the mechanisms by which a range of environmental factors alter photosynthetic activity under both biotic and abiotic stresses (Sayed 2003). Fluorescence parameters have also been applied in the rapid identification of injury to leaves in the absence of visible symptoms and in the detailed analysis of changes in photosynthetic capacity (Maxwell and Johnson 2000). Therefore, Chl fluorescence may be used as a potential indicator of environmental stress and as a method for screening stress-resistant plants.
5-Aminolevulinic acid (ALA) is a key precursor in the biosynthesis of all porphyrin compounds, such as Chl, heme, and phytochrome (Eiji et al. 2003). Exogenous application of ALA regulates plant growth and development and enhances Chl biosynthesis and photosynthesis, resulting in increased crop yield (Hotta et al. 1997a). In plants, the ALA concentration is strictly controlled to less than 50 nmol·g−1 FW (Stobart and Ameen-Bukhari 1984). ALA undergoes enolization and further metal-catalyzed aerobic oxidation at physiological pH to yield the superoxide radical (O2·−), hydrogen peroxide (H2O2), and the hydroxyl radical (HO·). Accumulated Chl intermediates are assumed to act as photosensitizers for the formation of singlet oxygen (1O2), triggering photodynamic damage in ALA-treated plants (Chakraborty and Tripathy 1992). Therefore, ALA accumulation enhances the levels of reactive oxygen species (ROS), leading to oxidative stress and herbicidal activity. Herbicidal activity has been reported, together with increased accumulation of several Chl intermediates such as protochlorophyllide, protoporphyrin IX, and Mg-protoporphyrin IX, when plants are treated with exogenous ALA at relatively high concentrations (5 mmol·L−1 to 40 mmol·L−1). However, low ALA concentrations (0.06 mmol·L−1 to 0.60 mmol·L−1) appear to promote rather than damage plant growth by increasing nitrate reductase activity, increasing the fixation of CO2 in light, and suppressing the release of CO2 in darkness (Hotta et al. 1997b). ALA treatment of rice, barley, potato, and garlic plants at their early growth stages promotes plant growth and photosynthetic rates, resulting in significant yield enhancements. Low-concentration ALA applications are also known to enhance plant tolerance to cold (Wang et al. 2003) and salinity stresses (Nishihara et al. 2003; Zhang et al. 2006). At concentrations over 5 mmol·L−1, herbicidal effects are exhibited (Kumar et al. 1999), suggesting the great potential of ALA as a new non-toxic endogenous substance for agricultural applications (Wang et al. 2003).
Cassia obtusifolia L. is a well-known traditional Chinese medicinal plant belonging to the medically and economically important family Leguminosae (syn. Fabaceae), subfamily Caesalpinioideae (Joshi and Kapoor 2003). The seed of the plant, called Juemingzi in China, is widely used for treating headache, dizziness, and red, teary eyes. In previous investigations of this plant, a number of constituents have been isolated, including anthraquinones, anthrones, flavonoids, triterpenoids, and so on. The anthraquinone derivatives, i.e., the anthronic, dianthronic, and anthraquinone glycosides of Cassia, are responsible for its purgative action (Anu and Rao 2001).
The mechanisms by which ALA promotes stress tolerance in plants need to be elucidated. The present paper provides the first evidence of the capability of ALA to protect C. obtusifolia seeds and seedlings against salinity stress, significantly contributing to the understanding of the role of ALA in promoting salinity stress tolerance. The specific objective is to determine the optimum ALA concentration that provides the best protection against this stress.
Chemicals
ALA was obtained from the Korea Advanced Institute of Science and Technology (KAIST, Korea). All other chemicals were of analytical grade and obtained from Sigma Chemical (St. Louis, Missouri, U.S.A).
Plant material
C. obtusifolia seeds were obtained from the Institute of Medicinal Plant Development, Chinese Academy of Medical Sciences (Beijing, P.R. China). The seeds were surface-sterilized with 2% (v/v) sodium hypochlorite solution for 10 min and thoroughly washed with distilled water (Korkmaz 2005). The seeds were then germinated in covered 9 cm Petri dishes. Based on a preliminary experiment, 100 mmol·L−1 NaCl was used in the salinity stress experiment.
Seed germination treatments
The seed germination experiment included seven treatments: (1) distilled deionized water (CK1), (2) 100 mmol·L−1 NaCl (CK2), (3) 10 mg·L−1 ALA (CK3), (4) 100 mmol·L−1 NaCl + 5 mg·L−1 ALA (T1), (5) 100 mmol·L−1 NaCl + 10 mg·L−1 ALA (T2), (6) 100 mmol·L−1 NaCl + 15 mg·L−1 ALA (T3), and (7) 100 mmol·L−1 NaCl + 20 mg·L−1 ALA (T4). Water was supplemented to the dishes every day, and three replicates with fifty seeds per dish were used for each treatment in a light incubator under a 12/12 h photoperiod (light/dark; 450 μmol·m−2·s−1; 25 ± 1°C). The experiment was repeated three times under the same conditions (n = 9). Radicle emergence was the criterion used to assess germination and was recorded daily for 6 d until the numbers stabilized. Germinated seeds were removed from the Petri dishes. The physiological indices of seed germination were determined as follows: germination vigor (GV) = A/C, germination rate (GR) = B/C, germination index (GI) = Σ(Gt/Dt), and vigor index (VI) = GI × S, where A is the total number of seeds germinated in 4 d, B is the total number of seeds germinated in 6 d, and C is the total number of seeds in the experiment. Gt is the germination percentage after t days, Dt is the number of days of germination, and S is the mean radicle length upon the termination of germination (6 d later). The fresh weight as well as the lengths of the radicle and plumule under the different treatments were measured after germination stopped.
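As a small worked example, the following Python sketch (hypothetical counts) computes the four germination indices defined above; note that reading Gt as the cumulative germination percentage follows the wording above, though some authors use the daily count instead.

```python
def germination_indices(daily_counts, total_seeds, mean_radicle_len_mm):
    """Germination indices as defined in the text above.

    daily_counts[t-1] = seeds newly germinated on day t (6 days recorded).
    GV = A/C (4-day total), GR = B/C (6-day total),
    GI = sum(Gt/Dt) with Gt taken here as the cumulative germination
    percentage after t days, VI = GI * S (S = mean radicle length).
    """
    assert len(daily_counts) == 6
    gv = sum(daily_counts[:4]) / total_seeds
    gr = sum(daily_counts) / total_seeds
    cumulative, gi = 0, 0.0
    for day, count in enumerate(daily_counts, start=1):
        cumulative += count
        gi += (100.0 * cumulative / total_seeds) / day   # Gt / Dt
    vi = gi * mean_radicle_len_mm
    return gv, gr, gi, vi

# Example: 50 seeds per dish, mean radicle length 12.3 mm.
print(germination_indices([5, 12, 10, 8, 4, 2], 50, 12.3))
```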
Seedlings treatments
Seeds without any treatment were sown in pots filled with a growth medium consisting of 4:1 peat and perlite. The pots were watered and placed in a greenhouse under a 12/12 h photoperiod, a 25/20°C (light/dark) temperature regime, and 65% relative humidity. At the two-true-leaf stage, seedlings were irrigated with half-strength Hoagland nutrient solution every day. After 25 d of pre-culture, the seedlings were at the stage of 4 to 5 true leaves, and the treatment was started. The seedling experiment included seven treatments: (1) Hoagland nutrient solution (CK1), (2) Hoagland nutrient solution + 100 mmol·L−1 NaCl (CK2), (3) Hoagland nutrient solution + 25 mg·L−1 ALA (CK3), (4) Hoagland nutrient solution + 100 mmol·L−1 NaCl + 10 mg·L−1 ALA (T1), (5) Hoagland nutrient solution + 100 mmol·L−1 NaCl + 25 mg·L−1 ALA (T2), (6) Hoagland nutrient solution + 100 mmol·L−1 NaCl + 50 mg·L−1 ALA (T3), and (7) Hoagland nutrient solution + 100 mmol·L−1 NaCl + 100 mg·L−1 ALA (T4). The nutrient solution was supplemented to the pots every other day, whereas ALA was supplemented every day. The experimental design was a randomized complete block with the treatments arranged in individual pots, with nine plants per treatment and three replicates each. The experiment was repeated three times under the same conditions (n = 9). All treatments were applied at dusk because ALA decomposes easily in light. Seedling samples were collected for assays on the 4th, 8th, and 12th days, respectively.
Chlorophyll content determination
The contents of total Chl, Chl a, and Chl b were determined by collecting fresh leaf samples (0.5 g) from 9 randomly selected plants per replicate. The samples were homogenized with 5 mL of acetone (80%, v/v) using a mortar and pestle before being filtered through Whatman No. 2 filter paper. The absorbance was measured using a UV-visible spectrophotometer (UV-2550, Shimadzu, Japan) at 663 and 645 nm (Lichtenthaler 1987).
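As a worked example of this step, the sketch below applies the classical Arnon-type equations commonly used with 80% acetone extracts read at 645 and 663 nm; this choice of coefficients is an assumption on our part, since the paper cites Lichtenthaler (1987), whose coefficients differ slightly.

```python
def chlorophyll_mg_per_g(a663, a645, extract_ml=5.0, fw_g=0.5):
    """Chlorophyll content (mg per g fresh weight) from absorbance readings.

    Arnon-type equations for 80% (v/v) acetone extracts (assumed);
    the intermediate values are concentrations in mg/L of extract.
    """
    chl_a = 12.7 * a663 - 2.69 * a645      # mg/L
    chl_b = 22.9 * a645 - 4.68 * a663      # mg/L
    total = 20.2 * a645 + 8.02 * a663      # mg/L
    # Convert mg/L of extract to mg per g fresh weight.
    to_mg_per_g = extract_ml / 1000.0 / fw_g
    return chl_a * to_mg_per_g, chl_b * to_mg_per_g, total * to_mg_per_g

# Example readings (invented): A663 = 0.62, A645 = 0.31.
print(chlorophyll_mg_per_g(0.62, 0.31))
```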
Chlorophyll fluorescence
The Chl fluorescence of leaves was measured at room temperature (25°C) using a pulse-modulated fluorometer (PAM-2500, Walz, Germany) after the leaves were dark-adapted for 30 min. The detailed experimental protocol was as described (Genty et al. 1989). The minimal fluorescence (Fo) with all PSII reaction centers open was measured with modulated light that was sufficiently low (<0.05 μmol·m−2·s−1) not to induce significant variable fluorescence. The maximal fluorescence (Fm) with all PSII reaction centers closed was determined by a 0.8 s saturating pulse at 8000 μmol·m−2·s−1 in dark-adapted leaves. The leaves were then continuously illuminated with 336 μmol·m−2·s−1 white actinic light. The steady-state value of fluorescence (Fs) was thereafter recorded, and a second saturating pulse at 8000 μmol·m−2·s−1 was imposed to determine the maximal fluorescence in the light-adapted state (Fm′). The fluorescence parameters were calculated according to standard definitions (Demmig-Adams et al. 1996): Fv/Fm = (Fm − Fo)/Fm, Fv′/Fm′ = (Fm′ − Fo′)/Fm′, ΦPSII = (Fm′ − Fs)/Fm′, qP = (Fm′ − Fs)/(Fm′ − Fo′), and NPQ = (Fm − Fm′)/Fm′. Fluorescence measurements were performed at the four-flag-leaf stage per treatment combination; all measurements were made between 8:00 and 11:00 a.m. and replicated at least six times.
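For illustration, a minimal Python sketch of these parameter calculations follows; the readings are invented, and the Fo′ estimate follows Oxborough and Baker (1997), an assumption on our part, since direct Fo′ measurement is not described above.

```python
def fluorescence_params(fo, fm, fs, fm_prime):
    """Common PSII fluorescence parameters from PAM readings."""
    fv_fm = (fm - fo) / fm                        # maximal PSII efficiency, Fv/Fm
    # Fo' estimated as Fo / (Fv/Fm + Fo/Fm') (Oxborough & Baker 1997);
    # an assumption here, as instruments can also measure Fo' directly.
    fo_prime = fo / (fv_fm + fo / fm_prime)
    phi_psii = (fm_prime - fs) / fm_prime         # actual PSII efficiency
    qp = (fm_prime - fs) / (fm_prime - fo_prime)  # photochemical quenching
    npq = (fm - fm_prime) / fm_prime              # non-photochemical quenching
    fvp_fmp = (fm_prime - fo_prime) / fm_prime    # Fv'/Fm'
    return dict(FvFm=fv_fm, PhiPSII=phi_psii, qP=qp, NPQ=npq, FvpFmp=fvp_fmp)

print(fluorescence_params(fo=0.20, fm=1.00, fs=0.35, fm_prime=0.60))
```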
Membrane permeability determination
Membrane permeability was expressed as the relative conductivity. The electrical conductivity of leaf leachates in double-distilled water was recorded at 40 and 100°C (Sairam 1994). Leaf samples (0.1 g) were cut into uniformly sized discs and placed in two sets of test tubes containing 10 mL of double-distilled water. One set was kept at 40°C for 60 min, and the other in a boiling water bath (100°C) for 30 min. Their respective electrical conductivities, C1 and C2, were measured with a conductivity meter. The relative conductivity was calculated as (C1/C2) × 100%.
Determination of thiobarbituric acid (TBA)-reactive substances
The level of lipid peroxidation was measured as the content of TBA-reactive substances (TBARS) (Heath and Packer 1968). Fresh leaf samples (0.5 g) were homogenized in 10 mL of 0.1% trichloroacetic acid (TCA). The homogenate was centrifuged at 15 000×g for 5 min. A 2 mL aliquot of the supernatant was mixed with 4 mL of 0.5% TBA in 20% TCA. The mixture was heated at 95°C for 30 min and then quickly cooled in an ice bath. After centrifugation at 10 000×g for 10 min to remove suspended turbidity, the absorbance of the supernatant was recorded at 532 nm, and the value for nonspecific absorption at 600 nm was subtracted. The TBARS content was calculated using an absorption coefficient of 155 mM⁻¹·cm⁻¹.
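With this coefficient and a 1 cm path length, the TBARS concentration in the assay mixture follows directly from Beer's law; a sketch of the conversion to a fresh-weight basis, using the volumes given above (6 mL reaction volume prepared from a 2 mL aliquot of the 10 mL extract, 0.5 g sample), is:

TBARS (nmol·mL⁻¹ of assay mixture) = (A532 − A600)/0.155
TBARS (nmol·g⁻¹ FW) = (A532 − A600)/0.155 × 6 × (10/2)/0.5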
Osmotic substances
The osmotic substances measured in this experiment were total soluble sugars, free proline and soluble protein. Total soluble sugars were estimated using anthrone reagent (Yemm and Willis 1954). Samples were extracted with 4 mL of 80% methanol at 80°C for 40 min and then centrifuged at 2000×g for 15 min. The methanol supernatants of three successive extractions were pooled for the sugar analyses. About 4 mL of anthrone reagent was then added, the mixture was heated in a boiling water bath for 8 min and then cooled, and the optical density of the green to dark-green color was read at 630 nm.
Free proline accumulation is widely used as a parameter of salt stress tolerance (Storey and Wyn-Jones 1975). In the present study, the free proline content of leaves was determined as follows. Fresh leaf samples (0.5 g) were homogenized in 5 mL of 3% sulfosalicylic acid using a mortar and pestle. About 2 mL of the extract was placed in a test tube, and 2 mL each of glacial acetic acid and ninhydrin were added. The reaction mixture was boiled in a water bath at 100°C for 30 min, cooled, mixed with 6 mL of toluene and transferred into a separating funnel. After thorough mixing, the chromophore-containing toluene phase was separated, and its absorbance was read at 520 nm against a toluene blank. The free proline concentration was estimated from a standard curve (Bates et al. 1973).
Soluble protein was extracted from 0.5 g of fresh leaf tissue with 2 mL of 50 mmol·L⁻¹ sodium phosphate buffer (pH 7.4). Soluble protein content was determined by the method of Lowry et al. (1951) using bovine serum albumin (BSA) as a standard.
Enzyme extraction and enzymatic activity determination
Fresh leaf samples (1.0 g) were rapidly extracted in a pre-chilled mortar on an ice bath using 5 mL of ice-cold 100 mmol·L⁻¹ phosphate buffer (pH 7.8) containing 1.0 mmol·L⁻¹ ethylenediaminetetraacetic acid and 5% (w/v) polyvinylpyrrolidone. After centrifugation at 12 000×g for 30 min at 4°C, the supernatant was used for the determination of enzymatic activities.
SOD activity was determined by mixing 0.1 mL of the enzyme extract with 2.465 mL of 100 mmol·L⁻¹ phosphate buffer (pH 7.8), 75 μL of 55 mmol·L⁻¹ methionine, 300 μL of 0.75 mmol·L⁻¹ nitroblue tetrazolium (NBT), and 60 μL of 0.1 mmol·L⁻¹ riboflavin in a test tube. The test tubes containing the reaction solution were irradiated under two fluorescent light tubes (40 μmol·m⁻²·s⁻¹) for 10 min, and the absorbance was measured at 560 nm. Blanks and controls were run in the same manner but without illumination and without the enzyme extract, respectively. One unit of SOD activity was defined as the amount of enzyme that inhibits 50% of the NBT photoreduction (Xu et al. 2008).
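Under this unit definition, activity can be computed as follows (a sketch; the normalization basis, per g FW or per mg protein, is not stated here):

Inhibition (%) = (A560, control − A560, sample)/A560, control × 100
SOD activity (U) = Inhibition (%)/50%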
POD activity was determined as follows. The reaction mixture contained 0.1 mL of enzyme extract, 2 mL of 0.1 mol·L⁻¹ sodium acetate buffer (pH 4.5), and 0.5 mL of o-dianisidine solution (0.2% in methanol, freshly prepared). The reaction was initiated by adding 0.1 mL of 0.2 mol·L⁻¹ H₂O₂. The change in absorbance at 470 nm was recorded at 15 s intervals for 2 min. One unit of POD was defined as 0.1 ΔA470·min⁻¹ (Kalpana and Madhava Rao 1995).
CAT activity was estimated as follows. The reaction mixture contained 0.6 mL of enzyme extract, 0.1 mL of 10 mmol·L⁻¹ H₂O₂, and 2 mL of 30 mmol·L⁻¹ phosphate buffer (pH 7.0). The absorbance at 240 nm was recorded immediately after the addition of the enzyme extract, at 15 s intervals for 2 min. The blank did not contain enzyme extract. One unit of CAT was defined as 0.1 ΔA240·min⁻¹ (Goel and Sheoran 2003).
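Given these unit definitions, POD and CAT activities can be converted from the recorded absorbance slopes as follows (a sketch; expressing specific activity per mg protein assumes the protein content of the extract is known):

Activity (U) = (ΔA470 or ΔA240 per min)/0.1
Specific activity (U·mg⁻¹) = Activity (U)/mg protein in the assay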
Statistics
Values are presented as the mean ± standard deviation of three replicates. Statistical analyses were carried out by ANOVA, and Tukey's test was used to compare the means of multiple treatments. To satisfy the normality assumption, percentage values were arcsine square-root transformed (y = arcsin√p) prior to statistical analysis. The significance level was set at P = 0.05.
Seed germination
ALA at different concentrations significantly affected the seed germination indices GV, GR, GI and VI of C. obtusifolia seeds under salinity stress (Table 1). The indices of seeds treated with NaCl alone (CK2) differed significantly from the control (CK1), and the indices improved after treatment with ALA alone (CK3, 10 mg·L⁻¹). Moreover, the indices varied with ALA concentration: all ALA treatments (T1-T4) improved the indices, and T2 raised them to a level that did not differ significantly from the unstressed control (CK1). The optimum ALA concentration for alleviating salinity damage to C. obtusifolia seeds was 10 mg·L⁻¹.
Seedling growth
The radicle length, plumule length, radicle/plumule ratio and fresh weight of C. obtusifolia seedlings under salinity stress treated with different ALA concentrations were measured after germination had stopped. The trends of radicle length, plumule length and fresh weight across treatments were similar (Table 2). Treatment with NaCl stress alone (CK2) resulted in a significant decrease compared with the unstressed treatment (CK1), reaching the minimum values, whereas the different ALA concentrations improved these growth indices. The radicle/plumule ratio increased under all ALA treatments, and treatments T2-T4 restored the parameter to values that did not differ significantly from the unstressed CK1. Under salinity stress, plumule growth was inhibited to a greater extent than radicle growth; the increased radicle/plumule ratio possibly indicates that seedlings adjust the growth of their vegetative organs to ensure maximum survival and growth.
Chl content
The seedling leaves of C. obtusifolia under NaCl stress (100 mmol·L⁻¹) had a lower Chl content than the control CK1 (Figure 1). Compared with CK1 and CK3, the Chl a, Chl b and total Chl contents of CK2 decreased over the treatment days. Exogenous ALA applied under salinity stress was expected to enhance the Chl content of the seedlings, because ALA is a precursor in Chl biosynthesis; however, different ALA concentrations produced different degrees of enhancement. Treatment with 25 mg·L⁻¹ ALA (T2) improved the Chl content compared with CK2, whereas 10 mg·L⁻¹ ALA (T1) showed no significant difference from CK2 at 4 and 8 d. Further increasing the ALA concentration to 50 mg·L⁻¹ (T3) caused a slight decrease in total Chl compared with 25 mg·L⁻¹ (T2), and the minimum value was reached with 100 mg·L⁻¹ ALA (T4).
Chl fluorescence
The photochemical efficiency of PSII (Fv/Fm), the excitation capture efficiency of open PSII reaction centers (Fv′/Fm′), the actual photochemical efficiency of PSII (ΦPSII) and the coefficient of photochemical quenching (qP) under 100 mmol·L⁻¹ NaCl stress (CK2) all significantly decreased
compared with CK1 and CK3 (25 mg·L⁻¹ ALA only), and the extent of the decrease increased with treatment days (Figure 2A-2D). However, after treatment with the different ALA concentrations, these fluorescence parameters improved to various extents, and the differences were significant compared with CK2. The ALA concentration of 25 mg·L⁻¹ at 4 d had the most significant effect, yielding maximum values of 0.843 (Fv/Fm), 0.692 (Fv′/Fm′), 0.872 (qP) and 0.586 (ΦPSII). The change in non-photochemical quenching (NPQ) was the opposite: it increased under the 100 mmol·L⁻¹ NaCl treatment (CK2) and decreased under the different ALA concentrations (Figure 2E). At 100 mg·L⁻¹, ALA had an effect on NPQ opposite to that of the other concentrations. Hence, the optimum ALA concentration was the relatively low 25 mg·L⁻¹.
Membrane permeability
Membrane permeability was expressed as the relative conductivity. Figure 3A shows that the relative conductivity of seedlings treated with NaCl stress alone (CK2) increased significantly compared with that of seedlings not treated with NaCl (CK1 and CK3). The maximum value of 18.17%, 5.31 times that of CK1 (3.42%), was reached at 12 d. However, the relative conductivity decreased under the different ALA concentrations, reaching a minimum of 5.13% in the 25 mg·L⁻¹ ALA treatment (T2) at 4 d, a value that did not differ significantly from the control CK1. These results indicate that salinity stress significantly increased the membrane permeability of leaves, whereas ALA treatment inhibited this increase and effectively reduced the salt-induced damage to the cell plasma membrane.
MDA
The results for lipid peroxidation, estimated as MDA content, are presented in Figure 3B. The MDA content increased with treatment days under NaCl stress, reaching its maximum value on the 12th day with 100 mmol·L⁻¹ NaCl alone (CK2). The content decreased significantly under the different ALA concentrations (T1-T4) and reached its minimum value on the fourth day after treatment with 25 mg·L⁻¹ ALA. All ALA treatments (T1-T4) differed significantly from the treatment without ALA (CK2).
Total soluble sugars, free proline and soluble protein
The contents of total soluble sugars and free proline under NaCl stress (CK2) significantly increased compared with CK1 and CK3 (Figure 4), and increased further to various degrees after ALA treatment, with maximum values reached at 12 d after the 25 mg·L⁻¹ ALA treatment. The soluble sugar and free proline contents responded similarly to the different ALA concentrations, both increasing after treatment (Figure 4A and 4B). In contrast, the soluble protein content decreased significantly under the NaCl-only treatment (CK2), reaching its minimum value (Figure 4C); however, it improved under the different ALA concentrations and reached its maximum on the 12th day after treatment with 25 mg·L⁻¹ ALA.
Activities of three antioxidant enzymes
To further investigate the action of ALA under salinity stress in C. obtusifolia plants, antioxidant enzyme activities were determined. The activities of SOD, POD and CAT in response to ALA treatment under 100 mmol·L⁻¹ NaCl are shown in Figure 5. The activities of CAT, SOD and POD increased after salinity stress (CK2, NaCl only) compared with the control CK1 (no treatment) and the control CK3 (25 mg·L⁻¹ ALA, no NaCl), showing an obvious stress response of the seedlings. SOD activities under the different ALA concentrations (T1-T4) increased significantly compared with CK1 and CK2 (Figure 5A). Over time, SOD activity first increased and then decreased, with the maximum appearing 8 d after treatment. On the 8th day under 100 mmol·L⁻¹ NaCl, the activities under 10, 25, 50 and 100 mg·L⁻¹ ALA were strongly enhanced by 3.05-, 4.04-, 3.52- and 2.70-fold, respectively, compared with CK1, and by 1.29-, 1.71-, 1.49- and 1.14-fold, respectively, compared with CK2. These results show that the different ALA concentrations improved SOD activity and that the effect of 25 mg·L⁻¹ ALA was the most significant. POD and CAT activities behaved similarly to SOD, reaching their maxima with 25 mg·L⁻¹ ALA 8 d after treatment; the activities of POD and CAT were 138.11 and 122.04 U·mg⁻¹·min⁻¹, respectively. The various ALA treatments differed significantly from CK1 and CK2 (Figure 5B and 5C). However, the activities did not differ significantly among days 4, 8 and 12 after treatment, indicating that the effect of ALA can be sustained for a long time under salinity stress.
Discussion
Salinity is one of the most important abiotic stresses affecting crop productivity. Unlike drought, salinity stress is an intricate phenomenon that includes osmotic stress, specific ion effects, nutrient deficiency and other components. Consequently, salinity stress affects various physiological and biochemical mechanisms associated with plant growth and development, and plants have developed various mechanisms to cope with its deleterious effects. Seed germination is a major stage in the life history of a plant: it directly affects plant growth, development and morphogenesis, and indirectly affects yield. A seed that is able to germinate quickly is therefore the foundation of a high and stable yield. In the current study, the GV, GR, GI and VI of C. obtusifolia seeds were effectively improved by exogenous ALA application. The results show that ALA improved the germination and emergence of C. obtusifolia seeds under salinity stress. ALA may have been imbibed into the seeds during priming, imparting tolerance to salinity stress during seed germination and emergence. Hence, ALA treatment can increase the germination ability of C. obtusifolia seeds under NaCl stress.
Previous studies have shown that Chl can be bleached under oxidative stress (Noriega et al. 2004). These results can be explained in two ways: on one hand, ALA at low concentrations acts as a regulator of Chl and heme biosynthesis; on the other hand, oxidative stress may occur as a result of ROS generated by higher ALA concentrations. ALA is the key precursor in the biosynthesis of all porphyrin compounds, such as Chl and heme, and ALA formation in plants is the rate-limiting step in tetrapyrrole biosynthesis (Von Wettstein et al. 1995). In the current paper, the Chl content of C. obtusifolia leaves greatly decreased after salinity stress, signifying that salinity stress impaired the synthesis of photosynthetic pigments. Treatment with a low exogenous ALA concentration (25 mg·L⁻¹) enhanced the Chl content compared with non-ALA-treated plants under salinity stress. Hence, the exogenous application of low-concentration ALA prior to salinity stress is a possible method for overcoming inadequate pigment biosynthesis.
Photosynthesis is an important metabolic process in plants and significantly affects plant growth, yield and resistance to adverse environmental factors. Plant photosynthesis is clearly affected by salinity stress: the direct effects include damage to the photosynthetic apparatus and impairment of photophosphorylation and photosynthetic electron transfer, while the indirect effects involve the enzymes of the dark reactions. Consequently, photosynthesis can be used as a significant index for evaluating plant growth and tolerance. Chl fluorescence measurements have recently been used to estimate, rapidly and noninvasively, the operating quantum efficiency of electron transport via PSII in plant leaves (Baker and Rosenqvist 2004). The photochemical efficiency of PSII (Fv/Fm) is proportional to the potential maximal quantum yield of PSII (Hormann et al. 1994). Fv/Fm represents the efficiency of the primary conversion of light energy by PSII, indicates the ability of PSII to utilize light energy, and is closely related to the degree of photoinhibition of photosynthesis (Maxwell and Johnson 2000); it is also called the optimal photochemical efficiency of PSII in the dark. Fv′/Fm′ represents the light energy conversion efficiency of open PSII centers, also called the effective photochemical efficiency or antenna pigment transformation efficiency of PSII in the light (Zhang 1999). Fv/Fm is one of the Chl fluorescence indices commonly used under stress conditions; it decreases under stress, indicating PSII damage (Xu and Zhang 1999). The actual photochemical efficiency of PSII (ΦPSII) reflects the actual light energy-capturing efficiency of the PSII reaction centers when a fraction of them is closed. In the current paper, Chl fluorescence kinetics indicated that Fv/Fm, Fv′/Fm′ and ΦPSII significantly decreased under NaCl stress, revealing that PSII suffered damage to various degrees. Nevertheless, the light energy capture efficiency improved significantly upon treatment with the different ALA concentrations.
The coefficient of photochemical quenching (qP) reflects the redox state of the primary electron acceptor of PSII (QA) and the number of open PSII centers; qP thus reflects the openness of PSII centers to some extent, whereas non-photochemical quenching (NPQ) reflects the self-protective mechanisms of the photosynthetic apparatus (Havaux et al. 1991). The NPQ process is an adaptive mechanism that prevents photoinhibition and membrane damage by adjusting the dissipation of excessive energy: by increasing nonradiative heat dissipation, the photosystem can consume the excess light energy absorbed by PSII, protecting the PSII reaction center from the photooxidation and photoinhibition caused by absorbing excess light energy. In the present study, the level of qP declined under NaCl stress, indicating that NaCl stress decreased the openness of the PSII reaction centers. The accumulation of reduced electron acceptors may increase the probability of generating reactive radicals, which may further injure PSII components (Barber and Andersson 1992). Both the light energy captured by the antenna pigments for photochemical reactions and the photochemical activity of the PSII reaction center decreased; consequently, excess light energy accumulated in the PSII reaction center under NaCl stress, and the photosystem was protected by dissipating this excess energy via an increased level of NPQ. ALA at different concentrations (T1-T4) improved the level of qP and decreased the level of NPQ, showing that the salinity stress-induced damage to C. obtusifolia was alleviated by ALA. At 100 mg·L⁻¹, however, ALA had the opposite effect on NPQ, which increased significantly compared with the other concentrations, indicating that a high concentration of ALA has a significant inhibitory effect on C. obtusifolia seedlings.
Increased MDA content is a good reflection of oxidative damage to membrane lipids as well as to other vital molecules such as proteins, DNA and RNA. In the present study, the TBARS levels significantly increased under salinity stress compared with the controls. The peroxidation of membrane lipids may enhance membrane fluidity, which may lead to enhanced electrolyte leakage, supporting the hypothesis that salinity stress can induce membrane lipid peroxidation. Significantly increased lipid peroxidation under salinity treatments has been reported previously, and higher lipid peroxidation has also been reported in salt stress-sensitive rice varieties (Dionisio-Sese and Tobita 1998). Increased lipid peroxidation due to salinity stress results in significantly increased membrane permeability. The extents of lipid peroxidation and membrane permeability have been used as indices of salt injury and tolerance in Amaranthus (Battacharjee and Mukherjee 1996), and increased membrane permeability has been suggested to reflect the extent of lipid peroxidation caused by ROS (Sairam et al. 1998). In the present study, the TBARS content and membrane permeability increased under NaCl stress, indicating that the membrane was damaged by ROS, that cell membrane peroxidation occurred and that the normal physiological function of the plasma membrane became disordered. After treatment with exogenous ALA, the TBARS content and membrane permeability significantly decreased. Therefore, ALA alleviated the damage caused by NaCl stress.
The accumulation of soluble sugars, soluble protein and free proline under stress protects plant cells by balancing the osmotic strength of the cytosol with that of the vacuoles and the external environment (Gadallah 1999). These solutes are cytosolic osmotic substances and may also interact with cellular macromolecules such as enzymes to stabilize their structure and function. A direct consequence of a higher osmolyte concentration is the maintenance of comparatively high antioxidant enzyme activities (Smirnoff and Cumbes 1989). The results of the present study indicate that the contents of soluble sugars, soluble protein and free proline in ALA-treated C. obtusifolia seedling leaves were significantly higher than those under NaCl stress alone (CK2). A lower osmotic potential within the cell was possibly maintained to help cells absorb water from the external environment, conferring resistance to the damage caused by NaCl stress.
To endure oxidative damage under conditions of increased oxidative stress such as salinity, plants must possess efficient antioxidant systems. Plants do possess such systems in the form of enzymes such as SOD, POD and CAT, and they have an efficient system for decomposing ROS, using SOD in chloroplasts (Asada 1999). SOD is located in chloroplasts, mitochondria, the cytoplasm and peroxisomes, and it serves as the first line of defense against ROS (Liau et al. 2007). High SOD activity can efficiently remove O₂•⁻, which leads to the production of H₂O₂. H₂O₂ can be scavenged by CAT and GR (glutathione reductase) in the Halliwell-Asada pathway; otherwise, H₂O₂ accumulates and exacerbates membrane lipid peroxidation, causing membrane damage. POD removes the disproportionation products of SOD and acts synergistically with it, an essential condition of the salt-resistance mechanism. Protective enzymes increase to high levels to remove ROS and keep them low; consequently, the function and structure of the membranes are maintained undamaged. In the current paper, the activities of the three antioxidant enzymes significantly increased under the different ALA concentrations, although their trends differed slightly; apparently, the three enzymes had different strategies for facing stress. Exogenous ALA treatment improved the abilities of the three enzymes to resist peroxide damage to plant cells.
So far, the reason for the ALA-induced increase in antioxidant enzyme activity in plants remains unknown. It may be related to the conversion of ALA into heme, as suggested by a previous study showing that exogenous ALA treatment promotes heme efflux from intact developing cucumber chloroplasts and its incorporation into hemoproteins (Thoms and Weinstein 1990). The ¹⁴C label has also been found to be incorporated into the porphyrin prosthetic group of peroxidase in pigment cells upon treatment with ¹⁴C-ALA (Van Huyestee 1977). Evidently, heme is a prosthetic component of peroxidase, and ALA treatment promotes the synthesis of heme (Hunter et al. 2005); consequently, peroxidase activity and resistance to oxidative stress are increased.
Conclusion
The present study revealed that ALA at an appropriate concentration can significantly alleviate NaCl stress-induced damage to C. obtusifolia seeds and seedlings. The alleviation is achieved via improved antioxidant enzyme activities, increased Chl content and photosynthetic efficiency, a strengthened capacity for scavenging ROS, increased membrane stability, decreased cell osmotic potential, and decreased membrane lipid peroxidation.
"year": 2013,
"sha1": "fe80aebd708e5a01413e0c765c06de6f4b43a1d9",
"oa_license": "CCBY",
"oa_url": "https://as-botanicalstudies.springeropen.com/counter/pdf/10.1186/1999-3110-54-18",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "fe80aebd708e5a01413e0c765c06de6f4b43a1d9",
"s2fieldsofstudy": [
"Environmental Science",
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
The emerging potential of siRNA nanotherapeutics in treatment of arthritis
RNA interference (RNAi) using small interfering RNA (siRNA) has shown potential as a therapeutic option for the treatment of arthritis by silencing specific genes. However, siRNA delivery faces several challenges, including stability, targeting, off-target effects, endosomal escape, immune response activation, intravascular degradation and renal clearance. A variety of nanotherapeutics, such as lipidic nanoparticles, liposomes, polymeric nanoparticles and solid lipid nanoparticles, have been developed to improve siRNA cellular uptake, protect it from degradation and enhance its therapeutic efficacy. Researchers are also investigating chemical modifications and bioconjugation to reduce its immunogenicity. This review discusses the potential of siRNA nanotherapeutics as a therapeutic option for various immune-mediated diseases, including rheumatoid arthritis and osteoarthritis. siRNA nanotherapeutics have attracted a surge of interest, and the future looks promising for such interdisciplinary modalities that combine the principles of molecular biology, nanotechnology and formulation sciences.
Introduction
Small interfering ribonucleic acid (siRNA) nanotherapeutics are promising therapeutic modalities that address unmet medical needs not fulfilled by small molecules or antibodies in the search for cures for many deadly diseases.
The first siRNA therapeutic approved by the US FDA was Onpattro (patisiran, 2018), for the treatment of polyneuropathy in patients suffering from familial transthyretin-mediated amyloidosis [3]. This was followed by the approval of Givlaari (givosiran, 2019) to treat acute hepatic porphyria [4,5], Oxlumo (lumasiran, 2020) for the treatment of primary hyperoxaluria type 1 (PH1) [6], and Amvuttra (vutrisiran, 2022) for the treatment of the rare disease hereditary transthyretin-mediated (hATTR) amyloidosis [5]. Alnylam Pharmaceuticals developed and marketed all of the siRNA therapeutics listed above [7]. Leqvio (inclisiran, 2022) [8] was initially developed by Alnylam but trialed and marketed by a subsidiary of Novartis. It is important to mention that the mechanisms of the above drugs act mainly on messenger RNA (mRNA) targets in the liver; once siRNA delivery moves past this limitation, it can have much wider applications. The most difficult challenge in siRNA therapeutics is delivering the siRNA to its target site. Numerous siRNA delivery techniques exist, viz. lipidic nanocarriers (NCs), chemical modification, dendrimers, lipidoids, exosomes, etc. Some of the major concerns in siRNA delivery are administration of siRNA to the target site, prompt degradation by nucleases in the plasma, tissues and cytoplasm, immune reactions, off-target effects, and the inability of RNA to cross the cellular membrane even after reaching the target tissue. Recent studies have used various methods to address the problems associated with target-specific delivery of siRNA, and an interdisciplinary approach is needed to fully utilize its potential. Formulation sciences help in the development of drug delivery vehicles for new treatments and medications, and the utilization of nanotechnology principles in designing drug delivery NCs, viz. solid lipid nanoparticles (SLNs), dendrimers, lipid nanoparticles, polymeric nanoparticles, lipidoids, etc., has driven the development of siRNA nanotherapeutics. Formulation of siRNA-loaded NCs, conjugation of siRNA to a targeting ligand, development of ligand-receptor pairs using cationic lipids, combination of drugs and siRNA, and chemical modification of siRNA followed by loading into NCs are some of the strategies discussed herein [9]. Despite the significant progress made in utilizing nanomaterials for the treatment of arthritis, there are still gaps in our understanding of their therapeutic mechanisms, and certain limitations hinder their widespread application. Currently, most research focuses on small molecule- and monoclonal antibody-based therapies aimed at symptomatic treatment. A potential strategy for treating arthritis could involve gene silencing by siRNA targeting specific pathways involved in the pathogenesis of rheumatoid arthritis (RA) and osteoarthritis (OA). This approach holds promise and may represent a new direction for the treatment of RA and OA in the future. This comprehensive review highlights current efforts toward RNAi-mediated gene silencing using siRNA nanotherapeutics for the treatment of different types of arthritis. The technique's therapeutic potential and the major challenges faced by researchers in the delivery of siRNA are discussed, and recent research on siRNA nanotherapeutics for RA and OA treatment is reviewed and compiled for the readers.
siRNA mechanism of gene silencing
siRNA is a double-stranded (ds), non-coding RNA molecule comprising a passenger strand and a guide strand [10]. The length of siRNA is approximately 7.5 nm and its diameter is about 2 nm; some of its properties are listed in Table 1. siRNA is generated when long dsRNA is cleaved by the ribonucleoprotein Dicer (an RNase III endonuclease). The ds siRNA then associates with a complex of proteins called the RNA-induced silencing complex (RISC). Argonaute-2 (Ago2) helps separate the passenger strand within the RISC-loading complex (RLC), after which the guide strand base-pairs with the target mRNA, leading to its cleavage into small fragments and thus silencing gene expression. The gene silencing mechanism of siRNA is shown in Fig. 1. This ability of siRNA to silence genes has revolutionized the treatment of a variety of diseases [11].
Challenges in siRNA delivery
siRNA can be administered by two main routes: localized administration and systemic administration. Localized delivery refers to the administration of siRNA directly to the target organ or tissue, offering several advantages, such as enhanced bioavailability at the desired site and bypassing of systemic barriers. Systemic administration of siRNA faces more challenges than localized delivery: when siRNA is administered intravenously, unmodified naked siRNA molecules are susceptible to degradation by endogenous enzymes [12]. Additionally, their low molecular weight (around 13 kDa) leads to rapid elimination through the kidneys, and, owing to their negatively charged nature, siRNA molecules face significant barriers in crossing biological membranes. As with every new advancement, new challenges arise concerning effective delivery; siRNA delivery is limited by several major challenges, as depicted in Fig. 2.
Stability and targeting of siRNA
siRNA is quickly degraded when it enters the cytoplasm, tissues or plasma. In serum, the half-life of naked siRNA ranges from a few minutes to 1 h [13]. Hence, accumulating therapeutically significant quantities at target sites is a considerable challenge [14]. The size [15] and negative charge of siRNA molecules prevent them from crossing the cell membrane, so they cannot accumulate intracellularly. Moreover, endocytosis-based siRNA delivery systems must account for endosomal escape. To prevent its destruction by intracellular RNases after entering the cell cytoplasm, siRNA must be efficiently identified and integrated into RISC [16]. To overcome these challenges, the stability and efficacy of siRNA complexes have been improved through the incorporation of locked nucleic acid modifications in their design. Studies have shown that 2′-sugar modifications are the most effective way to improve the effectiveness of siRNA. These modifications, which include 2′-O-methyl, 2′-fluoro and 2′-methoxyethyl, help improve the stability and binding affinity of siRNA duplexes and also reduce activation of the innate immune response [17]. Such modifications have been successfully applied in the approved siRNA drug Onpattro.
Off-target effects
Apart from delivery issues, the RNAi paradigm of precise silencing has shown practical problems. siRNA therapy can result in the silencing of genes that are not the desired targets, known as off-target gene silencing [18]. Off-target gene silencing is undesirable, as it may cause harmful changes in gene expression and unanticipated cell transformations.
According to recent studies, off-target gene silencing is primarily caused by homology of 6 to 7 nucleotides in the "seed region" of the siRNA sequence. RISC's selection of the passenger strand instead of the guide strand can further increase the likelihood of a siRNA duplex matching unintended targets. Off-target silencing cannot be disregarded when creating siRNA-based treatments, and all prospective siRNA sequences must undergo extensive testing. Researchers have turned to surface-ligand modifications for more specific targeted delivery. These modifications aim to overcome the off-target effects associated with the enhanced permeation and retention (EPR) effect, which is not significant enough on its own and may have harmful consequences in the long run. Recent advancements in siRNA formulation have incorporated antibody conjugation, chemical modifications and receptor-targeted delivery to improve targeting specificity and reduce off-target effects. The use of antibody conjugation with siRNA (antibody-siRNA complexes) has shown potential in reducing off-target effects [19], and chemical modification of siRNA can also help inhibit them. Specifically, surface modification with peptide ligands, such as cyclic arginyl-glycyl-aspartic acid (cRGD), facilitates the targeting of nutrient-supplying blood vessels. cRGD exhibits a strong preference for binding to αvβ3 and αvβ5 integrins, which are overexpressed in endothelial cells of angiogenic origin [20]. cRGD binds to integrin receptors with higher affinity than linear RGD, making it a preferred option for surface modification. In a study utilizing functionalized chitosan nanoparticles targeting PLXDC1 (a receptor abundantly expressed in tumor vasculature), the cRGD-modified nanoparticles inhibited tumor growth by approximately 90%, exhibiting a 60% increase in efficacy over non-modified nanoparticles [21].
Finally, receptor-targeted delivery focuses on targeting particular receptors expressed in the tissue of interest. GalNAc (N-acetylgalactosamine) conjugation targets a specific receptor, ensuring delivery to the desired tissue and diminishing the risk of unintentional effects on non-targeted tissues. Givosiran, an FDA-approved medication, uses GalNAc conjugation to modify the 3′ terminus of the sense strand of siRNA. This conjugate binds to the asialoglycoprotein receptor (ASGPR) via its terminal galactose moieties, followed by endocytosis [22]. In vivo studies have shown that this conjugation increases RNAi activity by approximately 5-fold. For siRNA targeting transthyretin (TTR) mRNA, considerable reduction of TTR mRNA levels has been reported in liver cells, specifically hepatocytes. Furthermore, a GalNAc conjugate with a phosphorothioate (PS) linkage modification improves protection against 5′-exonuclease degradation and requires a lower siRNA dose. Further investigations have suggested that both monovalent and tri-antennary GalNAc conjugations improve binding affinity and gene silencing efficiency [23].
Immune response activation
The innate immune system provides a rapid, nonspecific response that recognizes and eliminates pathogens. siRNA (less than 30 nucleotides) was initially expected to avoid nonspecific stimulation of the interferon response by evading the immune system. However, further studies revealed that siRNA can elicit cytokine production both in vivo and in vitro [24]. This immune response can be triggered by the siRNA itself or by the vehicles used for in vivo siRNA delivery, such as cationic lipids [25]. Studies have shown that certain siRNAs can stimulate the production of pro-inflammatory cytokines and interferon in immune cells through toll-like receptor (TLR) and protein kinase R (PKR) pathways [26]. The immune response triggered by siRNA is often attributed to the presence of guanosine-cytosine (GC)-rich sequences [27], which can activate various immune signaling pathways, including nuclear factor kappa B (NF-κB), interferon regulatory factors, and TLR-dependent or TLR-independent pathways (involving TLR7, TLR8 and TLR9). These pathways recognize siRNA as dsRNA and initiate inflammatory and antiviral responses. To avoid immune stimulation and enhance the therapeutic potential of siRNA-based treatments, various strategies have been explored, as discussed below. Chemical modifications of the ribose backbone have been employed to reduce the immune response in some studies: modifying the 2′-OH group with 2′-H (2′-deoxy), 2′-F, or 2′-O-methyl (2′-O-Me) substitutions can alter the innate immune response and allow siRNA to evade immune detection while maintaining its intended activity [28]. Additionally, combining deoxyribonucleic acid (DNA) analogues, 2′-F-modified RNAs and locked nucleic acids has shown promise in enhancing silencing efficiency while reducing immunostimulatory characteristics. Nucleobase structure modifications have also been investigated to mitigate the pro-inflammatory effects of siRNA delivery: methylation of specific nitrogenous bases, such as 5-methylcytidine and 5-methyluridine, has demonstrated potential to reduce the immunological response induced by siRNA [29]. By avoiding specific sequences, particularly those rich in uridine and guanosine, the immunological response can be reduced [30]. In the TLR-independent pathway, cytoplasmic RNA sensors such as retinoic acid-inducible gene I (RIG-I), melanoma differentiation-associated gene 5 (MDA5) and PKR play crucial roles. RIG-I and MDA5 activate downstream signaling molecules that lead to the production of type I interferons and pro-inflammatory cytokines, whereas PKR is activated by binding to dsRNA regardless of sequence [31].
Therefore, inhibiting the TLR pathway or chemically modifying the siRNA duplex to prevent activation of the innate immune system can help mitigate unwanted immune responses.These approaches aim to enhance siRNA efficacy while minimizing immunostimulatory effects, thereby improving the therapeutic potential of siRNA-based treatments.
Intravascular degradation and renal clearance
The degradation of siRNA by plasma nucleases is the first biological obstacle after injection. Naked siRNA is unstable in the systemic circulation and highly sensitive to A-type nucleases, which are found both extracellularly and intracellularly [32]. Additionally, siRNA has a relatively short half-life of 10 min to 1 h due to rapid renal elimination [33]. The quick breakdown of siRNA by nucleases present in plasma or tissues, within minutes to hours, limits the potential applications of siRNA-based therapies [34]. Owing to its comparatively low molecular weight (approximately 13 kDa) and short length (about 7 nm), siRNA passes through the kidney easily [35]. Hence, modification of siRNA, conjugation with polymers, complexation with cationic polymers, and cationic comb-type copolymers (CCCs) are needed to reduce its degradation by nucleases, improve its in vivo characteristics and support appropriate therapeutic action. Modifications of the sugar backbone and oligoribonucleotide bases to enhance stability, prevent intravascular degradation and increase resistance to enzymatic degradation have been reported in the literature [36]. For example, base modification by replacing uridine with rF (2,4-difluorotoluyl ribonucleoside) substitutions has been found to improve resistance to degradation by serum nucleases [37]. As a sugar modification, substituting a CH₃ group or a fluorine atom for the ribose 2′-OH group in both the sense and antisense strands of siRNA has been shown to enhance resistance to endonucleases; this modification not only improves the stability of siRNA molecules but also enhances their therapeutic potency [38]. As for backbone modification, the use of PS linkages or morpholino oligomers has yielded a more potent siRNA duplex with a longer half-life [39].
When siRNA was conjugated with cholesterol, its half-life increased to 90 min and its plasma retention increased owing to plasma protein binding [40]. Cholesterol-siRNA conjugation also increased nuclease resistance and improved serum stability and circulation time [41]. However, naked siRNA does not demonstrate any silencing activity within the cell; the addition of cholesterol to siRNA not only improves its stability and circulation but also facilitates its intracellular delivery and gene silencing efficacy [42]. Conjugation with polyethylene glycol (PEG) polymer likewise increases the half-life of siRNA and improves its pharmacokinetic profile [43].
Another approach is the siRNA-cationic polymer complex, which has achieved a more than 100-fold increase in blood concentration compared with naked siRNA. In addition, cationic CCCs were found to improve siRNA stability against nucleases and plasma components: siRNA complexed with CCCs persists longer in the blood circulation than naked siRNA or siRNA complexed with polyethyleneimine (PEI) [44].
Entrapment through the reticuloendothelial system (RES)
The RES poses the biggest threat to nucleic acid therapies because of phagocytosis [45]: macrophages of the RES quickly remove opsonized siRNA-loaded nanoparticles [2]. Once in circulation, siRNA therapeutics must be shielded from the phagocytic cells of the mononuclear phagocyte system (MPS) [46]. siRNA nanoparticles in the bloodstream are therefore transported very quickly to RES organs, and the sluggish processing and removal of these carriers causes their prolonged retention in those organs. In cases where the target organ is rich in RES [12], RES filtration may actually benefit siRNA treatment. RES uptake and biodistribution can be influenced by a variety of factors, including carrier size, charge and surface characteristics. Theoretical considerations suggest that negatively charged carriers are more prone to elimination from the bloodstream than carriers with a positive or neutral charge [12].
NCs for siRNA delivery can be tailored with hydrophilic polymers such as PEG, polyethylene oxide and poloxamers to give them stealth properties, prolonging their presence in the bloodstream. This modification prevents protein adsorption and opsonization and ensures cargo stability. Stealth properties are most effective for carriers smaller than 200 nm. However, excessive PEGylation can compromise the carrier's ability to facilitate siRNA uptake into cells by neutralizing its positive surface charge [12]. Therefore, a specific PEGylation strategy is required to achieve both evasion of the RES and efficient cellular uptake of siRNA. Despite these advancements, further research is needed to optimize the carrier size and the molecular weight of PEG for designing an ideal siRNA delivery system; understanding these parameters will enhance the development of efficient and targeted siRNA therapies.
Impermeability of membrane
siRNA is unable to cross the cell membrane owing to its negative charge, size and highly hydrophilic nature; to overcome this obstacle, effective delivery of siRNA requires modification. The net negative charge of siRNA can be masked by complexing it with cationic polymers or lipids [47]; the negatively charged cell membrane and positively charged nanoparticles then interact electrostatically, promoting internalization [48]. Alternative methods for effectively delivering siRNA include combining it with aptamers, ligands or immunoglobulins that recognize specific target-cell antigens. These conjugates bind to the cells, where the siRNA is taken up through receptor-mediated endocytosis, resulting in the formation of endosomes and subsequent transport of siRNA toward the cytoplasm. Because of the hydrophilic and negatively charged nature of siRNA, numerous carriers are utilized, including cationic polymers and peptides. Owing to the variety of their physicochemical characteristics and activities, peptides have drawn particular interest and exhibit tremendous potential as siRNA carriers; cell-penetrating peptides (CPPs) [49], non-covalent multifunctional peptide complexes and endosome-disrupting peptides are a few kinds of peptides that can be employed depending on the purpose.
Endosomal escape
A significant barrier still exists after siRNA has been internalized, because it cannot readily escape endosomes. Therefore, for effective endosomal escape and siRNA gene silencing, a carrier or modification that permits endosomal membrane disruption is required. The transition from the extracellular to the endosomal milieu has been exploited by many modern delivery systems. Owing to their large buffering capacity, protonatable cationic polymers have been demonstrated to enhance the delivery efficiency of siRNA [50]. To evade the endosomal trap, siRNA carrier formulations can also contain versatile endosomolytic agents, viz. polymers, proteins, peptides, and small compounds such as chloroquine [51]. These carriers elevate the concentrations of counter-ions, leading to osmotic swelling, endosomal membrane breach and subsequent release of siRNA into the cytosol [52].
Different approaches have been utilized to promote the efficient release of siRNA from endosomes, enhancing siRNA delivery systems. PEI acts as a "proton sponge," facilitating siRNA release by inducing chloride ion influx and endosomal disruption; however, its cytotoxicity increases with its molecular size [53]. Poly(lactic-co-glycolic acid) (PLGA), a biodegradable copolymer, improves siRNA activity, provides sustained release and enhances cellular uptake, and combining PEI and PLGA in layer-by-layer complexes enhances endosomal escape [54]. Protonation of amino acids such as arginine and lysine destabilizes cellular membranes, aiding siRNA release into the cytoplasm [55]. A PEG-based cleavable polymeric system, comprising lipid nanoparticles stabilized by PEG-conjugated vinyl ether lipids, allows endosomal fusion and siRNA release in the acidic environment. Incorporating fusogenic molecules, such as dioleoylphosphatidylethanolamine (DOPE) [56], and fusogenic protein-based carriers, such as hemagglutinin 2 (HA2) and glutamic acid-alanine-leucine-alanine (GALA) peptides, promotes endosomal escape by destabilizing the endosomal membrane. These approaches contribute to the development of promising platforms for siRNA delivery by improving endosomal release and enhancing siRNA therapeutic efficacy [57]. Some of the extracellular and intracellular barriers to effective siRNA delivery are depicted in Fig. 3 [58]. Hence, siRNA delivery presents several difficulties that need to be addressed when designing siRNA nanotherapeutics for the treatment of diseases.
Overcoming challenges of siRNA delivery using nanotherapeutics
The important attributes that are prerequisites for an effective siRNA delivery system are non-immunogenicity, biocompatibility and degradability. For siRNA to be properly delivered to target cells or tissues, the delivery system should also protect the active ds siRNA molecules from attack by serum nucleases. After systemic administration, specificity for the target tissue must be ensured to prevent rapid hepatic or renal clearance. Once siRNA is internalized into target cells through endocytosis, the delivery vehicle should facilitate its release from endosomes into the cytoplasm, allowing siRNA to interact with the endogenous RISC [59]. Hence, there is a need to design effective non-viral NCs or delivery vehicles for siRNA to achieve cellular and tissue bioavailability and stability. siRNA nanotherapeutics have been successfully developed by researchers over the last two decades, and some of the approaches to designing effective NCs for siRNA delivery are described below.
Lipid-based siRNA nanotherapeutics
Lipid-based nanoparticles (LNPs) possess significant potential for siRNA delivery owing to their biocompatibility and limited toxicity compared with counterparts such as inorganic and synthetic nanoparticles. Because of their favorable pharmacokinetic profiles, high transfection efficiency in mammalian cells and electrostatic interaction with nucleic acids, cationic lipids have become appealing raw materials for siRNA delivery vehicles. The structure of lipid-based siRNA nanotherapeutics is depicted in Fig. 4.
Stable nucleic acid lipid particles (SNALPs)
SNALPs comprise the ionizable lipid 1,2-dilinoleyloxy-N,N-dimethyl-3-aminopropane (DLin-DMA), an ether analog of an ionizable lipid bearing oleic acid chains. They are typically prepared by mixing the lipids in ethanol with siRNA in citrate buffer, followed by dialysis to remove the ethanol. This technique has shown optimal particle size, high siRNA encapsulation efficiency and suitability for mass manufacture. Structural studies of LNPs reveal reverse micelles within a hydrophobic core made of DLin-KC2-DMA. Cryogenic transmission electron microscopy (cryo-TEM) and small-angle X-ray scattering have shown that at pH 4.0 the siRNA/DLin-KC2-DMA complex is sandwiched within the lipid bilayer [62,63]. However, as the pH approaches neutral, the ionizable lipids form an amorphous oil phase in the middle of the LNP [64]. Cationic lipids of the 1,2-dioleoyl-3-trimethylammoniumpropane (DOTAP) type are frequently employed in laboratories and can be purchased commercially. A liposomal siRNA/DOTAP/cholesterol platform was developed by Kim et al. in 2007 for delivery of siRNA targeting hepatitis B virus (HBV) to the liver [65]. Yagi et al. demonstrated the use of cationic DOTAP, egg phosphatidylcholine and PEG lipid in a 24:14.8 wt ratio to form a siRNA delivery complex [66]. SNALPs have been produced by various researchers across the globe for in vivo delivery and testing in several disease models. In a SNALP, a lipid bilayer composed of cationic and fusogenic lipids encapsulates the siRNA at its center, and PEG is applied to the surface to boost hydrophilicity and stability in serum. Compared with unformulated siRNA, the siRNA-SNALP complex has a substantially longer half-life. In a mouse replication model of HBV, an HBV-targeted siRNA-SNALP (dose: 3 mg/kg/d) caused a selective reduction in HBV mRNA [67]. Additionally, a single systemic dose (2.5 mg/kg) of an apolipoprotein B (ApoB)-specific siRNA loaded in SNALPs demonstrated more than 90% silencing of ApoB mRNA in the liver of cynomolgus monkeys [61].
Multifunctional envelope-type nano device system for siRNA delivery
The concept of a multifunctional envelope-type nano-device (MEND) for nucleic acid delivery was developed to overcome different delivery barriers by incorporating functional moieties into one nanoparticle [68]. A MEND consists of a lipid bilayer modified with peptides, encapsulating siRNA in its inner phase, with diverse functions. One notable modification of the MEND is the incorporation of octaarginine (R8), enabling cellular uptake through macropinocytosis and efficient release of nucleic acids into the cytoplasm [69]. The integration of fusogenic or pH-responsive lipids into the lipid membranes helps improve the intracellular dynamics. Harashima's group developed the pH-responsive ionizable first-generation lipid YSK05, the second-generation lipid YSK13 and the third-generation lipid CL4H6. YSK05, which resembles DODAP in structure, contains a tertiary amine and unsaturated fatty acid chains and has a pKa of 6.4; the YSK05-MEND exhibits high membrane fusion and gene knockdown activity [70]. The YSK13-MEND demonstrated an over 4-fold decrease in the ED50 (0.015 mg/kg) for knockdown of blood-clotting factor VII (FVII) compared with the YSK05-MEND [71]. Through systematic studies on ionizable lipid structures, CL4H6 was developed with a pKa of 6.25. The head group structure of ionizable lipids was identified as a primary determinant of pKa, which is crucial for the distribution and endosomal escape of LNPs; interestingly, the hydrophobic tail structure was found to have no significant effect on the apparent pKa. Furthermore, intravenous injection of CL4H6-LNPs in mice resulted in an ED50 of 0.0025 mg/kg for FVII knockdown, demonstrating a substantial improvement in gene-silencing efficiency achieved through the systematic study of ionizable lipid structure [72].
Lipidoid nanoparticles
The delivery of siRNA has also been studied using lipidoid nanoparticles, which are composed of lipid-like molecules. The flexibility of the lipidoid structure allows for improved in vivo kinetics, effectiveness and safety [73]. Lipidoid libraries can be screened with minimal experimentation to determine how changes in partial structure affect their properties. Over the past decade, Anderson's group has conducted three screening studies on lipidoids for siRNA delivery, each evaluating the effectiveness of nanoparticles in achieving FVII knockdown via intravenous injection in mice. Love et al. developed C12-200 lipidoid nanoparticles with an ED50 of 0.01 mg/kg siRNA [74]. Whitehead and his team focused on enhancing biocompatibility and identified 304O13 nanoparticles, achieving gene knockdown with an ED50 of 0.01 mg/kg without severe cytokine induction or inflammation, even at high siRNA doses (1 mg/kg) [75]. Dong et al. screened a peptide-based lipidoid library and discovered cKK-E12 nanoparticles with an impressive ED50 of 0.002 mg/kg for FVII knockdown, surpassing DLin-MC3-DMA-LNPs [76].
In these screenings, the lipidoid nanoparticle composition included the lipidoid, DSPC, cholesterol and DMG-mPEG2000 at specific molar ratios, with particle sizes of less than 90 nm. Like LNPs, lipidoid nanoparticles become coated with ApoE in the bloodstream and are taken up by hepatocytes through hepatic low-density lipoprotein (LDL) receptors. Their small particle size likely facilitates accumulation in the liver by allowing passage through the fenestrae of hepatic vessels; however, the exact delivery mechanism is not fully understood. Recent studies have explored the use of lipidoid nanoparticles for siRNA delivery to sites of inflammation [77] and for oral administration in intestinal diseases [78], expanding their potential applications beyond liver targeting in siRNA therapeutics [79].
Solid lipid nanoparticles (SLNs)
SLNs, composed of solid lipids, have gained significant attention in drug delivery for therapeutic and cosmetic applications because of their high biocompatibility. With a solid lipid core surrounded by a lipid membrane, SLNs can encapsulate lipophilic drugs and facilitate sustained drug release [80]. SLNs have been extensively studied for siRNA delivery. One approach involves incorporating cationic lipids into SLNs, forming electrostatic complexes with siRNA; such siRNA-complexed SLNs have shown potential for treating cancer and liver diseases. Another method is hydrophobic ion-pairing (HIP) technology, which incorporates siRNA into the hydrophobic SLN core via ionic complexes with cationic lipids [81]. For example, siRNA/DOTAP complexes have been encapsulated in a triolein core, followed by the addition of phosphatidylcholine and PEGylated lipids. Studies have shown that siRNA can be released from SLNs for up to 10 d in mice, with effective gene-silencing capability in vitro. SLNs thus offer promising prospects for siRNA delivery, providing sustained release and efficient gene silencing in various applications [82].
Exosomes
Exosomes, natural endogenous vesicles capable of carrying nucleic acids and proteins, have emerged as safe, efficient and promising siRNA delivery carriers [83,84]. Exosomes show tissue-specific accumulation and surface molecule variations depending on the cell type that produces them. To reduce safety concerns and immunogenicity, it is preferable to use exosomes produced from the recipient's own cells when delivering siRNA. Beyond their native cargo, some studies have successfully encapsulated exogenous siRNA within exosomes obtained from human serum via electroporation. These exosomes have effectively delivered siRNA into human monocytes and lymphocytes, demonstrating the use of recipient-derived exosomes as siRNA carriers [85].
Alvarez-Erviti et al. reported the use of modified dendritic cells (DCs) to produce exosomes displaying the rabies viral glycoprotein (RVG) peptide, which enables neuronal cell targeting. siRNA encapsulated in these RVG peptide-expressing exosomes successfully silenced the BACE1 gene in the brain, demonstrating their potential to cross the blood-brain barrier (BBB) and deliver therapeutic cargo to neuronal cells [86]. The exosome database ExoCarta (www.exocarta.org) provides details on the proteins, mRNAs, miRNAs and lipids that make up exosomes [87]. Such investigations should lead to the development of artificial exosomes or exosome mimics that can carry siRNA to targeted tissues.
Cationic lipoplexes & liposomes
Cationic lipoplexes and liposomes have been extensively studied as transfection agents among all lipid-based systems due to their charge-based interactions with the cell membrane [88]. In the context of siRNA delivery, various cationic transfection agents, such as Lipofectamine 2000, DOTAP, Lipofectamine RNAiMAX, HiPerFect and cardiolipin analogues, have been investigated [89]. However, upon administration to mice, these carriers have demonstrated dose-dependent toxicity along with an increase in inflammation. One study showed that Lipofectamine, DOTAP and nonionic/anionic liposomes exhibited harmful effects in decreasing order [90].
Numerous formulation characteristics of liposomes enhance the effectiveness of siRNA delivery, and there are several instances in the literature demonstrating the use of both neutral and cationic lipids in creating siRNA delivery systems. The structure of the cationic head group, the tail length of the lipid and the type of linker all influence the transfection ability and toxicity of siRNA encapsulated by cationic liposomes. The development of such systems facilitates the endocytic uptake of siRNA, enabling its intracellular release at the targeted site. It has been reported that the nitrogen/phosphorus molar ratio and the concentration of nucleic acid are critical attributes for fabricating safe lipoplexes. Further, DOTAP or DC-cholesterol associated with DOPE and complexed with PEG has been developed as a safe and effective lipoplex [91].
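Since the nitrogen/phosphorus (N/P) molar ratio is flagged above as a critical attribute for safe lipoplexes, a minimal calculation is sketched below. The input amounts, the single cationic amine per DOTAP molecule, the 21-bp duplex length and the approximate molecular weights are illustrative assumptions on our part, not values from the cited study.

```python
# Minimal N/P ratio estimate for a DOTAP/siRNA lipoplex.
# All numbers below are illustrative assumptions, not from the cited study.

dotap_mw = 698.5            # g/mol; DOTAP carries one cationic amine
amines_per_lipid = 1

sirna_mw = 13_300.0         # g/mol; typical 21-bp siRNA duplex (approximate)
phosphates_per_duplex = 40  # ~20 internucleotide phosphates per strand

lipid_mass_ug = 10.0        # hypothetical formulation input
sirna_mass_ug = 1.0

n_moles = (lipid_mass_ug / dotap_mw) * amines_per_lipid       # cationic N
p_moles = (sirna_mass_ug / sirna_mw) * phosphates_per_duplex  # phosphate P

print(f"N/P ratio ~ {n_moles / p_moles:.1f}")
```

With these inputs the ratio comes out near 4.8; in practice the target N/P is tuned per formulation, since excess cationic charge improves complexation but worsens toxicity.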
Bioconjugate siRNA
Another strategy to improve the effectiveness of siRNA in vivo is to use biomolecules that can be chemically modified or incorporated into nanoparticles for covalent coupling with siRNA. Various biomolecules, such as dendrimers, peptides, lipophilic molecules, ligands, cholesterol, aptamers, antibodies and biopolymers, can be conjugated to siRNA to improve its delivery. Cell-targeting, cell-penetrating or biofunctional peptides can be covalently linked to siRNA [92]. Covalently attaching drugs to the peripheral groups of PAMAM (polyamidoamine) dendrimers has been utilized to improve the effectiveness and solubility of therapeutics, minimize non-specific toxicity, and achieve controlled and prolonged drug release [93]. Controlled modification of polyamidoamine dendrimer surfaces with targeting ligands reduces cytotoxicity, improves transfection efficiency and enhances targeting capability [94]. In a study by Choi et al., a self-crosslinked fusogenic KALA peptide and a cell-penetrating peptide, Hph1, were functionalized with siRNA conjugated to branched PEG to suppress gene expression in MDA-MB-435 cell lines using a polyelectrolyte complex micelle [95]. Intratracheally administered siRNA against p38 mitogen-activated protein kinase (MAPK), coupled to the HIV TAT cell-penetrating peptide, has been used to inhibit innate immune responses in the lung [96].
When administered systemically, cholesterol-conjugated siRNA enhances intracellular activity and facilitates cellular uptake through LDL receptor-mediated endocytosis, taking advantage of the natural uptake of cholesterol by hepatocytes and lipoproteins. Target-specific siRNA conjugated with cholesterol has shown improved serum stability and increased suppression of apoB mRNA levels in the liver [97].
Antibodies can be directly linked to siRNA for targeted delivery to specific cell types or tissues. Antibody-based therapeutics such as pertuzumab, cetuximab and trastuzumab have been routinely administered with excellent results. Trastuzumab emtansine (T-DM1), an antibody-drug conjugate combining trastuzumab with the anti-microtubule agent DM1, is a notable example of antibody conjugation, and antibodies attached to small-molecule chemotherapeutics have also shown therapeutic effects [98]. Some laboratory and in vivo studies have demonstrated superior siRNA targeting with KRAS2 (Kirsten rat sarcoma viral oncogene homolog, alternatively Kirsten murine sarcoma virus 2 homolog) siRNA and an anti-EGFR (epidermal growth factor receptor) antibody [98,99].
Aptamers, nucleic acid-based targeting agents, can also be coupled with siRNA. These synthetic single-stranded (ss) DNA/RNA ligands are designed to bind selectively and effectively to specific targets [100]. Aptamers can be employed to deliver siRNA/shRNA molecules that specifically target retinoic-acid-receptor-related orphan nuclear receptor gamma (RORγt). By exchanging the targeted gene, for example GATA3, T-bet or signal transducer and activator of transcription (STAT) 3, the CD4 aptamer offers a flexible approach for introducing gene-silencing molecules into CD4+ T cells, enabling the manipulation of various subsets of Th cells. To assess the potential advantages in autoimmune diseases, additional animal and clinical trials involving CD4-AshR-RORγt chimeras are needed [101]. Aptamers targeting transmembrane receptors have also gained attention as active targeting moieties for cargo such as siRNA. Hence, these strategies involving biomolecule conjugation with siRNA hold promise for improving siRNA delivery and enhancing its therapeutic efficacy.
Polymeric nanoparticles
To overcome the limitations of nucleic acid formulations, siRNA is often incorporated into nanoparticles, which offer improved serum stability, better distribution and controlled release. Natural biodegradable nanoparticles have shown significant therapeutic effects. Two types of polymers are commonly used in siRNA delivery research: natural polymers and synthetic polymers. Examples of natural polymers include albumin, chitosan, cyclodextrin, gelatin, atelocollagen, etc. [102,103]. Synthetic polymers extensively studied for siRNA delivery include PLGA, PEI and PEG [104]. Cyclodextrin polymer (CDP) was the first nanoparticle delivery technology used in clinical trials for siRNA, where it was utilized as the basic material. CDP is based on cyclodextrins, which are derived from the bacterial enzymatic breakdown of starch; it is a polycationic oligosaccharide-based carrier used by pharmaceutical companies to deliver small compounds [105]. Self-assembled CDP containing human transferrin (Tf) and PEG, referred to as CALAA-01, has shown improved cell-targeting capacity as an siRNA delivery formulation [106]; this system was subsequently tested in an early-phase clinical trial. Chitosan-based systems are another type of polymeric nanoparticle commonly used in nanomedicine. Chitosan is a polysaccharide derived from chitin, composed of β-(1-4)-linked D-glucosamine (deacetylated unit) and N-acetyl-D-glucosamine (acetylated unit). Due to their positive charge, chitosan nanoparticles can easily interact with negatively charged siRNA through electrostatic interactions, facilitating the formulation process [107]. When atelocollagen/siRNA complexes were injected intratumorally, tumor development in an orthotopic xenograft cancer model was inhibited [108]. Natural polymers are advantageous for siRNA administration because they are biodegradable, cause little immunological stimulation, and are simple to condense with nucleic acids. Despite having higher transfection efficiency, macromolecules containing cationic lipids may still cause cytotoxicity and an immunological response. The adaptability of polymers for loading siRNA, their biological compatibility, and the possibility of co-administering small-molecule treatments with siRNA stem from the ease with which a polymer's chemical bonds can be modified [109]. Several natural polymers, including gelatin, chitosan, sodium alginate, albumin and lectins, are employed for the delivery of therapeutic siRNA [110].
Poly(ε-caprolactone) (also called polycaprolactone), PLGA and polyglutamic acid are examples of synthetic polymers used for the transport of siRNA [111]. Polymeric NCs have been shown to be more adaptable to surface and chemical alterations, and the integration of small functionalities is one of the unique features of polymeric nanoparticles for the effective delivery of siRNA.
Other nanoparticle delivery systems for therapeutic siRNA
The two main categories of nanoparticles used for siRNA delivery are organic or soft nanoparticles and inorganic or hard nanoparticles. Hard or inorganic nanoparticles are often coated with polymers to enhance their solubility.
Organic or soft nanoparticles
Organic nanoparticles are made from natural organic materials or synthetic organic materials such as self-aggregating surfactants or polymers. Common examples of organic nanoparticles include dendrimers, polymer nanoparticles, nanoemulsions and liposomes. Liposomes, composed of organic lipid molecules in a bilayer structure, can carry various charges. Although neutral liposomes are commonly preferred, their limited entrapment efficiency is a drawback; to address this, the zwitterionic compound 1,2-dioleoyl-sn-glycero-3-phosphatidylcholine (DOPC) is often used in the preparation of liposomes [46]. The resemblance of liposomes to natural biological membranes is the unique advantage of this NC, making it the most widely used biocompatible NC. Liposome-like particles can carry both hydrophilic and lipophilic drugs and are ideal NCs for siRNA delivery.
Inorganic or hard nanoparticles
These nanoparticles fall into the category of non-biodegradable, bio-persistent, inorganic and insoluble nanoparticles. They include metals and their oxides and carbon-based compounds such as nanotubes, fullerenes and fibers [112]. Magnetic nanoparticles, specifically superparamagnetic iron oxide nanoparticles (SPION), are a type of inorganic nanoparticle [113], and gold nanoparticles are adaptable enough to be employed as both medicinal agents and delivery systems. Magnetic nanomaterials relevant for nucleic acid delivery are either positively charged particles alone or formulations including positively charged lipids or polymers containing both positively and negatively charged particles. Quantum dots are a relatively new type of nanocrystal made of colloidal semiconductors, known for their exceptional electrical and optical properties, making them efficient carriers for siRNA. Nanodiamonds (NDs) are among the most advanced delivery technologies being investigated for siRNA therapies. In one study, a fluorescent cationic-coated ND vector was synthesized for siRNA delivery into Ewing sarcoma cells in culture; the coatings used polyallylamine hydrochloride (PAH) and PEI. This vector showed greater efficiency in inhibiting gene expression and lower toxicity. A higher adsorption affinity for siRNA was observed with PAH-coated NDs, whereas ND-PEI carriers exhibited lower toxicity; the siRNA:ND-PAH complex also dissociated more slowly than the siRNA:ND-PEI complex, resulting in lower siRNA-associated biological activity [114]. Carbon nanotubes, particularly those containing nanoneedles, are extensively researched due to their potential to induce cell death [46]. Fig. 5 depicts the structure of organic and inorganic nanoparticles.
Chemical modification of siRNA
Modifications of siRNA geometry have been attempted to increase its stability for sustained circulation in vivo. Several modifications have been applied to different locations within the siRNA duplex to enhance nuclease tolerance. A popular technique is substituting the phosphodiester (PO4) group with a phosphorothioate linkage. Other methods include using 2′,5′-phosphodiester links, a 2′-O-alkyl alteration combined with 4′-thiolation, and the locked nucleic acid substitution, in which the 2′- and 4′-positions are connected by a methylene bridge [116]. Small compounds like 2,4-dinitrophenol (DNP) have also been used to modify siRNA, increasing its nuclease resistance and membrane permeability.
Chemically modified siRNA has demonstrated in vivo absorption and targeted downregulation of endogenous proteins [42]. However, the breakdown of synthetic compounds used in the modifications can lead to the production of toxic metabolites. Concerns such as off-target effects, decreased RNAi activity and a decreased therapeutic index may arise from chemical modifications [117]. Additionally, the breakdown of a modified siRNA into non-naturally occurring chemicals in the body raises safety concerns regarding the potential toxicity of these metabolites.
Potential of siRNA delivery in the treatment of arthritis
The word "arthritis" comes from the Greek, meaning "disease of the joints". Arthritis is generally characterized by joint inflammation (acute or persistent) with severe pain, structural damage to bone and degradation of cartilage. The most noticeable signs include stiffness, pain, reduced range of motion and joint abnormalities. More than 100 different forms of arthritis have been identified, such as OA, RA, ankylosing spondylitis and psoriatic arthritis. The classification of arthritis is depicted in Fig. 6. This review article mainly focuses on siRNA therapeutics in the research and development pipeline for OA and RA.
Rheumatoid arthritis
RA is a chronic systemic inflammatory disease of autoimmune origin characterized by inflammation of the synovial tissue, leading to stiffness, swelling and pain in the joints and ultimately to cartilage damage.
There are multiple factors, viz. smoking, genetics, environmental factors, etc., that increase an individual's predisposition towards RA [118]. According to the Arthritis Foundation (AF), the number of adults diagnosed with arthritis is close to 60 million, whereas approximately 300,000 children are affected by juvenile arthritis. Women are more prone to RA than men and account for 75% of the total cases. Although RA may afflict persons of any age, it often appears in those between the ages of 30 and 50. Chondrocytes, synovial fibroblasts and synovial macrophages all generate matrix metallopeptidase (MMP) and a disintegrin and metalloproteinase with thrombospondin motifs 5 (ADAMTS-5), which damage cartilage [119]. Several signaling pathways targeted by kinase inhibitors, such as MAPK, phosphoinositide 3-kinase/protein kinase B (PI3K/AKT), NF-κB and Janus kinase/signal transducer and activator of transcription (JAK/STAT), among others, have been shown to be effective therapeutic targets in RA. Fig. 7 depicts the pathogenesis of RA in detail.
Symptoms of RA
Multiple joints can be affected by RA. Patients with RA may also notice that the same joints on both sides of their body are affected; for instance, both knees and wrists can be affected. RA typically manifests itself in small joints such as those of the fingers or wrist. Additionally, it can also affect the lungs, eyes and skin. The major symptoms of RA include joint pain and stiffness, redness of joints, loss of motion, weight loss, weakness and fever.
Treatment of RA
At present, there is no permanent cure available for RA. Therapeutic approaches used in treatment include non-steroidal anti-inflammatory drugs (NSAIDs), corticosteroids, disease-modifying anti-rheumatic drugs (DMARDs), biological disease-modifying anti-rheumatic drugs (bDMARDs), IL-6 inhibitors, TNF-α inhibitors, etc. These treatments can partially relieve the symptoms associated with RA; however, they also have side effects and may not prolong survival [119]. All of the above treatments provide some disease modification and symptomatic relief from pain and inflammation but do not offer any permanent solution to the problem of RA.
The gene-silencing potential of siRNA has shown promising results in reversing the disease progression of RA, provided effective delivery of siRNA is achieved. siRNA has several unique characteristics, such as the rapid design of sequences for gene knockdown and a synthesis that does not require complex cellular expression systems. These characteristics have increased the interest of researchers and the pharmaceutical industry in developing siRNA therapeutics for RA treatment. The combination of drugs and siRNA in therapy holds promise for the treatment of RA. There are numerous ways to overcome the challenges faced during delivery of siRNA to cells, which have already been discussed above in Section 4.
Limitations of nanoparticles or nanomaterials in the context of RA
Despite the significant progress made in utilizing nanomaterials/nanoparticles for the treatment of RA, there are still gaps in our understanding of their uptake mechanisms, and certain limitations hinder their widespread application. Further exploration and research are necessary to investigate the biocompatibility of NCs and their metabolic pathways within the body. Additionally, achieving precise control over drug release rates and retention times should be a key focus for future advancements [120]. The major limitations include poor water solubility, hydrophobicity and limited bioaccumulation; however, these limitations can be overcome by selecting suitable polymers.
As nanotechnology continues to advance, we anticipate that advanced nanomaterials will play a pivotal role in the treatment of RA. These materials offer unique properties and capabilities that can be harnessed to overcome current challenges. By optimizing the design of NCs, researchers can improve their compatibility with biological systems, enhance drug delivery efficiency and minimize adverse effects. The precise control of drug release rates and retention times is a critical aspect that requires further investigation. By tailoring the design and composition of NCs, it may be possible to achieve controlled and sustained drug release, leading to prolonged therapeutic effects and reduced dosing frequencies. This approach can enhance patient compliance and minimize side effects. Furthermore, a deeper understanding of the interactions between nanomaterials and the immune system is crucial. This knowledge will aid in the development of NCs that can evade immune surveillance and effectively target specific sites of inflammation. Additionally, elucidating the metabolic pathways of nanomaterials within the body will contribute to their safe and efficient utilization. Currently, most research focuses on single-targeted therapies. However, a potential strategy for treating arthritis could involve targeting multiple pathways simultaneously. By using multiple drugs that synergistically block various pathways involved in the pathogenesis of RA, we may be able to slow disease progression and enhance therapeutic outcomes. This approach holds promise and may represent a new direction for the treatment of RA in the future.
In the end, while there are still uncertainties and limitations surrounding the use of nanomaterials for RA treatment, the future holds great promise. Through continued research and development, we can advance our understanding of nanomaterial biocompatibility, metabolic pathways and therapeutic mechanisms. By targeting multiple pathways simultaneously and achieving precise control over drug release, nanomaterials have the potential to deliver molecules like siRNA or mRNA for specific targeting of signaling pathways involved in the pathogenesis of RA.
Some recent work on siRNA nanotherapeutics related to RA
Interdisciplinary modalities such as siRNA nanotherapeutics are emerging as a promising area of research for the treatment of RA worldwide. Table 2 provides a brief overview of recent studies on siRNA nanotherapeutics in the context of RA, arranged chronologically.
Osteoarthritis
OA is a multifaceted, age-related and distinct form of arthritis linked to the breakdown of articular cartilage; it is a degenerative joint condition. Symptoms of this chronic condition include pain, localized tissue damage and cartilage damage. OA has become a concerning chronic disease due to the global increase in life expectancy and the growing elderly population [146]. It is generally recognized that reactive oxygen species (ROS) play a crucial role in cartilage degradation and chondrocyte death [147].
Pathogenesis of OA
In the early stages of articular cartilage degeneration, hypertrophic chondrocytes play a significant role. These cells release MMPs and ADAMTS, contributing to OA development.
The Runt-related transcription factor 2 (Runx2), regulated by hedgehog signaling, is a crucial transcription factor in chondrocyte hypertrophy; in contrast, parathyroid hormone (PTH) inhibits chondrocyte hypertrophy. Various factors, including HIF-2 (hypoxia-inducible factor-2), TLR4 signaling, Notch1, NF-κB and Hes1, encourage the release of different degrading enzymes, such as MMPs and ADAMTS. However, miR-140 helps preserve cartilage by inhibiting the production of ADAMTS-5. The cartilage-specific protein carminerin is involved in chondrocyte calcification. All these changes are involved in the progression and pathogenesis of OA-related degeneration of joints [119]. Fig. 8 depicts the pathogenesis of OA.
Symptoms of OA
OA joint pain is commonly aggravated by activity and relieved by rest. The key signs indicating an OA diagnosis include pain, reduced function, stiffness, joint instability and buckling or giving way. Patients may also report decreased mobility, deformity, edema and crepitus; OA is rare before the age of 40 and typically presents without systemic symptoms such as fever. In addition, persistent pain might cause psychological discomfort [148].
Treatment of OA
Generally, OA treatments primarily address pain and inflammation symptoms using steroidal drugs or NSAIDs. However, as our understanding of the underlying mechanisms of OA has improved, new treatment targets called disease-modifying OA drugs (DMOADs) have emerged.
They help in reducing the resulting structural damage to avoid permanent impairment. The goal of the DMOADs currently undergoing clinical trials is to re-establish the stability of matrix metabolism. DMOADs efficiently manage the degenerative changes in osteoarthritic cartilage by concentrating on matrix-degrading enzymes, inflammatory cytokines, the Wnt pathway and OA-related pain [146]. Numerous DMOADs are in phase II/III clinical trials and
many repurposed drugs are under investigation. In 2010, the FDA approved the antidepressant duloxetine hydrochloride (Cymbalta®) to alleviate OA discomfort, including lower back pain. This has been a significant breakthrough for those who cannot tolerate NSAIDs or other medications.
Some recent work on siRNA nanotherapeutics related to OA
siRNA nanotherapeutics are emerging as a promising area of research for the treatment of OA worldwide. Table 3 provides a brief overview of recent studies on siRNA nanotherapeutics in the context of OA, arranged chronologically.
Conclusions
siRNA nanotherapeutics have attracted an upsurge of interest for the treatment of arthritis and are very promising for halting disease progression. The difficulty of effectively delivering siRNA to specific diseased sites in vivo is well documented in the literature and has been discussed at length in this review. The inefficient transport of therapeutic drugs to local chondrocytes in avascular articular cartilage presents a significant obstacle to the successful development of arthritis therapy. The use of versatile NCs to improve the delivery of siRNA to the desired target is the coming area of research in the successful development of siRNA nanotherapeutics. In the case of arthritis, the gene-silencing potential of customized siRNA specific to a particular pathway involved in pathogenesis offers new hope for treatment. Knocking down or silencing various pathogenic pathways, such as STAT1, Notch1, NF-κB, TNF-α, JAK-STAT and MMPs, has been explored as a potential approach. In preclinical research, various nanostructures have been utilized as carriers for siRNA delivery. Self-assembling nanostructures enable synchronized release of siRNA and can achieve endosomal escape while also preventing the oxidation of siRNA in endosomes and the bloodstream. The combination of drugs and siRNA in therapy holds promise for enhancing treatment outcomes and minimizing side effects, opening new therapeutic horizons for the treatment of arthritis. Demand for cutting-edge therapeutic approaches is increasing; a revolutionary approach to treating disease is made possible by the RNAi phenomenon, and its success has led to FDA approval of five siRNA therapeutics in the last five years. Chemical modifications of siRNA and customized NCs have helped achieve long-term therapeutic effects at much lower doses of siRNA in various diseases. Although none of the approved agents is used for the treatment of arthritis, the technology has opened the doors for researchers to explore the pathways involved in immune-mediated diseases like arthritis. The short research and development time of siRNA compared with small molecules and monoclonal antibodies holds great promise for the future of this revolutionary technology. Further, these modalities need to be administered only quarterly or half-yearly, which will in turn improve patient compliance. With further advancements in the chemical modification of siRNA geometries and tailored NCs, siRNA nanotherapeutics will revolutionize the precise and personalized treatment of inflammatory diseases like RA and OA.
Fig. 5 - Structure of organic and inorganic nanoparticles for siRNA delivery.
An increased immunological response of T-cells is a defining feature of RA. Inflammatory cytokines, such as interleukin-1 (IL-1), IL-6, IL-17 and tumor necrosis factor-α (TNF-α), are produced by T-cells, synovial fibroblasts and macrophages. These cytokines can destroy bone by stimulating osteoclasts. Subsets of T-helper (Th) cells are Th1, Th2 and Th17 cells. Through the action of IL-1, IL-6, IL-21 and transforming growth factor (TGF), T cells can become Th17 cells. IL-17 produced by Th17 cells stimulates osteoclasts by causing the receptor activator of NF-κB ligand (RANKL) to be produced in synovial fibroblasts; this is achieved by influencing several immune cells and inflammatory mediators. | 2023-09-18T15:04:36.599Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "d387a06c5c300e772e5a8da5722cbdc57f04ce6f",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ajps.2023.100845",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cf6bd72b4cbfe0e962ad2e8550184f40775c11cd",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
241662979 | pes2o/s2orc | v3-fos-license | Analysis of Normal Ankle Radiographic Angles
The aims of our study were to quantify the normal ankle angles, measures and reference points systematically on ankle radiographs. A total of 900 angle measurements and 300 items of basic information were obtained from 100 healthy ankles. The radiographic measurements were performed on standard weight-bearing anteroposterior (AP) and lateral views of the ankle. Six angles were measured on the AP view and three on the lateral view. All angles were measured by two foot and ankle surgeons, independently of each other. To assess intraobserver reliability, the same angles were measured again by each observer after six weeks.
Backgrounds
The ankle supports the weight of the body, and when a fracture occurs, restoration of the anatomical alignment is the most significant aspect of treatment [1]. It is universally accepted that poor reduction of the ankle articular surface accelerates the development of post-traumatic ankle osteoarthritis [2,3]. Proper correction of ankle osteoarthritis requires extensive surgical experience and radiographic information [4]. Ankle osteoarthritis is evaluated objectively with radiographic assessment. Radiographs of the ankle provide information regarding the integrity of the ankle joint. Careful preoperative planning and intraoperative evaluation of radiographs are very important for successful correction of ankle osteoarthritis.
An early study [5] reported a total of eighty-four radiographs of the foot. Steel et al [6] studied the wide variation in radiographic measurements of the normal adult foot; their report demonstrated the bony relationships of the normal, painless adult foot. Charles et al [7] reported the reliability of fifty standard foot radiographic measurements and studied the interobserver inconsistency of those measurements. Thomas et al [8] reported radiographic angles of one hundred adult feet using a novel measuring technique.
Oscar Castro-Aragon et al [9] reported differences in foot radiographic measurements among three different ethnic groups. They demonstrated that there are morphological differences among different ethnic groups. However, their study included only measurements of the foot and did not address the ankle.
Alexej Barg et al [10] analyzed the influence of ankle position and radiographic projection angle on measurement of supramalleolar alignment on two views. They found differences in the measurements of the medial distal tibial angle (MDTA) between the hindfoot alignment view (HAV) and the AP view. They reported only the MDTA; other angles of the ankle were not investigated in their study. Also, the study included only seven samples and was limited because all samples were male cadaveric specimens.
Bradley et al [4] reported the radiographic angles, measurements and reference points of twenty-four normal feet and ankles. They presented a comprehensive and useful set of standard angles and reference points. Although their study described thirty-three angles and reference points, only data from twenty-four healthy feet were included, and only five angles of the ankle were measured.
Changjun Guo et al [11] studied the reliability of measurements on fifty lateral ankle radiographs. They found that surgeons could identify an incongruous ankle joint on lateral radiographs when the displacement between the joint circle center of the talus and that of the distal tibial articular surface was 4 mm. However, they analyzed data from lateral radiographs only and performed just three measurements.
Michael et al [12] reported that eighteen observers could reach different results on standard foot and ankle radiographic measurements. Their research not only solidified the knowledge of the normal foot and ankle radiographic angles but also provided some insight into variability among observers. However, their study has its limitations: the primary author took part in the angle analysis team, and only four angles of the ankle were analyzed.
Several publications address foot radiographic measurements; however, a comprehensive analysis of standard ankle radiographic angles is lacking. The aims of our study were to quantify the normal ankle angles, measures and reference points systematically on ankle radiographs. Our study provides a set of standard angle measures. The data can be used as a reference for preoperative design and for intraoperative and postoperative evaluations.
Materials And Methods
A total of 9 angles and 3 characteristics were measured using 100 normal feet, collected from June 2018 to November 2019. The radiographic measurements were performed on standard weight-bearing anteroposterior (AP) and lateral views of the ankle. Six measurements were made on the AP view and three on the lateral view (Tables 1 and 2; Figs. 1, 2 and 3). All radiographic angles were measured by two foot and ankle surgeons, independently of each other. To assess intraobserver reliability, the same angles were measured again by each observer after six weeks. Data were recorded using Microsoft Excel (Microsoft Corp., Redmond, WA). All statistical analyses were performed using SPSS version 21.0 software (SPSS Inc., Chicago, IL). Descriptive statistics were calculated for all angles. The level of agreement between the two observers was determined by intraclass correlation coefficients (ICCs) for continuous data. The method reported by Altman [13] was used to grade reliability: a score of 0.81 to 1 means very good, 0.61 to 0.8 good, and 0.41 to 0.60 moderate.
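The Altman grading used above maps an ICC value onto a verbal reliability category. A small helper reflecting the quoted thresholds is sketched below; the example ICC values are hypothetical, not results from this study.

```python
def altman_grade(icc: float) -> str:
    """Map an intraclass correlation coefficient to Altman's verbal grade,
    using the thresholds quoted in the Methods."""
    if icc >= 0.81:
        return "very good"
    if icc >= 0.61:
        return "good"
    if icc >= 0.41:
        return "moderate"
    return "below moderate"  # lower Altman grades are not used in this study

# Hypothetical observer-agreement values, for illustration only.
for icc in (0.92, 0.74, 0.55):
    print(f"ICC = {icc:.2f} -> {altman_grade(icc)}")
```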
Measurement Technique
A detailed explanation of all measured angles is given in Tables 1 and 2, and the measurement of each angle is shown in Figs. 2 and 3. Measurement of the ankle's angles is not simple because of the greater number of axial lines (Fig. 1). On the AP radiograph, TAS is the tibial anterior surface angle between the tibial axis and the distal tibial articular surface; TT is the talar tilt angle between the distal tibial articular surface and the talar dome articular surface; TC is the tibial-mortise angle between the tibial axis and the distal tibiofibular line, which is drawn between the medial malleolus and lateral malleolus margins. TMM is the tibial axis-medial malleolus angle between the tibial axis and the medial malleolus articular surface. TLM is the tibial axis-lateral malleolus angle between the tibial axis and the lateral malleolus articular surface. MA is the malleolar angle between the medial malleolus articular surface and the lateral malleolus articular surface. On the lateral radiograph, TLS is the lateral tibial surface angle between the tibial axis and the distal tibial articular surface, which is drawn between the anterior and posterior margins of the tibial plafond. The Kite angle is the angle between the talar axis, drawn between the midpoint of the articular surface of the talus and the midline of the talar body, and the calcaneal axis, drawn between the midpoint of the anterior calcaneal articular surface and the line from the posterior superior calcaneal tuberosity to the lowest point of the calcaneus. TTA is the angle between the talar axis and the line of the weight-bearing surface. It is important to ensure the quality of the radiograph and the accuracy of the reference points for the veracity of all measurements.
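Each angle defined above is the angle between two lines, each line fixed by two radiographic landmarks. A minimal geometric helper is sketched below; the landmark coordinates are hypothetical pixel positions, not measurements from this study, and the result depends on ordering the landmarks consistently (e.g., proximal point first).

```python
import math

def angle_between(p1, p2, q1, q2) -> float:
    """Angle in degrees (0-180) between line p1-p2 and line q1-q2,
    e.g. the tibial axis versus the distal tibial articular surface (TAS)."""
    v = (p2[0] - p1[0], p2[1] - p1[1])
    w = (q2[0] - q1[0], q2[1] - q1[1])
    cos_theta = (v[0] * w[0] + v[1] * w[1]) / (math.hypot(*v) * math.hypot(*w))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

# Hypothetical landmarks (image pixels) on an AP radiograph:
tibial_axis = ((250, 50), (255, 400))   # mid-diaphyseal line, proximal first
plafond = ((180, 405), (330, 410))      # distal tibial articular surface
print(f"TAS = {angle_between(*tibial_axis, *plafond):.1f} degrees")
```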
Results
A total of 900 angle measurements and 300 characteristics from 100 normal feet were obtained. The mean age of the subjects was 33.0 ± 9.1 (range 18 to 46) years; 57 were female (57%) and 43 male (43%), with 44 left feet (44%) and 56 right feet (56%). Interobserver and intraobserver reliability, as determined by the ICC, were very high for the angles of the ankle (Table 4). Intraobserver reliability was greater than 0.8 and interobserver reliability was greater than 0.8, indicating excellent consistency in the evaluation of ankle radiographic angles.
Discussion
Published data exist regarding many of the normal foot radiographic angles. Although these studies are commonly used and cited in orthopedics, they describe few normal ankle radiographic angles. Also, although some studies have described normal radiographic angles of the ankle, few published studies convey the normal radiographic angles of the ankle as thoroughly and comprehensively as the data in the present study [5][6][7][8][9][10][11][12].
The foundation of radiographic ankle angles is the relevant radiographic definitions. A bone's anatomic axis is the mid-diaphyseal line, and a joint orientation angle is the angle formed between the joint line and the bone's anatomic axis or the opposite joint line [4,14].
The present study does have limitations that should be mentioned. First, our data were measured by only two panelists; ideally, measurements should be performed by more team members. A second limitation is that this study was limited to a set of AP and lateral radiographs of the ankle, because we wanted to minimize the potential risk to the volunteers. Nevertheless, this study collected a large dataset from many participants. We believe that these angles and reference points, not previously reported, can be valuable in clinical practice.
The TAS angle is very important for the preoperative planning of ankle osteoarthritis. TAS is the angle formed between the anatomic axis of the tibia and the joint orientation line of the distal tibia. In varus ankle osteoarthritis (OA), the varus of the tibial plafond increases as the stage of OA deteriorates [15]. The TAS angle can indicate the degree of tibial varus. Nevertheless, we found that the TAS angle differs from that in a previous study. The TAS angle was measured as 92.4 ± 3.1 degrees (range, 88.0-100.0 degrees) by Hintermann et al [16], but as 88.3 ± 3.3 degrees (range, 87.6-88.9 degrees) in our study. Hintermann et al [16] collected a cohort of 93 asymptomatic control subjects, and our study measured 100 normal ankles; therefore, sample size alone is unlikely to explain the difference in the TAS angle. As far as we know, our study is the first measurement of the TAS angle in a Chinese population. There may be differences in the angles of the ankle among different ethnic groups.
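Whether this TAS difference between cohorts (92.4 ± 3.1 degrees, n = 93, versus 88.3 ± 3.3 degrees, n = 100) could be a chance finding can be checked from the summary statistics alone. The Welch's t-test sketch below is our own illustration, not an analysis performed in either cited paper.

```python
import math

# Summary statistics quoted in the text.
m1, sd1, n1 = 92.4, 3.1, 93    # Hintermann et al. cohort
m2, sd2, n2 = 88.3, 3.3, 100   # present study

se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)  # SE of the difference in means
t = (m1 - m2) / se

# Welch-Satterthwaite degrees of freedom.
df = (sd1**2 / n1 + sd2**2 / n2) ** 2 / (
    (sd1**2 / n1) ** 2 / (n1 - 1) + (sd2**2 / n2) ** 2 / (n2 - 1)
)
print(f"difference = {m1 - m2:.1f} degrees, t = {t:.1f}, df ~ {df:.0f}")
```

With t around 8.9 on roughly 191 degrees of freedom, the gap is far larger than sampling noise would explain, consistent with the authors' view that sample size alone does not account for it.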
The TLS angle is formed between the anatomic axis of the tibia and the joint orientation line of the distal tibia, which is drawn between the anterior and posterior margins. The TLS angle illustrates the position of the tibial plafond on the lateral radiograph. Changjun Guo et al [11] reported measurements of the TLS angle.
In their study, three observers measured the TLS angle, obtaining 81.9 ± 1.92 degrees, 78.3 ± 3.64 degrees and 79.1 ± 3.67 degrees, respectively. In our study, the TLS angle was measured as 79.7 ± 4.07 degrees. These TLS values are very close, suggesting that ankle angles may be similar within one ethnic group; however, confirming this would require a larger sample.
The present study also examined the other angles to establish their values and reference points. Overall, this study provides a systematic, comprehensive measurement system for the normal ankle. These data will be valuable for surgeons and clinicians in the assessment of the ankle.
In conclusion, the data collected in this study present a comprehensive and valuable set of standard radiographic angles and reference points. These data will be useful in preoperative planning, intraoperative adjustment and postoperative evaluation of the ankle. A total of 9 angles of the ankle were measured in our study, which illustrate the ranges of normal ankle position in the Chinese population.
Declarations
Ethics approval and consent to participate
Consent for publication
Availability of data and materials
The data of this study are real. The statistical results of these data are presented in this paper. All of the data will be available upon reasonable request to the corresponding author. | 2020-06-25T09:08:54.515Z | 2020-06-23T00:00:00.000 | {
"year": 2020,
"sha1": "24fd4941ab6cab2d663baee87511d75e1d1f5a11",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-34939/v1.pdf?c=1631858446000",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "6f34598e250e4892d832fd0caea67e608481d9b7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
16970918 | pes2o/s2orc | v3-fos-license | Substandard urological care of elderly patients with spinal cord injury: an unrecognized epidemic?
Background We report the anecdotal observation of substandard urological care of elderly paraplegic patients in the community suffering from long-term sequelae of spinal cord injuries. This article is designed to increase awareness of a problem that is likely underreported and may represent the 'tip of the iceberg' of substandard care provided to the vulnerable population of elderly patients with chronic neurological impairment. Findings A Registered Nurse changed the urethral catheter of an 80-year-old male with paraplegia; the patient developed profuse urethral bleeding and septicaemia. Ultrasound revealed the balloon of the Foley catheter located in the membranous urethra. Flexible cystoscopy was performed and a catheter was inserted over a guide wire. Urethral bleeding recurred 12 days later. This patient was discharged after a protracted stay in the spinal unit. A nurse changed the urethral catheter of an 82-year-old male with paraplegia. The catheter did not drain urine and the patient developed pain in the lower abdomen. The balloon of the Foley catheter was visible behind the urethral meatus, indicating that the balloon had been inflated in the penile urethra. The catheter was removed and a 16 French Foley catheter was inserted per urethra; about 1300 ml of urine was drained. A 91-year-old lady with paraplegia underwent routine ultrasound examination of the urinary tract by a Consultant Radiologist, who reported a 4 cm × 3 cm soft tissue mass in the urinary bladder. Cystoscopy was performed without anaesthesia in the lithotomy position and revealed normal bladder mucosa, no stones and no tumour. Following cystoscopy, the right knee became swollen and there was deformity of the lower third of the right thigh. X-ray revealed a fracture of the lower third of the right femur, which was treated by immobilisation in a full plaster cast. Follow-up ultrasound examination of the urinary tract, performed by a senior Radiologist, revealed a normal outline of the urinary bladder with no tumour or calculus. Conclusion These adverse outcomes can be averted if elderly spinal cord injury patients are treated by senior, experienced health professionals who are familiar with the changes in body systems caused by old age and compounded further by spinal cord injury.
Background
Do elderly patients get substandard health care?
Bergman and associates assessed the quality of surgical care delivered to 143 consecutive patients aged 65 years or older undergoing elective major abdominal surgery at a single university-affiliated hospital in Canada [1]. Adherence to 15 process-based quality indicators was measured, and a pass rate was calculated for each individual quality indicator. The quality of care delivered to elderly patients undergoing major surgery was generally poor and independent of patient characteristics. Quality indicators with the lowest pass rates included postoperative delirium screening (0%), level-of-care documentation (0.7%), cognition and functional assessment at discharge (4.9%), oral intake documentation (12.6%), and pressure ulcer risk assessment (35.0%).
Elderly patients need to be treated by senior, experienced doctors
We wish to highlight the importance of the clinical experience of doctors and nurses in providing optimum medical treatment, especially to elderly spinal cord injury patients. Significant decreases in operative time, length of intubation, and hospital stay were seen as the surgeon's experience in transoral robotic surgery for head and neck tumors increased: over a four-year period, the mean operative time decreased by 47%, total mean intubation time decreased by 87%, and mean hospital stay decreased from 3.0 days to 1.4 days [2]. We report three elderly spinal cord injury patients who received suboptimal urological care in the community or in a district general hospital, although care was delivered by qualified staff. These cases illustrate the need for elderly spinal cord injury patients to be treated by senior, experienced health professionals, who will be able to recognise the alterations in the urinary tract or skeletal system due to chronic spinal cord injury, which are compounded by old age. Unfamiliarity predisposes to complications after routine procedures such as urethral catheterisation or positioning of the lower limbs in lithotomy for cystoscopy. Similarly, failure to comprehend the changes in the neuropathic bladder can lead to misdiagnosis during routine ultrasound examination, as indeed happened to one of these patients.
Duties of a doctor when substandard care is delivered
We discuss the duties of a doctor following adverse outcomes. In the United Kingdom, the "Good Medical Practice Guidance" publication developed by the General Medical Council outlines the professional requirements of doctors practising in the United Kingdom when adverse outcomes occur [3]. Doctors must be open and honest with patients if things go wrong. If a patient has suffered harm or distress, the doctor should put matters right, if that is possible; offer an apology; and explain fully and promptly what has happened and the likely short-term and long-term effects. The National Health Service in England, United Kingdom commits to ensure that when mistakes happen while receiving health care, patients receive an appropriate explanation and apology, delivered with sensitivity and recognition of the trauma the patient has experienced, and know that lessons will be learned to help avoid a similar incident occurring again [4]. Barraclough has specified principles of greater transparency and open disclosure after an adverse outcome. Firstly, when unintended harm occurs, it involves informing patients and carers about what has gone wrong in an empathic way that includes an expression of regret. Secondly, it involves in-depth analysis of the problem, including root cause analysis for severe problems. Thirdly, it involves a commitment by individual doctors and organisations to fix the problems identified [5].
Case reports
Patient # 1
An 80-year-old male had undergone thoracolumbar decompression for spinal stenosis and developed complete L-2 paraplegia. This patient was transferred to the spinal unit for rehabilitation. He had an indwelling urethral catheter. This patient developed a fever and a drop in oxygen saturation; the clinical diagnosis was pneumonia. Chest X-ray revealed a hyperinflated chest; the lungs were clear with no active lesions and the cardiothoracic ratio was maintained. Urine culture showed growth of Pseudomonas aeruginosa sensitive to meropenem, and this patient was prescribed meropenem intravenously. Subsequently, this patient developed severe paraphimosis and circumcision was performed. Histology revealed mild non-specific chronic inflammation with no evidence of neoplasia. While this patient stayed in a rehabilitation facility, the urethral catheter was changed by a Registered Nurse. Following catheterisation, he developed profuse urethral bleeding and a high temperature. Urgent ultrasound examination revealed no urinary catheter in the bladder (Figure 1). The balloon of the Foley catheter was seen in the membranous urethra, 7 cm from the tip of the penis (Figure 1). Flexible cystoscopy was performed; bleeding was seen to arise from the site where the balloon of the Foley catheter had been inflated. A 16 French Foley catheter was inserted over a 0.032" guide wire. This patient received meropenem one gram every eight hours intravenously. White cell count: 17.7; neutrophils: 16.6. Urea: 8.2 mmol/L. Creatinine: 87 umol/L. C-reactive protein: 82.0 mg/L. Random glucose: 15.5 mmol/L. Lactate: 7.6 mmol/L. Urine culture showed coliform species sensitive to amoxicillin and gentamicin. Blood culture yielded Proteus mirabilis sensitive to amoxicillin and gentamicin. This patient was prescribed amoxicillin 2 grams every eight hours intravenously and his condition improved. However, twelve days later, this patient again developed profuse urethral bleeding, which subsided following prolonged local compression over the perineum. Subsequently, this patient required exchange of the urethral catheter over a 0.032" guide wire. Therefore, this patient needed an ambulance to bring him to the spinal unit every four weeks for change of the urethral catheter.
Patient # 2
An 82-year-old male had undergone decompression at T-11/12 for spinal stenosis four years previously, in 2008, because of pain and weakness in the lower limbs. He was walking with two walking canes before the operation and did not have problems with bladder and bowel control. However, after surgery, this patient could not move or feel his legs at all. Urgent MRI revealed an extradural haematoma with compression of the spinal cord at the T-11 and T-12 levels. This patient underwent revision of the decompression of T-10 to T-12 and evacuation of the blood clot. The second operation did not produce recovery of motor power or sensation in the lower limbs, and this patient also lost control of his bowels and urinary bladder. He was managing his bladder by indwelling urethral catheter, which was changed by a District Nurse. In 2011, the urethral catheter became blocked and was changed during the night by a Registered Nurse. The catheter did not drain urine and this patient was getting pain in the lower abdomen. He attended the spinal unit in the morning. On clinical examination, an unusually long segment of the Foley catheter was lying outside the penis, and the balloon of the Foley catheter could be palpated in the distal penile urethra. On close inspection, the balloon of the Foley catheter was just visible behind the urethral meatus (Figure 2). The balloon was deflated and a 16 French Foley catheter was inserted per urethra. About 1300 ml of urine was drained. This patient developed profuse haematuria, which subsided over the next 48 hours. Subsequently, this patient preferred to come to the spinal unit for change of the urethral catheter, as he and his wife had lost trust in the community health care professionals for changing the urethral catheter. This patient required an ambulance to bring him to the spinal unit every fortnight for change of the urethral catheter.
Patient # 3
A 91-year-old lady had paraplegia, sustained as a result of a vascular accident that occurred 24 years previously. She had been managing her bladder by indwelling urethral catheter; suprapubic cystostomy was performed when she was eighty years old. This patient did not develop stones in the urinary bladder and there was no history of passing blood in urine. Ultrasound examination of the urinary tract, performed in 2011, revealed no hydronephrosis. In 2013, at the age of 91 years, this patient underwent routine ultrasound examination of the urinary tract. She did not pass blood in urine, she did not get urine infections, and there was no problem with the suprapubic catheter. The ultrasound scan was performed by a Consultant Radiologist, who reported a large 4 cm × 3 cm soft tissue mass in the urinary bladder (Figure 3). Appearances were highly suggestive of transitional cell carcinoma of the urinary bladder. Urine cytology showed benign epithelial cells, scattered columnar cells, mixed inflammatory cells and background bacteria; no malignant cells were seen. Cystoscopy was performed without anaesthesia in the lithotomy position. Positioning was challenging, but no untoward incident was noticed in the operation theatre. Cystoscopy revealed normal bladder mucosa, no stones and no tumour. This patient was discharged home three days after she underwent cystoscopy. When she returned to her house, her daughter noticed that the right knee was swollen and there was deformity of the lower third of the right thigh. This patient was admitted to a hospital for assessment of mild swelling of the whole of the right leg, particularly the knee. The diagnosis was "right leg swelling - osteoarthritis of knee". The swelling was non-tender and tense; no erythema was present. Doppler scan of the right leg revealed no evidence of deep vein thrombosis. X-ray of the right thigh or knee was not taken. This patient was advised that computed tomography of the abdomen and pelvis would be considered as an outpatient if the swelling did not settle or became worse. This patient stayed in bed at home, but the swelling did not subside and the deformity of the lower third of the thigh became more obvious. Therefore, this patient was brought to the spinal unit by her daughter. X-ray of the right thigh and knee revealed previous dynamic hip screw internal fixation of the right hip (Figure 4) and a comminuted fracture of the distal third of the right femur, which was treated by immobilisation in a full plaster cast.

Follow-up ultrasound examination of the urinary bladder and kidneys was performed by a senior Consultant Radiologist four weeks later. The ultrasound scan revealed a normal appearance of the kidneys with no hydronephrosis and no obvious renal calculi. The suprapubic catheter was in the right position. The outline of the urinary bladder was normal; there was no neoplastic filling defect or calculus (Figure 3). The previous ultrasound examination had been performed without filling the bladder with sterile 0.9% sodium chloride solution; most probably, the appearance of a tumour during the previous ultrasound examination was caused by a collapsed bladder folding around the balloon of the Foley catheter.
The patient's daughter had to rush home from her holidays abroad when the initial ultrasound examination reported the presence of a bladder tumour. Luckily, cystoscopy revealed no tumour, but this elderly patient sustained a fracture of the femur when her lower extremities were placed in the lithotomy position for cystoscopy. She required a plaster cast, and it was difficult for her to manage at home with the knee in a plaster cast in the extended position. Therefore, she stayed in the spinal unit until the fracture healed.
Discussion
Substandard care of elderly patients: unrecognised epidemic?
We found that some elderly patients receive substandard care in the hospital as well as in the community. Although a new study shows that optimal emergency care for acute coronary syndrome in the very elderly significantly increases their chances of survival, just as it does in younger patients, it also reveals that optimal care for acute coronary syndrome is often withheld from nonagenarians and centenarians [6,7]. A review of Veterans Administration (VA) pharmacy and patient care records to identify instances of inappropriate prescribing among 850,154 patients, who received care at 124 VA facilities during 1999 and 2000, revealed that, overall, 26.2 percent of elderly patients were given drugs identified as inappropriate or suboptimal for older patients [8].
In the United Kingdom, the Care Quality Commission of England has been reported to say that too often, staff appear to pay more attention to paperwork than to those they are looking after. Unacceptable care has become standard in some trusts, with doctors and nurses talking down to patients, ignoring their calls for assistance and failing to help them eat, drink or wash. After carrying out spot checks on geriatric wards in 100 hospitals, the commission found that 35 needed to make improvements, 18 were failing to meet legal standards and there were "major concerns" at two trusts. Its report is the latest to conclude that pensioners, who account for almost half of in-patients, are routinely denied the most basic care because of a culture of neglect among staff. In some places, elderly patients were left rattling their bed rails or hitting water jugs on tables to attract nurses' attention [9,10]. In the three cases reported here, a common link seems to be the failure of the health professionals who carried out the procedures to recognise the complications promptly. Similarly, failure to seek advice and expert opinion from senior colleagues appears to be another common problem.
How to prevent substandard care of elderly patients?
In patient #3, a collapsed bladder was misdiagnosed as a bladder tumour by a Consultant Radiologist. Misdiagnosis of a bladder tumour during ultrasound examination is very rare. DeCaro and associates reported the case of a woman treated with dextranomer-hyaluronic acid for childhood vesicoureteral reflux who was misdiagnosed with a bladder tumor on transvaginal ultrasonography [11]. In the past, we have misdiagnosed a bladder tumour as vesical debris in a paraplegic patient [12]. We have learned that frequent, informal and honest discussion of a patient's clinical condition is likely to reduce urological errors in spinal cord injury patients [13]. In this elderly patient, such an honest discussion with a senior colleague would have prevented the misdiagnosis of a bladder tumour and spared this elderly patient an unnecessary cystoscopy.
Most importantly, elderly patients should be treated by senior, experienced doctors and nurses
Quite often, elderly patients may be used as "guinea pigs" for learning by trainee doctors and student nurses, because many elderly patients are very polite and are not assertive. Whereas a trainee doctor or a student nurse is unlikely to practise their clinical skills on a rich, very assertive banker, an elderly patient who is docile and hard of hearing often "agrees" to be seen by a junior doctor or a student nurse. It is interesting to note that none of the three patients complained about the substandard medical care they received. These elderly patients graciously accepted the consequences of substandard care; such behaviour probably reflects their upbringing and the moral code that many elderly people practise.
Professional requirements of a doctor when elderly patients receive substandard care
When elderly patients receive substandard care, they should be told the truth; doctors should express regret and give information on how similar outcomes will be prevented in the future [14]. The Medical Protection Society also supports the principles of full and open communication with a patient after a serious adverse outcome, including an apology [15]. A wall of silence after an adverse incident can compound mistrust and provoke formal complaints and legal action. If it is clear that something has gone wrong, an apology is called for, and it should be forthcoming. Contrary to popular belief, apologies tend to prevent formal complaints rather than provoke them. An open and honest explanation, and an apology where appropriate, may be what is needed to reassure a patient and avoid any escalation. Doctors should resist the desire to prove they are "right", which can lead to an escalation of emotion and ultimately an argument. Doctors should express empathy and avoid arguments. "Winning an argument" is not the same as achieving a good outcome. To epitomise: "Patients don't care how much you (doctors) know until they know how much you (doctors) care" [16].
Take home message
Elderly spinal cord injury patients should be treated by senior, experienced doctors who are familiar with the changes in body systems due to old age, compounded further by spinal cord injury. In all the cases reported here, procedures were performed by junior, inexperienced nurses or doctors, and mistakes occurred to the detriment of patients. When an adverse outcome occurs, the General Medical Council in the United Kingdom states that it is a professional requirement of a doctor to put matters right (if that is possible), offer an apology, and explain fully and promptly what has happened. Patients and, especially, their relatives want information on how similar outcomes will be prevented in the future. The National Health Service of the United Kingdom commits to ensuring that, when mistakes happen, patients receive an appropriate explanation and apology.
"year": 2014,
"sha1": "1acb6ccf1530317b8b2b97f51dd2d1e8c31654d2",
"oa_license": "CCBY",
"oa_url": "https://pssjournal.biomedcentral.com/track/pdf/10.1186/1754-9493-8-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "08718380df491773790791bf372e90957ea0d459",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Processes of building trust in organizations: internal communication, management, and recruiting
Abstract The article explores ways to build trust in a very specific area of internal and external communication: recruitment. First, it explores the links that communication and vulnerability have with the sciences that study the human dimension of organizations. Second, it addresses the functional and personal dimension of processes in relation to the forces of order or dispersion that press on every organization at a personal or structural level (centripetal and centrifugal forces). Trust presents itself as the link that unites and harmonizes the creative or destructive power of those forces. It is an essential element that makes it possible to manage the vulnerability of organizations and their limitations on an organizational or human level. The hypothesis is that recruitment is an essential element of internal and external communication and has strategic importance for the future of the organization, because the mobility of the labor market, contractual conditions, and reduced turnover times require creating an environment of trust and transparency in a short time. This begins with the selection process and develops throughout the future professional development of workers within the company.
The human dimension of organizations has been studied by several disciplines: organizational behavior, understood as the impact that individuals, groups, or structures have on the overall behavior of the organization (Langton, Robbins, and Judge 2018, 2), and organizational communication, understood as an open system in which a balance of relationships is created between communication and power so that individuals can achieve collective goals (Wrench, Lynn Majocha, and Carter 2015, 65). The psychology of people in organizations has also explored the subject, focusing on the employee and the relationship between his expectations and those of the institution, all from a psychological point of view (Goldstein et al. 2017, 4). The sociology of organizations has studied the organization as a small society (Adler et al. 2014, 7), while the field of human resource studies focuses on people as resources, seeking the most effective way to develop their potential at the service of the company (Trost 2020, 15-23).
From a broader perspective, corporate communications (Belasen and Belasen 2019, 367-70) and public relations have developed an overall communicative vision that should consistently manage internal communication as a part of corporate communication, namely the part that addresses internal publics, the employees (Burton 2017, 55). External institutional communication is intrinsically linked to internal institutional communication, because a large part of each organization's communication is carried out by employees through the relationships they develop with other publics and through their behavior. This explains the link between companies' Corporate Social Responsibility and their communication with internal audiences (Bednárik 2019, 48-49).
This study starts from the premises of the research mentioned above, which argues that well-managed corporate communication generates trust with different stakeholders and that trust limits the present and future vulnerability of organizations. In the current context (digital as well as physical), the hypothesis of this study is that, if recruiting activities are the first contact a company might have with a specific group of people, then communication becomes a strategic (not merely functional) tool that the hiring process must use to generate trust. This occurs at the beginning of a possible professional relationship, even before the people involved in the process have been chosen. Following this path, it is necessary to move away from the idea that communication matters only within organizations, and move toward an analysis of the links between trust, vulnerability, the human dimension of the company, technology, management, and recruiting.
The development of technology has made relationships and products more volatile and has driven major business changes within short windows of time. This article aims to study the relationship and feedback processes between the trust created by the organization's culture and internal communication, which is developed through specific relational processes and work management mechanisms (Martilla 2019, 22-23). It is a matter of looking for parameters to achieve a balance that makes it possible to address the vulnerability of organizations without diminishing the trust they deserve in the long term, beginning with recruitment.
The concept of trust has been key to understanding the social dimension of mankind. Many philosophers and sociologists link it to the possibility of regenerating current society, both through the role of local communities and communal deliberation (Taylor, Nanz, and Taylor 2020, 25-28) and through the generalized reciprocity and honesty that create what Putnam calls "social trust", or trust in other citizens, and, therefore, social capital (Putnam 2001, 354, 348-387). With statistical data on the evolution of society, Putnam states that people who, from the beginning, trust more than others are able to do more for society in an altruistic way. Other authors have tried to relate philosophical and sociological aspects of Aristotelian roots with the ability to generate trust in organizations (Mayer, Davis, and Schoorman 1995, 719-720), or have proposed models of trust that can give a more human sense to organizations (Guillén Parra, Lleó de Nalda, and Santiago Marco Perles 2011a, 605-614).
The different definitions of trust that have followed one another over the years have a number of concepts in common, according to a study by Alston and Mayer. These are values linked to the presence of trust, but they cannot capture its full meaning, because trust is an outcome: an intangible asset that results from the interaction of many factors and whose presence or absence can be observed. It cannot be articulated as the simple result of a mechanical process, because many personal values, many intangible values, are involved, and these have a great impact even in contexts where digital processes and technology drive business relationships (Alston 2014, 4, 10-13; Mayer, Davis, and Schoorman 1995, 718). Several authors (Colledge, Morgan, and Tench 2014, 482-483) agree that trust is linked to relationships and that trust can grow or decline in an organization, a role, or a person. Trust implies uncertainty; it is built through specific shared activities, regardless of whether the outcome of the activity is satisfactory. Trust may be mutual between two people, but at times it can be one-sided. It also implies a present judgment about the past that may allow one person to trust another person, or an institution, in the near future. Trust, as an essential element in organizations, is communicated indirectly through competence, integrity, ways of managing personal interaction, and the culture generated by a management style (Kodish 2017, 363).
Ultimately, following the previous scheme, trust seems to be the link that enables mutual adaptation between each worker and the organization: the organization tries to create a context in which the staff can individually develop their creativity, because it needs the ideas and contributions of its employees. In the same way, through the trust generated by the institution, employees adapt their behavior to the culture and corporate approach of the organization (La Porte 2003, 132-134). This trust, which becomes the link of union and development for workers and, therefore, for the organization, is threatened by the limits that each institution has: by its intrinsic vulnerability.
Vulnerability management is very much about trust; it is defined as the cyclical practice of identifying, classifying, remedying, and mitigating risks arising from the structural or human limitations of the organization (Foreman 2010, 1-9). The way the organization's natural or artificial vulnerability is managed generates greater or lesser confidence in employees. Companies try to understand future risks by studying their own vulnerability and can arrive at a typology that generates early solutions or avoids harmful future scenarios (Sheffi 2015, 27); however, the vulnerability of an organization is not necessarily linked to unexpected extreme situations but also to the everyday way in which it deals with limitations. A journalism company, an oil company, a food producer, or an automotive company each has a different vulnerability and risk profile depending on the specific nature of its business. The volatility generated by certain technological processes can increase the risk in organizations with a highly digital environment (Husdal 2010, 1-27). The authors cited above thus describe the characteristics of the link between communication, trust, and vulnerability.
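To make the cyclical definition above concrete, the following minimal sketch (our illustration, not Foreman's; every risk description and severity is hypothetical) models one pass of the identify-classify-remediate-mitigate loop as a simple risk register:

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Risk:
    description: str
    source: str                       # "structural" or "human" limitation
    severity: Severity = Severity.LOW
    mitigated: bool = False

@dataclass
class RiskRegister:
    risks: list = field(default_factory=list)

    def identify(self, description, source):
        risk = Risk(description, source)
        self.risks.append(risk)
        return risk

    def classify(self, risk, severity):
        risk.severity = severity

    def mitigate(self, risk):
        risk.mitigated = True

    def open_risks(self):
        # Unmitigated risks, most severe first: the input to the next cycle.
        return sorted((r for r in self.risks if not r.mitigated),
                      key=lambda r: r.severity.value, reverse=True)

# One pass of the cycle; in practice the loop repeats on a fixed cadence.
register = RiskRegister()
risk = register.identify("Key process depends on a single employee", "human")
register.classify(risk, Severity.HIGH)
print([r.description for r in register.open_risks()])
register.mitigate(risk)
```

A real vulnerability-management practice would, of course, attach owners, deadlines, and review dates to each entry; the point here is only the cyclical structure itself.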
The human dimension and processes
Taking the aforementioned ideas a step further, it can be affirmed that the creation of internal communication processes and the development of corporate listening attitudes can ensure harmony in the management of vulnerability (Gode 2019, 385). Employees would feel less affected by vulnerability if there were a balanced way of managing it regularly. That would help to preserve the trust that sustains the mutual adaptation between organization and person.
On the one hand, La Porte states, in a study on non-profit organizations, that processes generate in people a perception of temporal stability and security, of a solid working structure, and therefore generate trust. In fact, it is not easy to see the vulnerability behind processes unless problems highlight their weaknesses or show their ineffectiveness. But there is one aspect that is not entirely positive: as they develop, processes may create a false sense of security and lose the flexibility with which workers need to deal with problems (La Porte 2003, 80-85, 158). There is a need for flexibility that guarantees the growth of the organization as an open operating system, together with the social context in which it carries out its business. If the organization is too focused on processes, it could lose the characteristics of an open system, which allow it to adapt to consumers and to the needs of the cultural context; these characteristics allow it to change and evolve and help workers grow with the institution. It is important to consider that the human dimension implies a risk, a vulnerability, but we sometimes forget that organizations are made up of people and cater to customers or consumers, who are also people. The structure tries to coordinate talents and efforts, ensuring that the fallibility of individuals is counterbalanced by the collective support of the judgment and work of other employees, that is, by the company as a whole. The balance between managerial security as a system and the limitations of the individual requires the creation of criteria that generate harmony.
On the other hand, when the human dimension of leadership does not take vulnerability into account, it can create processes that provide security but are too tied to the personal dimension of the leader: a leader who grows, who changes, who eventually brings in another leader, in an organization that evolves and changes. The leader's total trust should be extended to the organization, and the organization should extend this same trust to the leader, striking a balance. In many contexts an employee's trust lies in the leader but not in the organization, or there may be great trust in the organization but not in the immediate superior who represents it. When trust is well rooted and in line with the mission and values, the employee is able to forgive the organization or its leaders for collective or personal mistakes that may expose vulnerability (Gillespie, Dietz, and Lockey 2014, 374). Therefore, trust is not only deeply linked to vulnerability (and risk, as we said before) but also to the processes that uphold the organization over time: there are business processes, including communicative ones (O'Neil 2008, 264), that provide a sense of security and transparency and an important impetus for a given organization during a certain period. However, once the people who created the processes are gone, there is a danger that the processes will lose the human dimension and become purely functional, rigid paths that have lost contact with reality (Méndez et al. 2012, 16). This could determine the future failure of the organization once economic, social, and cultural conditions have changed. In the temporal development of an institution, initial creativity (a vulnerability linked to the conceptual character of ideas) needs the security and concreteness that are consolidated through the actual organization of work and the creation of processes that allow those initial ideas to be developed and carried out in a concrete way.
Vulnerability, therefore, is linked not only to the human dimension of organizations but also to the organizations themselves, because they are, after all, a human creation, a human extension of the "economic" dimension of people (Lämsä and Pučėtaitė 2006, 131). Vulnerability is linked not only to creativity but also to the processes of idea development, because it is one thing to generate new ideas and another to carry them out. Creativity has to do with the vital principle of the organization, with its human dimension, its soul in a certain sense; development processes have to do with the organizational structure, with the way of uniting efforts and making different people's creativity interact. In organizations, the added value offered does not derive only from the sum of the talents of those who compose them, but from the interaction of different talents in certain physical and temporal circumstances, an interaction favored by the culture of the organization, by the internal communication process, and by many other elements that are difficult to list exhaustively. This is why the functional dimension in organizations should be at the service of the human dimension, and the human dimension at the service of achieving the mission through the practical functional dimension. A lack of balance and harmony between the two aspects could damage the circle of trust and thus lead the organization to a lack of creativity or to practical, functional impossibility. If the business processes created to ensure productivity are not integrated with a human dimension, they could damage the creative capacity: the ability to take risks and find alternative solutions or new paths of organizational growth. The formal trust that a process evidently provides must take the human dimension into account.
Before thinking about the processes that can generate trust, it is important to stress that every organization is subject to a centrifugal force and a centripetal force, both linked to trust and to survival over time. The external context in which companies operate, and technology with its many possibilities, push the organization outwards, towards dispersion. These are phenomena linked to complexity and to the forces of organization and disorganization (Broekstra 2014, 6). This is a force that leads people to discover new possibilities and leads the company into the future. This centrifugal force is answered, so to speak, by a centripetal force that pushes towards structure, towards order and the creation of control processes, towards the achievement of objectives, and towards management.
In this sense, employees are involved in both forces. People who are closer to the reality in which the company operates, who work at lower levels of the hierarchy, or who have greater creative capacity tend towards dispersion and away from the coordination of activities that makes effective processes possible. On the contrary, people who have a more formal disposition, who are higher in the hierarchy, or who are further from the reality of the context in which the organization operates tend towards control, the creation of structures, and the management of processes. Sometimes these two forces coexist within each worker.
These elements allow us to affirm that trust is part of the culture created within the organization and that it must be strengthened and managed in both a personal and a functional way. In order to relate the human side of organizations to management and trust, it seems appropriate to identify some key aspects that generate trust and a sharing of the organization's values.
Management processes that generate trust
We have previously seen some elements of trust from a philosophical point of view applied to organizations. After having examined the human side of processes, it may be interesting to look more deeply at the way management generates trust through processes within the company. Luhmann's sociological perspective frames trust as an attitude that allows individuals to make decisions that involve risks. It is therefore linked to the ability to discern between different alternatives and act accordingly, even though the possible damage may be greater than the advantage one hopes to gain. It is important to stress that, according to this approach, trust is an element that can strongly influence the ability to act and the degree of participation of individuals, who are called upon to assess the risks arising from their actions according to their expectations (Luhmann 1989, 127-134). Other authors understand the act of trust as a combination of belief in the reliability of individuals and one's own preferences, including risk aversion, reciprocity, and altruism (Sapienza, Toldra, and Zingales 2007, 3). Hence, the issue of trust becomes central in considering the collective action of people working within the same professional context, because they contribute to the social construction of organizational security, understood as the result of joint action carried out by individuals, technologies, discursive practices, norms, and knowledge (Bruni 2010, 17).
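As a toy formalization of this risk-based reading of trust (our sketch, not Luhmann's; the probabilities and payoffs are invented for illustration), the decision to extend trust can be modeled as a comparison between expected gain and expected loss under the actor's own expectations:

```python
def extend_trust(p_honored: float, gain: float, loss: float) -> bool:
    """Decide whether to extend trust.

    p_honored: subjective probability that the trust will be honored
               (the actor's expectation).
    gain: advantage hoped for if the trust is honored.
    loss: possible damage if it is betrayed (may exceed the gain).
    """
    return p_honored * gain > (1 - p_honored) * loss

# Even when the possible damage (8) exceeds the hoped-for advantage (5),
# sufficiently strong expectations can still justify the act of trust.
print(extend_trust(p_honored=0.7, gain=5, loss=8))  # True:  3.5 > 2.4
print(extend_trust(p_honored=0.4, gain=5, loss=8))  # False: 2.0 < 4.8
```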
However, in order to grasp the way in which the process of identifying personnel with the organization is constructed, it is necessary to start from the concept of organizational identity, which coincides with what is important, distinctive, and lasting in an organization and which, in a relationship of exchange, influences individual behavior and depends on it. Identification can therefore be interpreted as the sense of belonging that binds individuals and the organization together in a relationship of trust, performing a function of recognition internally and of distinction towards the outside world (Mazzei 2006, 12). From this point of view, trust represents the social glue, i.e. the nucleus around which different organizational behaviors are structured, including commitment, collaboration aimed at achieving results, and the involvement of employees, through a process that involves the entire organization, significantly affecting personal interactions, expectations, values, rules, identity, roles, reputation, and, more generally, the employees' perception of the organization (Puusa and Tolvanen 2006, 31). In other words, identification based on a relationship of trust allows individuals to share a common code, and thus to appreciate, recognize, and make their own the preferences, choices, and needs of others, as well as the values and expectations of the company.
As mentioned above, the effectiveness and efficiency of organizations appear to be the result of the combination of a multiplicity of factors, linked by trust, which relate to the individuals and groups (formal or informal) operating within them, to the organizational structure itself, and to the interactions that link these elements. The security and, consequently, the reliability of organizations can be considered social competencies that find an important constitutive element in the interaction between individuals and organizations: it is, in fact, the interactions between individuals within an organized system that allow the social construction of security (Gherardi, Nicolini, and Odella 1997, 87-95). Therefore, the added value of organizations comes from social and relational capital, i.e. the relationships created by individuals with the surrounding environment, and finds in communication the key to a delicate process of exchange that determines the very behavior of the organization. It is, in fact, through the ability to bring together the various actors involved, to activate them, and to encourage their participation in business processes that communication takes on its proper meaning.
In the company context, the flow of communication that generally allows the sharing of opinions, ideas, conventions, and emotions is also enriched by the professional information that permeates employees' work days, allowing coordination and encouraging certain organizational behaviors, which significantly affect employees' performance and, more generally, that of the company. Working is in fact an activity that requires communication skills and discursive practices, which are a constitutive part of the professional activities and occupational identities of those who participate in them (Gherardi and Murgia 2012). This is why it is interesting to observe the mechanisms that favor synergies between the different actors involved in the communication process and that determine organizational behavior, influencing the levels of effectiveness, efficiency, and productivity, and therefore also organizational security. In order to combine these factors, managers should have a broad vision of the communicative dimension of the decisions they make.
It is therefore in the companies' interest to identify, measure, and possibly improve attitudes that facilitate effective communication, a climate of trust, psychological security, employee involvement and commitment to objectives. It is well known that individuals have a better chance of professional growth and personal development in an environment where they can find support and encouragement. This means that if you want to use [ … ] the intelligence [of man], the human environment (superiors, colleagues), the material environment (machines, equipment, offices, departments) as well as the organizational environment, they must be functional for man and not vice versa.
[ … ] He no longer merely produces and earns money, but must be rewarded by work. (Eördegh and Spairani 1996, 16) Effective modes of communication, which tend to successfully establish and maintain relationships with the surrounding environment, must take into account the specificities of the players involved, in particular biographical elements, personality traits, intellectual abilities, and learning skills: aspects that directly affect the way individuals work (La Porte 2003, 71). There are also elements linked to the commitment of employees (such as the attribution of greater responsibility and autonomy, or salary levels), to their skills and knowledge, and to the motivational levers of each one. Saltson points out that work performance is determined by three factors: the work environment (the information, materials, and tools needed to carry out tasks), the motivation to complete tasks, and abilities (skills and methods of execution) (Saltson 2016, 190). Therefore, if we consider a company as a complex system aimed at transforming the resources it possesses into others of higher value, be they material (which belong to the company's assets) or immaterial (which instead depend on information), it is possible to grasp the central role of communication in producing the second type of resources, i.e. in generating a widespread relationality that binds individuals and the surrounding environment through the diffusion of knowledge and the building of trust (Mazzei 2006, 7-8).
In agreement with Mazzei, it is possible to propose the following definition of internal communication: "The set of interaction processes aimed at generating the catalyzing resources for the functioning of the company. These resources are the knowledge that feeds work processes and the identification of employees in the goals and values of the organization, which motivates them to introduce knowledge into business processes" (Mazzei 2006, 8). Communication is a strategic element, considered in light of the role it plays in the processes of socialization, perception, and representation of reality and organizational culture, which link the company and its stakeholders. As a result, internal communication allows employees to identify with the organizational community they belong to, establishing relationships of trust with the company and with their own colleagues.
In this phase, the focus is on the relational and communicative dynamics connected to the performance of work activities, on the awareness that the professional context shapes the communicative patterns and that these, in turn, shape the professional context (Bruni and Gherardi 2007, 135). The response to the vulnerability of organizations requires a mobilization of both human and technical resources, both of the organizational structure and of the specific work context, in order to share a common culture of security that can develop effective work procedures, of which communication processes are a constitutive component (Gherardi, Nicolini, and Odella 1997, 93).
Building and maintaining trust in one's own company therefore appears to be an essential step, which can facilitate gaining customers' and investors' trust, and should be carried out through a twofold method: communicative and managerial. From a communicative point of view, managers are required to "take care" of their employees through internal communication that is capable of conveying the identity of the company, sharing its values, beliefs, and distinctive skills, so that the corporate population is informed and aware of the conventions and objectives that guide the organization. The communicative practices should be accompanied by another fundamental message that leaders are called upon to spread: coherence in the behaviors implemented. Organizational behavior, whether communicated internally or externally, is unlikely to be truly effective if it is not perceived by the workers as such, or if it is not reflected within their own company perimeter. Building trust means keeping the company's beliefs alive and actively managing the values, the reasons why the company operates, and the principles that guide it (Sinek 2009, 105).
These considerations lead us to observe the central role employees play in the brand-building process as a relational factor, since their behavior can strengthen the values communicated or, on the contrary, damage the credibility of the proposed message. In light of this, the consistency between the way the brand is promoted externally and the experience offered to employees becomes central. According to Bergstrom, in fact, internal branding is directly connected with the effectiveness with which the value and relevance of the brand are transmitted to employees, and with the ability to build a strong link between each task carried out and the "brand essence", understood as the brand's ability to aggregate people (Bergstrom, Blumenthal, and Crothers 2002, 135). Thus, it is internal communication that also plays a decisive role in external communication.
In this context, human resources and marketing find themselves cooperating and sharing the same tools. HR must equip itself with the tools of the marketing sector, favoring an action complementary to it, promoting a form of communication aimed at creating internal engagement, while marketing has the task of involving the external public. Trust and reputation are the two main pillars that support organizations in these internal and external representations of their identity.
Trust is indeed an abstract concept; nevertheless, like a tangible good, like an actual working tool, it must be built and maintained in order to preserve the efficiency of the organization. As already mentioned, developing internal communication processes aimed at promoting the identification of employees and inspiring trust also means adopting managerial tools that place employees at the center of a strategy that attributes value to them and frames them first as customers and then as testimonials of their work, so that they embody the company's values and offer a conscious and consistent representation of the brand. The concept of knotworking elaborated by Engeström defines well how much collaboration between the different actors depends not only on the ability to weave relationships between them, but also on the ability to keep the network "knotted", or held tightly together, through specific tools and working practices (Bruni and Gherardi 2007, 31).
Trust is therefore, in our opinion, a multiplier of motivation, but it must be promoted, stimulated, and inspired through practices aimed at guiding and synchronizing employees and teams. It is precisely in the group dimension that interpersonal trust, effective communication, and a way of guiding human resources development become more important. Indeed, it is precisely those employees who are part of a team who tend to demonstrate higher levels of involvement (Buckingham and Goodall 2019, 18-29). It should be remembered, however, that trust is generally placed in those members of a team who are deemed able to contribute successfully to the team's activities, so trust is also associated with each person's professional integrity and level of competence (Bulińska-Stangrecka and Bagieńska 2019, 10).
While a strictly controlled way of managing employees can inhibit the engagement and creative contribution of group members, a style that encourages greater autonomy and empowers people contributes to consolidating existing relationships of trust and thus to increasing the level of cooperation in achieving the common goal. Building trustworthy relationships is essential to promoting a greater level of cooperation, especially in organizational contexts where relationships between people are less frequent, i.e. where interactions are too rare to allow the relationship to be accompanied by reputation (La Porta et al. 1997, 4). Fostering trust means increasing the possibility that workers' actions are framed within a common culture of reference, which leads them to work towards the achievement of company objectives and not in favor of particular interests. Working in tune with the organization means being more inclined to take calculated risks, to develop one's own exploratory capacity, and to have more freedom to seek greater innovative content in the solutions proposed.
The current pandemic has redefined relationships between workers and the company, changing locations, hours, and working methods, and demonstrating the inadequacy of leadership models based on employee control, which are increasingly inapplicable in the face of changing working conditions (Parker, Knight, and Keller 2020). Working toward objectives, aided by technology when working remotely, has given employees greater autonomy in organizing and managing their work, while increasing their responsibility for results. More flexible ways of working bring with them management of work by objectives, but also a higher level of accountability, in which trust and communication play a key role. In this context, the interaction established between managers and workers must rest on authentic, clear, and direct communication.
It seems important to nourish the existing relationship of trust and to combine it with mechanisms of control. Consciously using technological tools, which can reduce the vulnerability that results from this delicate historical moment, influences the definition of new meeting spaces, both physical and virtual (Floridi 2020). It should be noted that collaboration can be improved by increasing the degree of interdependence of employees, in particular by making them responsible for achieving results and by sharing information and workloads, but also through the attribution of rewards to the entire team and through motivation and the belief that the goals set can be achieved (Bulińska-Stangrecka and Bagieńska 2019, 3).
It is important to consider that the relationship of trust must be strong and deeply rooted between the different actors, so that true involvement comes about and an effective boost is given to the efficacy of the company's organizational processes. This is why it is also important to consider the trust that management is able to earn from employees in order to properly manage any change processes. Offering employees a broad and integrated overview of work processes can encourage them to become more aware of the tasks and objectives to be pursued. In the same way, broadening the level of discretion with respect to tasks and encouraging opportunities to participate in decision-making processes can encourage a greater commitment to achieving the objectives that the organization sets for itself, precisely because it offers employees new perspectives from which to interpret their role, new stimuli to achieve results, and different ways of solving problems, including greater proactivity and creativity, i.e. important factors of innovation for organizations (Lee et al. 2019, 826).
In conclusion, trust is a strategic element, a crucial tool, for the construction of the social and psychological resources related to openness to change, resilience, motivation, and the security with which one interprets one's organizational role. Those elements can optimize the skills of teams and companies, thereby increasing the chances of retaining qualified resources that can bring value to the organization. However, it should be pointed out that the possibilities of retaining resources within the organization vary according to the type of company and the different professional figures: it is easy to imagine that companies with a large presence of unskilled laborers will see faster turnover. In any case, it is possible to identify the main elements that differentiate a possible company approach, taking into account the worker's role and place in the company hierarchy: it can be more expensive to replace resources that occupy strategic roles, and consequently the level of retention for certain professional figures may be higher.
The selection process
The acquisition of personnel represents a crucial aspect for any company, since the choice of employees is directly related to its efficiency and productivity. The results of the selection process therefore have a direct impact on the overall performance of the organization in terms of economy, time management, quality of the work done, and company climate.
The selection process represents an important opportunity for candidates and recruiters to get to know each other. It is the initial moment in which to offer, on the one hand, the best representation of oneself and one's skills and, on the other, the values and organizational culture that support the company, so that the foundations can be laid for a medium- to long-term working relationship. The selection process focuses, in fact, not only on the candidate's experience but also on his or her ability to adapt to a specific organizational culture and the degree to which essential values are shared. The level of reliability transmitted on both sides is enabled by "impression management", through which strengths are highlighted and future development plans are credibly argued.
In the current environment, candidates for new company positions tend to have shorter time horizons. They approach a new company with the aim of gaining further professional experience, without really intending to stay very long. They think of finding better job positions in the future, conditioned also by a delicate socio-economic context like the present one, in which employment contracts reflect a certain precariousness. Talking about job security means not only taking into account the outcomes of critical circumstances but also analyzing the set of cultural, organizational, and socio-technical variables that gravitate around situations characterized by complexity and uncertainty, to which questions related to quality and working conditions are also tied (Gherardi and Tuccino 2015, 8). Companies' investment in recruiting implies an investment of resources to find the right candidates and to train new employees, so that they can become productive for the company in the shortest possible time. The problem is that this initial training is given to candidates who, quite often, are known to be unlikely to remain in the organization for long unless they turn out to be extraordinary. This has implications for the company's vulnerability and trust in the short and long term: both parties face a relationship that may not last and still know too little about each other to generate trust. This, in turn, affects the training through which trust is built.
If the company is aware that the relationship may not last very long, it should develop an internal communication strategy that can generate trust in a short space of time and, in the same way, sustain it when the employee leaves the company: the relationship of trust should survive the working relationship. Perhaps in the future that same person could become a candidate for other positions in the company or a partner in projects shared with other companies. In order to ask candidates to take part in articulated, demanding selection processes, costly not only in terms of energy but sometimes also economically (consider the length of the selection journey), the selection methods used must be perceived as reliable: candidates must be able to count on fairness, precision, and clarity, and be offered a genuine opportunity to express themselves and be evaluated by competent and empathetic staff. Conveying the perception that the assessment is not sufficiently objective, transparent, and fair could cause candidates to withdraw from the process, discourage others, and produce an overall negative return for the organization, which may even have to defend itself against legal action taken by candidates. It is also necessary to consider the dual role of candidates, who are ultimately also potential clients of the company and therefore privileged interlocutors.
The selection process creates an exchange in which the company "sells" its identity (mission, vision, and values) in return for the skills and professionalism that it seeks on the market. In proposing their candidacy, professionals are inclined to evaluate the company first of all symbolically, looking favorably at the image, the brand, and the way in which the company presents itself. Next, they will evaluate the traditional organizational characteristics of the job: an aspect that companies must take into account when designing their recruitment strategy (Goldstein et al. 2017, 86). In fact, it is crucial to identify the target audience and prepare the most suitable methods to efficiently acquire new personnel. The way in which the search for personnel is made public determines the breadth and characteristics of the recruitment pool it draws on, which is a direct consequence of the communication strategy that the organization implements. Within a company, using an informal search method, word of mouth, could strengthen the trust between some employees but could, in the same way, exclude entire segments of the company's population from internal mobility paths, while job posting systems can give greater transparency to company mobility paths. In order to effectively communicate the trust that candidates have placed in the company and the company in them, in both cases it is important to ensure that the candidate receives feedback on the outcome of the application, possibly indicating the reasons that led to exclusion from the selection process, so that the rejection does not affect the employee's motivation (Costa and Gianecchini [2005] 2019, 143).
Recruitment communication is not only the means by which to reach people possessing certain characteristics. During the selection process, the company also has the opportunity to understand more deeply the way in which the candidate has researched the available job position: not only to assess the candidate's interest and motivation in the role, but also, indirectly, to obtain valuable feedback on the effectiveness of the communication strategy applied to recruitment, which affects the degree to which the organization generates trust and, therefore, makes itself appealing.
If the search for personnel must be made publicly known, a company can use a number of tools and methods: self-application, partnerships with schools, universities, master's programs, professional or business associations, consulting companies, corporate websites, job boards, social networks, gamification techniques, and online recruiting. These are the main tools companies can use; they are closely related to the different roles to be filled and the professional skills sought, but also to the general way in which the company wants to communicate its identity, internally and externally, and its organizational needs in terms of skills to be acquired. It is therefore essential that the company have a well-defined recruitment policy that can be effectively pursued, since the performance and future structure of the organization depend on it. An effective selection process is, in fact, the result of proper human resources planning, a precise definition of the skills, abilities, and knowledge the role requires, and of future development plans (Tomčíková 2016, 3).
Applications are now sent and received almost exclusively through IT tools, but it should be kept in mind that technology does not always reduce the distance between candidates and recruiters; on the contrary, it can constitute a filter that directly affects the intention to apply. As will be analyzed later, the trust that a potential candidate places in the IT procedure used by the company can be influenced by expectations (i.e. the extent to which the technology will operate as expected) and reliability (i.e. the absence of errors or problems): aspects that configure trust in IT as a "dynamic construction that is created and consolidated through experience with the technology itself" (Mariani, Curcuruto, and Zavalloni 2016, 200). This means that the company's appeal alone is not enough to make a jobseeker apply: technology, and in particular the website, plays an essential role in shaping the perception of users or potential candidates and influencing their intention to apply (Mariani, Curcuruto, and Zavalloni 2016, 207). Undoubtedly, digitization is meant to speed up and simplify work processes. Matching a job with the most suitable candidate for a given role requires that the preferences of recruiters and candidates be considered in equal measure, which highlights how technology is not always fully adequate to respond to different needs and risks flattening processes that are difficult to standardize. Thus, the comparison and exchange of information between companies and technology suppliers are essential in order to look at processes from different perspectives (Furtmueller, Wilderom, and Tate 2011, 245) and take into account the needs of all those involved in the selection process.
The communicative dimension of the selection process
Information and communication technologies have led to the multiplication of communication channels and conventions that contribute to changing the paradigm of personnel selection. Now recruiters and candidates may be actively using communication tools, especially social media, that give them access to a large amount of information about one another and promote their ability to identify, evaluate, and choose the most suitable profiles or the companies that best meet their expectations. In other words, the "search and selection" activity is conducted by both actors involved in the process: companies and candidates, who may share interests, resources, and information through social media (Goldstein et al. 2017, 103). The more traditional method of recruiting is complemented by the proactive and attentive approach of potential candidates. As a result, companies are called upon to use new communication and relational models that enable them not only to search for suitable candidates to meet specific short-term organizational needs, but also to attract the most qualified resources on the job market, in a cyclical and long-term perspective, based on strategic workforce planning. Attracting and retaining talented employees with strong skills, high potential, aspirations, and active engagement seem to be the main challenges organizations face today. In communicative terms it means establishing a dialogue between the actors involved, generating and comparing mutual expectations, sharing values, expressing common goals, and laying the foundations for building a relationship of trust. The company is asked to make itself known to potential candidates, establish a relationship with them and at the same time differentiate itself from its competitors, inspiring trust and leveraging its reputation; the candidate is asked to communicate authentically so that he can speak beyond his CV and the company can get to know his personal characteristics and the motivation behind his candidacy for a specific role in a given company.
In order to make the selection process effective, it is not enough to share with a multitude of potential candidates, internal or external, only the information related to a specific position. The layout of a job advertisement is essential, and it is necessary to respect the rules dictated by the channel and the tools used to reach a specific target audience. For this reason, it is increasingly important to create a web ad with an SEO perspective and a graphic format that makes the information accessible online, including on mobile devices, in order to ensure greater visibility for content that needs to be clear, simple, and understandable, so that it can be credible. In addition, the company needs to convey to its interlocutors the characteristics that make a particular role unique, communicating the reasons why it is important to be part of that organization. The added value of a staff acquisition process lies in the ability to make the candidate interested in the brand even before the position offered.
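As a concrete illustration of an SEO-minded job ad (our sketch, not drawn from the article; the organization, role, dates, and URLs are hypothetical), a posting page can embed schema.org "JobPosting" structured data, the machine-readable markup that search engines use to index and surface job listings:

```python
import json

# Hypothetical job ad expressed as schema.org JobPosting structured data.
job_posting = {
    "@context": "https://schema.org/",
    "@type": "JobPosting",
    "title": "Internal Communications Specialist",  # clear, searchable role title
    "description": "<p>Help us build trust through transparent communication.</p>",
    "datePosted": "2021-03-01",
    "validThrough": "2021-04-30T23:59",
    "employmentType": "FULL_TIME",
    "hiringOrganization": {
        "@type": "Organization",
        "name": "Example Corp",                     # employer branding starts here
        "sameAs": "https://www.example.com",
    },
    "jobLocation": {
        "@type": "Place",
        "address": {
            "@type": "PostalAddress",
            "addressLocality": "Milan",
            "addressCountry": "IT",
        },
    },
}

# Emit the script block to embed in the (mobile-friendly) HTML page of the ad.
print('<script type="application/ld+json">')
print(json.dumps(job_posting, indent=2))
print("</script>")
```

The structured data does not replace the human-readable ad; it simply makes the same content legible to search engines, which is what the SEO perspective mentioned above amounts to in practice.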
In the era of "confidence in work" people tend to want to work in a company that makes a social impact, promotes staff development through an inclusive organizational culture, and offers professionally rewarding opportunities (Edelman Trust Barometer 2019, 29). Increasingly, companies are being selected by talented professionals. In this sense, it is the candidate himself who takes an active part in the job hunt, putting to use his knowledge of technology and contributing to redefine the sphere of sociotechnical relations established between himself and the organization (Bruni 2014, 334).
The company's reputation is built through employer branding, which helps to create the company's image, position the organization in the job market, and communicate its culture and the value it offers. It is an element that has a direct impact on the methods and quality of selection, since it allows companies to maximize their ability to attract talent, reducing the time and costs involved in finding qualified personnel (Berthon, Ewing, and Hah 2005, 154). In fact, it will be the candidates themselves who apply to the organizations that, through effective employer branding, have incisively communicated values with which they can identify.
It is interesting to note that the process that leads to the acquisition of future employees depends not only on the work done by the recruiter but also on the experience that the organization is able to offer to its internal staff. According to HBR, in 2025 50% of the workforce will be made up of millennials for whom more and more relationships are mediated by digital communication channels (Businaro and D'Acunzo 2019, 110). This requires HR to reconsider organizational tools in order to develop greater consistency between the experience offered to employees and that offered to customers, especially in terms of technology. The aim is to improve performance through the development of a greater sense of ownership and advocacy among employees and to increase value for the customer. In particular, it is noted that the construction of a valuable employee experience passes through the organization of workspaces according to different operational needs, an inclusive organizational culture that fosters trust and collaboration, HR systems, and technology that ensure a greater degree of flexibility (Businaro and D'Acunzo 2019, 110-111).
In a context characterized by rapid socio-economic changes, by the increasing obsolescence of skills, and by the increasingly evident push of technological innovations on the company's organizational fabric, the ability to maintain a high degree of flexibility, not only in terms of employees' mental structure and behavior but also in terms of organizational practices and managerial functions, constitutes a competitive advantage for the entire company system. This perspective becomes central when considering staff recruitment mechanisms, which must take into account the skills required of the company by the evolution of the socio-economic context. A propensity for continuous learning, the ability to accept and promote change, and the ability to move effectively in complex and unclear situations represent distinctive skills that the selection process must evaluate in order to increase the fit between new hires and the real needs of the job placement. Like other organizational processes, recruiting is also affected by the impulse of digitalization: not only because digital skills are increasingly in demand in the labor market, and thus constitute an aspect to be evaluated during selection, but also because search and selection models must be equipped with adequate digital tools to reach the best candidates.
Technology, trust, and recruiting
The collective dimension of work emerges in considering the complex of relationships that originate from the coordination of actors and tools, which together define the different working practices, going so far as to shape the use and functionality of technologies. Assuming a relational perspective allows us to grasp the mutual interactions that are established between technology, users, and contexts of use, as part of a single sociotechnical system (Bruni and Gherardi 2007, 76-77). From this point of view, technology should not be understood as a set of tools with a predefined role, but rather as a complex of systems in which technical and social aspects are closely connected, creating a sociotechnical network (Hilgartner 1992, 43-44), from which risks and errors may arise, but through which the organization also realizes its own level of resistance to vulnerability (Bruni 2010, 129). Therefore, we want to interpret the technological component not only in consideration of its use, but also in reference to the sharing of information and knowledge, as well as the creation of relationships, which connect different contexts.
Like a technical object, trust is defined as a real working tool, i.e. a resource that is connected to individual or collective risks and actions and to the reference context, feeding on social interactions and acting as a link between various social spheres. New technologies, which offer effective channels of communication to attract, engage, and communicate with potential future employees, are part of this scenario. In particular, social media offers the opportunity to connect all the actors involved, who can participate and interact in an even more direct way, albeit mediated by the digital tool, activating a communication channel that can strengthen the link between the HR department and employees or candidates, acting in a complementary way to other tools already adopted. Using digital solutions in HR means building relationships and promoting the involvement of employees through collaboration, transparency, and greater trust between employees, the HR department, and the company (Imperatori 2017, 71). The contribution of social media lies, in particular, in the ability to create a network of relationships through which to promote dialogue between employees and the company, encouraging their integration with the organizational culture, as well as the creation of a real community, which can contribute to creating a collaborative climate geared towards knowledge sharing, to the point of positively influencing the appeal of the company, especially towards younger and high potential candidates (Imperatori 2017, 71).
The Human Resources department, working in synergy with the communications department (the center of gravity of the network and the manager of social exchanges), can therefore strengthen its role and gain the trust of its interlocutors (Smith and Mounter 2008, 210). For the communication channel to be effective, it should be recognized and shared, and both departments should pay maximum attention to the communication flow, aimed at contextualizing and defining the perimeter of the operational practices developed, so that a real and profitable involvement of employees can materialize (Smith and Mounter 2008, 210). Indeed, trust depends on the quality of relationships and not on the quantity of social exchanges developed.
Online forums, talent communities, video interview platforms that integrate AI processes, personality, and hard or soft skills assessment tools, are some of the solutions used by recruiters to identify the ideal candidate. If, on the one hand, the personnel selection process carried out through such tools and algorithms capable of processing large amounts of data can lead to a depersonalization of the recruiting process in favor of predictive algorithms, on the other hand there is a desire to customize the relationship with candidates, implementing varied and articulated communication strategies capable of reaching potential candidates and grabbing and holding their attention. The more the relationship between candidates and recruiters is personalized, the greater the company's chances of attracting the most interesting candidates. This is why it seems crucial to pay careful attention to communication with candidates, keep them informed about the selection process and career opportunities available, facilitate dialogues with recruiters, and reassure them that the way their personal data will be processed is proper. A large part of the trust that the company can gain in this phase depends on these factors (Furtmueller, Wilderom, and Tate 2011, 255).
The digitalization of hiring processes is not only associated with the efficiency of the process, which includes reducing costs and the time needed to find qualified personnel; it also offers innovative ways to validate skills through testing, collecting testimonies from colleagues or employers, and analyzing social reputation, making the assessment process more objective and open, and more focused on professionalism, motivation, and the value that the person can bring to the organization (JP Morgan Chase, Freedman Consulting 2016, 29). Through social media it is the candidate himself who makes his skills known and promotes himself, triggering continuous feedback on whether or not he is reliable, competent, and capable.
The experiences offered to employees, as well as to candidates, are becoming more and more crucial to facilitate their full involvement within the company community. This aspect brings about the transformation of talented professionals into brand ambassadors who consequently increase the company's reputation, fueled by their comments and recommendations, i.e. the communication that originates from them and travels beyond the company perimeter. The brand ambassadors (charismatic and strongly identified employees) act as testimonials of the organization, exploit their ability to communicate in an authentic and reliable way, and produce and disseminate content that can generate interest and involvement from internal and especially external stakeholders, helping to create a network between candidates and employees, and between the company and the public for which the message is intended (Martini and Zanella 2017, 45).
To summarize, it can be said that employees are the main resource for corporate marketing, an aspect that configures each company as a social network (Martini and Zanella 2017, 45) able to promote the company's values and messages through its employees and through each one's own network. It is therefore clear that the involvement of employees becomes decisive in terms of marketing and hiring, and that it is internal communication that increasingly generates the communication directed outwards.
The impact of technology in the workplace is leading to the transformation or elimination of jobs, professions, and careers. New skills are being created and redefined, forcing workers to adapt their mindset and skills to rapidly changing organizational contexts. Thus, hiring or promoting an employee on the basis of performance and professional experience gained or on the basis of what is expressed in his or her curriculum vitae may not always prove to be the best choice for the company. A different organizational context and different levels of hiring, motivation, and aspiration may affect the performance of each individual; in a similar way, a career advancement does not guarantee the same contribution to the organization previously made, just as a good performer may not be a good leader. Increasingly, "knowing how to be" connects "knowledge" and "know how" with business objectives. This is why it seems important to prospectively evaluate not only the candidates, as future collaborators, but also to evaluate the potential of current workers, in order to identify the people prepared, motivated, and able to manage relationships that in the future will bring value to the organization (Chamorro-Premuzic, Adler, and Kaiser 2017).
It is clear that a company cannot afford to focus only on so-called "talents" (thus excluding and demotivating the largest part of the company's workforce), but is required to build a truly inclusive environment that is able to enhance the work of all employees and welcome them within its corporate project, so that the expression of the potential of each person is encouraged, obtaining from them a commitment proportional to their abilities. In this context, internal communication is the privileged tool to identify and manage the relationships between talented professionals, their bosses, and other staff not included in the same cluster, through a personalized, transparent, and shared approach to the management of each person's needs (Aielli et al. 2006, 120).
Recognizing employees' potential also means identifying their needs and expectations, i.e. fostering a relationship of trust from which long-term sustainable development involving the various organizational levels can be generated. Talent management thus seems to give way to a strategic management of talent, as a managerial and developmental approach that favors the expression of potential and converts it into value for the company, according to the company's objectives, culture, and strategy (Aielli et al. 2006, 120). These authors emphasize how talent management also passes through the administration of working relationships, all the more so in a socio-economic context characterized by an increasingly shorter duration of working relationships and increasing intercompany mobility.
Consequently, the company's objective is not so much to maximize retention as to optimize the mutual benefits deriving from the employment relationship, to the point of designing adequate systems that can manage not only internal mobility but also outgoing mobility, through the provision of tools that support the exit and placement of resources on the labor market, thereby supporting the construction of a boundary-less career. This approach thus appears to be part of a management and communication strategy aimed at enhancing the relational capital and the wealth of identity and skills developed within the organization, which allows for the further development of a company's appeal.
In conclusion, we can say that the functional aspects of models of generating trust are not enough, and it is necessary to rediscover the human dimension of processes (Guillén Parra, Lleó de Nalda, and Santiago Marco Perles 2011b, 46). In fact, there is a deep relationship between communication and hiring, not only from a functional point of view but also from a strategic one, because it determines the trust that candidates will have in their future employer. The processes of building trust within organizations therefore require an internal communication strategy that is reflected in management, in the personal and functional dimension with which HR manages people, and in the hiring strategy (short- and long-term). The trust that ties all these elements together manages to mitigate the limits and vulnerabilities of organizations, which are an integral part of their existence, since organizations are constructed by people (creative but limited people) to offer products or services to other fallible people, in a context influenced by the historical circumstances of the time.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes on contributors
Gian Luca Gara (Pontifical University of the Holy Cross, Researcher) completed his degree in Applied Social Sciences with a focus in Human Resources (La Sapienza University, Rome); he also obtained a special mention at the Dario Ciapetti Degree Award ceremony with the thesis "Communication for the Integration of Immigrants between the Nonprofit Sector and ICT." He is a consultant in the areas of potential assessment, research, and personnel selection for the company Cornerstone International. He collaborates with the Pontifical University of the Holy Cross carrying out research activities on organizational and audiovisual communication issues. | 2020-11-28T14:06:24.202Z | 2020-09-01T00:00:00.000 | {
"year": 2020,
"sha1": "6dcf0630cb674bf16faa2ac0e2c7dea70daaced5",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/23753234.2020.1824581?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "6dcf0630cb674bf16faa2ac0e2c7dea70daaced5",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
245813997 | pes2o/s2orc | v3-fos-license | Beyond pulmonary vein isolation for persistent atrial fibrillation: sequential high-resolution mapping to guide ablation
Purpose We aimed to evaluate whether outcomes with ablation in persistent (PsAF) and long-standing persistent (LsPsAF) AF can be improved beyond what can be achieved with pulmonary vein isolation (PVI) alone, using individualized mapping to guide ablation. Methods We studied 20 pts (15 M, 68 ± 11y) with PsAF (14) or LsPsAF (6) referred for first-time AF ablation. Following antral PVI, individualized mapping (IM) was performed using a high-density mapping catheter stably and fully deployed for 30 s at each of 23 ± 9 sites per patient. Activation data were reviewed, and an ablation strategy designed to intersect areas of focal and rotational activity. Mean follow-up was 429 ± 131 days. The study population was compared to a matched contemporary control cohort (CC) of 20 consecutive patients undergoing conventional ablation. Results Despite the IM group having a higher median comorbidities score, 3.5 vs. 2.5 in the CC group, indicating potentially more complex patients and more advanced substrate, cumulative freedom from AF after a single procedure was achieved in 94% of patients in the IM group vs. 75% in the CC group at 1 year and remained the same in both groups at the conclusion of the study (p = 0.02). There was a similar trend in atrial arrhythmia-free survival between both groups (84% vs. 67% at 1 year) that did not reach statistical significance. The procedure duration was longer in the IM group by a median of 31.5 min (p = 0.004). Conclusions Individualized mapping to guide AF ablation appears to achieve significantly greater AF-free survival compared to conventional PVI when applied as a primary ablation treatment. The results of this pilot study need to be confirmed in a larger, randomized trial. Supplementary Information The online version contains supplementary material available at 10.1007/s10840-021-01115-7.
Introduction
Pulmonary vein isolation (PVI) has become the cornerstone of atrial fibrillation (AF) ablation procedures. With technological advancements mainly improving the reliability of acute ablation lesion formation, the success rate in patients with paroxysmal AF can reach up to 72% with a single procedure and up to 90% with multiple procedures [1]. However, the results are far from ideal in non-paroxysmal AF, with single procedural success rates often below 50% at 1 year and at best reaching 70-80% after multiple procedures [2]. Improving outcomes with ablation of non-paroxysmal AF beyond PVI has proved challenging, with complex fractionated atrial electrogram (CFAE) ablation and empiric linear ablation not improving outcomes in a large randomized study [3]. Individual patient mapping to guide ablation is highly attractive, aiming to address the drivers of AF in an individual beyond PVI, but poses challenges in catheter design, including a balance between global mapping and resolution, electrode contact, signal processing, and interpretation.
In this study, we assessed the potential for individualised, live, intra-procedural mapping to guide ablation beyond PVI, to help improve outcomes in patients with persistent (PsAF) and long-standing persistent AF (LsPsAF).
Methods
A total of 21 patients with PsAF or LsPsAF presenting for AF ablation were recruited to this pilot study between April 2018 and December 2019. Written informed consent was obtained from all patients, and the study was approved by the institutional research committee. Exclusion criteria included age below 18 years and not presenting in AF at the time of the procedure. One patient was excluded because the AF was terminated during PVI. Baseline patient characteristics were compared to those of a matched contemporary conventionally treated control cohort (CC) of 20 consecutive patients undergoing ablation for non-paroxysmal AF ablation with contact force sensing catheters by the same operators at our centre between January and December 2017. Only de novo procedures were included. The previously validated FLAME score [4] was used to compare the complexity of patients and ablation substrate. It is an easily calculated method predicting single and multiple procedural outcomes for non-paroxysmal AF ablations based on the left atrial size and comorbidities. In patients with a high score, even multiple procedures are usually ineffective.
Electrophysiologic procedure
All procedures were performed under general anaesthesia, using the CARTO 3 mapping system (Biosense Webster Inc., Irvine, CA) with the CARTOFINDER module. Mapping was performed using a 4-4-4 mm 20-pole PentaRay™ NAV catheter. A ThermoCool® SmartTouch® catheter was used for ablation (Biosense Webster Inc.), and a decapolar Dynamic XT™ catheter (Boston Scientific, MA) was placed in the CS.
AF cycle length (AFCL) was measured over 10 s in the distal CS, where there was a strong correlation with post-PVI left atrial appendage (LAA) cycle length (r = 0.76, p < 0.001). Hence, comparisons were made of AFCL in the distal CS at baseline, post-PVI and pre-cardioversion [5].
Mapping of AF
Following PVI, multiple 30 s recordings of left atrial unipolar electrograms were sequentially acquired using a stably deployed PentaRay Nav catheter with well-apposed, fully open splines applied to the left atrial endocardium. By appropriate flection of the PentaRay catheter and differential alignment of the sheath, including, where appropriate, catheter inversion, we ensured stable deployment with the splines fully open and excellent contact with the left atrial endocardium, as corroborated by the shape of the PentaRay and the electrograms. The CARTOFINDER system performs QRS subtraction on the electrograms (EGMs) to remove far-field ventricular signals, leaving only atrial signals to be annotated. For each unipole, a bipolar electrogram window is created from the two nearest electrodes, and unipolar signal annotation is performed within this window using wavelet analysis. The local activation time (LAT) is displayed in a dynamic fashion relative to the current time within the 30-s recording. CARTOFINDER then creates "4D" activation maps during a 250 ms window, referencing each electrogram in relation to all the other electrograms from all the electrodes of the PentaRay NAV catheter. This window then moves through the 30-s recording to display a changing activation map over time, depicting wavefront propagation [6]. Unipolar electrograms and annotations from each PentaRay NAV electrode are displayed on the system alongside the 4D maps to allow the electrophysiologist to oversee the process (Supplementary videos 1 and 2).
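As a rough illustration of the sliding-window logic described above (not Biosense Webster's proprietary implementation), the following Python sketch builds a sequence of 250 ms relative-activation frames from per-electrode local activation times; the function name, step size, and data structures are assumptions made purely for illustration.

    import numpy as np

    def sliding_activation_frames(lat_times, record_len_ms=30000,
                                  window_ms=250, step_ms=10):
        """Build "4D"-style activation frames from per-electrode local
        activation times (LATs), loosely mimicking the moving 250 ms
        window described for CARTOFINDER.

        lat_times: dict electrode_id -> sorted np.array of LATs in ms.
        Returns a list of (window_start, {electrode: relative_lat}) pairs,
        where relative_lat is the first activation inside the window.
        """
        frames = []
        for start in range(0, record_len_ms - window_ms + 1, step_ms):
            frame = {}
            for elec, lats in lat_times.items():
                idx = np.searchsorted(lats, start)  # first LAT >= start
                if idx < len(lats) and lats[idx] < start + window_ms:
                    frame[elec] = float(lats[idx] - start)
            frames.append((start, frame))
        return frames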
Identification of areas of interest
A mean of 23 ± 9 recordings were taken per patient. Recordings were often overlapping to ensure consistency and coverage.
Visual analysis of the recordings
After completion of the acquisition of all post-PVI mapping recordings, 4D activation maps and electrograms were visually reviewed to identify repetitive atrial activation patterns (RAAP). Areas demonstrating repetitive focal activation patterns (FAP) or rotational activity patterns (RAP) were marked and included as ablation targets. They were later compared with areas of RAAP identified by the automated algorithm in the CARTOFINDER module described below.
Automated algorithm to identify RAAPs
Repetitive FAPs are defined as having radial activation over two or more cycles. The automated algorithm marks a unipolar electrogram with QS morphology as focal if it is the earliest activation within a 10-mm radius during the 50 ms window prior to its annotation. Given the regional acquisition of data, the algorithm rejects points if they originate from a distal electrode. If the pattern occurs for two or more consecutive beats, the site of electrode recording is marked in green (Fig. 1c), with more repetitions within 30 s resulting in darker shades, and the time intervals during which they occur highlighted on the electrogram map (Fig. 1a, Supplementary video 1). Repetitive rotational activation is defined as having 2 or more rotations of 360° within the recording. To identify such patterns, CARTOFINDER utilizes the PentaRay catheter design to assess ring formation by grouping electrodes from each spline located at a similar position along the spline. For each group of channels, the algorithm seeks "head meets tail" formation, with conditions ensuring spatial and temporal continuity, before determining whether the propagating wavefront and corresponding electrograms meet the criteria for rotational propagation. Temporal continuity requires the activation duration of the unipolar electrode group to be more than 50% of the acquisition CL. Spatial continuity is achieved by allowing no more than 20-mm distance between electrode locations having consecutive activation timing. Rotational conduction occurring for 2 or more consecutive beats is depicted with blue electrode colouring (Fig. 1c), with increasing rotational events during a 30 s recording shown as darker shades (Fig. 1b, Supplementary video 2). The algorithm has been validated in two independent studies [7,8].
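A minimal sketch of the repetitive focal criteria (earliest QS activation with no neighbour within 10 mm activating in the preceding 50 ms, sustained over at least two consecutive beats) is given below, purely to make the published rules concrete; the distance check, data structures, and helper names are assumptions, not the vendor's code. The rotational "head meets tail" search could be prototyped in the same style.

    import numpy as np

    def repetitive_focal_sites(annotations, positions,
                               radius_mm=10.0, lead_ms=50.0, min_beats=2):
        """Flag electrodes with repetitive focal activation.

        annotations: dict electrode -> sorted np.array of LATs (ms) for
                     QS-morphology unipolar beats only.
        positions:   dict electrode -> np.array([x, y, z]) in mm.
        Returns dict electrode -> longest run of consecutive focal beats.
        """
        flagged = {}
        for e, lats in annotations.items():
            near = [n for n in annotations if n != e and
                    np.linalg.norm(positions[n] - positions[e]) <= radius_mm]
            run = best = 0
            for t in lats:
                # focal beat: no neighbour activates in the 50 ms before t
                earliest = all(not np.any((annotations[n] >= t - lead_ms) &
                                          (annotations[n] < t)) for n in near)
                run = run + 1 if earliest else 0
                best = max(best, run)
            if best >= min_beats:
                flagged[e] = best
        return flagged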
Ablation strategy
An ablation strategy was planned using design lines outlining areas of focal activity and intersecting areas of rotational activity, and was delivered joining to anatomical/electrical barriers on at least one side of the line for all targeted rotational patterns. For focal activity, in addition to targeting the focus and the immediately surrounding area, lines were joined to barriers where the focus was nearby, to avoid the creation of narrow isthmi. Limited ablation of focal activity was performed within the LAA when necessary, often concentrating on the mouth of the LAA (Fig. 1d).
Ablation technique
Ablation was performed using the ThermoCool® SmartTouch® catheter with a target contact force of 10 g (5-20 g) at 30 W for 30 s for the first 12 patients and 50 W/10-12 s thereafter [9].
Procedure endpoints
The procedure was ended once the pre-planned lesions were delivered, or if AF terminated to sinus rhythm (SR). If AF organized to AT, this was mapped and ablated, but if AF continued, AFCL was measured, and DC cardioversion was performed. Finally, PVI and integrity of any linear ablation joining to two electroanatomical barriers were confirmed during SR and pacing.
Follow-up
Follow-up was a mean of 429 ± 131 days in the study group and 482 ± 163 days in the control group (p = NS), including a 90-day blanking period. Patients had at least two follow-up visits: at 3 months and 1 year after the procedure. The long-term outcome of the ablation procedure in both groups was assessed based on a 7-day ECG Holter monitor performed around 1 year from the procedure, 12-lead ECGs at the time of clinic follow-up appointments, and patient-reported symptoms. Patients were considered AF-free in the absence of symptoms suggestive of AF, including palpitations (> 30 s), and absence of AF on ECGs and 7-day recordings beyond the 90-day blanking period.
Statistical analysis
Statistical analyses were performed using STATISTICA 13.3 (StatSoft Power Solutions Inc., Poland). Continuous variables are given as mean ± standard deviation (SD) for normally distributed variables, or median (interquartile range) for non-normally distributed variables. The Student t test, or its non-parametric equivalent the Mann-Whitney U test where appropriate, was used for comparison of continuous variables. Fisher's exact test was used for comparison of nominal variables. Logistic regression was used to assess the probability of predicting outcomes. Kaplan-Meier curves were used to show event-free survival, and Cox's F test was used to compare event-free survival rates between groups. Differences with a two-tailed p value below 0.05 were considered significant.
Results
Based on a mean of 23 ± 9 recordings per patient, the automated algorithm identified RAAPs in all patients. However, while focal activity was found in all, rotational activity was only demonstrated in 42%. A mean of 6.6 ± 3.6 (range 1-13) areas of focal and 0.8 ± 0.8 (range 0-2) areas of rotational activity per patient were demonstrated. A mean intensity of 42.3 ± 64.9 repetitions per activation source per patient was identified for FAPs and 7.1 ± 2.3 for RAPs. The most common locations for both FAPs and RAPs were the left atrial appendage and its base. Frequent RAAPs were also observed on the anterior wall towards the right-sided WACA line, on the roof, and on the low posterior wall. Areas of RAAP were targeted with ablation lines or spokes anchored to an electrical barrier. A comparison of lesion sets in both groups is presented in Table S1 (supplementary material).
AFCL analysis in the IM group showed no significant difference in AFCL (pre-PVI) between those patients who remained arrhythmia-free and those with recurrence (median 182 (173-196) vs. 170 (164.5-176.5) ms, respectively; p = 0.1) (Fig. 2). A univariate model did not show that the AFCL at baseline, post-PVI, or pre-DCCV was related to long-term freedom from AT/AF (p = 0.37).
Cumulative freedom from AF at 1 year was achieved in 94% in the IM group vs. 75% in CC (p = 0.02), and freedom from AF/AT was achieved in 84% in the IM group vs. 67% in CC (p = 0.1). In each group, 3 patients (15%) remained on antiarrhythmic medication at the end of follow-up (Fig. 3).
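As a rough sketch of how the survival comparison described under Statistical analysis could be reproduced with open-source tooling, the snippet below fits Kaplan-Meier curves and compares groups with a log-rank test (standing in for the Cox F test available in STATISTICA); the numbers are synthetic placeholders, not the study data.

    import numpy as np
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    # Synthetic illustration only -- NOT the study data.
    # Days to AF recurrence (or censoring) and event flag (1 = recurrence).
    t_im = np.array([420, 400, 365, 430, 390, 410])  # individualized mapping
    e_im = np.array([0, 0, 0, 0, 1, 0])
    t_cc = np.array([365, 120, 400, 90, 410, 380])   # contemporary control
    e_cc = np.array([0, 1, 0, 1, 0, 1])

    kmf = KaplanMeierFitter()
    kmf.fit(t_im, event_observed=e_im, label="IM")
    print(kmf.survival_function_)  # AF-free survival over follow-up

    res = logrank_test(t_im, t_cc, event_observed_A=e_im, event_observed_B=e_cc)
    print(f"log-rank p = {res.p_value:.3f}")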
Comparisons of outcomes between IM and CC groups including the use of anti-arrhythmic drugs at the end of follow-up are shown in Table 2 and Fig. 3, and cumulative results are shown in Fig. 4, with cumulative AF-free survival of IM patients undergoing first-time ablation being significantly better than CC.
No procedural complications were noted in the IM group and one pseudoaneurysm was the only procedural complication in the CC group.
Discussion
Main findings
A primary application of IM-guided RF ablation for first-time non-paroxysmal AF resulted in 94% cumulative AF-free survival at 1 year. Automated post-PVI maps identified focal drivers in all patients, while rotational activity was seen in less than half. There was no significant difference in AFCL in relation to outcome, but a trend toward shorter baseline AFCL (pre-PVI) was observed in patients with worse long-term outcomes. Overall AFCL changes during the procedure did not predict long-term outcomes.
Earlier studies
There have been numerous efforts over the past two decades to improve outcomes in non-paroxysmal AF. Based on developing knowledge of AF mechanisms, various strategies were proposed [10]. Initial studies showed variable efficacy with linear lesions [11], posterior wall isolation [12,13], left atrial appendage isolation [14][15][16], targeting CFAEs [17][18][19] and non-PV triggers [20,21]. In contrast, other studies [22][23][24] suggested that most of these strategies did not improve outcomes over PVI alone, a finding eventually confirmed in the STAR AF II trial [3], a randomized comparison of PVI, PVI with additional empiric linear ablation, or PVI with CFAE ablation, with a trend towards better outcomes with PVI alone.
Fig. 4 Cumulative AF-free (a) and AT/AF-free (b) survival after first-time AF ablation with or without antiarrhythmic drugs in the individualized mapping group compared to the contemporary control cohort
Mechanisms and mapping of AF
Focal firing has been recognized as an arrhythmia mechanism for over a century [25,26], but the initiation of AF through rapid PV activity was described in 1998 [27]. Subsequent studies showed that perpetuation of AF may not be totally random, with rapidly changing wavefronts driven by high-frequency localized re-entry demonstrated by high-resolution optical mapping in sheep atria [28]. Focal impulse and rotor modulation (FIRM) technology [29] suggested the presence of rotors and focal impulses in nearly all patients with AF, as did epicardial ECG-based imaging (ECGI) [30]. Targeting such areas of organized activation was associated with termination [29] or organization of AF in remote atrial regions [31]. While the methodology of FIRM was similar to the algorithm used in CARTOFINDER, the main drawback of this technique was relatively poor resolution due to global mapping with a 64-pole basket catheter. This allowed the creation of only a single map of the whole chamber, showing an approximation of wavefront propagation which was likely not very realistic given the rapid and dynamic beat-to-beat change of wavefront direction in AF.
Global mapping
Previous CARTOFINDER studies using a 64-pole basket catheter showed promising short-term results [6,32,33]. Basket catheters can potentially map most of the atrium [34,35], but global, simultaneous mapping comes at the cost of incomplete electrode contact and low (4-10 mm) resolution. Similar to ECGI, no stable rotors were identified, but transient focal and rotational sources were observed that recurred frequently at the same sites.
High-resolution mapping
While recognizing potential limitations with sequential mapping of constantly changing arrhythmias, high-resolution mapping with CARTOFINDER over 30 s at every location allows identification of temporal trends in FAP and RAP. Data acquisition, interpretation and treatment based on this approach do not appear to lengthen procedural duration compared to CC. Potential limitations of sequential recordings with a limited field of view may in the future be mitigated using catheters with additional electrodes and longer splines that enhance coverage without compromising contact or resolution.
Repetitive atrial activation patterns
Previous studies using different methodologies indicated a variable prevalence of focal and rotational activation patterns. While our study shows that the automatically identified number of FAPs is markedly higher than RAPs, in keeping with earlier studies with CARTOFINDER [33,36,37], both FIRM [29] and ECGI [30] demonstrated a higher prevalence of rotors than focal breakthroughs: 70% vs. 80% for rotors and 30% vs. 20% for focal firing, respectively, with FIRM [35] suggesting greater spatial stability than ECGI. Predominant locations of the RAAPs identified in this study are similar to those found with ECGI [30], while FIRM did not specify them [29,35]. The number of RAPs identified in this study was, indeed, lower than with the other methods. Reasons may include several factors, including the requirement for head-meets-tail criteria to be met in the rotational algorithm, the rejection of potential focal sources if arising from the most distal spline of the PentaRay catheter (to ensure true focal activity rather than activity coming into the "field of view" of the catheter), and the fact that only the left atrium was mapped. There has, of course, been no formal direct comparison between the specificity and sensitivity of these techniques, and the thresholds for stability required to declare an area a source of RAP vary by technique. There has been wide variability in the frequency of observed RAAPs (both focal and rotational) between different studies, and variability in the association between duration of AF and the number of left atrial sources. The total number of reported RAAPs clearly represents the observed number of left and right atrial sources as well as both focal and rotational sources, albeit with a majority being rotational and a majority being left atrial, with numbers varying between 1.08 and 2.4 per patient [29,[38][39][40]. Some studies have particularly focused on reporting rotors as the more dominant source. The intensity of a RAAP source is represented in CARTOFINDER by the number of repetitions, and it varied widely within and between patients. The lower number of repetitions in the rotational sources may result from a very robust algorithm that rejects all sources with incomplete rotation around the pivot point. Moreover, while high-resolution mapping may enhance detection of focal sources and micro re-entry, it also has the potential to miss larger rotational circuits if the PentaRay is not applied at the centre of the circuit. Imminent improvements in catheter technology that will maintain resolution while enhancing "field of view" per application will address this and will again be at the core of the design of future studies.
Comparative efficacy
Cumulative AF-free survival in our CARTOFINDER population was 94% after a single IM procedure at 1-year follow-up. This compares favourably with the 1-year results in STAR-AF II (approximately 61%, 54% and 50% in the PVI alone, PVI + CFAE and PVI + linear ablation groups respectively) [3]. Moreover, STAR-AF II had a significant redo rate that varied between 21 and 33% depending on the subgroup.
Although numbers are small, our results seem to be at least comparable to studies involving other individualized mapping technologies. The multicentre FIRM registry showed single-procedure freedom from AF after a follow-up of 1 year of approximately 80% in patients with PsAF [41]. However, a subsequent randomized study did not confirm a convincing benefit. STAR global mapping using a basket catheter offers a different approach, focusing on identifying sites that most frequently demonstrate the earliest activation instead of visually depicting wavefront propagation. During a minimum follow-up of 12 months, 80% of patients were free from arrhythmia [42]. Ablation guided by ECGI targeting driver regions was associated with 64% of patients remaining in sinus rhythm after 1 year [30], although 59% required AADs and 18% redo procedures. In a later multi-centre study, 77% of patients were free from AF at 1 year, but 49% developed AT requiring further management at follow-up [43], in contrast to 15% in the present study.
Study limitations
This study has the following potential limitations: (1) it is a single-centre, nonrandomized, retrospective analysis, although the IM approach was prospectively designed based on an earlier feasibility study; (2) while only two operators performed the procedures, both the visual and automated analysis of data and design of lesion sets are straightforward and widely applicable with minimal additional training for experienced electrophysiologists; (3) follow-up was based on Holter ECG recordings, ECGs and symptomatic review rather than implantable recorders, but this was true of both the IM and CC groups. Thus, although the majority of patients presented with sustained forms of arrhythmia recurrence (AF and AT), some asymptomatic non-sustained arrhythmia episodes may have been missed in both groups. (4) Finally, mapping and ablation were limited to the left atrium.
Conclusions
Individualized, high-resolution arrhythmia mapping to guide ablation beyond PVI based on CARTOFINDER shows much promise when applied as a primary treatment for first-time ablation. There is a pressing need to improve outcomes in patients undergoing ablation for PsAF and LsPsAF, and the promising results of this pilot study need to be confirmed in a larger, randomized trial.
Authors' contributions VM, concept/design, data collection, critical revision of the article, and approval of the article; KMR, data collection, data analysis/interpretation, and statistics, drafting the article; JJ, data collection, critical revision of the article, and approval of the article; RS, data collection, approval of the article; TW, critical revision of the article, approval of the article.
Data availability
The data underlying this article will be shared at reasonable request to the corresponding author.
Declarations
Disclosures Dr Markides has a consulting agreement and has received honoraria for lecturing at Biosense Webster symposia and satellite sessions. He has received an educational grant by Biosense Webster that helps support a clinical fellow.
Other co-authors have had no involvements that might raise the question of bias in the work reported or in the conclusions, implications, or opinions stated.
Ethics approval Written informed consent was obtained from all patients and the study was approved by the institutional research committee.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2022-01-09T14:21:10.414Z | 2022-01-08T00:00:00.000 | {
"year": 2022,
"sha1": "315760c09c093f7a56754706de7509adcd7631fc",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10840-021-01115-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "315760c09c093f7a56754706de7509adcd7631fc",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13980317 | pes2o/s2orc | v3-fos-license | Test-Bed based Comparison of Single and Parallel TCP and the Impact Of Parallelism on Throughput and Fairness in Heterogenous Networks
Parallel Transmission Control Protocol (TCP) has been used to effectively utilize bandwidth for data-intensive applications over high Bandwidth-Delay Product (BDP) networks. On the other hand, it has been argued that a single-based TCP connection with proper modification, such as HSTCP, can emulate and capture the robustness of parallel TCP and can well replace it. In this work, a comparison between single-based TCP and the proposed parallel TCP has been conducted to show the differences in their performance measurements, such as throughput and throughput ratio; link-sharing fairness has also been observed to show the impact of using the proposed parallel TCP on existing single-based TCP connections. The experiment results show that single-based TCP cannot match parallel TCP, especially in heterogeneous networks where packet losses are common. Furthermore, the proposed parallel TCP does not harm TCP fairness, which makes parallel TCP highly recommended for effectively utilizing bandwidth for data-intensive applications.
INTRODUCTION
Parallel TCP uses a set of parallel (modified or standard) TCP connections to transfer data in an application process. With standard TCP connections, parallel TCP has been used to effectively utilize bandwidth for data-intensive applications over high bandwidth-delay product (BDP) networks. On the other hand, it has been argued that a single TCP connection with proper modification can emulate and capture the robustness of parallel TCP and thus can well replace it [1].
From the implementation of parallel TCP, it is found that the single-connection-based approach (such as HSTCP) may not be able to achieve the same effects as parallel TCP, especially in heterogeneous and highly dynamic networks. Parallel TCP achieves better throughput and performance than the single-connection-based approach.
LITERATURE REVIEW
The concept of parallel TCP is not new; its original form is the use of a set of multiple standard TCP connections. In the late 80s and early/mid 90s, the use of multiple standard TCP connections was exploited to overcome the limitation on TCP window size in satellite-based environments [3,4,5]. However, experiments were still needed to demonstrate its efficiency and effectiveness.
Applications that require good network performance often use parallel TCP streams and TCP modifications to improve the effectiveness of TCP. If the network bottleneck is fully utilized, this approach may boost throughput by unfairly stealing bandwidth from competing single-based TCP streams. Improving the effectiveness of TCP is much easier than improving its effectiveness while maintaining fairness [2]. Therefore, TCP fairness should be measured in a real test-bed experiment to show its sensitivity to parallelism.
More recently, there has been a focus on improving the performance of data-intensive applications, such as GridFTP [6,7] and PSockets [8,9]. These solutions focus on the use of multiple standard TCP connections to improve bandwidth utilization. However, these studies did not compare or identify the differences in performance between the use of a set of multiple standard TCP connections and a single-based TCP connection emulating such a set.
Some solutions use a set of parallel connections to outperform a single standard TCP connection while preserving congestion avoidance and maintaining fairness. The approaches described in [10,11] adopt integrated congestion control across a set of parallel connections to make them only as aggressive as a single standard TCP connection. It is shown that these approaches are fair and effective in sharing bandwidth. In [12,13], by using a fractional multiplier, the aggregate window of the parallel connections increases by less than 1 packet per RTT, and only the involved connection halves its window when a packet loss is detected. Similar to parallel TCP, pTCP uses multiple TCP control blocks, and it has been shown that pTCP outperforms single-connection or single-TCP-based approaches such as MulTCP [14].
On the other hand, some solutions have the capability of using a single TCP connection to emulate the behaviour of Parallel TCP. MulTCP [14] makes one logical connection behave like a set of multiple standard TCP connections to achieve weighted proportional fairness. The recent development on high-performance TCP has resulted in TCP variants such as Scalable TCP [15], HSTCP [16], HTCP [17], BIC [18], CUBIC [19], and FAST TCP [20]. All of these TCP variants have the effect of emulating a set of parallel TCP connections.
With the rapid growth of communication devices and increasing network speeds, single-based TCP protocols may not be able to fully utilize high-speed network links (bandwidth). The most important consideration is fairness among the mix of single and parallel TCP connections that share the same link.
THE OBJECTIVES
In this work, a real test-bed experiment has been conducted to compare single-based TCP with parallel TCP, in order to observe the impact of using parallel TCP rather than a single-connection-based approach on performance and bandwidth utilization, and to show the impact of parallel TCP on TCP fairness while it shares the same bottleneck with single-based TCP connections. Jain's Fairness Index (JFI) [21] has been used in this experiment. JFI is given by the well-known formula

F = (sum_{i=1}^{N} x_i)^2 / (N * sum_{i=1}^{N} x_i^2)

where x_i is the measured throughput for flow i, for N flows in the system.
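The index is straightforward to compute from measured per-flow throughputs; a minimal Python sketch follows, where the function name and example throughput values are illustrative assumptions, not part of the paper's tooling.

    def jain_fairness(throughputs):
        """Jain's Fairness Index: F = (sum x_i)^2 / (N * sum x_i^2).
        F = 1.0 for perfectly equal shares and approaches 1/N when a
        single flow monopolizes the link."""
        n = len(throughputs)
        s = sum(throughputs)
        sq = sum(x * x for x in throughputs)
        return (s * s) / (n * sq) if sq > 0 else 0.0

    # Five flows sharing a bottleneck (Mbps values are arbitrary):
    print(jain_fairness([10.0, 10.0, 10.0, 10.0, 10.0]))  # 1.0
    print(jain_fairness([50.0, 0.1, 0.1, 0.1, 0.1]))      # ~0.203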
THE METHODOLOGY
In this work, a comparative study has been conducted between single-based TCP and the proposed parallel TCP algorithm in terms of throughput, throughput ratio, and TCP fairness, based on a test-bed experiment with the real CUBIC implementation in kernel 2.6.27 of Linux OpenSUSE 11.1.
The Algorithm
In this paper, a new parallel TCP algorithm has been proposed and implemented to overcome the problems of existing algorithms. Parallel TCP algorithms were suggested to increase the throughput of standard TCP, because standard TCP cannot fully utilize high-speed links, but they still suffer from fairness problems. The next two subsections show the control flow of the receiver-side and sender-side algorithms.
Receiver-Side Model
As shown in figure 1, the main thread keeps listening for new connection requests on the designated pair of IP address and port number. If a new connection request is received and authorized, the main thread sends a signal to the connection thread manager to establish a new connection thread. The connection thread manager creates a new connection in a separate thread and sends a signal to the monitor thread to start its job, which is monitoring the active connections and gathering the data once all the connections have ended. Each connection thread receives data from the involved sender until the end of the data, and then a connection-termination signal is sent to the monitor thread to update its state. The monitor thread keeps working until it receives the termination signal of the last connection in the parallelized connection set; it then gathers and orders the data, and saves the data in its final form.
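The authors built their tools in C# on Mono (see the Hardware and Software section); the following Python sketch is only a simplified illustration of the receiver-side control flow, assuming a hypothetical 12-byte header (4-byte connection ID plus 8-byte chunk length) so the monitor logic can reorder chunks. It pairs with the sender sketch in the next subsection.

    import socket, threading

    def receive_all(port, n_conns=5):
        """Main thread listens and spawns one thread per accepted
        connection; the final join loop plays the monitor-thread role,
        gathering and ordering the chunks once all connections end."""
        chunks, threads = {}, []

        def handle(conn):
            with conn:
                hdr = b""
                while len(hdr) < 12:
                    hdr += conn.recv(12 - len(hdr))
                cid = int.from_bytes(hdr[:4], "big")
                size = int.from_bytes(hdr[4:12], "big")
                buf = bytearray()
                while len(buf) < size:
                    part = conn.recv(65536)
                    if not part:
                        break
                    buf.extend(part)
                chunks[cid] = bytes(buf)

        with socket.socket() as srv:
            srv.bind(("", port))
            srv.listen(n_conns)
            for _ in range(n_conns):
                conn, _addr = srv.accept()
                t = threading.Thread(target=handle, args=(conn,))
                t.start()
                threads.append(t)
        for t in threads:
            t.join()
        return b"".join(chunks[i] for i in sorted(chunks))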
Sender-Side Model
As shown in figure 2, the main thread of the sender starts working to send the data to the receiver. First, it divides the data into chunks based on the number of connections that will be used to transfer the data. Then, it passes all the data chunks to the communication thread to start its job. The communication thread creates connection threads, one for each chunk of data, and assigns a different sequence number to each connection. Each connection thread then starts data pipelining, which means it creates packets and sends them to the receiver. When the end of the data chunk is reached, a FIN packet is sent to the receiver to start connection termination.
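A matching sender-side sketch, again in Python rather than the authors' C#, illustrating the chunking and per-connection threads; the header format and chunk sizing are the same illustrative assumptions as in the receiver sketch above, and the socket close stands in for the explicit FIN.

    import socket, threading

    def send_chunk(host, port, conn_id, chunk):
        """One 'connection thread': open a TCP connection and stream one
        chunk, prefixed by the illustrative 12-byte header."""
        with socket.create_connection((host, port)) as s:
            header = conn_id.to_bytes(4, "big") + len(chunk).to_bytes(8, "big")
            s.sendall(header + chunk)

    def parallel_send(host, port, data, n_conns=5):
        """Main/communication thread: split data into n_conns chunks and
        launch one TCP connection per chunk, as in the sender-side model."""
        size = -(-len(data) // n_conns)  # ceiling division
        threads = []
        for i in range(n_conns):
            t = threading.Thread(target=send_chunk,
                                 args=(host, port, i, data[i*size:(i+1)*size]))
            t.start()
            threads.append(t)
        for t in threads:
            t.join()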
Network Topology
In this work, a typical single-bottleneck (dumbbell) topology with a symmetric channel and a loss-free reverse path has been used, as shown in figure 3. The targeted traffic has been observed in the presence of background traffic to provide the heterogeneity which is the main point of this study.
Performance Metrics
Throughput, throughput ratio, and TCP fairness between the targeted and background flows are the main points of interest in this work.
Throughput: the amount of data received from source to destination over a TCP connection per unit time.
Throughput Ratio: the amount of data moved successfully from source to destination in a given time period over a physical or logical link.
TCP Fairness: known as the Fairness Index, used to determine whether the applications that use the same link or connection are receiving a fair share of bandwidth. The Fairness Index is denoted F, a decimal value between 0 and 1 representing the degree of fairness.
HARDWARE AND SOFTWARE
In this experiment, the open-source C# MonoDevelop 1.0 and Mono Framework 2.0.1 were used to build the traffic generator and traffic collector tools on the Linux OpenSUSE 11.1 operating system. The experiment was conducted on a real network consisting of four PCs with OpenSUSE installed, two of which are senders and the other two receivers. One sender and one receiver were used to provide the background traffic, and the others were used to provide the targeted traffic.
Meanwhile, in the bottleneck routers, the Linux-based MikroTik RouterOS operating system version 2.9.27 was used, which provides extensive settings to control bandwidth, routing, and queue management algorithms. These routers were configured to reproduce the network environment proposed by Qiang Fu (2007) [1].
THE RESULTS
It is well known that, in single-based TCP, the AIMD algorithm halves its congestion window when a packet loss is detected, which halves the total throughput as well and thus leads to insufficient bandwidth utilization. In contrast, in the proposed parallel TCP algorithm, a timeout occurring in one of the parallelized flows does not affect the congestion windows of the other flows in the same set, and consequently only slightly affects the aggregated congestion window.
For instance, suppose we have a parallel set of five TCP connections and a timeout takes place on two flows; this results in a reduction of around 2/10 of the total throughput of the parallel connection. The value of 2/10 corresponds to halving the congestion windows of the two affected flows.
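This arithmetic generalizes directly; a tiny helper (illustrative only, assuming equal windows across flows) makes the comparison with single-based TCP explicit:

    def aggregate_after_loss(n_flows, n_lossy):
        """Fraction of the pre-loss aggregate congestion window remaining
        when n_lossy of n_flows equal parallel AIMD flows halve their
        windows after a loss/timeout."""
        return ((n_flows - n_lossy) + n_lossy * 0.5) / n_flows

    print(aggregate_after_loss(5, 2))  # 0.8 -> the 2/10 reduction above
    print(aggregate_after_loss(1, 1))  # 0.5 -> single-based TCP halves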
However, the results of this work reveal that single-based TCP cannot emulate parallel TCP in terms of throughput and throughput ratio, and thus cannot replace it, especially in heterogeneous networks. Figure 4 and figure 5 show that increasing the number of parallel TCP connections used for one application process increases the throughput of that application, which makes the proposed parallel TCP algorithm highly recommended for data-intensive applications, especially over high-speed network links. The parallelism occurs in the application layer, which divides the data into small chunks and provides them to the lower layers in order to establish independent connections for them. These flows behave independently in the lower layers of the network, which results in a positive impact on TCP fairness, as shown in figure 6 and figure 7.
CONCLUSION
In this paper, it has been demonstrated that the proposed parallel TCP outperforms single-based TCP in high-speed and highly dynamic networks, producing better throughput than single-based TCP while maintaining fairness with single-based flows.
FUTURE WORK
In this work, the comparison was conducted using CUBIC only, which is not sufficient to conclude that parallel TCP is better than single-based TCP in general. In the near future, there is an intention to conduct a new experiment involving several high-speed TCP variants, such as HTCP, HSTCP, Scalable TCP, and NewReno, and to measure the loss ratio as well.
"year": 2016,
"sha1": "9e59a5cba54467cb396bb1be513811356b86a03d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1609.01374",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9e59a5cba54467cb396bb1be513811356b86a03d",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
270763828 | pes2o/s2orc | v3-fos-license | The evolution, facilitators, barriers, and additional activities of acute flaccid paralysis surveillance platform in polio eradication programme Bangladesh: a mixed-method study
ABSTRACT Background The Global Polio Eradication Initiative (GPEI) helped develop the standard acute flaccid paralysis (AFP) surveillance system worldwide, including knowledge, expertise, technical assistance, and trained personnel. AFP surveillance can complement any disease surveillance system. Objective This study outlines AFP surveillance evolution in Bangladesh, its success and challenging factors, and its potential to facilitate other health goals. Methods This mixed-method study includes a grey literature review, survey, and key informant interviews (KIIs). We collected grey literature from online websites and paper documentation from GPEI stakeholders. Online and in-person surveys were conducted in six divisions of Bangladesh, including Dhaka, Rajshahi, Rangpur, Chittagong, Sylhet, and Khulna, to map tacit knowledge ideas, approaches, and experiences. We also conducted KIIs, and data were then combined on focused emerging themes, including the history, challenges, and successes of the AFP surveillance programme. Results According to the grey literature review, survey, and KIIs, AFP surveillance successfully contributed to decreasing polio in Bangladesh. The major facilitating factors were multi-sectoral collaboration, Surveillance Immunization Medical Officer (SIMO) network activities, the social environment, community-based surveillance, and promising political commitment. On the other hand, high population growth, hard-to-reach areas, people residing in risky zones, and polio transition planning were significant challenges. Bangladesh is also utilizing these polio surveillance assets for other vaccine-preventable diseases. Conclusion As the world is so close to eradicating polio, the knowledge and other assets of AFP surveillance could be used for other health programmes. In addition, its strengths can be leveraged for combating new and emerging diseases.
Background
At the 41st World Health Assembly (WHA) in 1988, the Global Polio Eradication Initiative (GPEI) was launched to eradicate poliomyelitis, a potentially fatal infectious disease caused by three viral serotypes, by the year 2000 [1]. The World Health Organization (WHO), in collaboration with Rotary International, the US Centers for Disease Control and Prevention (CDC), the United Nations Children's Fund (UNICEF), and the Global Alliance for Vaccines and Immunization, jointly led the goal of polio eradication [2]. One of the critical strategies for GPEI was establishing a unique surveillance system to detect all Acute Flaccid Paralysis (AFP) cases in children in every country [3]. AFP is the sudden onset of weakness or paralysis of limbs, often associated with fever, in children under fifteen years of age; it occurs in polio and other diseases with similar features, such as Guillain-Barre syndrome, transverse myelitis, traumatic neuritis, meningitis, encephalitis, and brain tumours [4,5]. AFP surveillance has both passive (reports from the field) and active (active search by surveillance teams) components. AFP surveillance quality depends on early detection and reporting, specimen collection and the reverse cold chain, prompt transportation of specimens to the laboratory, and timely reporting of results [5,6]. Despite the decline in worldwide cases, Afghanistan and Pakistan have yet to eradicate wild polio. According to WHO, circulating vaccine-derived poliovirus type-2 (cVDPV2) cases are spreading rapidly worldwide; the number of cVDPV2 cases in 2020 was 1,009, 254% higher than in 2019 [7].
The knowledge assets, evolution, and innovations learned from the AFP surveillance system can complement other surveillance activities to detect disease outbreaks in the community [8]. Many countries now use the AFP surveillance system to monitor other vaccine-preventable diseases (VPDs) [9,10]. Similarly, in some African countries, AFP surveillance teams have been deployed in active case searches for COVID-19 [11].
Bangladesh adopted the polio eradication goal in 1995. It has implemented four basic strategies following WHO's suggestion: i) routine immunization (RI), ii) supplementary immunization activities (SIAs), iii) AFP surveillance, and iv) mop-up campaigns [12,13]. The country, with the WHO South-East Asia Region, achieved polio-free status on 27 March 2014 [14], and has since maintained all ten key performance indicators for AFP surveillance [12]. Bangladesh has also integrated other VPD surveillance into the AFP surveillance system [15]. An evaluation of the AFP surveillance system showed that the system correctly identified children with polio and non-polio diseases and guided mop-up campaigns to raise awareness and promote polio vaccination [12]. The AFP surveillance team in Bangladesh has also served as frontline health workers for the COVID-19 response [16]. Thus, there is a necessity to synthesize the vast array of AFP surveillance activities; however, no study has taken a comprehensive and systematic approach to capturing the rich knowledge surrounding the AFP surveillance system in Bangladesh. This paper describes the evolution of AFP surveillance in Bangladesh, its facilitators and barriers, and additional activities leveraged from implementing the programme.
Methods
This study is a part of the 'Synthesis and Translation of Research and Innovations from Polio Eradication' (STRIPE) project, which is a consortium of seven international academic partners, including the James P Grant School of Public Health, BRAC University, Dhaka, Bangladesh, and is led by the Johns Hopkins Bloomberg School of Public Health (JHSPH) [17]. A sequential explanatory mixed-method study design (Figure 1) was used to map, synthesize, and disseminate lessons learned from the global polio eradication effort in Bangladesh [18].
We conducted a grey literature review regarding the polio eradication programme in Bangladesh, which was used to develop the survey questionnaire and, subsequently, the key informant interview (KII) guidelines, to complement the mixed-method findings.
For grey literature, a team of two independent researchers explored, with permission, Government of Bangladesh websites and those of related organizations such as Rotary International, UNICEF, BMGF, WHO, GAVI, and the International Centre for Diarrhoeal Disease Research, Bangladesh (icddr,b), and downloaded the relevant information. We searched for unpublished grey documents in the repositories of the Expanded Program on Immunization (EPI), the Directorate General of Health Services (DGHS), the WHO Bangladesh office, and other individuals related to polio eradication initiatives in Bangladesh. This literature review included grey documents on the polio program in Bangladesh from 1988 to 2018. Documents that addressed GPEI support in Bangladesh, effects of GPEI roll-out/impacts in Bangladesh, polio vaccine, polio-related fieldwork or lab work undertaken by GPEI partners, wild poliovirus and vaccine-derived polio studies, and polio surveillance were included, screened, translated if needed, extracted, and transferred into Qualtrics© (online survey software) for analyzing the data under different GPEI strategies, including setting up and mobilization, implementation, end-game strategies, impact on other health programs and health systems, facilitators, and barriers. The researchers reviewed and crosschecked the extracted data among themselves for comprehensiveness and any duplication. However, only AFP surveillance-related data were used for this manuscript.
[Figure 1 flow labels: Qualitative data; Quantitative data; Qualitative data; Interpretation of entire analysis]
We also conducted a quantitative survey with members of Bangladesh's 'Polio Universe' who were directly involved in polio eradication activities in Bangladesh for a minimum of 12 months between 1988 and 2019, drawn from the EPI database, plus additional respondents identified through snowball sampling. The questionnaire was designed by the STRIPE project team based on the Consolidated Framework for Implementation Research (CFIR) and the socioecological model [17] (Table 1). The Bangladesh team piloted the questionnaire and made a few adjustments to make it ready for use in the Bangladesh context. A research team was sent to six administrative divisions of Bangladesh (i.e. Chittagong, Dhaka, Khulna, Rajshahi, Rangpur, and Sylhet) to collect data for paper-based surveys. The survey data were translated into English, cleaned, and analyzed using Stata 13. It was ensured that the full definition of each response (Table 1) was read and explained as written before the respondent answered.
We conducted KIIs following the survey with a subset of survey participants, selected considering their expertise in polio eradication activities and survey nominations. Informed written consent was obtained, and interviews were recorded, transcribed, translated, and checked for errors. We developed an a priori codebook for deductive coding, following the CFIR and the socio-ecological model. A data display matrix was developed to summarize the findings, including relevant quotes from the participants, and emerging codes were added to the a priori codes. Thematic analysis was then performed to describe the evolution of AFP surveillance in Bangladesh, its facilitators and barriers, and additional activities leveraged from implementing the AFP surveillance programme. Ethical approval was obtained from the Institutional Review Board of the BRAC James P. Grant School of Public Health, BRAC University.
Results
We identified a total of 92 documents, of which 69 were included and 23 excluded following the inclusion and exclusion criteria (not related to Bangladesh, clinical, or lab-based papers). The grey documents selected for data extraction include bulletins, reports, fact sheets, scientific papers, manuals, newspapers, and PowerPoint presentations. Table 2 describes the language, type, and source/publisher of the grey literature. Note that some of the grey literature has multiple sources or publishers.
One hundred and twenty people were approached for the survey, and one hundred and nine (90.8%) of them participated, of whom 83 completed the paper-based survey and 26 completed it online (Table 3). Most respondents had more than ten years of experience with polio, and the largest groups worked in Government agencies (62.3%, n = 104) and WHO (17.4%, n = 29), respectively. The majority conducted vaccination (20.8%, n = 53), surveillance (17.3%, n = 44), and strengthening of the delivery system (15.7%, n = 40), respectively. However, many respondents served multiple roles in the polio programme of Bangladesh, and only data from those who worked on AFP surveillance were used.
Seventeen KIIs were conducted at the national level, and one at the sub-national level. Half of the KII participants were from the government, followed by other partners. Among the participants, 13 were involved in AFP surveillance.
The following themes emerged from the analysis of the grey literature review, survey, and KII findings.
Evolution of AFP surveillance in Bangladesh
The AFP surveillance in Bangladesh was introduced in 1990 through the existing government structure.
The Government assigned the Upazila Health and Family Planning Officer (UH&FPO) and the Civil Surgeon (CS) as Disease Surveillance Focal Persons (DSFP) at Upazila and district levels, respectively, and the Medical Officer for Disease Control as Hospital Surveillance Officer (HSO) and Local Surveillance Officer (LSO) [19,20]. There were two surveillance systems: active surveillance and passive surveillance. Due to low case reporting, the Government assigned volunteers and non-government organization (NGO) workers in Upazilas and districts in 1996 for active case finding [20]. In urban areas, international and national organizations, including the United States Agency for International Development (USAID), BASICS, Immunization and Other Child Health (IOCH), the Urban Primary Health Care Project (UPHCP), and the Stop Transmission of Polio (STOP) program, supported the primary surveillance system [21]. The country established comprehensive AFP surveillance in 1997, and since 2002 it has maintained a standard AFP surveillance performance (Table 4), that is, detection of >2 non-polio AFP cases per 100,000 population <15 years of age [22].
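To illustrate the arithmetic behind this indicator, the following is a minimal Python sketch of how the non-polio AFP rate underlying this certification standard is computed; the figures used are invented for illustration and are not Bangladesh's actual data.

```python
# Minimal sketch (illustrative figures only) of the WHO certification
# indicator used above: the non-polio AFP (NPAFP) detection rate per
# 100,000 children under 15 years of age must exceed 2.
def npafp_rate(non_polio_afp_cases: int, population_under_15: int) -> float:
    """Non-polio AFP cases detected per 100,000 children under 15."""
    return non_polio_afp_cases / population_under_15 * 100_000

# Example: 1,200 non-polio AFP cases in an under-15 population of
# 55 million gives ~2.18 per 100,000, which meets the >2 standard.
rate = npafp_rate(1_200, 55_000_000)
print(f"NPAFP rate: {rate:.2f} per 100,000; meets standard: {rate > 2}")
```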
WHO created the Surveillance Medical Officer (SMO) network in 1999, supported by GPEI. In 2002, the District Immunization Medical Officers (DIMOs) recruited by EPI were merged with the SMOs to form the Surveillance Immunization Medical Officer (SIMO) network [22]. Forty-four survey respondents and all KII respondents credited the WHO's contribution and its SMO network with strengthening the AFP and VPD surveillance.
The survey findings noted that the Virology Department of the Institute of Public Health (IPH), established in 1995, was the National Polio Laboratory (NPL) for diagnosing polio cases and is now a WHO-accredited laboratory in Bangladesh. In 2005, it incorporated measles and rubella surveillance and was redesignated as the National Polio and Measles Laboratory (NPML).
Bangladesh achieved certified standards for the ten AFP surveillance performance indicators in 2001. Due to effective AFP surveillance, Bangladesh detected 18 cases of wild poliovirus type 1 (WPV1) imported from western Uttar Pradesh, India in 2006 and contained the outbreak through extensive surveillance and supplementary immunization activities (SIAs) [22].
We do not have any indigenous cases since August 22, 2000. However, India was a threat for polio importation. The newly detected cases were contained through surveillance; all surrounding people were vaccinated. KII_G66.
For more than ten years, Bangladesh has achieved and maintained a certification standard of AFP surveillance performance indicators, with a stool sample collection rate of 99%. Currently, 162 active surveillance and 787 passive surveillance sites are functional [12,23]. The key events of the polio program in Bangladesh are shown in Figure 2.
Role of the AFP surveillance in polio eradication of Bangladesh
When asked about their responsibilities in the polio eradication programme in Bangladesh, 44 of 109 survey respondents chose AFP surveillance as their primary responsibility, reflecting the significant emphasis given to conducting AFP surveillance. The free-text survey responses and the KIIs also highlighted that AFP surveillance, led by the government under the Expanded Program on Immunization (EPI) and supported by the WHO, successfully contributed to the decrease in poliomyelitis cases (Figure 3) and to attaining polio-free status for Bangladesh. One of the key informants also described the importance of AFP surveillance: Since I started working, if I speak about the most critical two phases of polio eradication work, one would be the intervention, that is the vaccination, and another is the searching for polio patients through surveillance. The surveillance still exists, and we still search for AFP cases, and the credit goes to EPI & SMOs. KII_G36.
Multi-sectoral collaboration, support, and activities
Nearly half (n = 18) of survey respondents said that the process of conducting the activities (implementation, including the planning, execution strategies, reflection, and evaluation of activities, or adjustments made to the plan) was the main success factor for AFP surveillance (Table 5). This reflects that the effective collaborative strategies of the government, EPI, the GPEI core partners, and other international and national NGOs contributed to the success of the AFP surveillance. Both the free-text survey responses and the KIIs (Figure 4) signify that the strong collaboration and social engagement among all relevant ministries, local government, political leaders and parties, civil society, local NGOs, the professional EPI workforce, the polio laboratory team, and the WHO SMO network provided technical and financial support, facilitated the implementation, and kept Bangladesh polio-free. One KII respondent said: International organizations like GAVI, UNICEF, WHO ... altogether worked for polio, and they never delayed any EPI related activities. KII_G35. Another respondent stated: Initially the task was started by the health ministry, but later, to mobilize the community, volunteers like Imams and Headmasters and other ministries like health and religion were involved. KII_NG55.
The SIMO network
The second highest proportion of survey respondents, 31.71% (n = 13), selected polio eradication program characteristics. The WHO SMO network's tremendous technical support, including identifying potential AFP cases, preserving and transporting samples, reporting to relevant bodies, and follow-up, was also credited by all the KII respondents. One of them described the network as 'the backbone of polio eradication in Bangladesh' KII_G66. The grey literature review found similar findings [12,19,20]. Again, the literature [19] and the KIIs describe the role of SIMOs in strengthening the RI, developing EPI guidelines, assisting during natural disasters, and conducting surveillance for emerging and re-emerging infectious diseases. One KII respondent said: Gradually, we (SMOs) entered the EPI. So, now in new vaccine introduction, RI monitoring, then rapid convenience, and data quality assessment, our SMOs are involved. Another respondent said: AFP surveillance, and the surveillance for measles-rubella, NNT, Japanese Encephalitis (JE), AEFI, and other VPDs are now implemented by the government with the assistance of SIMO. KII_G58.
Social Environments
Most (53.7%, n = 22) of the survey respondents mentioned that the community setting and context were major contributing external factors to successful AFP surveillance (Table 5). The nationwide community participation during National Immunization Days (NIDs) and Supplementary National Immunization Days (SNIDs) had high visibility in the polio eradication program. To increase case notification and awareness, the Government trained and mobilized community volunteers and local NGO workers in Upazilas and districts, named 'key informants'. These informants were front-line primary health care workers, BRAC volunteers, Imams of local mosques, Union Council members, village doctors, local healers, EPI outreach site caretakers, ORS and contraceptive depot holders, school teachers, and scouts [21]. All KII respondents acknowledged their valuable contribution to identifying AFP cases. One of the respondents said: EPI did have limited staff to conduct the NID and child-to-child search. There were around six lakh (600,000) volunteers, from the community and people from different sectors ... they were very committed. KII_G55. Another respondent said: Their contribution is excellent ... If they did not give information, we would not have been able to strengthen surveillance. They are the real ones ... KII_G64.
High political commitment
Since 1971, political instabilities have taken place, including conflicts, strikes, and shutdowns threatening the overall development of the country. Fortunately, the Government was highly committed to polio eradication efforts, including AFP surveillance, from the beginning [24,25]. Around 17% of survey respondents identified political factors, which include any political or high-level support from either local or national government, as a primary contributor. One of the study respondents stated: The highest level of political commitment had a special focus on polio eradication. Due to political support, we did not face any difficulty in getting adequate support from all government levels ... Every Government owned the program. KII_G54.
Challenges
In our survey, the largest share of respondents, 38.1% (n = 16), identified external factors as key challenges, which include the political, economic, social, and technological environment and others, followed by the process of conducting the activities (33.3%, n = 14) and organizational settings (12%, n = 5) (Table 5). From the literature review and the KIIs (Figure 4), the following external factors stood out as challenges:
Population density
Bangladesh is one of the most densely populated countries in the world, which is a significant challenge for planning AFP surveillance in the country, according to the grey literature [21]. One respondent rightly said: And in our country, where the population is so big, it was difficult to reach all people. KII_NG60.
Hard to reach area
Hard-to-reach areas include areas isolated by rivers, forests, large water bodies ('Haors'), and hills. Many of them remain cut off for months by floodwaters. The grey literature review noted that it was always difficult for health care providers to access those remote areas, which have little or no health infrastructure support, conduct regular reporting, and actively search for AFP cases [5,26]. KII respondents expressed similar experiences: Many places of Noakhali, Chittagong, Barishal, and the Haor areas are hard to reach, where communicating is really hard. The surveillance team gave extra support there. KII_NG56.
Geography
Bangladesh shares borders with India and Myanmar and has refugee camps near the Myanmar border. Bangladesh has had no indigenous polio cases since 2000; however, 18 cases of wild poliovirus type 1 (WPV1) imported from India were reported in 2006 [22]. India was polio-endemic until 2014, and the chance of cross-border circulation was alarming. Myanmar had an outbreak of cVDPV on 7 November 2015 [27], posing a potential threat of importation [28]. Rohingya refugees, labelled by the government as Forcibly Displaced Myanmar Nationals (FDMN), live in camps in Cox's Bazar district in southeast Bangladesh [29] and pose an additional challenge to Bangladesh.
Similarly, one KII respondent mentioned: Bangladesh is surrounded by India and Myanmar, so there was always a chance of importation. We do not have any indigenous cases since August 22, 2000. However, India had long been in endemic status, till 2011. There was always a threat of the importation of polio. Therefore, once there was any case detection in the neighbouring country through surveillance, surrounding people were vaccinated. KII_G66.
The population at risk
The government conducted special microplanning for disadvantaged groups, including communities such as snake-charmers, people living on boats and on islands ('chars') within rivers, tea-garden workers, brothel inmates, minority groups, slum dwellers, children of working mothers, and the floating population [23]. During polio SIAs, vaccinators searched house to house and asked about new AFP cases, especially in high-risk areas [5].
Transition
As we come close to eradicating polio globally, the GPEI has started a gradual transition with national partners. According to the transition plan started in 2019, AFP surveillance will be fully funded by the government, which will replace the SIMO network, and new epidemiologist/public health positions will be created by the government [22]. One KII respondent said: Now the Bangladesh Government plans to absorb this network in some way. Ultimately the Government will take over the SIMO network according to the polio transition plan. KII_NG60.
However, in the literature, the free-text survey responses, and the KII responses, uncertainty about implementing the plan and maintaining the current quality of surveillance has emerged [22]. One KII respondent said: Technical knowledge is not built in one day. It is not the time to stop the donor support in Bangladesh, as the Government might not run these programs alone ... KII_NG60. Another respondent said: Without SMOs, AFP surveillance would not be the same. KII_G54.
AFP surveillance platform beyond Polio
Bangladesh has also extended the AFP surveillance platform beyond the polio program to other health initiatives, especially surveillance for other diseases such as measles-rubella, neonatal tetanus (NNT), Japanese Encephalitis (JE), and adverse events following immunization (AEFI), and it will continue stretching further through the transition plan. The SIMO network has been regularly utilized in national emergencies and natural disasters, such as cyclones Sidr (2007) and Aila (2009), the 2004 tsunami, and the regular floods of the past two decades [12,22]. Also, during the COVID-19 crisis in Bangladesh, SIMOs were deployed to support all aspects of the COVID-19 response [16].
Discussion
Though Bangladesh was the last country in the world to eradicate smallpox [30], it pioneered polio eradication in the Southeast Asian region by stopping polio transmission in 2000 [23]. This study highlights the valuable role of AFP surveillance in attaining polio-free status; the success of AFP surveillance depended on multi-sectoral collaboration and support, the SIMO network, community participation, and high-level political commitment. The study also examines challenges of AFP surveillance, including reaching hard-to-reach areas and people and the transition of the polio programme.
The study emphasizes the central role AFP surveillance played in gaining Bangladesh's polio-free status by serving as an early warning system for the presence of poliovirus. Maintaining sensitive AFP surveillance globally is vital to achieving and sustaining global polio-free status [31]. In addition, low surveillance sensitivity was one of the main reasons for the re-emergence of WPV in South-Eastern Africa in 2022 [32], which again underlines the importance of this surveillance.
Robust multi-sectoral collaboration between ministries, the involvement of other stakeholders such as NGOs and development partners, and strong political commitment were among the contributors to the polio program in Bangladesh. Multi-sectoral intervention and coordination, with collaborative planning and execution across different sectors at national and local levels in Bangladesh, can be seen in the noncommunicable disease and nutrition sectors as well [33,34]. During the recent battle against COVID-19, multi-sectoral collaboration among different ministries of Bangladesh and various national and international stakeholders reinforced the shield against the coronavirus [35]. The role of strong political commitment in strengthening AFP surveillance is also evident in countries like India and Nigeria [36,37]. In addition, the support from partners and donors has helped reduce the incidence of polio globally by 99% since the launch of GPEI [38].
The study identifies that technical tasks, including AFP case detection, sample collection and testing, and coverage evaluation surveys to measure OPV3 coverage, provided an evidence-based approach underlying the success of the surveillance system and the surveillance of other vaccine-preventable diseases. Among the GPEI core partners, WHO provides the bulk of the technical assistance mentioned above to the Government in setting up and operating polio surveillance at national and sub-national levels, and also supports the quality assurance and standardization of surveillance laboratories [26,28]. WHO provided technical support to the Ministry of Health, Bangladesh by building a well-established polio surveillance system through the SIMO network, which helped to combat polio and other emerging and re-emerging diseases, including measles-rubella, neonatal tetanus (NNT), Japanese Encephalitis (JE), malaria, and kala-azar [29]. Like Bangladesh, Nepal and Somalia also use AFP surveillance for VPDs through WHO-supported national programs [39,40]. In addition, SMOs from the National Polio Surveillance Project (NPSP) of India played a crucial role in surveillance activities at both the national and state levels [41].
The study shows that in Bangladesh, key informants or community volunteers were involved in identifying and notifying AFP cases to the authorities as part of community surveillance. Engaging communities against polio was also adopted in many African countries, including Nigeria, Somalia, Kenya, Ethiopia, and Zimbabwe [6,40-43]. In Afghanistan, 'mullahs' (religious leaders), traditional healers, healthcare providers, teachers, parents, and others are trained to detect AFP cases in their communities [44].
This study revealed that the initially low performance of AFP surveillance, hard-to-reach areas, the 'floating' and dense population, the transition, and the threat of importation were barriers to implementing AFP surveillance. These challenges match those of other countries as well. The poliovirus remains a major challenge in remote and inaccessible places worldwide: some areas are sparsely populated, some are densely packed, some lie in the river beds of Lake Chad, and some are in conflict-affected countries [45]. Additionally, WHO notes that a funding decrease due to polio transition planning will increase the risk of disrupting the main EPI functions, especially disease surveillance [46]. Similarly, in Bangladesh, the polio transition poses risks and challenges in implementing the plan of substituting the staff, potentially resulting in compromised surveillance. In addition, as the development partners supported most of the tasks and funding, it will be crucial for Bangladesh to sustain the standard surveillance system [22]. However, three of the priority countries in the Southeast Asian Region, Bangladesh, India, and Indonesia, are advanced in implementing their national transition plans along with funding support from their governments [47].
The study also identifies that in Bangladesh, an extensive national disease surveillance system and a national laboratory were established with the support of GPEI, underlining the importance of moving forward to attain different global and national goals of disease elimination and control, particularly for VPDs. Multiple disease surveillance interventions are integrated with AFP surveillance, which has become a standard practice in many countries for measles and rubella surveillance [8,10]. According to a survey conducted in 2000, 26 of 32 countries used AFP surveillance resources for other infectious diseases like measles, neonatal tetanus, cholera, meningitis, and yellow fever [48]. Similarly, by 2003, 131 of 198 countries globally were using the AFP surveillance system for surveillance of other VPDs [8].
Conclusions
This manuscript provides a systematic exploration of the AFP surveillance system in Bangladesh since its inception. The study identifies key facilitators, including collaboration between government entities, national and international organizations, and community volunteers. It also identifies challenges, including population density, reaching remote areas, geopolitical considerations, and the ongoing transition of AFP surveillance to the government. The manuscript also emphasizes that, as the world is so close to eradicating polio, the assets, knowledge, facilitators, barriers, and additional functions of the AFP surveillance platform could be used and transferred to other health programmes in other countries as well. Therefore, it is high time for the AFP surveillance team to provide guidance and support to the Government to create a surveillance plan for emerging infectious diseases, train and sensitize health workers, and support the laboratories. The government should recognize the value of the polio transition plan, secure ownership over the programme and funding, and strengthen its technical capacity for sustaining the polio-free status and the smooth transition of polio assets to other health priorities.
Figure 1. Sequential exploratory-explanatory mixed-method for the current study.
Figure 4. The structure and reporting system of the polio program in Bangladesh. *DSFP (Disease Surveillance Focal Person), CS (Civil Surgeon), CHO (Chief Health Officer), EPI HQ (Expanded Programme on Immunization Headquarters), HSO (Hospital Surveillance Officer), LSO (Local Surveillance Officer), RMO (Residential Medical Officer), SMO (Surveillance Medical Officer), SIMO (Surveillance Immunization Medical Officer), MODC (Medical Officer for Disease Control), MOCS (Medical Officer for Disease Surveillance), ZMO (Zonal Medical Officer), AHO (Airport Health Officer), HO (Health Officer).
Table 1. Overview of CFIR domains, indicating facilitators of and barriers to the Polio programme implementation in Bangladesh.
The survey and KII data found that environmental surveillance (ES) was established in September 2015 in Dhaka and Gazipur districts and extended in 2018 to Cox's Bazar district. There have been no wild poliovirus cases in Bangladesh since 2006, and no isolations of wild poliovirus since the establishment of environmental surveillance.
Table 2. Categories of the identified grey literature.
Table 3. Profile of the survey respondents.
Primary role in polio eradication activities
*Many respondents served multiple roles in polio eradication activities.
Table 5. Contributing factors and challenges of AFP surveillance implementation (survey).
*The survey questionnaire allowed multiple answers. | 2024-06-28T06:17:16.250Z | 2024-06-27T00:00:00.000 | {
"year": 2024,
"sha1": "c3797ebc8a4d4e2ae30c91860d6890e68614e14d",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "78cabedaa2dc8a56c5c26f32428c8a07b02f6aa8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257361710 | pes2o/s2orc | v3-fos-license | Comparing Conventional and Conversational Search Interaction using Implicit Evaluation Methods
Conversational search applications offer the prospect of improved user experience in information seeking via agent support. However, it is not clear how searchers will respond to this mode of engagement in comparison to a conventional user-driven search interface, such as those found in a standard web search engine. We describe a laboratory-based study directly comparing user behaviour for a conventional search interface (CSI) with that of an agent-mediated multiview conversational search interface (MCSI) which extends the CSI. User reactions and search outcomes for the two interfaces are compared using implicit evaluation based on five analysis methods. Our results indicate that users of the MCSI claim to have a better search experience than with the corresponding standard search interface.
Introduction
The growth in networked information resources has seen search or information retrieval become a ubiquitous application, used many times each day by millions of people in both their work and personal use of the internet. For most users, their experience of search tools is dominated by their use of web search engines, such as those provided by Google and Bing, on various computing platforms. Users' lack of knowledge of the topic of their information need often means that they must perform multiple search iterations. This enables them to learn about their area of investigation and eventually to create a query which describes their information need sufficiently well to retrieve relevant content. The search process is thus often cognitively demanding on the user and inefficient in terms of the amount of work that they are required to do.
Bringing together the needs of users to search unstructured information and advances in artificial intelligence, recent years have seen rapid growth in research interest in the topic of conversational search (CS) systems Radlinski and Craswell (2017). CS systems assume the presence of an agent of some form which enables a dialogue-based interaction between the searcher and the search engine to support the user in satisfying their information needs Radlinski and Craswell (2017). Studies of CS to date have generally adopted a human "wizard" in the role of the search agent Trippas et al. (2017); Avula et al. (2018). These studies have been conducted with the implicit assumption that an agent can interpret the searcher's actions with human-like intelligence. In this study, we take an alternative position, using an automatic rule-based agent to support the searcher in the CS interface, and compare this with the effectiveness of a similar CSI for performing the same search tasks. We introduce a desktop-based prototype MCSI built on a search engine API. Our interface combines a CS assistant with an extended standard graphical search interface. The goals of our study include both better understanding of how users respond to CS interfaces and automated agents, and how these compare with the user experience of a CSI for the same task.
The ubiquity of CSIs means that users have well-established mental models of the search process from their use of these tools. With respect to this, it is important to consider that multiple studies have found that subjects find it difficult to adapt to new technologies, especially when dealing with interfaces Krogsaeter et al. (1994). Thus, when presented with a new type of interface for an equivalent search task, it is interesting to consider how users will adapt and respond to it.
Previous studies of CS interfaces have focused on chatbot-type interfaces which limit the information space of the search Avula et al. (2018); Avula and Arguello (2020), and are very different from conventional graphical search interfaces. Search via engagement with a chat-type agent can result in the development of quite different information-seeking mental models to those developed in the use of standard search systems, meaning that it is not possible to directly consider the potential of CS in more conventional search settings based on these studies. In this study, we are interested in considering how user mental models of the search process formed with CSIs will respond in a conversational search setting intended to enhance the user search experience.
For our study of conversational engagement with a search engine, contrasting it with more conventional user-driven interaction, we adopt a range of implicit evaluation methods. Specifically, we use cognitive workload-related factors (NASA Task Load Index) Hart and Staveland (1988), psychometric evaluation for software Lewis (1995), knowledge expansion Wilson and Wilson (2013), and search satisfaction Kaushik and Jones (2018). Our findings show that users exhibit significant differences in the above dimensions of evaluation when using our MCSI and a corresponding CSI.
The paper is structured as follows: Section 2 overviews existing work in conversational engagement and its evaluation, Section 3 describes the methodology for our investigation, Section 4 provides details of our experimental procedure and results, including analysis, findings and hypothesis testing, and Section 5 concludes.
Related work
In this section, we provide an overview of existing related work in conversational interfaces, conversational search and relevant topics in evaluation.
Conversational Interfaces
Conversational interaction (CI) with information systems is a longstanding topic of interest in computing; however, activity has increased greatly in recent years. The key motivation for examining CI is the development of interactive systems which enable users to achieve their objectives using a more natural mode of engagement than cognitively demanding traditional user-driven interfaces, which require users to develop mental models to use them reliably. Recent research on CI has focused on multiple topics, including the mode of interaction, the intelligence of conversational agents, the structure of conversation, and dialogue strategy McTear et al. (2016); Abdul-Kader and Woods (2015); Roller et al. (2020). Progress in CI can be classified into four facet areas: smart interfaces, modeling conversational phenomena, machine learning approaches, and toolkits and languages Singh et al. (2019); Braun and Matthes (2019); Araujo (2020).
Current chatbot interfaces have evolved, in common with many areas, from rule-based systems to data-driven approaches using machine learning and deep learning methods Nagarhalli et al. (2020). Toolkits have been developed to support the construction and testing of chatbot agents for particular applications. The majority of research on conversational agents has focused on question answering and chit-chat (unfocused dialogue) systems. Only very limited work has been done on information-seeking bots, dating mainly from the early 1990s Stein and Thiel (1993). One recent example of multimodal conversational search is presented in our earlier work Kaushik et al. (2020), which enables a user to explore long documents using a multi-view interface. Our current study is focused on evaluation of this interface in comparison to a CSI.
Conversational Search
While users of search tools have become accustomed to standard "single shot" interfaces, of the form seen in current web search engines, interest in the potential of alternative conversational search-based tools has increased greatly in recent years Radlinski and Craswell (2017). Traditional search interfaces pose significant challenges for users, in requiring them to express their information needs as fully formed queries, although users have generally learned to use them to good effect. The idea of agent-supported conversation-based interaction assisting them in the search process is thus very attractive. Multiple studies have been conducted to investigate the potential of conversational search in different dimensions. These studies, however, have generally involved the use of a human in the role of an agent wizard Avula and Arguello (2020); Avula et al. (2018, 2019). These have the limitation of assuming both human intelligence and error-free speech recognition, which will generally not be the case in a real system Trippas et al. (2017, 2018). Some studies on conversational search have been based completely on a data-driven approach, using machine learning methods to extract a query from multiple utterances. The drawback of this approach is that the dialogues are not analyzed based on incremental learning over multiple conversations Nogueira and Cho (2017); Bowden et al. (2017). Other studies have developed agents using an intermediate approach in which a combination of rules derived from users' search behaviour De Bra and Post (1994); Kaushik and Jones (2018) is used to form a dialogue strategy, guiding the user in conversations with the support of a pretrained machine learning model to extract the intent and entities from each utterance. We follow this last approach in our multiview prototype to understand the user search experience in a conversational setting.
Evaluation
Currently, there is no standard mechanism for the evaluation of conversational search interfaces. In this study, we adopt implicit measures in five dimensions Kaushik and Jones (2021): user search experience Kaushik and Jones (2018), knowledge gain Wilson and Wilson (2013), cognitive and physical load Hart and Staveland (1988), interactive user experience Schrepp (2018), and usability of the interface software Lewis (1995).
Methodology
In this section, we describe the details of our user study, which aims to enable us to observe, better understand, and contrast the behaviour of searchers using a CSI and our prototype MCSI. This section is divided into two subsections: interface design and experimental setup. In order to investigate user response to search using a MCSI and to contrast this with a comparable CSI with the same search back-end, we developed a fully functioning prototype system, shown in Figure 1. The interface is divided into two distinct sections: the right-hand side, which corresponds to a standard CSI, and the left-hand side, which is a text-based chat agent that interacts with both the search engine and the user. Essentially the agent works alongside the user as an assistant, rather than being positioned between the user and the search engine Maes (1994).
Prototype Conversational Search System
The web interface components are implemented using the Python web framework Flask together with HTML, CSS, and JS toolkits. The agent is controlled by a logical system and is implemented using Artificial Intelligence Markup Language (AIML) scripts. These scripts are used to identify the intent of the user, to access a spell-checking API, and are responsible for search and for giving responses to the users. Since the focus of this study is on the functionality of the search interface, search is carried out by making calls to the Wikipedia API. The interface includes multiple components, as discussed in detail in our previous work Kaushik et al. (2020).
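As a rough sketch of this architecture (not the authors' actual code; the route name and response shape are our own assumptions), a minimal Flask endpoint forwarding a query to the public MediaWiki search API and returning the top 3 results might look as follows.

```python
# Illustrative reconstruction (not the prototype's actual code) of the
# search back-end: a Flask route forwarding the query to the public
# MediaWiki search API and returning the top 3 results.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
WIKI_API = "https://en.wikipedia.org/w/api.php"

@app.route("/search")
def search():
    query = request.args.get("q", "")
    params = {
        "action": "query",
        "list": "search",
        "srsearch": query,
        "srlimit": 3,  # the prototype displays the top 3 results
        "format": "json",
    }
    data = requests.get(WIKI_API, params=params, timeout=10).json()
    hits = data["query"]["search"]
    return jsonify([{"title": h["title"], "snippet": h["snippet"]} for h in hits])

if __name__ == "__main__":
    app.run(debug=True)
```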
Dialogue Strategy and Taxonomy
After exploring user search behaviour Kaushik and Jones (2018) and dialogue systems, we developed a dialogue strategy and taxonomy to support CS. The dialogue process is divided into three phases and four states, as discussed in detail in our previous work Kaushik et al. (2020). The three phases are:
• Identification of the information need of the user,
• Presentation of the results in the chat system,
• Continuation of the dialogue until the user is satisfied or aborts the search.
If the query is not clear, the agent can seek confirmation from the user; it can also correct the query using the spell checker and reconfirm the corrected query with the user to make the process precise enough to provide better results. The agent can also highlight specific information in long documents to help direct the user's attention to potentially important content.
The user always has the option to interrupt the ongoing communication process by entering a new query directly into the Query Box. The communication finishes when the user ends the search, either having satisfied their information need or having failed to do so.
System Workflow
The system workflow is divided into two sections, Conversation Management and Search Management, discussed in detail in our previous work.
1. Conversation Management: This includes a Dialogue Manager, a Spell Checker, and a connection to the Wikipedia API. The Dialogue Manager validates the user input and either sends it to the AIML scripts or handles it itself if the user input is misspelt or incorrect. We use AIML scripts to implement the response to the user. The system response to user input directed to the AIML scripts is determined by the AIML script, which further classifies the user's intent. The two major categories of intent are greeting and search. The greeting intent is responsible for initializing and ending the conversation and for system revealment. The search intent is responsible for directing the user input to the spell checker or the Wikipedia API and transferring control to search management. The Spell Checking module is responsible for checking the spelling of the query and asking for confirmation from the user (for example, if the user searches for "viusal", the system would ask: Do you mean "visual"?). Once the user confirms "yes" or "no", the query is forwarded to the Wikipedia API. (A minimal sketch of this routing logic is given after this list.)
2. Search Management: This is responsible for search and for display of the top 3 search results. The user may also explore sub-sections of a selected document. Search management also has an option to display the full document; this opens a display with the sections important with respect to the query highlighted. The criterion for an important section is based on a custom algorithm which extracts important sentences using a TF-IDF score for each sentence, selecting the top-scoring sentences. The top 30% of extracted sentences are divided into clusters by density-based clustering (DBSCAN) to extract diverse segments (combinations of sentences). Important segments are then selected from these segments using a cosine similarity score with the query. (A sketch of this pipeline is also given below.)
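To make the Conversation Management routing concrete, the following minimal Python sketch illustrates the intent classification and spell-check confirmation step described above. It is an illustrative stand-in: the actual prototype implements this logic in AIML scripts with an external spell-checking API, and the rules and toy spelling dictionary here are invented.

```python
# Illustrative stand-in for the AIML-based routing described above:
# classify an utterance as a greeting or a search intent, and route
# misspelt queries through a spell-check confirmation step.
GREETINGS = {"hi", "hello", "hey", "bye", "thanks"}

def correct(query: str) -> str:
    """Stand-in for the external spell-checking API."""
    known_fixes = {"viusal": "visual"}  # toy dictionary for illustration
    return " ".join(known_fixes.get(w, w) for w in query.lower().split())

def handle(utterance: str) -> str:
    if utterance.lower().strip() in GREETINGS:
        return "Hello! What would you like to search for?"  # greeting intent
    suggestion = correct(utterance)
    if suggestion != utterance.lower():
        # Ask the user to confirm before forwarding to the Wikipedia API.
        return f'Do you mean "{suggestion}"?'
    return f'Searching Wikipedia for "{utterance}" ...'  # search intent

print(handle("hello"))   # greeting intent
print(handle("viusal"))  # spell-check confirmation
```

Similarly, a minimal sketch of the Search Management segment-extraction pipeline (TF-IDF sentence scoring, top-30% selection, DBSCAN clustering for diversity, and cosine similarity with the query) is given below using scikit-learn. The parameter values (e.g. eps, min_samples) are illustrative assumptions, not the authors' settings.

```python
# Sketch of the segment-extraction pipeline described above.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def highlight(sentences: list[str], query: str, top_frac: float = 0.3):
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(sentences + [query])
    sent_vecs, query_vec = X[:-1], X[-1]

    # Score sentences by summed TF-IDF weight and keep the top 30%.
    scores = np.asarray(sent_vecs.sum(axis=1)).ravel()
    k = max(1, int(len(sentences) * top_frac))
    top_idx = np.argsort(scores)[::-1][:k]

    # Cluster the top sentences into diverse segments.
    labels = DBSCAN(eps=0.6, min_samples=1, metric="cosine").fit_predict(
        sent_vecs[top_idx])

    # From each cluster, keep the sentence most similar to the query.
    sims = cosine_similarity(sent_vecs[top_idx], query_vec).ravel()
    best = {}
    for i, lab in enumerate(labels):
        if lab not in best or sims[i] > sims[best[lab]]:
            best[lab] = i
    return [sentences[top_idx[i]] for i in best.values()]
```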
User Engagement
The user can interact with both the search agent assistant and directly with the search engine. If the user commences a search from the Retrieval Results box, the assistant initiates a dialogue to assist them in the search process. The system also provides support to the user in reading full documents: as described above, important sections in long documents are highlighted to ease reading and reduce cognitive effort.

Figure 2. Example backstory: It is late, but you can't get to sleep because a sore throat has taken hold and it is hard to swallow. You have run out of cough drops, and wonder if there are any folk remedies that might help you out until morning.
Review of Long Documents
In our study reported in Kaushik and Jones (2018), we note that users can spend considerable time reviewing long documents.
Our MCSI aims to support these users and reduce their required effort by highlighting important segments with respect to the user's query, as described above. This facility also provides the user with the opportunity to explore subsections within a document instead of needing to read a full long document.
Conventional Interface
To enable direct comparison with our MCSI, a CSI for our study was formed by using the MCSI with the agent panel removed and the document highlighting facilities disabled. The searcher enters their query in the query box, document summaries are returned by the Wikipedia API, and full documents can be selected for viewing.
Information Needs for Study
For our investigation, we wished to give searchers realistic information needs which could be satisfied using a standard web search engine. In order to control their form and detail, we decided to use a set of information needs specified within backstories, e.g. as shown in Figure 2. The backstories that we selected were taken from the UQV100 test collection Bailey et al. (2016), whose cognitive complexity ratings are based on the Taxonomy of Learning Krathwohl (2002). We decided to focus on the most cognitively engaging backstories, the Analyze type, in the expectation that these would require the greatest level of user search engagement to satisfy the information need.
Since the UQV100 topics were not provided with type labels, we selected a suitable subset as follows. The UQV100 topics are labeled with estimates of the number of queries which would need to be entered and the number of documents which would need to be accessed in order to satisfy the associated information need. We used the product of these figures as an estimate of the expected cognitive complexity, and then manually selected 12 of the highest-scoring backstories that we rated as most suitable for use by general web searchers, e.g. not requiring specific geographic knowledge or knowledge of specific events.
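As a toy illustration of this selection heuristic (the topic estimates below are invented, not UQV100 data), the scoring and ranking can be expressed as:

```python
# Toy sketch of the backstory selection heuristic: rank topics by the
# product of estimated queries and estimated documents as a proxy for
# cognitive complexity. The topic estimates below are invented.
topics = [
    {"id": "T01", "est_queries": 4, "est_docs": 9},
    {"id": "T02", "est_queries": 2, "est_docs": 3},
    {"id": "T03", "est_queries": 5, "est_docs": 7},
]
ranked = sorted(topics, key=lambda t: t["est_queries"] * t["est_docs"],
                reverse=True)
print([t["id"] for t in ranked])  # ['T01', 'T03', 'T02'] (scores 36, 35, 6)
```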
Experimental Procedure
Participants in our study had to complete search tasks based on the backstories using the MCSI and the CSI. Sessions were designed so that the assignment of search tasks and the use of the alternative interfaces were arranged to avoid potential sequence-related biasing effects. Each session consisted of multiple backstory search tasks. While undertaking a search session, participants were required to complete pre- and post-task search questionnaires. In this section we first give details of the practical experimental setup, then outline the questionnaires, and finally describe a pilot study undertaken to finalise the design of the study.
Experimental Setup
Participants used a setup of two computers with two monitors side by side on a desk in our laboratory. One monitor was used for the search session, and the other to complete the online questionnaires. Participants carried out their search tasks, accessing the Wikipedia search API using our interfaces running in a Google Chrome browser. In addition, all search activities were recorded using a standard screen recorder tool to enable post-collection review of user activities. Approval was obtained from our university Research Ethics Committee prior to undertaking the study. Participants were given printed instructions for their search sessions and for each task.
Questionnaires
Participants completed two questionnaires for each search task. Each questionnaire was divided into three sections:
• Basic Information Survey: Participants entered their assigned user ID, age, occupation, and task ID.
• Pre-Search: Participants entered details of their pre-existing knowledge with respect to the topic of the search task to be undertaken.
• Post-Search: Post-search feedback from the user, including their search experience, knowledge gain, and a post-search summary.
The questionnaires were completed online in a Google form.

Table 1: Task load index comparing the load on users of the two systems (MCSI and CSI), with an independent two-tailed T-test.
Pilot Study
A pilot study was conducted with two undergraduate students in Computer Science using two additional backstory search tasks. This enabled us to see how long it took them to complete the sections of the study using the CSI and the MCSI, to gain insights into the likely behaviour of participants, and to generally debug the experimental setup.
Each of the pilot search tasks took around 30 minutes to complete. Feedback from the pilot study was used to refine the specification of the questionnaire. Results from the pilot study are not included in the analysis.
Study Design
Based on the results of the pilot study, each participant in the main study was assigned two of the 12 selected search task backstories, with the expectation that their overall session would last around one hour. Pairs of backstories for each session were selected using a Latin square procedure. After every six tasks, the sequence of allocation of the interfaces was rotated to avoid any sequence effect Bradley (1958).
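A minimal sketch of this counterbalancing scheme, with the exact rotation rule as our own assumption rather than the study's published procedure, is as follows.

```python
# Illustrative sketch of the counterbalancing scheme: each subject gets
# a rotated pair of backstories, and the interface order flips after
# every six tasks. The exact rotation rule here is an assumption.
TASKS = [f"B{i:02d}" for i in range(1, 13)]  # the 12 backstory tasks

def assignments(n_subjects: int = 24):
    plans = []
    for s in range(n_subjects):
        # Latin-square-style rotation: each subject starts one position
        # later in the task list, balancing tasks across subjects.
        pair = (TASKS[s % 12], TASKS[(s + 1) % 12])
        # Rotate interface order after every six tasks.
        order = ("CSI", "MCSI") if (s // 6) % 2 == 0 else ("MCSI", "CSI")
        plans.append({"subject": s + 1, "tasks": pair, "order": order})
    return plans

for plan in assignments()[:3]:
    print(plan)
```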
Each condition was repeated 4 times, with the expectation that this would give sufficient results to be able to observe significant differences where present. Since there were 12 tasks, this required 24 subjects to participate in the study. In total, 27 subjects (9 female, 18 male) in the age group 18-35 participated in our study (excluding the pilot study); we examined the data of 25 subjects, since 2 subjects were found not to have followed the instructions correctly. The study was conducted in two phases. Each user performed a different search task using the CSI and the MCSI, with the sequencing of their use of the interfaces varied to avoid learning or biasing effects.
As well as completing the questionnaires, the subjects also attended a semi-structured interview after completing their session of two tasks using both interface conditions. The user actions in the videos and the interviews were thematically labelled by two independent analysts, and Kappa coefficients were calculated (approx. mean .85) Landis and Koch (1977). Disparities in labels were resolved by mutual agreement between the analysts. The interview questionnaire dealt with user search experience, software usability, and cognitive dimensions, and was quantitatively analyzed. Based on the interview analysis, 92% of the 25 participants were happy and satisfied with the MCSI. In all conditions, subjects preferred the MCSI, showing that there was no sequence effect arising from the order of the interfaces in the search sessions.
Each hypothesis of the study was tested using a T-test (since the number of samples was less than 31). Each hypothesis was evaluated on a number of factors which contribute to the examination in each dimension, as discussed below.
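As a minimal sketch of the statistical procedure, assuming SciPy and using invented scores, the two forms of T-test used in the analysis (independent for between-interface comparisons, paired for pre- vs post-search scores) can be run as follows.

```python
# Minimal sketch of the two forms of T-test used in the analysis;
# the scores below are invented for illustration.
from scipy import stats

mcsi = [5.1, 6.0, 5.5, 6.2, 5.8]   # e.g. ratings for the MCSI
csi = [4.2, 4.8, 5.0, 4.1, 4.6]    # ratings for the CSI
t, p = stats.ttest_ind(mcsi, csi)  # two-tailed independent T-test
print(f"independent: t={t:.2f}, p={p:.3f}")

pre = [1.0, 0.5, 1.5, 2.0, 1.0]    # pre-search summary scores
post = [2.0, 1.5, 2.5, 2.5, 2.0]   # post-search scores, same subjects
t, p = stats.ttest_rel(pre, post)  # paired (dependent) T-test
print(f"paired: t={t:.2f}, p={p:.3f}")
```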
Study Results
The MCSI was compared with the conventional interface using implicit evaluation methods examining multiple dimensions: cognitive load, knowledge gain, usability, and search satisfaction.
Parameter definitions:
Dqual: Measures the quality of facts in the summary in the range 0-3, where 0 represents irrelevant facts and 3 specific details with relevant facts.
Dintrp: Measures the association of facts in a summary in the range 0-2, where 0 represents no association of the facts and 2 that all facts in a summary are meaningfully associated with each other.
Dcrit: Examines the quality of the critique of the topic written by the author in the range 0-1, where 0 represents facts listed without thought or analysis of their value and 1 where both advantages and disadvantages of the facts are given.
Cognitive dimensions
Conventional search can impose a significant cognitive load on the searcher Kaushik (2019). An important factor in the evaluation of conversational systems is measurement of the cognitive load experienced by users. To measure user workload, the NASA Ames Research Center proposed the NASA Task Load Index Hart and Staveland (1988); Kaushik and Jones (2021). In terms of cognitive load, users were asked to evaluate the conventional interface and the MCSI in 6 dimensions from the NASA Task Load Index associated with mental and physical load, as shown in Table 1. 1. H0: Users experience a similar task load during search with the two interfaces: The grading scale of the NASA Task Load Index lies between 0 (low) and 7 (high). We compared the mean difference between the two systems on all six parameters. In all aspects, subjects experienced lower task load using the MCSI. Subjects claimed more success in accomplishing the task using the MCSI, and the results for accomplishing the task were statistically significantly different. Subjects felt less insecure, discouraged, irritated, stressed, and annoyed while using the MCSI, with a significant difference (P<0.10). This implies that the null hypothesis was rejected on the basis of the Task Load Index. Although four factors were not significantly different, the mean difference between the two systems on these factors was more than 10%. This shows that users experienced less subjective mental workload while using the MCSI.
Usability
CS studies generally do not explore the dimensions of software usability. However, it is important to understand the challenges and opportunities of CS systems on the basis of software requirements analysis. This allows a system to be evaluated with respect to real-life deployment and areas for improvement to be identified. Lower effectiveness and efficiency of a software system can increase cognitive load, reduce engagement, and act as a barrier in the process of learning while searching Kaushik and Jones (2018); Vakkari (2016); Kaushik (2019). Usability is an important evaluation metric for interactive software. The IBM Computer Usability Satisfaction Questionnaires are a psychometric evaluation of software from the perspective of the user Lewis (1995), known as the Post-Study System Usability Questionnaire (PSSUQ). The PSSUQ was evaluated using four dimensions: overall satisfaction score (OVERALL), system usefulness (SYSUSE), information quality (INFOQUAL), and interface quality (INTERQUAL), which together include fifteen parameters. On each dimension, the MCSI outperformed the CSI. The grading scale lies between 0 (low) and 7 (high). We compared the mean difference between the two systems on all parameters; in all aspects, subjects rated the MCSI more favourably, as shown in Table 2.
1. H0: User psychometric evaluation of the conversational interface and the conventional search interface shows no significant difference: An independent T-test was conducted. It was found that for all parameters the MCSI outperformed the CSI. The null hypothesis was rejected and the alternative hypothesis accepted, namely that the MCSI performs better than the CSI.
Knowledge Expansion
Satisfaction of the user's information need is directly related to their knowledge gain about the search topic. Knowledge gain can be measured based on recall of new facts after the completion of the search process Wilson and Wilson (2013). We investigated knowledge expansion by comparing pre-search and post-search summaries written by the participants, based on a number of parameters, as shown in Table 3, for both systems. We divide the hypothesis into two sub-parts as follows: 1. Comparison of pre-search and post-search summaries: to verify the knowledge expansion after each task, independent of the search interface used by the participant. 2. Comparison of the mean difference between pre-search and post-search summaries for each interface: to verify which interface better supported users in gaining knowledge.
Users gain knowledge during search when using either of the search interfaces. To measure their knowledge gain, we asked subjects to write a short summary of the topic before the search and after the search. Each summary was analyzed based on three criteria as described in Wilson and Wilson (2013): quality of facts (DQual), interpretations (DInterpretation), and critiques (DCritique), as shown in Table 3. The summaries were scored against these three factors by two independent analysts, with a Kappa coefficient of approximately .85 Landis and Koch (1977). We conducted dependent T-tests on tasks completed using both the conventional search interface and the MCSI.
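For reference, the inter-rater agreement check described here can be reproduced in a few lines, assuming scikit-learn's implementation of Cohen's kappa; the label sequences below are invented for illustration.

```python
# Sketch of the inter-rater agreement check: Cohen's kappa between the
# two analysts' scores, via scikit-learn; the labels are invented.
from sklearn.metrics import cohen_kappa_score

analyst_a = [2, 1, 3, 0, 2, 2, 1, 3]  # e.g. DQual scores, analyst A
analyst_b = [2, 1, 3, 1, 2, 2, 1, 3]  # same summaries, analyst B
kappa = cohen_kappa_score(analyst_a, analyst_b)
print(f"Cohen's kappa: {kappa:.2f}")  # ~0.82 here; ~.85 indicates strong agreement
```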
1. H0: No significant difference in the increase of knowledge after completing the search task in the two settings: As shown in Tables 4 and 5, the difference between pre-search and post-search scores for all three factors was statistically significant in both search settings. This implies that subjects expand their knowledge while carrying out the search. This rejects the null hypothesis, leading to the alternative hypothesis and the conclusion that users experienced a significant increase in their knowledge after search in both settings.
After concluding the alternative hypothesis, it was important to investigate whether one system was better at expanding the user's knowledge. We proposed and tested the following hypothesis.
1. H0: Knowledge gain during search is independent of the interface design: In this test, we compared the mean of the difference in scores between pre-search and post-search summaries in the two settings. The test was conducted on the change in the three parameter scores discussed above, as shown in Table 6. It was found that in the MCSI setting, the subjects scored higher on the change in critique, quality, and interpretation. This implies that the subjects learned more while using the MCSI. The difference in critique score was statistically significant, while the other two parameters were not; nevertheless, the quality and interpretation scores increased by more than 20% while using the MCSI. This supports the alternative hypothesis: subjects' knowledge expands more when using the MCSI.

Table 7: Characteristics of the search process Vakkari (2016) by the change in knowledge structure, where * indicates statistically significant results.
Search Experience
Learning while searching is an integral part of the information-seeking process. Based on search as learning as proposed by Vakkari (2016), the user search experience can be evaluated on 15 parameters, including the relevance of the search results, the quality of the text presented by the interface, and understanding of the topic, in both search settings via pre-search and post-search questionnaires.
1. H0: Subjects find no significant difference between the two interfaces: An independent T-test was conducted on all 15 parameters, shown in Table 7. The null hypothesis was rejected: subjects' search experience was statistically significantly better with the MCSI. In the pre-search questionnaire, subjects were asked to anticipate the difficulty level of the search before starting, and in the post-search questionnaire, subjects were asked to indicate the difficulty level they actually experienced. It was observed that, from anticipated to actual, the difficulty level increased for the CSI (16%) and decreased for the MCSI (14%).

Table 8: UEQ-S score for the CSI and MCSI, where 'P' stands for Pragmatic Quality and 'H' stands for Hedonic Quality (statistically significant).
Interactive User Experience
To ensure a conversational search system provides a reasonable user experience (UX), it is critical to have a measure which captures user insights about the system. A UX questionnaire for interactive products is the User Experience Questionnaire (UEQ) Laugwitz et al. (2008); Schrepp et al. (2017); Hinderks et al. (2018). This questionnaire enables outcomes to be analyzed and interpreted by comparison with benchmarks from a larger dataset of outcomes for other interactive products Hinderks et al. (2018), and provides the opportunity to compare interactive products with each other. For such purposes, a brief version (UEQ-S) was prepared which has only 8 parameters Hinderks et al. (2018). The UEQ-S was preferred for the MCSI, since it is mostly used for interactive products, and because users filled in the experience questionnaire after finishing the search task; if there were too many questions, a user might not complete the answers fully or might even refuse to complete them (having finished the search task and being in the process of leaving or starting the next task, their motivation to invest more time on feedback may be limited). The UEQ-S contains two meta dimensions, Pragmatic and Hedonic quality, each containing 4 parameters, as shown in Table 8. Pragmatic quality explores the usage experience of the search system, while Hedonic quality explores the pleasantness of use of the system.
1. H0: Users have a similar interactive experience when using the different interfaces: Users evaluated the systems on 8 parameters, as shown in Table 8. The grading scale was between 0 (low) and 7 (high). We compared the mean difference between the two systems on all parameters. In all aspects, subjects' experience was more positive in Pragmatic quality and Hedonic quality when using the MCSI, and statistically significantly different in comparison to the CSI. Subjects found the CSI obstructive, complicated, confusing, inefficient, and boring, with a significant difference (P<0.10). This implies that the null hypothesis was rejected on the basis of user experience. Based on these findings, we can conclude that the user experience was more pleasant and easier with the MCSI.
Analysis of Study Results
In summary, hypothesis testing showed that the MCSI reduced cognitive load, increased knowledge expansion, increased cognitive engagement and provided a better search experience. Based on the results of the study, a number of research questions dealing with factors relating to conversational search, the challenges of conventional search, and user search behaviour can be addressed.
4.6.1 RQ1: What are the factors that support search using the MCSI?
Around 92% of the subjects claimed in the post-search interview that the MCSI was better than the CSI. Around 48% found that the MCSI allowed them to access information more easily. A similar view was expressed about the relevance of the information and the structure in which it was presented to the user. Around 38% of subjects were satisfied with the options and suggestions provided by the MCSI. Other reasons for their satisfaction were the highlighting of segments in long documents, and finding the search system effective, interactive, engaging, and user friendly.
4.6.2 RQ2: What are the challenges with the conventional search system?

Subjects found some major challenges in completing the search tasks with the CSI. The limitations were mainly based on observations of user interactions and feedback after the search task. The limitations can be divided into five broad categories.
Exploration: Around 60% of the subjects claimed they found it difficult to explore the content with the CSI, which meant that they were unable to learn through the search process. It was noted that they needed to expend much effort to go through whole documents, which discouraged them from exploring further to satisfy their information need. Another reason was that too much information was displayed on the page, which confused them during the process of information seeking.
Cognitive Load: Around 28% of subjects experienced issues with cognitive load using the CSI. In current search systems, a query to the search engine returns the best documents in a single shot. The user may need to perform multiple searches, modifying the search query each time, to satisfy their information need. There are multiple limitations associated with this single-query search approach which place a high cognitive load on the user. The following points highlight the limitations and weaknesses of single-shot search Kaushik (2019).
1. The user must completely describe their information need in a single query.
2. The user may not be able to adequately describe their information need.
3. High cognitive load is placed on the user in forming a query.
4. An information retrieval system should return relevant content in a single pass based on the query.
5. The user must inspect the returned content to identify relevant information.
Interaction and Engagement: 8% found difficulty in engaging and interacting with long documents. Subjects can find content in long documents irrelevant or vague with respect to their specific information need. Using the CSI, 32% of the subjects did not find the long documents precise enough to satisfy their information need. In contrast, 90% of them were satisfied with the way information was presented to them in the MCSI, although the Wikipedia API and underlying retrieval method were the same for both interfaces.
Highlighting: Another issue, referred to by around 8% of subjects, related to text highlighting. Subjects found the absence of highlighting in the CSI frustrating.

The majority of subjects (92%) claimed that the MCSI was better; the remaining 8% faced some challenges using it. Subjects wanted more sections and subsections in the documents to support their exploration, and also wanted support for image search. Around 4% of the subjects felt the need for improvement in operational speed and better incorporation of standard features such as spellchecking. Subjects found the chat interface helpful for exploring long documents. They were keen to see the addition of speech as a mode of user interaction and a more refined algorithm for the selection of images for presentation to the user.
Subjects appreciated the usefulness of the interface in supporting exploratory search, but suggested that this would be further improved by the incorporation of a question answering facility.
4.6.5 RQ5: How does user experience vary between search settings in comparison to each other?
1. Observing the Pragmatic and Hedonic properties of the CSI: The users provided feedback based on their experience using the CSI. The CSI score is negative with respect to both Pragmatic and Hedonic properties, and the overall score is also negative. From this we can infer that the user's experience of the CSI system is neither effective nor efficient, as shown in Table 9. Table 9 reports the means after data transformation for the UEQ-S, where -3 is the most negative and +3 the most positive value, together with the confidence interval and confidence level; the smaller the confidence interval, the higher the precision Schrepp (2018). The confidence interval and confidence level confirm our analysis that all the dimensions of the Pragmatic and Hedonic properties were negatively experienced by the users. Generally, items belonging to the same scale should be highly correlated. To verify user consistency, the alpha-coefficient correlation was calculated using the UEQ-S toolkit (a minimal sketch of this alpha computation is given after the two observations). Across different studies, an alpha value > 0.7 is considered sufficiently consistent Hinderks et al. (2018); on this basis, user marking of the CSI is consistent. The UEQ-S toolkit also provides an option to detect random and non-serious answers by users Schrepp (2018); Hinderks et al. (2018), which is carried out by checking how much the best and worst evaluation of an item in a scale differ. Based on this evaluation, the users' feedback does not show any suspicious data.
2. Observing the Pragmatic and Hedonic properties of the MCSI: The MCSI scored positive in Pragmatic, Hedonic and Overall score, from which we can infer that the user's experience of the MCSI is good in general, with good ease of use. Table 10 shows the confidence interval and confidence level, which confirm our analysis that all the dimensions of the Pragmatic and Hedonic scores were positively experienced by the users. The alpha-coefficient correlation Hinderks et al. (2018) confirms that the users' marking of the MCSI is consistent. The UEQ-S toolkit also provides an option to detect random and non-serious answers by users, carried out by checking how much the best and worst evaluation of an item in a scale differ. Based on this evaluation, the users' feedback does not show any suspicious data.
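As referenced above, a minimal sketch (with hypothetical item ratings) of the alpha-coefficient the UEQ-S toolkit computes for one scale:

```python
import numpy as np

def cronbach_alpha(items):
    """items: subjects x items matrix of ratings for a single scale."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# 5 subjects x 4 items of one UEQ-S scale (hypothetical ratings).
ratings = np.array([
    [6, 5, 6, 6],
    [5, 5, 6, 5],
    [7, 6, 7, 6],
    [4, 4, 5, 4],
    [6, 6, 6, 7],
])
print(f"alpha = {cronbach_alpha(ratings):.3f}")  # > 0.7 => consistent marking
```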
4.6.6 RQ6: How does user experience vary for both search settings in comparison to a standard benchmark?
1. Comparison of the CSI with the standard benchmark: This benchmark was developed based on user feedback on 21 interactive products Hinderks et al. (2018). In comparison to the benchmark, the CSI UX is far below the mean of the interactive products (Pragmatic Quality < 0.4, Hedonic Quality < 0.37 and Overall < 0.38). This signifies that the UX of the CSI needs major improvement in both the Pragmatic and Hedonic dimensions. Against the benchmark, the CSI rates as a low-quality user experience and lies in the range of the worst 25% of products.
2. Comparison of the MCSI with the standard benchmark: Based on the comparison with the benchmark Hinderks et al. (2018), the MCSI UX is far above the mean of the interactive products (Pragmatic Quality > 0.4, Hedonic Quality > 0.37 and Overall > 0.38). This signifies that the UX of the MCSI compared to other interactive products (the benchmark) is very high, at an excellent level, and lies in the range of the best 10% of results.
Conclusions and Observations
The study reported in this paper indicates that subjects found our MCSI more helpful than a closely matched CSI. We also observed types of user behaviour while using the MCSI which differ from those when using a CSI. Using our agent-based system, we observed the natural expectations of user search in conversational settings. Subjects did not encounter any difficulty in using the new interface, because it appears similar to a standard search interface with the additional capability of conversation. We also observed that the information space and its structure are key components in information seeking. Subjects found that highlighting important segments in long documents enabled them to access information much more easily. The MCSI made the search process less cognitively demanding and more cognitively engaging.
Clearly our existing rule-based search agent can be extended in terms of functionality, and going forward we aim to examine basing its functionality on machine learning methods, but this will require access to sufficient suitable training data, which is not available at this prototype stage.
"year": 2023,
"sha1": "f6ee87eeafd8a1389fe795e76df1f0df4f213c2b",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.5220/0011798500003417",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "5a5f9fe9c4ce8c63b67f75386004fc3be8b2bae7",
"s2fieldsofstudy": [
"Business",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
ANTI-HYPERGLYCEMIC, ANTIOXIDANT, AND ANTI-INFLAMMATORY ACTIVITIES OF EXTRACTS AND METABOLITES FROM Sida acuta AND Sida rhombifolia
Species of the genus Sida are used around the world for a large number of therapeutic treatments, including hyperglycemia. α-Glucosidase inhibitors are recognized as valuable tools for reducing postprandial hyperglycemia by retarding the absorption of glucose. The effect of extracts and isolated compounds of S. acuta and S. rhombifolia on the inhibition of α-glucosidase, as a primary screening of anti-hyperglycemic activity, was tested using yeast and mammalian α-glucosidases. When yeast α-glucosidase was used, the acetone extracts of S. acuta and S. rhombifolia showed IC50 values of 8.49 ± 0.66 and 8.10 ± 0.34 µg mL-1, respectively, and the most active compound was p-hydroxyphenethyl trans-ferulate (IC50 19.24 ± 1.73 µmol L-1) followed by β-sitosteryl glucopyranoside (IC50 32.70 ± 1.35 µmol L-1). However, the activity of extracts and isolated compounds decreased significantly when mammalian α-glucosidase was used, indicating that the substrates' affinity is higher for type 1 enzymes. The antioxidant and anti-inflammatory activities of extracts and isolates were also tested, since many diabetic complications are associated with oxidative stress and inflammatory immune responses. The acetone extracts were the most active in all evaluations. p-Hydroxyphenethyl trans-ferulate could be associated with these activities, since it was active in all three evaluations; this effect could be related to its phenolic character.
INTRODUCTION
The genus Sida (Malvaceae) groups around 200 species spread worldwide in tropical and warm regions, 35 of which occur in Mexico, including S. acuta and S. rhombifolia. 1 Species of this genus have a large number of therapeutic uses; in Africa they are used to treat malaria, gastrointestinal infections, varicella, variola, and hepatitis B; 2,3 in Asia they serve as tonics, antipyretics, and to cure disorders of the nervous system, hyperglycemia, and liver and blood problems; 4,5 and in Central America they are taken as medication for fever, asthma, renal inflammation, ulcers, and worm infections. 6 Their use as antituberculosis, hypotensive and cytotoxic agents, and in urinary and cardiac diseases is also common. 7 In Mexico S. acuta is used to treat fever and stomach and tooth ache, and S. rhombifolia is employed as a disinfectant and to cure diarrhea, ulcers and tumours. 8 As a result of the abundant activities of these plants there are also many pharmacological evaluations; their analgesic, 9 antidiarrheic, 10 antioxidant, vasorelaxant, 11 and anti-hyperglycemic activities, 12 among others, have been tested, mainly from extracts of different species. 7 Previous chemical reports show the presence of ecdysteroids, 13,14 alkaloids, 11,15 and flavonoids, 11,16 as the main secondary metabolites in S. acuta and S. rhombifolia.
Diabetes mellitus (DM), a chronic metabolic disease characterized by a high level of glycemia, is becoming a serious problem around the world. There are two major types of DM: type-1 is insulin dependent and arises from defects in the insulin gene causing inefficient or no insulin production by the pancreas, while type-2 is a non-insulin-dependent disease in which reduced insulin secretion and resistance to the metabolic effects of insulin play a role. Worldwide, there were about 380 million diabetic patients in 2013, and 592 million are expected by 2035; among them, 90% are DM type-2 cases. 17 α-Glucosidase inhibitors are one of the six groups of drugs prescribed for DM. They act by decreasing postprandial hyperglycemia through the inhibition of the carbohydrate-hydrolysing enzymes, delaying the release of glucose from oligosaccharides. 18 Additionally, many diabetic complications are associated with oxidative stress and inflammatory immune responses. Free radicals and elevated circulating inflammatory markers can predict the development of diabetes mellitus. Thus, several drugs with anti-inflammatory properties lower glycemia and possibly decrease the risk of developing type 2 diabetes, and likewise, antioxidant sources have potential benefits in obesity-related diseases such as DM. 19 On the basis of the use in popular medicine of the plants of the genus Sida related above, and since there are no chemical reports on Mexican S. acuta and S. rhombifolia, this paper describes their chemical composition, their effect as α-glucosidase inhibitors, and the antioxidant and anti-inflammatory activities of extracts and isolates of these Sida species.
General
Melting points were determined using a Fisher-Johns melting point apparatus and are uncorrected. IR spectra were recorded on a Nicolet Magna-IR 750 spectrometer. 1D and 2D NMR spectra were obtained on a Bruker Avance III 400 MHz or on a Varian-Unity Inova 500 MHz spectrometer with tetramethylsilane (TMS) as internal standard. For Direct Analysis in Real Time Mass Spectrometry (DARTMS) a JEOL AccuTOF JMS-T100LC DART mass spectrometer was used. ESIMS were performed on an ESI ion trap Bruker Esquire 6000 mass spectrometer. FABMS were obtained on a JEOL JMS-SX102A mass spectrometer operated with an acceleration voltage of 10 kV, and samples were desorbed from a nitrobenzyl alcohol matrix using 6 kV xenon atoms. All MS analyses were performed at low resolution. Vacuum column chromatography (VCC) was performed using Silica gel 60 G (Merck, Darmstadt, Germany). Flash column chromatography (FCC) was run using Silica gel 60 (230-400, Macherey-Nagel). TLC was carried out on Silica gel 60 GF254 and preparative TLC on Silica gel GF254 (Macherey-Nagel), layer thickness 2.0 mm.
Plant Material
S. acuta Burm.f. and S. rhombifolia L. were collected at the archeological zone La Joya, Medellin de Bravo district, Veracruz, México, in March 2012. Voucher specimens were deposited at the Herbarium del Instituto de Biología, UNAM, México (MEXU 1319742 for S. acuta and MEXU 1241720 for S. rhombifolia).
Evaluation of yeast α-glucosidase activity
α-Glucosidase activity was evaluated using an adapted method of S. Xiao-Ping et al. 20 A solution (25 µL) of samples in DMSO-H2O 1:1 was added to 150 µL of phosphate buffer solution (PBS, 67 mmol L-1, pH 6.8) and incubated at 37 °C for 10 min with 25 µL of glutathione (3 mmol L-1 in PBS) and 25 µL (0.2 U mL-1) of α-glucosidase type I (Sigma cat. G5003-100UN). The substrate solution (25 µL, 23.2 mmol L-1 p-nitrophenyl-α-D-glucopyranoside, Sigma N1377, in PBS) was then added and incubated with agitation for 15 min at 37 °C. The reaction was stopped with 50 µL of Na2CO3 (1 mol L-1) and after 5 min of agitation the optical density was determined at 405 nm. Acarbose and quercetin were used as positive controls. The inhibition percentage was calculated by the equation: Inhibition (%) = [(ODcontrol − (ODsample − ODbackground)) / ODcontrol] × 100, where ODcontrol, ODsample and ODbackground are defined as the absorbance of 100% enzyme activity, of the test sample with enzyme, and of the test sample without enzyme, respectively. The concentration of an inhibitor required to inhibit 50% of the enzyme activity under the mentioned assay conditions is defined as the IC50 value. All samples were tested in triplicate.
Mammalian α-glucosidase inhibition assay
Mammalian α-glucosidase was prepared following the modified method of Jo. 21 Rat intestinal acetone powder (100 mg) was rehydrated with 4 mL of 67 mmol L-1 ice-cold phosphate buffer (pH 6.8). After homogenization for 3 min at 4 °C, the suspension was centrifuged (13,400 rcf, 4 °C, 30 min) and the resulting supernatant was used for the assay. A reaction mixture containing 150 µL of phosphate buffer (67 mmol L-1, pH 6.8), 25 µL of α-glucosidase and 25 µL of sample (in DMSO 50%) at different concentrations was pre-incubated for 10 min at 37 °C, and 25 µL of 23.2 mmol L-1 p-nitrophenyl-α-D-glucopyranoside were added. After 15 min of incubation at 37 °C, the reaction was stopped by adding 50 µL of Na2CO3 (1 mol L-1). Acarbose and quercetin were used as positive controls and DMSO 5% as negative control. Enzyme activity was quantified by measuring the absorbance at 405 nm in a BioTek Synergy HT microplate reader. Experiments were done in triplicate. The percentage of enzyme inhibition by the sample was calculated by the following formula: Inhibition (%) = [(AC − AS) / AC] × 100, where AC is the absorbance of the negative control and AS is the absorbance of the tested sample.
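As an illustration, the following minimal sketch (with hypothetical absorbance readings and a hypothetical concentration series) computes the percentage inhibition defined above and estimates an IC50 value by fitting a four-parameter logistic concentration-response curve; the logistic form is a common choice, not necessarily the one used in the original analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def inhibition_pct(ac, as_):
    """Inhibition (%) = ((AC - AS) / AC) * 100, as defined above."""
    return (ac - as_) / ac * 100.0

def logistic4(c, bottom, top, ic50, hill):
    """Four-parameter logistic: rises from `bottom` to `top` around `ic50`."""
    return bottom + (top - bottom) / (1.0 + (ic50 / c) ** hill)

conc = np.array([3.1, 6.3, 12.5, 25.0, 50.0, 100.0])    # ug/mL, hypothetical
inhib = np.array([12.0, 25.0, 44.0, 61.0, 78.0, 88.0])  # %, hypothetical
popt, _ = curve_fit(logistic4, conc, inhib, p0=[0.0, 100.0, 15.0, 1.0])
print(f"estimated IC50 = {popt[2]:.2f} ug/mL")
```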
Scavenging activity on free radical 2,2-diphenyl-1picrylhydrazyl (DPPH)
Free radical scavenging activity was measured using an adapted method of Mellors and Tappel, as previously reported. 22
Evaluation of the anti-inflammatory activity
Animals: Male NIH mice weighing 25-30 g were maintained under standard laboratory conditions in the animal house (temperature 22 ± 4 °C) with a 12/12 h light-dark cycle, in accordance with the Mexican official norm NOM-062-ZOO-1999. They were fed a laboratory diet and water ad libitum.
TPA-induced edema model: The TPA-induced ear edema assay in mice was performed as previously reported 23 (Table 3).
Statistical analysis
All data are represented as percentage mean ± standard error of the mean (SEM). The statistical analysis was done by means of Student's t-test, whereas analysis of variance (ANOVA) followed by Dunnett's test was used to compare several groups with a control. P values of p ≤ 0.05 and p ≤ 0.01 were considered significant.
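As an illustration, a minimal sketch of this analysis on hypothetical edema measurements (scipy.stats.dunnett requires SciPy 1.11 or later):

```python
import numpy as np
from scipy import stats

# Hypothetical edema measurements for three groups of five mice each.
control = np.array([12.1, 11.8, 12.5, 12.0, 11.9])   # vehicle
extract = np.array([7.0, 6.5, 7.4, 6.8, 7.1])        # acetone extract
indomethacin = np.array([2.1, 1.9, 2.4, 2.0, 2.2])   # reference drug

f_stat, p_anova = stats.f_oneway(control, extract, indomethacin)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's test compares each treated group against the control.
res = stats.dunnett(extract, indomethacin, control=control)
print("Dunnett p-values vs control:", res.pvalue)
```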
RESULTS AND DISCUSSION
The chemical study of the aerial parts of S. acuta afforded (Figure 1) a mixture of β-sitosterol (1) 24 and stigmasterol (2), 25 13²-hydroxyphaeophytin a (3), 26 β-sitosteryl glucopyranoside (4), 27 p-hydroxyphenethyl trans-ferulate (5), 28 20-hydroxyecdysone (6), 28 20-hydroxy-24-hydroxymethylecdysone (7), 28 and uridine (10). 29 S. rhombifolia yielded 1 and 2 as a mixture, 5-7, inosine (8), 30 20-hydroxyecdysone 20,22-monoacetonide (9), 31 25-acetoxy-20-hydroxyecdysone 3-O-β-D-glucopyranoside (11), 13 and glycerol. 32 Structures of the isolated products were determined by comparison of their physical constants and spectroscopic data with those reported in the literature. 13²-Hydroxyphaeophytin a (3), a degradation product of chlorophyll, and 20-hydroxyecdysone 20,22-monoacetonide (9) could be artefacts produced in the extraction of the plant or purification of the extracts, 33 or generated by the plant metabolism. Thus, there is evidence of the presence of 3 at different stages of plant growth, 34 and compound 9 has been isolated from plants with no acetone involved in their extraction and purification process. 28,31

Compound 6 had a molecular ion at m/z 481 [M+H]+ in DARTMS, and its IR spectrum showed absorption bands for hydroxy (3347 cm-1) and conjugated ketone groups (1645 cm-1). Its 13C NMR spectrum exhibited signals of twenty-seven carbons: a carbonyl, two vinylic, six oxygenated, two quaternary carbons, three methines, eight methylenes, and five methyls; in the 1H NMR spectrum the resonance of a vinylic proton at δH 5.80, which correlated with a carbonyl carbon (δC 206.4) in the HMBC spectrum, suggested an ecdysone skeleton. The presence of the singlet resonances of five methyl groups, together with those of the oxymethines H-2 (δH 3.82, ddd, J = 12.0, 4.0, 2.8 Hz), H-3 (δH 3.93, brq, J = 2.8 Hz), and H-22 (δH 3.32, dd, J = 10.4, 1.6 Hz), allowed compound 6 to be identified as 20-hydroxyecdysone. This compound has been isolated from several species of the genus Vitex, 31 and from S. rhombifolia 13 and S. spinosa. 28

Compound 7 showed a molecular ion at m/z 511 [M+H]+ in FABMS. Its 1H NMR and 13C NMR data were similar to those of compound 6, except for the presence of the resonances of a hydroxymethylene group (δH 3.57, dd, J = 11.0, 6.0 Hz and 3.50, dd, J = 11.0, 5.0 Hz; δC 64.4), which was located at C-24 by the correlations of H-28 with C-24, and of H-24 with C-23 and C-25, observed in the HMBC spectrum. Compound 7, identified as 20-hydroxy-24-hydroxymethylecdysone, has been isolated from S. spinosa. 28

Compound 9, obtained as a white amorphous powder, exhibited in the IR spectrum bands of hydroxy and conjugated ketone groups at 3424 and 1661 cm-1, respectively, and a molecular ion at m/z 521 [M+H]+ in DARTMS. The NMR spectra of 9, like those of compounds 6 and 7, showed an ecdysteroid structure pattern, with the additional resonances of a ketal group: two methyl groups (δH 1.41 and 1.33; δC 26.9 and 28.9) and a ketalic carbon (δC 106.9). Compound 9 was identified as 20-hydroxyecdysone 20,22-monoacetonide, isolated previously from Vitex strickeri 31 and S. spinosa. 28

Compound 11 exhibited in the IR spectrum absorptions at 3371, 1706, and 1646 cm-1, indicative of hydroxy, carbonyl, and conjugated ketone groups, and showed a quasi-molecular ion at m/z 707 [M+Na]+ in ESIMS. The NMR spectra of 11 showed the characteristic features of an ecdysone with a sugar moiety whose anomeric proton resonated at δH 4.34 (d, J = 8.0 Hz). The HMBC correlation between this proton and C-3 (δC 76.8) determined the localization of the sugar portion at this carbon. Additionally, the presence of the singlet signal of a methyl group at δH 1.95, and the resonances of a carbonyl carbon at δC 172.7 and a methyl at δC 22.4, indicated an acetyl group, which was located at C-25 (δC 83.9) by comparing its chemical shift with those reported in the literature. 15 The spectroscopic features of compound 11 were in agreement with 25-acetoxy-20-hydroxyecdysone 3-O-β-D-glucopyranoside, previously isolated from S. rhombifolia. 13

In previous reports, a slight reduction in blood glucose level in normoglycemic rats produced by the methanolic extract of leaves of S. acuta has been established, 5 and a dose-dependent anti-hyperglycemic effect induced in diabetic rats by the methanolic and aqueous extracts of S. rhombifolia, related to their antioxidant properties, was pointed out. 12 Additionally, the antioxidant effect of S. acuta and S. rhombifolia (L.) ssp. retusa extracts has been reported. 2,35 However, there is no information on the effect of S. acuta and S. rhombifolia on α-glucosidase, nor on the possible anti-hyperglycemic or antioxidant active compounds; therefore, the α-glucosidase and antioxidant activities of extracts and isolates were determined. Moreover, the anti-inflammatory effect of extracts and isolated compounds was also determined.
Because the substrate specificity of α-glucosidase differs greatly depending on the source, and it is well established that α-glucosidase inhibitors show variable activities depending on the origin of the enzyme, 36 yeast (type 1) and mammalian (type 2) α-glucosidases were used to determine the inhibitory activity of S. acuta and S. rhombifolia extracts and isolated metabolites (Table 1). Acarbose and quercetin were used as reference compounds, since acarbose would be more active on mammalian α-glucosidase 37 while the specificity of quercetin would be higher for yeast α-glucosidase. 38 The results (Table 1) show that, in the primary screening using yeast α-glucosidase, the acetone extracts exhibited the highest inhibition of the enzyme (82.45 and 88.52%), while the methanol extracts were barely active and no activity was detected in the hexane extracts. Among the isolated compounds, β-sitosteryl glucopyranoside (4) and p-hydroxyphenethyl trans-ferulate (5), with 86.33 and 82.93% enzyme inhibition, respectively, were the most active. However, the activity of extracts and isolates decreased significantly in the mammalian α-glucosidase test, indicating that the substrates' affinity is higher for type 1 enzymes. In the concentration-response evaluation the acetone extracts showed concentration-dependent activities (Table 1S), with IC50 values of 8.49 ± 0.66 and 8.10 ± 0.34 µg mL-1 for S. acuta and S. rhombifolia, respectively. p-Hydroxyphenethyl trans-ferulate (5) was the most active compound (IC50 19.24 ± 1.73 µmol L-1), close to the reference compound quercetin (IC50 15.61 ± 1.68 µmol L-1), followed by β-sitosteryl glucopyranoside (4, IC50 32.70 ± 1.35 µmol L-1).
In the primary screening of DPPH free radical scavenging activity, only the acetone and methanol extracts of S. acuta and S. rhombifolia and compound 5 showed antioxidant properties (Table 2). In the concentration-response evaluation (Table 2S) the extracts showed rather moderate activities, with IC50 values of 220.54 ± 6.47 and 221.50 ± 5.55 µg mL-1 for the acetone extracts of S. acuta and S. rhombifolia, respectively, which were the most active. Compound 5 showed the highest activity, with an IC50 of 46.18 ± 0.83 µmol L-1, but was less active than α-tocopherol (IC50 31.74 ± 1.04 µmol L-1), the reference compound.
The anti-inflammatory activity of extracts and compounds 4, 5-7, 9, and 11 was tested on the TPA model of induced acute inflammation. 39 The effect on the edema (Table 3) of the acetone extracts of S. acuta and S. rhombifolia was mild (46.36% and 42.23%, respectively), while the hexane and methanol extracts were not active. Among the tested compounds only compound 5 showed edema inhibition (48.49%), which was moderate compared with the reference compound indomethacin (83.73%).
CONCLUSIONS
The chemical composition of S. acuta and S. rhombifolia collected in Mexico is in agreement with that reported for the genus Sida so far. In the biological screening, the acetone extracts of S. acuta and S. rhombifolia behaved similarly and were the most active in all evaluations. In the yeast α-glucosidase test, p-hydroxyphenethyl trans-ferulate (5) was the most active compound, followed by β-sitosteryl glucopyranoside (4); both were isolated from the acetone and methanol extracts of both species. However, they may not be the only active compounds, since the IC50 values of the acetone extracts are lower than those of 5 and 4; alternatively, there may be a synergistic effect of the components of these extracts that accounts for the activity. In the mammalian α-glucosidase test, the activity of both species was considerably lower than in the yeast α-glucosidase test, and the increasing order of extract activity was hexane, methanol, and acetone. This same order of activity was observed in the DPPH and TPA tests, indicating a possible relation between the α-glucosidase inhibition and the antioxidant and anti-inflammatory actions. Compound 5 was active in all three evaluations. This effect could be related to its phenolic character, since antioxidant phenolic compounds have been related to the reduction of chronic inflammation, 40 and their ability to inhibit digestive enzymes such as α-glucosidase, α-amylase, lipase and trypsin has been reported. 41
SUPPLEMENTARY MATERIAL
compounds 5, 6 and 9; concentration-response evaluation on yeast α-glucosidase for acetone extracts and compounds 4 and 5; and concentration-response of DPPH free radical scavenging activity of acetone and methanol extracts and compound 5 are freely available at http://quimicanova.sbq.org.br in PDF.
Table 1. Effect of extracts and isolated compounds from S. acuta and S. rhombifolia on yeast and mammalian α-glucosidase inhibition. a IC50 of extracts and tested metabolites on mammalian α-glucosidase was not determined. b Concentrations of 100 µg mL-1 for extracts. c Concentrations of 100 µmol L-1 for pure compounds. d µg mL-1. e µmol L-1. The IC50 values were calculated from the dose-response curve of seven concentrations of each tested sample in triplicate. All values are mean ± SD (n = 3).
Table 2. DPPH free radical scavenging activity of extracts and isolated products from S. acuta and S. rhombifolia.
"year": 2016,
"sha1": "170041d36c6c8ff7cf5340ae0e6892ebb85b67ee",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.21577/0100-4042.20160182",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "170041d36c6c8ff7cf5340ae0e6892ebb85b67ee",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology"
]
} |
Occlusion Robust Cognitive Engagement Detection in Real-World Classroom
Cognitive engagement involves mental and physical involvement, with observable behaviors as indicators. Automatically measuring cognitive engagement can offer valuable insights for instructors. However, object occlusion, inter-class similarity, and intra-class variance make designing an effective detection method challenging. To deal with these problems, we propose the Object-Enhanced–You Only Look Once version 8 nano (OE-YOLOv8n) model. This model employs the YOLOv8n framework with an improved Inner Minimum Point Distance Intersection over Union (IMPDIoU) Loss to detect cognitive engagement. To evaluate the proposed methodology, we construct a real-world Students’ Cognitive Engagement (SCE) dataset. Extensive experiments on the self-built dataset show the superior performance of the proposed model, which improves the detection performance of the five distinct classes with a precision of 92.5%.
Introduction
In order for students to successfully learn during classroom instruction, it is essential that they pay attention and actively engage with the learning content. Research has shown that students demonstrate varying levels of cognitive engagement even within the same instructional settings (e.g., [1]). Therefore, it is crucial for teachers to ascertain whether and to what extent students are focusing on the learning materials [2]. According to the Interactive, Constructive, Active, and Passive (ICAP) framework [3], students exhibit enhanced learning outcomes when participating in active or interactive learning activities as opposed to passively receiving information. Prior studies have indicated that students display attention-related behaviors that are indicative of their underlying cognitive processes [3-5]. Engagement with instructional content is considered active when it involves overt motoric actions or physical manipulation. However, measuring cognitive engagement through observing changes in students' behavior can be time-consuming and labor-intensive (e.g., [6,7]), especially in long-duration courses and large face-to-face classes. Thus, there is a need for additional measures that could provide high-efficiency and low-cost information on cognitive engagement during real-world classroom learning.
The You Only Look Once (YOLO) series is a useful method for object detection tasks. These methods have been demonstrated to be applicable for detecting students' in-class behaviors (e.g., sleeping, drinking, yawning, etc. [8]). When considering the ICAP framework, YOLO applied to cognitive engagement detection offers four advantages [2,9]: (1) using low-cost visual data, (2) tracking the location of students' behaviors, (3) efficiently establishing the relationship between behaviors and cognitive engagement, and (4) measuring students' cognitive levels.
However, the occlusion of behaviors in the real-world classroom makes cognitive engagement detection extremely challenging. Furthermore, it is a big challenge for students to maintain a state of constructive or interactive engagement, resulting in a scarcity of labeled samples. Therefore, this study aims to develop an improved object-enhanced method utilizing YOLO version 8 nano (YOLOv8n) models to monitor students' cognitive engagement. This method aims to alleviate small-scale dataset issues through data augmentation and to solve occluded behavior problems through IoU loss enhancement. The primary contributions of this research are as follows:

• The Students' Cognitive Engagement (SCE) dataset in a real-world classroom is built.
In contrast to experiment-induced behaviors, this method non-invasively collects visual data of students, which serve as valuable input for training automatic detection models in authentic classroom settings.
• The Object-Enhanced-YOLOv8n (OE-YOLOv8n) method is proposed to detect students' cognitive engagement in real-world scenes. First, it enhances the computation of easily occluded behaviors. Second, it enhances the small-scale cognitive engagement data.
• The OE-YOLOv8n method uses the Mosaic and Cutout methods to enhance the real-world cognitive engagement data. Subsequently, it leverages the Inner Minimum Point Distance Intersection over Union (IMPDIoU) loss function, refining the key point distance between the potentially occluded predicted box and its corresponding ground truth box.
Cognitive Engagement Detection
Cognitive engagement involves both mental and physical participation. Previous studies indicate that self-reports can capture psychological information on cognitive engagement (e.g., [6,7]). However, there are still concerns about the reliability and validity of self-reporting [10,11]. Another widely used method in the educational community is classroom observation. This approach accounts for the explicit information of cognitive engagement, but it is time-consuming and challenging to apply to long and large classes. Goldberg et al. [2] suggested that automated detection approaches are more efficient, accurate, and time-saving. Indeed, recent studies have characterized cognitive engagement through detecting students' voices [12], texts [13], behaviors [14], etc. Such a perspective might provide a more precise and traceable insight into the visual components of cognitive engagement [15]. Currently, there are many databases on cognitive engagement. However, existing datasets like MOOC learners' discussions (text data, [16]) and the RECOLA dataset (speech/multimodal data, [12,17]) are not well suited for real-world learning. In classrooms, teachers are usually the initiators of instruction, and students' speech data are limited and sparse. Visual data, on the other hand, contain explicit information about students' classroom expressions and feedback. Therefore, visual data are suitable for detecting cognitive engagement. The ICAPD dataset [9] is an example of an image-based cognitive engagement dataset, but its small scale makes it challenging to ensure the robustness of models. Target detection technology (e.g., YOLO [18]) is a popular automatic assessment method for students and their behaviors. Utilizing YOLO with visual data enhances the efficiency and accuracy of cognitive engagement detection.
The YOLO network models the statistical probability distribution of the target area. When different categories of target regions exhibit sufficient discriminability, they can be used for detecting various behaviors. For example, Chen and Guan [19] used related YOLO networks to detect the teacher's behaviors (i.e., explaining questions, pointing to the projection, no hand gestures, gesturing with both hands, head down and operating, walking around, writing on the blackboard, and guiding students to raise their hand) and the students' behaviors (i.e., looking up, head dropping, hand raising, standing up, lying on the desk). However, modeling the probability distribution of cognitive engagement regions in images is a challenging task when using YOLO, as the explicit behavioral states of cognitive engagement lack a clear definition. To achieve this, we employed the ICAPD framework to model the distribution of behaviors as targets. This is because different behaviors have distinct distributions, and by treating them as targets, we can leverage the capabilities of the YOLO network to detect and localize these behaviors accurately. By considering behaviors as targets, we can define specific regions of interest within an image where these behaviors are likely to occur. The YOLO network then learns to predict bounding boxes and associated probabilities for these behavior targets. This approach allows us to effectively model and detect complex cognitive engagement, even in scenarios where occlusion or limited pixel presence may be present. YOLOv8 is a recent version of the YOLO series. This version incorporates an anchor-free mechanism within its head module, so it can directly predict the behavior's center point and width-to-height ratio. Additionally, YOLOv8's loss function incorporates both classification and regression branches. The classification branch uses the standard Binary Cross-Entropy (BCE) loss function, while the regression branch utilizes Distribution Focal Loss combined with Complete Intersection over Union (DFL-CIoU) Loss. These approaches enable effective measurement of both the location of behaviors and the classes of cognitive engagement. Therefore, applying YOLO technology to cognitive engagement detection tasks is feasible. Because cognitive engagement can be characterized from low-cost visual data, automatic cognitive engagement detection will make learning analysis more efficient.
Small-Scale Object Detection with YOLO
In a real-world classroom, students' cognitive engagement data constitute a small-scale sample problem. In comparison to laboratory settings, real-world scenarios exhibit greater complexity. While lecture courses can provide many samples of passive engagement, high-level engagement samples are often limited. Although collaborative or discussion-based classes can elicit high-level engagement data, they are still limited for model training purposes. The small-sample issue in cognitive engagement detection is attributed to data availability and annotation complexity [3]. Firstly, acquiring a large dataset for cognitive engagement necessitates prolonged classroom sessions and the participation of many students; however, extended engagement can lead to student fatigue. Secondly, annotating cognitive engagement based on behavior requires specialized training and expertise, and is time-consuming and resource-intensive. Furthermore, not all behaviors in an image are usable (e.g., as shown in Figure 1). Addressing how to train a highly robust model on a small-scale dataset is therefore a critical issue. Previous studies have shown that data augmentation is an effective method to alleviate small-scale sample problems, as it allows for more complex representations of the data in classes with few samples, which helps the network learn the data distribution of the dataset better. Image processing techniques are commonly used to augment datasets and optimize image quality, and they can be categorized into three main types: geometric transformations, color transformations, and pixel transformations [20].
Geometric transformation techniques, such as image cropping and scaling, image shifting and padding, and image flipping and rotation, are not meaningful for text recognition tasks. However, they can effectively address datasets with positional biases. Color transformation, such as color space conversion, is a highly effective way of extracting color features [21]. However, selecting the appropriate color space transformation to enhance model performance remains challenging. Pixel transformation techniques (e.g., noise, blur, pixel fusion, and information deletion) provide a novel solution for image generation tasks, especially the Cutout technique [22,23]. This technique randomly applies square patches of a certain size at random positions on the image to create a 0-mask crop. The benefit is that it avoids unnatural artifacts caused by image blending and can enhance the model's classification performance.
Owing to the challenge posed by small-scale sampling in real-world classrooms, geometric transformation techniques can help detection models pay more attention to complex behavioral features, while pixel transformation techniques can effectively mitigate overfitting problems. However, almost all of these transformations involve a distortion magnitude parameter. The combination of augmentations, encompassing flipping, cropping, color shifts, and random erasing, can lead to significantly increased dataset sizes. The efficacy of these transformations still requires experimental validation in real classroom environments.
Occluded Object Detection with YOLO
The YOLO method faces some difficult problems in cognitive engagement detection tasks, such as occluded objects (e.g., as shown in Figure 1, individuals in the front row occlude the behaviors exhibited by those in the back row, or the background impedes the visibility of the behavior). The density of students in the image causes the occlusion phenomenon. In an image, a target behavior may be obscured by a non-target background or another target behavior. In these scenarios, the effective behavioral features in the bounding box are reduced, leading to a decrease in YOLO performance. Previous studies have demonstrated that the design of the IoU loss function can impact the accuracy of object detection [24]. The IoU loss for bounding box regression exhibits significant sensitivity differences across objects of different scales. Therefore, the design of an appropriate IoU loss function for detection, based on the information of the bounding box (e.g., shape, aspect ratio, etc.), has been of wide concern.
Current design ideas for improving IoU loss functions include GIoU [25], DIoU [26], CIoU, and EIoU [27]. However, in Figure 1 (marked with dashed lines and magnified), the occluded behavior labeled with red lines is classified as belonging to the passive category. Interestingly, under the different IoU loss functions (i.e., GIoU, DIoU, CIoU, and EIoU), both the larger and the smaller predicted bounding boxes (labeled with yellow lines in Figure 1) yield the same loss [28]. However, the larger yellow bounding box contains more noise information (i.e., the head of the front-row student), so assigning it a higher loss weight may help the model learn better. An IoU loss function that considers the overlapping area, the distance between center points, and deviations in width and height would be more advantageous for model learning. This is particularly important when distinguishing between bounding boxes with the same aspect ratio but different sizes or positions.
Previous research has suggested that incorporating smaller auxiliary bounding boxes into the loss calculation during model training can have a positive impact on the regression of high-IoU samples, whereas low-IoU samples exhibit the opposite effect [27]. However, ensuring that the aspect ratios of the behaviors' bounding boxes are the same is challenging.
Zhang et al. [29] proposed addressing this issue by employing auxiliary bounding boxes of varying scales tailored to different datasets. Based on the above, designing an improved IoU loss function is a good solution for cognitive engagement tasks. It requires simultaneously enhancing the limited features within occluded target bounding boxes and adaptively adjusting their aspect ratios. In our method, the training samples are first augmented. Next, the backbone layer is utilized to extract key features of student behavior. These key features, obtained at different scales, are fused in the neck layer. Finally, under the improved IoU loss function, the head layer outputs the five categories of the students' cognitive engagement during learning. Detailed methodologies are expounded upon in Sections 3.2 and 3.3, where we introduce a data-augmented method and an enhanced IoU loss function, respectively.
OE-YOLOv8n with Mosaic and Cutout Data Augmentation
We employed combined augmentations, including geometric transformation techniques and pixel transformation techniques, as shown in Figure 3. (1) For geometric transformation, we utilized the Mosaic method: we combined four images, each with its corresponding bounding boxes, into a single new image, which was then fed with its bounding boxes into the neural network for learning. Essentially, we simultaneously input four images for learning, which yields excellent results for detecting small objects. (2) For pixel transformation, we drew inspiration from the Cutout technique: we randomly covered a portion of the image, equivalent to 10% of the original image size, with two black boxes. That is, we cut out certain regions of the samples and filled them with zero pixel values. The purpose was to enhance the model's robustness to occluded situations. A minimal sketch of both augmentations is given below.
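The sketch below (in Python, with hypothetical images) uses a simplified tiling variant of Mosaic without the random-center cropping and bounding-box remapping of the full method, and a Cutout that splits the 10% masked area equally over the two boxes, which is an assumption since the original split is not specified.

```python
import numpy as np

def mosaic(imgs):
    """Tile four equally sized HxWx3 images into one 2Hx2Wx3 image."""
    top = np.concatenate([imgs[0], imgs[1]], axis=1)
    bottom = np.concatenate([imgs[2], imgs[3]], axis=1)
    return np.concatenate([top, bottom], axis=0)

def cutout(img, n_holes=2, total_frac=0.10, rng=None):
    """Zero-fill n_holes square patches covering ~total_frac of the area in total."""
    rng = rng or np.random.default_rng()
    out = img.copy()
    h, w = img.shape[:2]
    side = int(np.sqrt(total_frac / n_holes * h * w))  # side of each square hole
    for _ in range(n_holes):
        y = int(rng.integers(0, h - side))
        x = int(rng.integers(0, w - side))
        out[y:y + side, x:x + side] = 0
    return out

four = [np.random.randint(0, 256, (320, 320, 3), dtype=np.uint8) for _ in range(4)]
augmented = cutout(mosaic(four))  # 640x640x3 mosaic with two zero-filled holes
```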
OE-YOLOv8n with an IMPDIoU Loss Function
Then, the predicted bounding boxes and their respective categories are emitted, and the targets are annotated within the original image to facilitate the detection of the student objects present. The loss associated with the objectness score is calculated with the BCE Logits loss function, while the class probability score is assessed through the cross-entropy loss function (BCE cls loss). Inspired by the geometric properties of bounding boxes, we use IMPDIoU in place of IoU [28,29] as the model's regression loss function; it minimizes the distance between the predicted and ground truth bounding boxes.
In order to further address the limited generalization capability and slow convergence speed exhibited by the existing MPDIoU loss function in different detection tasks, a scale factor ratio was incorporated to modulate the scale size of auxiliary bounding boxes, thereby accelerating the bounding box regression process.
Let Bgt denote the ground truth box and Bpd the predicted bounding box. The coordinates of the top-left corner, bottom-right corner, and center point of the ground truth box are denoted as (x1_gt, y1_gt), (x2_gt, y2_gt), and (xc_gt, yc_gt), respectively; the corresponding points of the predicted bounding box are represented by (x1_pd, y1_pd), (x2_pd, y2_pd), and (xc_pd, yc_pd). The dimensions of the ground truth box, namely its width and height, are indicated by wgt and hgt. In an analogous manner, the width and height of the predicted bounding box, and of the input image, are denoted by wpd, hpd, w, and h. The variable "ratio" represents the scale factor, ranging from 0.5 to 1.5, which rescales both boxes about their center points into auxiliary (inner) boxes before the IoU term is computed. With the squared corner distances

d1² = (x1_pd − x1_gt)² + (y1_pd − y1_gt)²,
d2² = (x2_pd − x2_gt)² + (y2_pd − y2_gt)²,

the IMPDIoU is defined (Equations (1)-(9)) as

IMPDIoU = IoU_inner − d1² / (w² + h²) − d2² / (w² + h²),

where IoU_inner is the IoU of the ratio-scaled auxiliary boxes. Thus, the loss function derived from IMPDIoU can be defined as

L_IMPDIoU = 1 − IMPDIoU.

In comparison to the standard MPDIoU loss, when the ratio is below 1 and the auxiliary bounding boxes are smaller than the actual ones, the effective range of regression is reduced. Nonetheless, the gradient's absolute value exceeds that of the MPDIoU loss, facilitating accelerated convergence for high-IoU samples. Conversely, when the ratio is above 1, the expanded scale of the auxiliary bounding boxes enhances the regression's effective range, thereby improving the regression effect for low-IoU samples. The value of the ratio was established through the verification method employed in the subsequent experiment.
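A minimal sketch (in Python, with hypothetical boxes) of the loss as defined above; computing the corner distances on the original boxes while applying the ratio scaling only to the IoU term is our reading of these definitions.

```python
def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def inner_box(box, ratio):
    """Scale a box about its center by `ratio` (the auxiliary box)."""
    cx, cy = (box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0
    bw, bh = (box[2] - box[0]) * ratio, (box[3] - box[1]) * ratio
    return (cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2)

def impdiou_loss(pd, gt, w, h, ratio=1.4):
    """L = 1 - [IoU_inner - d1^2/(w^2+h^2) - d2^2/(w^2+h^2)] per the text."""
    d1 = (pd[0] - gt[0]) ** 2 + (pd[1] - gt[1]) ** 2  # top-left distance^2
    d2 = (pd[2] - gt[2]) ** 2 + (pd[3] - gt[3]) ** 2  # bottom-right distance^2
    inner = iou(inner_box(pd, ratio), inner_box(gt, ratio))
    return 1.0 - (inner - d1 / (w ** 2 + h ** 2) - d2 / (w ** 2 + h ** 2))

# Hypothetical predicted and ground truth boxes on a 640x640 input:
print(impdiou_loss((50, 60, 150, 180), (55, 58, 160, 175), w=640, h=640))
```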
The outcomes of the comparative experiment involving various ratios within the IMPDIoU loss, as opposed to the standard MPDIoU loss, are detailed in Table 1. The loss function employed in this paper demonstrates a substantial enhancement across all detection metrics when contrasted with the standard MPDIoU loss. Furthermore, as the ratio value increases for the dataset utilized in this study, there is a corresponding improvement in the detection outcomes. Notably, the two primary indicators, F1 and mAP, both achieve their peak performance at a ratio of 1.4. Hence, a ratio of 1.4 was ultimately chosen for this paper.
SCE Dataset
Our SCE dataset not only considers the validity and discriminability of cognitive engagement categories but also takes into account the diversity and complexity of student populations, enabling a more effective explanation of the occurrence, change, and maintenance of students' cognitive engagement. The SCE dataset (Approval CNU-IRB-202305004a) is a real-world Students' Cognitive Engagement dataset. SCE has several main features: a real-world classroom, cognitive engagement detection, the ICAPD annotation framework, 6566 images (all annotated), 86 students, 5 categories of students' cognitive engagement, and 25-30 students per image. We used non-invasive cameras above the college classroom's blackboard to collect data. The collected data are all spontaneous behaviors of students, thereby preserving the authenticity of classroom learning as much as possible. Figure 4 shows examples from the dataset. A total of eight videos were collected from different classes, each with a resolution of 1920 × 1080. Six videos last 45 min and two videos last 90 min. Each video contains 25-30 students. Then, automatic frame sampling was performed to obtain frames that are conducive to automated detection, with images extracted from the video stream every three seconds. The extracted images were stored in a folder as samples to be labeled. As shown in Figure 5, the generated images were annotated using the LabelImg data annotation tool. Moreover, based on the ICAPD framework [9], the locations and behaviors of the individual students were precisely marked with bounding boxes (see Figure 6). We further defined different degrees of cognitive engagement in university classrooms, referring to the work of [9]. The defined degrees are presented in Table 2; for example, interactive behaviors include standing up to talk to teachers, turning the body and talking to peers, applauding classmates, clapping hands, and patting others. Each student's location is delineated by the smallest possible rectangular bounding box, thereby ensuring that the enclosing boundary contains a minimal amount of the surrounding background. The annotation details include the folder name, filename, path, source, size, and multiple objects, and the annotated files were saved with an XML extension. The SCE dataset contains categories with very few instances, and the annotations in each image are densely packed. This characteristic poses a considerable challenge in accurately capturing the nuances of student behavior for automatic detection.

After labeling according to the ICAPD framework, the SCE dataset generated in this experiment contains 5665 images with a total of 154,776 objects. The dataset's structure is configured in accordance with the COCO dataset schema. Before data augmentation, there were 3880, 5072, 12,011, 41,922, and 91,891 samples in the disengaged, constructive, active, passive, and interactive categories, respectively. The composition of the training and testing samples is shown in Table 3. All experiments within this section were configured to undergo a training regimen comprising 300 epochs. Training was stopped early if the average accuracy did not improve significantly within 50 epochs. The YOLOv8n model, a component of the YOLOv8 series, was trained with a batch size of 128, constrained by the GPU's memory capacity. The learning rate utilized during the model's training phase was set to 0.01, with an SGD momentum of 0.937 and an optimizer weight decay of 0.0005. All other training parameters were maintained at their default values as specified by the YOLOv8n network architecture.
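Assuming the Ultralytics training API and a hypothetical sce.yaml dataset configuration file, a minimal sketch of this training setup would look as follows.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")       # YOLOv8 nano pretrained weights
model.train(
    data="sce.yaml",             # hypothetical SCE dataset config file
    epochs=300,                  # 300-epoch training regimen
    patience=50,                 # early stop after 50 epochs without improvement
    batch=128,                   # constrained by GPU memory
    lr0=0.01,                    # initial learning rate
    momentum=0.937,              # SGD momentum
    weight_decay=0.0005,         # optimizer weight decay
)
```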
To comprehensively assess the proposed model's performance, we utilized precision (P), recall (R), F1 score (F1), mean average precision at an IoU of 0.5 (mAP50), and mean average precision averaged over IoUs of 0.5 to 0.95 (mAP50-95) to measure the model's accuracy and evaluate the object detection results on the SCE dataset.
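As a brief illustration, precision, recall, and F1 follow directly from true-positive, false-positive, and false-negative counts at a fixed IoU threshold (the counts below are hypothetical); mAP additionally averages precision over recall levels and, for mAP50-95, over IoU thresholds.

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from detection counts at a fixed IoU threshold."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

print(prf1(tp=925, fp=75, fn=110))  # hypothetical counts for one class
```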
Training Procedures
As shown in Figure 7, the first three columns depict the training's time progression along the X-axis and the corresponding loss values along the Y-axis. The overall loss values continue to decrease as training progresses and eventually stabilize. The experimental results show that the OE-YOLOv8n model demonstrates robust fitting performance, high stability, and a high level of accuracy. The last two columns show the P, R, mAP50, and mAP50-95 curves, with the X-axis representing training time. It can be observed that the curves' values gradually approach 1, indicating the effectiveness of the OE-YOLOv8n model. In all, at 300 epochs the overall training behaviour of our method is satisfactory. Therefore, the weight file of the trained detection model is saved, and the test set is utilized for the assessment of the model's efficacy. The efficacy of the proposed SCE detection method was substantiated through a comparative analysis involving five distinct methods. These baseline approaches include Faster R-CNN and SSD; the YOLO family members YOLOv5n and YOLOv8n; and a variant of YOLOv8n, simAM-YOLOv8n. These experiments were conducted on the test dataset we established, and the corresponding results are displayed in Table 4. It can be seen that the P, R, F1, mAP50, and mAP50-95 of OE-YOLOv8n are higher than those of the other algorithms. The performance of the OE-YOLOv8n model surpasses that of the two-stage detection model (i.e., Faster R-CNN), indicating that the deep cognitive engagement features extracted by OE-YOLOv8n are more effective. Compared to the single-stage model (i.e., SSD), the proposed OE-YOLOv8n model still demonstrates comparability in F1 score, mAP, etc. The comparison within the YOLO family reveals that the architecture of YOLOv8n is superior, so the proposed improvement strategies built on YOLOv8n lead to better results. The experimental results with different improvement strategies suggest that our enhancements (i.e., data augmentation and the improved loss) are more suitable for cognitive engagement detection tasks. After data augmentation and the IMPDIoU loss improvement, the OE-YOLOv8n method achieved a significant performance enhancement on the SCE dataset.
Ablation Experiments
To further understand the significance of the two improved modules, the OE-YOLOv8n model was run on the SCE dataset with the Mosaic and Cutout data augmentation and/or the IMPDIoU loss function removed, and the evaluation metrics were computed on the SCE dataset. The results of the ablation experiments are summarized in Table 5. YOLOv8n denotes the original model, and L-YOLOv8n denotes the YOLOv8n model with the IMPDIoU loss function. The last row refers to the improved network in this paper. The last five columns show the model's P, R, F1, mAP50, and mAP50-95 values without one or both of these modules.
From the table, it is discernible that the P, R, F1, mAP50, and mAP50-95 values decrease sharply when any one or both of the modules are omitted. This supports our conjecture that a more balanced distribution of cognitive engagement samples and a more precise estimation of IoU values are indispensable for the accurate assessment of cognitive engagement levels. Due to the proposed IMPDIoU loss improvement strategy, the L-YOLOv8n method exhibits higher mAP values at different IoU thresholds. OE-YOLOv8n outperforms YOLOv8n most clearly in terms of the P value (improved by 6.4%), indicating that the proposed OE-YOLOv8n model can maintain high accuracy in recognizing students and their cognitive engagement. Furthermore, the OE-YOLOv8n method exhibits higher F1 values than the L-YOLOv8n method. This indicates that our proposed data augmentation strategy is effective, enabling more accurate detection of high-order cognitive engagement (i.e., interactive) with few samples.
Validity of the Method
To validate the effectiveness of the algorithm in detecting cognitive engagement, experiments were conducted using class videos. The OE-YOLOv8n model in track mode was employed to detect cognitive engagement in video data sampled at one frame every 3 s. Each student was assigned a fixed ID, and the occurrences of the disengaged, passive, active, constructive, and interactive states for each student in the video were recorded. The counts for each state category were normalized using min-max scaling (a minimal sketch of this step is given at the end of this subsection). The results are shown in Figure 9. It was observed that (1) the cognitive engagement states detected by the OE-YOLOv8n model were not consistently associated with specific individuals, indicating that the cognitive engagement states exhibited by students are not always fixed; and (2) there were no identical cognitive engagement patterns across the entire video, emphasizing individual differences. Therefore, the model does not focus on detecting individuals but rather on capturing various cognitive engagement features.
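As referenced above, a minimal sketch (with hypothetical per-student counts) of the per-ID tallying and min-max normalization:

```python
import numpy as np

states = ["disengaged", "passive", "active", "constructive", "interactive"]
# rows = tracked student IDs, columns = occurrence counts per state (hypothetical)
counts = np.array([
    [3, 40, 12, 2, 1],
    [1, 55, 6, 0, 0],
    [0, 30, 20, 5, 3],
], dtype=float)

mins, maxs = counts.min(axis=0), counts.max(axis=0)
normalized = (counts - mins) / np.where(maxs > mins, maxs - mins, 1.0)
for sid, row in enumerate(normalized):
    print(sid, dict(zip(states, row.round(2))))
```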
Conclusions
This investigation introduces OE-YOLOv8n as a solution to the challenges encountered in the detection of cognitive engagement within real-world classroom environments. On the one hand, the YOLOv8n model and the improved IMPDIoU loss function are employed to address the occlusion issue. The IMPDIoU loss directly minimizes the distances between the top-left and bottom-right points of the predicted bounding box and those of the ground truth bounding box. Additionally, a scale factor is integrated to regulate the scale of auxiliary bounding boxes. On the other hand, the Mosaic and Cutout methods are combined to augment the SCE dataset. As a result, even the difficult-to-recognize and infrequently occurring categories in real classrooms can receive significant attention from the improved model. The experimental results indicate that OE-YOLOv8n yields a substantial enhancement in detection efficacy. Furthermore, ablation experiments were conducted to corroborate the efficacy of the various modules. In subsequent work, we intend to integrate additional engagement cues, such as heart rate and sweat sensors, to further enrich our investigation.
Figure 1. Student learning scene in a real-world classroom. The red boxes indicate the ground truth bounding boxes corresponding to cognitive engagement, whereas the yellow boxes represent the predicted bounding boxes for cognitive engagement.
3.1. Overall Architecture
Here, an improved OE-YOLOv8n model is proposed for the detection of students' cognitive engagement in real-world classrooms. The improved components of the OE-YOLOv8n model are the IMPDIoU loss function and the cognitive engagement data processing. The comprehensive block diagram delineating the entire methodology is depicted in Figure 2. The improved OE-YOLOv8n model can detect complex cognitive engagement through target enhancement components, which may involve occlusion behaviors or occur in students represented by only a few pixels. The components consist of an occlusion behavior enhancement loss component and an effective data augmentation component. The loss component uses the IMPDIoU loss function. This method minimizes the distance between the top-left and bottom-right points of the predicted bounding box and the ground truth bounding box by considering the relevant loss factors (i.e., overlapping or non-overlapping area, central point distance, and deviation in width and height). The data augmentation component employs the Mosaic and Cutout techniques. The Mosaic technique enriches the learning scene information to focus the model more on the targeted cognitive engagement. The Cutout technique enhances the model's localization ability by adding information from other samples in the cut region.
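The exact IMPDIoU formulation is not given in this excerpt, so the PyTorch-style sketch below is a hypothetical reconstruction based on the description above: an MPDIoU-style term that penalizes the squared distances between matching top-left and bottom-right corners (normalized by the image dimensions), with a `scale` parameter standing in for the auxiliary-bounding-box factor mentioned in the text.

```python
import torch

def mpdiou_loss(pred, gt, img_w, img_h, scale: float = 1.0):
    """MPDIoU-style loss sketch; boxes are (..., 4) tensors (x1, y1, x2, y2).

    `scale` is an assumption standing in for IMPDIoU's auxiliary-box factor.
    """
    def rescale(b):
        # Shrink/grow boxes around their centers to form auxiliary boxes.
        cx, cy = (b[..., 0] + b[..., 2]) / 2, (b[..., 1] + b[..., 3]) / 2
        w, h = (b[..., 2] - b[..., 0]) * scale, (b[..., 3] - b[..., 1]) * scale
        return torch.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], dim=-1)

    p, g = rescale(pred), rescale(gt)
    # Standard IoU of the (rescaled) boxes.
    ix1, iy1 = torch.max(p[..., 0], g[..., 0]), torch.max(p[..., 1], g[..., 1])
    ix2, iy2 = torch.min(p[..., 2], g[..., 2]), torch.min(p[..., 3], g[..., 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (p[..., 2] - p[..., 0]) * (p[..., 3] - p[..., 1])
    area_g = (g[..., 2] - g[..., 0]) * (g[..., 3] - g[..., 1])
    iou = inter / (area_p + area_g - inter + 1e-9)
    # Squared corner distances, normalized by the squared image diagonal.
    d2 = (p[..., 0] - g[..., 0]) ** 2 + (p[..., 1] - g[..., 1]) ** 2
    d2 = d2 + (p[..., 2] - g[..., 2]) ** 2 + (p[..., 3] - g[..., 3]) ** 2
    return 1 - iou + d2 / (img_w ** 2 + img_h ** 2)

pred = torch.tensor([[10.0, 10.0, 50.0, 60.0]])
gt = torch.tensor([[12.0, 8.0, 48.0, 62.0]])
print(mpdiou_loss(pred, gt, img_w=640, img_h=640))
```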
Figure 2. The structure of the improved OE-YOLOv8n model for cognitive engagement detection.
respectively. Similarly, the coordinates of the corresponding points in the predicted bounding box are represented by (x
Figure 4. Three examples from the dataset.
Figure 5. An example during labeling with the LabelImg tool.
Figure 6. The ICAPD framework for labeling. It includes five classes of cognitive engagement and their corresponding behaviors.
Figure 7. Performance of the OE-YOLOv8n model during the training procedure on the SCE dataset.
4.4. Figure 8 displays the confusion matrix for the OE-YOLOv8n model and depicts its prediction accuracy for the five distinct categories of student cognitive engagement within the SCE dataset. Additionally, it illustrates the relationships among the predictions. The matrix clearly demonstrates that our method attains high accuracy for each category.
Figure 8. Confusion matrices of the YOLOv8n and OE-YOLOv8n methods on the SCE dataset. The rows correspond to the true labels, the columns correspond to the predicted categories, and the diagonal entries correspond to the accuracy of correct predictions. (a) YOLOv8n; (b) OE-YOLOv8n.
Figure 9. The results of detecting students' cognitive engagement in a class. The X-axis represents the sequence of student IDs and the Y-axis represents the proportion of each state in the video.
Table 1. The performance of OE-YOLOv8n with different ratios of IMPDIoU loss against the standard MPDIoU loss. The bold represents the best result.
Table 2. Degrees of cognitive engagement of the students in a classroom.
Table 3. The scale of the SCE dataset.
This study's experimental environment consisted of training the models on an NVIDIA GeForce RTX 3090 (NVIDIA, Santa Clara, CA, USA), with GPU drivers compatible with Ubuntu 22.04. The software environment comprised Python 3.11.7 and torch 2.1.2 with CUDA 12.1 support.
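A minimal snippet (ours, not from the paper) to verify that a local setup matches the stated software environment before attempting to reproduce the experiments:

```python
# Sanity check against the stated environment:
# Python 3.11.7, torch 2.1.2, CUDA 12.1, RTX 3090.
import sys
import torch

print("python", sys.version.split()[0])
print("torch", torch.__version__, "cuda", torch.version.cuda)
print("gpu", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none")
```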
Table 4. Comparison of methods and our OE-YOLOv8n results on the SCE dataset. The bold represents the best result.
Table 5. Ablation experiments on the SCE dataset. The bold represents the best result. | 2024-06-05T15:22:12.749Z | 2024-06-01T00:00:00.000 | {
"year": 2024,
"sha1": "5d8de43c9896aa1bd4b72a3b1aa7502a2127438e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/24/11/3609/pdf?version=1717416181",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "172b2d6b574d27d2c48d532fd8ecb657bd3b866f",
"s2fieldsofstudy": [
"Computer Science",
"Education"
],
"extfieldsofstudy": []
} |
21892073 | pes2o/s2orc | v3-fos-license | Enabling cytoplasmic delivery and organelle targeting by surface modification of nanocarriers
Nanocarriers are designed to specifically accumulate in diseased tissues. In this context, targeting of intracellular compartments was shown to enhance the efficacy of many drugs and to offer new and more effective therapeutic approaches. This is especially true for therapies based on biologicals that must be encapsulated to favor cell internalization, and to avoid intracellular endosomal sequestration and degradation of the payload. In this review, we discuss specific surface modifications designed to achieve cell cytoplasm delivery and to improve targeting of major organelles; we also discuss the therapeutic applications of these approaches. Last, we describe some integrated strategies designed to sequentially overcome the biological barriers that separate the site of administration from the cell cytoplasm, which is the drug's site of action.
Keywords: biological barriers • cell surface modifications • intracellular targeting • nanocarriers • nanomedicine
Nanocarriers have recently emerged as versatile tools with which to increase the half-life, bioavailability, targeting and safety of various classes of therapeutics [1][2][3][4][5]. An important advance in the field of nanomedicine was the development of nanocarriers able to respond to the physical and chemical stimuli of the biological environment so as to maximize the efficacy of the loaded therapeutics and drive their delivery in function of the surrounding milieu [6][7][8]. To be effective, carriers must overcome several biological barriers that are encountered during transit from the site of administration to the final target [5]. Clearance action by the major filtering organs (lung, liver, spleen and kidney), and the physical (vascular wall and extracellular matrix), cellular (circulating and resident tissue phagocytic cells) and enzymatic (serum and intracellular digestive enzymes) barriers that characterize the structure of our tissues, have evolved to control the diffusion of exogenous materials within our body. These very efficient biological barriers are the greatest obstacle to the clinical translation of many promising platforms designed for targeted therapy [9].
Depending on their size, shape and surface chemistry, nanoparticles are internalized by the target cells through different pathways ( Figure 1) [10,11]. After internalization, nanocarriers and their payload are usually sequestered in endosomal vesicles (caveolin-driven endocytosis pathway excluded) [5,12]. Endosomal sequestration of nanotherapeutics is characterized by multiple membrane fusions in which the endocytic vesicles merge sequentially with early and late endosomes until they are compartmentalized in endolysosomal vesicles [13]. Each step is characterized by a progressive decrease of intravesicular pH and an increase of digestive enzymatic content (lipase, proteases and nucleases) [13,14] that affect the efficacy of the payload, in particular if it consists of such biological therapeutics as peptides, proteins and genetic material (Figure 2). For example, short interfering RNA (siRNA) technology has been extensively investigated in recent decades given its enormous therapeutic potential even at very low concentrations [5].
While encapsulation within nanoparticles protects these therapeutics from the extracellular environment [15], after internalization the payload must be protected from intracellular endosomal digestion and be delivered to the cell cytoplasm. Methods devised to elude endosomal sequestration and 'conquer' the cytoplasmic space have led to a better understanding of cell physiology and trafficking, and to the development of therapeutic strategies aimed to specifically target and modulate the function of major subcellular compartments. Here we describe the strategies used to access the cell cytoplasm after internalization and to achieve organelle targeting, which in turn provides new means with which to treat various diseases. Finally, we provide examples of integrated systems that were designed to sequentially overcome several biological barriers and deliver the payload in the cytoplasm.
Cell-penetrating peptides
Protein translocation domains, also known as cell-penetrating peptides (CPPs), are short peptides able to cross biological membranes and deliver a wide variety of cargos into the cytoplasm [16,17]. They are usually divided into amphipathic peptides and polycationic peptides: the first contain an alternating pattern of polar and hydrophobic amino acids, while the second are rich in positively charged amino acids. In general, CPPs are characterized by an overall positive charge due to the abundance of arginine and lysine residues [16]. This positive charge favors their interactions with surface components of the cell membrane, such as heparan sulfate and phospholipids, that can act as CPP receptors [18,19], while hydrophobic residues like tryptophan increase the affinity for biological lipid structures [18]. Typical CPPs are the transactivator of transcription (TAT) peptide derived from HIV and the amphiphilic Antennapedia peptide derived from Drosophila [20].
The mechanism of CPP uptake is still unclear [21,22]. Various mechanisms have been proposed: energy-independent internalization mediated by local destabilization of the membrane structure rather than by intracellular endosomal trafficking [23]; uptake involving membrane lipid rafts and subsequent escape from these vesicles [16,24]; and macropinocytosis- and caveolae-mediated endocytosis [25]. However, CPP internalization seems to depend on several factors, namely their specific amino acid sequence [26], the concentration of CPPs at the cellular interface (above a specific concentration threshold, they are internalized through non-endocytic pathways [27]), and the size of the cargo fused to the CPPs [28]. These findings indicate that the transport of nanoparticles modified with CPPs occurs through energy-dependent endocytic pathways, in particular macropinocytosis [21].
CPP surface functionalization of metallic, silica or lipid particles can be easily obtained by covalent modification or by simple surface absorption driven by electrostatic bonds [29]. Using TAT-modified gold nanoparticles, Krpetic and co-workers were able to monitor and describe the intracellular trafficking of nanoparticles after endosomal escape [30]. They found that, after an initial phase of accumulation in the cytoplasm, nucleus and mitochondria, the particles were confined to vesicular bodies and eventually released again in the cytoplasm. The cytoplasmic delivery of specific payloads and materials was shown to enhance the anticancer properties of the system. Liu and colleagues showed that peritumoral injection of TAT-modified silver nanoparticles efficiently inhibited multidrug resistance phenomena in vitro and in vivo [31]. Due to enhanced cellular uptake and a consequent increase in cellular drug accumulation, TAT-modified particles exerted a tumor growth inhibition power fourfold higher than that of unmodified particles [23].
CPPs have also been investigated for their capability to deliver drugs to the central nervous system. In fact, nanoparticles modified with TAT in combination with polyethylene glycol (PEG) favored brain accumulation and a higher penetration of the blood-brain barrier [32,33]. Although surface modification of particles with TAT does not ensure specific targeting of nervous tissue, TAT-modified particles can be used to treat severe diseases like brain tumors, thanks to their ability to accumulate in the abnormal tumor vasculature [34].
Antimicrobial peptides
Antimicrobial peptides (AMPs) are a group of small amphipathic molecules (15-45 amino acids) characterized by a positive charge and, like CPPs, by the ability to translocate into the cell cytoplasm. AMPs are produced by a wide variety of organisms of marine and terrestrial origin (including human defensins) [35,36] as an essential component of their innate immune response [37]. Recently, Vale and co-workers [38] classified AMPs, as a function of their structure, into four main categories: α-helical peptides without Cys residues, characterized by the presence of several Lys and Arg residues and a significant number of hydrophobic residues; β-pleated peptides containing disulfide bridges, for example human defensins, which are characterized by six Cys residues [39-41]; peptides rich in Pro, Gly, His, Arg and Trp; and circular antimicrobial peptides. Antimicrobial peptides preferentially target viruses, bacteria and eukaryotic pathogens, whose membranes are usually more anionic than mammalian cell membranes [27]. Thanks to these properties, AMPs can be used to modify nanoparticles designed to target microorganisms and achieve more efficient antibiotic effects.
To our knowledge, there are only two reports describing the production of AMP bioconjugates with nanoparticles [42,43]. Golubeva and coworkers [42] designed a nanoparticle with antimicrobial properties against Staphylococcus aureus and Pseudomonas aeruginosa by conjugating silver nanoparticles to peptide G-Bac3.4 (RFRLPFRRPPIRIHPPPFYPPFRPFL) modified from the natural peptide Bac3.4, which belongs to the bactenecin family. They showed that the modification process affected the membranolytic properties of the peptides. Yang et al. [43], on the other hand, produced liposomes conjugated with the antimicrobial peptide WLBU2 and loaded with the photosensitizer temoporfin. These liposomes effectively inhibited the proliferation of S. aureus and P. aeruginosa. Despite these promising results, the use of AMPs in the clinical setting is still limited by issues related to toxicity [44,45].
Surface modification to avoid endosomal sequestration & degradation
Endosomal sequestration of nanocarriers
Despite many improvements in delivery systems [1-4,46], the set-up of methods for the effective delivery of therapeutic biological agents such as proteins and nucleic acids remains a challenge [5,47,48]. One of the main limiting factors is that the endocytic pathway is the major uptake mechanism by which nanocarriers enter the cell [5]. This pathway is composed of endosomes that have an internal acidic pH. These vesicles undergo a maturation process that, from early endosomes with a pH around 6, leads to late endosomes that have a pH around 5. Late endosomes can fuse with lysosomes that contain many digestive enzymes, namely nucleases, proteases, glycosidases, lipases, phosphatases, sulfatases and phospholipases [12-14]. After entering the cell, nanovectors together with their payload become entrapped in endosomes, after which they can enter lysosomes, where enzymatic processes degrade the payload, especially if it comprises biological molecules (Figure 2). This route prevents efficient delivery of therapeutic agents to intracellular targets.
One widely explored way to escape this fate relies on molecules that induce osmotic lysis of the endosome through a pH-buffering effect known as the 'proton sponge effect' (Figure 3) [49,50]. The proton sponge effect of these molecules is based on their ability to accept H+ ions at the acidic pH of the endosomes. The most promising candidate molecules in this context contain protonable secondary and/or tertiary amine groups that have a pKa close to the endosomal/lysosomal pH [12,43,44]. Acidification of these endolysosomal vesicles occurs through specific endosomal H+ transmembrane pumps that work in symport with chloride ions [43,44]. Molecules containing groups like tertiary amines that accept protons at low pH are able to counteract vesicle acidification and to inhibit the negative molecular feedback that blocks the action of the endosomal pumps [49,50]. In response to the increased chloride concentration within the vesicle lumen, the osmotic pressure of the endosomes increases [49,50], with a consequent incorporation of large amounts of water that, in turn, affects the endosomal ultrastructure, thereby leading to osmotic swelling [15,49,50] and subsequent membrane rupture. Poly(ethyleneimine) (PEI) [51,52], poly(amidoamine) (PAMAM) [53,54] and poly(N,N-dimethylaminoethyl methacrylate) [55-58] are among the polymers most often used to coat nanoparticles [56,59] to obtain delivery systems endowed with buffering properties that enable endosomal escape by the proton sponge effect.
Cytoplasmic delivery of biologicals can be improved even further by using other moieties to enhance the buffering ability of polymeric systems. A case in point is histidine, which is endowed with buffering properties thanks to its imidazole ring that has a pKa of around 6, and that is protonated at a mildly acidic pH [12,[60][61][62][63]. Interestingly, polymers like polylysine and chitosan enriched in histidine significantly increased cytoplasmic delivery of nucleic acids [61].
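The buffering capacity that underlies the proton sponge effect can be quantified with the Henderson-Hasselbalch equation. The short sketch below is our illustration, not from the review: it estimates the protonated fraction of an imidazole-like basic group (pKa ≈ 6, as for histidine) at cytosolic versus endosomal/lysosomal pH, showing why such groups soak up protons exactly where vesicle acidification occurs.

```python
# Henderson-Hasselbalch: fraction protonated = 1 / (1 + 10**(pH - pKa)).
# Illustrates why pKa ~ 6 groups (e.g., histidine imidazole) buffer the
# endosomal pH drop that drives the proton sponge effect.

def protonated_fraction(pka: float, ph: float) -> float:
    return 1.0 / (1.0 + 10 ** (ph - pka))

for ph in (7.4, 6.0, 5.0):
    print(f"pH {ph}: {protonated_fraction(6.0, ph):.0%} protonated")
# ~4% at cytosolic pH 7.4 but ~91% at lysosomal pH 5.0: each imidazole
# absorbs a proton as the vesicle acidifies, opposing the pH drop.
```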
Notably, in the case of polymers containing protonable amine groups (e.g., PEI and PAMAM) or histidine residues, the proton sponge model may not be the only mechanism of endosomal escape. In fact, after protonation, the positively charged amine groups could electrostatically bind the negatively charged endosomal membrane, which results in a physical interaction and subsequent fusion between the polymer and the endosomal membrane - a process that triggers the release of the payload into the cytoplasm [61-64]. For example, once protonated in an acidic environment, the pH-sensitive polymer poly(histidine) acquires fusogenic properties, thereby resulting in endosomal escape [12,62]. Other strategies to enable nanocarriers to elude endosomal sequestration are based on polymers with pH-responsive properties resembling those of fusogenic peptides. For example, poly(propylacrylic acid) changes its conformation at low pH, thereby destabilizing the endosomal membrane and favoring delivery of the payload into the cell cytoplasm [65-67].
Conjugation with peptides as a means to enable nanocarriers to elude endosomal sequestration
Endosomal escape can be obtained by destabilizing the endosomal membrane. Various approaches to obtain destabilization of the endosomal membrane are based on the finding that viruses have evolved specific fusogenic proteins or peptides that mediate mechanisms for endosomal membrane penetration. These mechanisms consist in the fusion of the viral envelope with the lipid bilayer of the endosomal vesicles or the formation of pores through which the viral genome enters the cytosol [68,69]. In bacteria, the formation of pores in the endosomal membrane is the main mechanism of endosomal escape and it is mediated by bacterial exotoxins [70]. Also some animal and plant proteins are endowed with these types of fusogenic or membrane pore-forming properties [12]. In these mechanisms operating in nature, the acidic pH of endosomes triggers endosomal escape by inducing a change in the protein/peptide conformation and enabling interactions between the peptide and the lipid bilayer of the endosomal vesicles. In some cases, these peptides and protein domains form a random coil structure at physiological pH, while at acidic pH some amino acid residues become protonated and the peptides acquire an amphipathic α-helical conformation [12].
Figure 3. Particles become protonated and resist endosome acidification, thus more protons will be pumped into the endosomes, thereby acidifying the environment. The proton pumping action is followed by passive entry of chloride ions into the endosomes, thereby increasing the ionic concentration and hence water influx. Due to the high osmotic pressure, the endosomes swell and break and consequently release their contents.
pH-responsive lipids as components of endosomolytic nanocarriers
Some nanoformulations of lipids are used to obtain nanocarriers able to escape from endosomes and to deliver active payloads in the cell cytoplasm. Cationic lipids can electrostatically interact with negatively charged lipids of the cytosolic layer of the endosomal vesicle and exert a moderate fusogenic effect [60,61]. To increase their efficiency in endosomal membrane destabilization, helper lipids like L-α-dioleoyl phosphatidylethanolamine (DOPE) can be added to the lipid formulation [86]. The ethanolamine head group of DOPE has a high tendency to form an inverted hexagonal phase at acidic pH which, in turn, improves fusogenicity [60,61].
Organelle targeting
In many cases, the success of any delivery system greatly depends on the capability of the carrier to transport the cargo to its specific site of action within the cell cytoplasm. For instance, all DNA carriers must overcome the nuclear membrane to exert their function. Consequently, the development of approaches aimed at effectively targeting the cellular organelles (nucleus, mitochondria and endoplasmic reticulum [ER]) and subsequently, delivering the payload in its active status directly to its intracellular site of action is a fundamental step in the development of more effective therapeutics (Figure 4). In the following sections, we discuss the strategies used to modify the surface of nanoparticles in order to ensure subcellular delivery of payloads. Nuclear, mitochondrial and ER targeting is particularly complex given the importance of these organelles for cell physiology. In fact, loss of function of one of these organelles causes various pathological conditions (see Table 1).
Nuclear targeting
The nucleus is surrounded by a double-layered membrane punctured by protein channels, known as the nuclear pore complex, that determine the 'in and out' trafficking between the nucleus and the cytoplasm [91,92]. Learning how to enhance nuclear delivery has great scientific and translational relevance because the nucleus governs cell physiology by controlling replication and transcription processes, and it is the final functional target of many therapeutic approaches (e.g., gene therapy). In fact, the nucleus is responsible for all diseases derived from a mutation of a gene (at the level of both the promoter and the splicing site), including heart dysfunction [93], muscular dystrophy [94] and neurodegenerative disease [95]. In addition, transport through the nuclear membrane is essential for all anticancer agents whose cytotoxic action is based on DNA intercalation (e.g., doxorubicin and cyclophosphamide).
Unfortunately, the nuclear membrane is difficult to penetrate and nanoparticles rarely accumulate in this organelle [96-99]. Indeed, the small pore size of the nuclear membrane (≈10 nm) prevents the passage of molecules larger than 40 kDa even when the pores are dilated [100]. Consequently, large molecules must be actively transported across the membrane by proteins. The functionalization of nanoparticles with nuclear localization signals (NLS), which are characterized by basic amino acid residues, implies the recognition of the NLS by its cytoplasmic receptors, such as importins. Subsequently, the NLS peptide binds the nuclear pore complex and the cargo is released into the nucleus via translocation through the pores [101]. Considering that the pore size of nuclei is usually smaller than that of typical drug delivery systems, it appears that the diffusion of nanoparticles (not confined in intracellular vesicles) depends on the carrier size in the case of passive transport, and on the inclusion of an NLS in the case of active transport [102]. An example of a drug delivery system developed to target the nucleus by passive transport is represented by malonodiserinolamide fullerene C60 (C60-ser, 3-4 nm) nanoparticles, fabricated to treat liver cancer in vivo. Thanks to their small size, they were found to localize in the nucleus by diffusion through the nuclear pore complex [103]. Nanoparticles surface-modified with NLS have also been widely used to target genes to the nucleus. Particles modified with the KKKRKV peptide from the Simian virus 40 large tumor antigen, or with the peptide KRPAATKKAGQAKKKKL from nucleoplasmin (an enzyme involved in nucleosome assembly), accumulated in much larger amounts in the nuclear membrane than did unmodified controls [97,104-110]. Nuclear localization signals exerted an efficient translocation function also when they were integrated in a classical PEG coating that enabled gold nanoparticles to inhibit the division of cancer cells [111] as well as to avoid unspecific interactions with other biological entities. Notably, Mackey and collaborators
Finally, an interesting approach was based on the targeting of specific cell receptors to induce both cellular internalization and nuclear translocation. EGFR is located on the cell membrane and once activated it accumulates in the cell nucleus thanks to the action of karyopherin-β 67 , a protein responsible for the transport of cytoplasmic material to the nucleus. Importantly, EGFR has great clinical relevance because its nuclear translocation occurs mainly in cancer cells. Fe 3 O 4 @TiO 2 nanoparticles conjugated with EGFRbinding peptides showed better nuclear localization in EGFR-expressing HeLa cells than control nanoparticles [119].
Mitochondrial targeting
Mitochondria are characterized by an outer and an inner membrane enriched in proteins. The two membranes delimit two separate compartments: the intermembrane space and the mitochondrial matrix. These organelles possess their own DNA, which is organized in several copies of a single, circular chromosome [120]. Mitochondria are involved in apoptotic cell death, adenosine triphosphate synthesis, production of reactive oxygen species and calcium metabolism. Consequently, mitochondrial dysfunction has been associated with a variety of disorders (cancer, neurodegenerative diseases, cardiovascular disease and diabetes) [121-125] and mitochondria have been extensively studied as therapeutic intracellular targets. The main reason why strategies to achieve targeted mitochondrial drug delivery were developed is that chemotherapeutics act directly on mitochondrial function and favor apoptosis. Two general requirements for mitochondrial localization are lipophilicity and a delocalized positive charge. Increasing doxorubicin lipophilicity by introducing a nitric oxide group into the molecule (nitroxy-doxorubicin) [126] shifts the intracellular localization to mitochondria and to the ER [127].
The large mitochondrial membrane potential (negative inside) can be exploited to target mitochondria by modifying particles or bioactive molecules with positively charged moieties; Schiller-Szeto peptides are a combined series of hydrophobic and positively charged tetrapeptides that selectively localize to mitochondria [128]. Lipophilic cations such as rhodamine, triphenylphosphonium (TPP) and stearyl triphenylphosphonium were shown to increase liposome targeting of mitochondria [129,130]. To avoid the nonspecific cytotoxicity of stearyl triphenylphosphonium liposomes, Biswas et al. synthesized a novel coating of polyethylene glycol-phosphatidylethanolamine conjugated with the TPP group attached to the distal end of the PEG block (TPP-PEG-PE) [104]. Similarly, Marrache et al. designed a mitochondria-targeted polymeric nanoparticle system by blending a targeted poly(D,L-lactic-co-glycolic acid)-block-poly(ethylene glycol)-triphenylphosphonium polymer (PLGA-b-PEG-TPP) with poly(lactide-co-glycolide)-succinic acid (PLGA-COOH) or with PLGA-b-PEG-OH. They varied the size and the surface charge of the nanoparticles in an attempt to understand the effect of these properties on mitochondrial uptake [131]. Mitochondrial uptake was studied by qualitative and quantitative investigations of the cytosolic and mitochondrial fractions of cells treated with these blended nanoparticles [131]. Evaluation of the mitochondrial uptake of nanoparticles measuring 80-330 nm revealed maximum uptake for particles between 80 and 100 nm in diameter. Among particles of a similar hydrodynamic diameter but with different surface charges, mitochondrial uptake increased significantly with the more positively charged nanoparticles.
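The driving force described here follows the Nernst equation: for a monovalent cation, the equilibrium accumulation ratio is exp(-zFΔψ/RT). The sketch below is our illustration; the membrane potentials are typical literature values, not measurements from the cited studies. It shows why delocalized lipophilic cations such as TPP+ concentrate several-hundred-fold in the mitochondrial matrix.

```python
# Nernst-equation estimate of cation accumulation driven by the
# mitochondrial membrane potential (negative inside).
import math

R, F, T = 8.314, 96485.0, 310.0  # J/(mol K), C/mol, body temperature in K

def accumulation_ratio(delta_psi_mv: float, z: int = 1) -> float:
    # [in]/[out] = exp(-z F Δψ / (R T)), with Δψ = ψ_in − ψ_out.
    return math.exp(-z * F * (delta_psi_mv / 1000.0) / (R * T))

plasma = accumulation_ratio(-60.0)   # cytosol vs extracellular (assumed)
mito = accumulation_ratio(-160.0)    # matrix vs cytosol (assumed)
print(f"~{plasma:.0f}x across the plasma membrane,")
print(f"~{mito:.0f}x across the inner mitochondrial membrane,")
print(f"~{plasma * mito:.0f}x overall for a monovalent lipophilic cation")
```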
Strategies inspired by the intracellular trafficking of mitochondrial proteins were developed by modifying nanoparticles with mitochondrial targeting signals (short peptides of 10-70 amino acids). In the cell, the mitochondrial targeting signal directs newly synthesized proteins to the mitochondria; Hoshino and colleagues exploited a signal of this kind derived from a protein of the electron transport chain [132]. Various approaches, based on specific materials, have been used to deliver DNA to mitochondria in the attempt to improve therapies for mitochondria-associated diseases. For example, DQAsomes, liposome-like vesicles of the delocalized lipophilic cation dequalinium (DQA), are characterized by a positive charge and showed high affinity for the mitochondrial membrane [133,134]. These vesicles are made of dequalinium chloride, a mitochondriotropic delocalized cation that selectively accumulates in the mitochondria of carcinoma cells [135].
Similar strategies are based on surface-modified resveratrol liposomes co-conjugated with dequalinium polyethylene glycol-distearoylphosphatidylethanolamine (DQA-PEG2000-DSPE). This delivery platform exerted antitumor effects in various tumor models in vitro and in vivo [136]. Yamada et al. developed a liposome with fusogenic properties called 'MITO-Porter', designed for mitochondrial gene delivery, that enabled efficient targeting of the innermost mitochondrial space [137,138]. More recently, the same group increased the efficacy of this system by producing liposomes with a double lipid shell to sequentially penetrate the cell and mitochondrial membranes, thereby avoiding endosomal sequestration (dual-function MITO-Porter) [139]. Xia Yang et al. synthesized a new kind of fluorescent core-shell ellipsoidal ionic-liquid nanoparticle that has anticancer activity and that can be used in imaging techniques [29]. These particles were taken up by HeLa cells through a mitochondria-associated pathway favored by a functional ionic liquid (acetyl-N-butyl pyridinium hexafluorophosphate) conjugated to the rhodamine B isothiocyanate-doped, silica-coated core [29]. Agemy et al. developed iron oxide nanoworms surface-modified with a tumor-homing peptide (CGKRK), which enabled mitochondrial targeting, and a proapoptotic peptide (D[KLAKLAK]2), which ensured tumor cell killing in a murine glioblastoma model [140]. Wang et al. observed that Au nanorods coated with a cetyltrimethylammonium bromide bilayer and serum components were preferentially taken up by cancer cells (compared with normal cells) and displayed mitochondrial targeting ability [46]. Mo and colleagues reported pH-sensitive liposomes (HHG2C18-L and PEGHG2C18-L) based on zwitterionic oligopeptide lipids, developed as anticancer drug carriers and evaluated for effective intracellular delivery and enhanced antitumor activity [87]. Liposomes modified with 1,5-dioctadecyl-L-glutamyl 2-histidyl-hexahydrobenzoic acid (HHG2C18-L) and 1,5-distearyl N-(N-α-(4-mPEG2000)butanedione)-histidyl-L-glutamate (PEGHG2C18) are able to escape endosomal sequestration and target mitochondria by reversing their surface charge in function of the difference in pH between endosomal vesicles and the cytoplasm [87]. The His groups play a key role in endosome/lysosome escape. The imidazole ring of histidine is a weak base that acquires a cationic charge when the pH of the environment drops below 6. The accumulation of histidine residues in acidic vesicles can increase their osmolarity and their swelling [141-144]. HHG2C18 and PEGHG2C18 liposomes exploit this property to produce the proton sponge effect that enables liposomes to be released from the endosomal membrane-bound compartments [87].
ER targeting
The ER is organized in a membrane network of sac-like structures called 'cisternae,' and an intra- and extra-cisternal space is discernible in its volume. The ER governs such important cell functions as protein folding and lipid biosynthesis [145]. In addition, it controls calcium homeostasis [146], apoptosis [147] and drug detoxification biochemical pathways [148]. ER stress leads to protein misfolding and/or aggregation, and these phenomena are often associated with such disorders as neurodegeneration [149], cancer [150], diabetes [151] and cardiovascular disease [152], which highlights the importance of targeting this organelle. Physiologically, protein trafficking to the ER is mediated by the KDEL sequence and its receptor (KDEL-R) expressed on this organelle. This sequence is an ER localization signal and was used to improve the intracellular targeting of these organelles [153]. Acharya and Hill developed gold nanoparticles conjugated with the KDEL peptide for the delivery of siRNA that inhibits the expression of reduced nicotinamide adenine dinucleotide phosphate oxidase 4 [154]. This strategy was shown to be more efficient than lipofectamine in transfecting differentiated myotubes. Delie and coworkers developed anti-KDEL functionalized polymeric nanoparticles loaded with paclitaxel for the treatment of prostate cancer [155]. They demonstrated that ER targeting significantly increased paclitaxel sensitivity in prostate cancer cells overexpressing glucose-regulated protein 78. There is evidence that ER targeting improves vaccine formulations for cancer immunotherapy [156]. In fact, in antigen-presenting cells, the antigens bind the major histocompatibility complex class-I molecules in the ER [157]. The research group of Nakagawa assembled fusogenic liposomes loaded with tumor antigen peptides coated with an ER-insertion signal sequence, which efficiently induced in vivo tumor immunity [158]. Compared with untargeted peptide-loaded fusogenic liposomes, these carriers prolonged epitope presentation in antigen-presenting cells for up to 140 h [158]. Finally, Pollock and coworkers developed a biomimetic approach based on liposomes synthesized with the same phospholipid composition as ER vesicles (57% phosphatidylcholine, 24% phosphatidylethanolamine, 13% phosphatidylinositol, 3% phosphatidylserine and 3% sphingomyelin [159]), and demonstrated significant co-localization of these particles with the ER compartment within 30 min [160].
Integrated approaches
As mentioned above, to reach the cytosol, nanocarriers must usually overcome several biological barriers that differ in terms of architecture, working mechanism and composition [5]. Traditional injectable delivery platforms are designed to accumulate in the diseased area by exploiting the enhanced permeability and retention effect [161]. Particle surface modification with PEG is the most common way to exploit this effect since PEG enables the formation of a hydrodynamic radius that shields and protects the particles from intimate interaction with biological entities [162,163]. In particular, PEG proved to be very efficient in delaying particle opsonization, which favors the entrapment of particles into reticulo-endothelial system organs and their clearance [164]. Unfortunately, the stealthing action of PEG hinders the targeting and penetrating action of the molecules applied on the surface of the particles. Multiple surface modifications (with, e.g., PEG + targeting molecules + cell-penetrating peptides) to sequentially overcome the biological barriers separating the administration site from the target area can be made by appropriately dosing the different particle constituents and applying, when possible, synthesis approaches that preserve the structure of biological molecules, such as surface absorption through electrostatic interactions [165]. On the other hand, the use of multiple surface modifications to confer several functionalities to the carriers could increase the complexity of chemical synthesis, production costs and the number of potential side effects of the system, thereby making the translation of these technologies to the clinical setting extremely difficult [166]. The advent of polymers or particles with buffering properties could overcome these issues because endosomal escape is achieved through a mechanism that resides in the polymer/particle structure rather than on their surface. In addition, the surface of polymeric particles can be modified with shielding molecules to prolong particle circulation time, or with targeting agents.
Few attempts have been made to enable synthetic particles to sequentially overcome different biological barriers. The multistage silicon systems developed by the group of M. Ferrari, for example, consist of a first stage of nanoporous silicon that can be loaded with second-stage nanocarriers [167]. The shape of these particles was designed to marginate within tumor capillaries, where the blood flow decreases; their structure dissolves in the biological fluid so that the second-stage nanoparticles are released within the tumor vasculature. These particles can be endowed with an endosomal escape mechanism and efficiently deliver siRNA to the tumor cell [168]. In addition, there is evidence that porous silicon and silica could exert a proton sponge effect, thereby favoring the cytoplasmic delivery of siRNA. However, the integration of these multistage particles with pH-responsive polymers improved the delivery of biologicals [169], which shows a successful integration of functions.
The negotiation of biological barriers through bioinspired approaches has recently gained momentum. These strategies are typically based on viral peptides with fusogenic and intracellular targeting properties [170]. Our group proposed a biomimetic approach based on coating synthetic particles with the purified cell membrane of infiltrating leukocytes [171]. Using this strategy, the surface of nanocarriers was enriched, in a single step, with a natural membrane coating carrying more than 150 leukocyte membrane-associated proteins [172] that determine reduced unspecific binding of opsonizing agents, improved biocompatibility and self-tolerance, and a local increase of inflamed vasculature targeting and permeability. In addition, once internalized within endothelial cells, particles coated with the leukocyte cell membrane were able to avoid endolysosomal sequestration (Figure 5), probably as a result of fusogenic phenomena of the membrane coating toward endosomal vesicles or of the activation of an alternative internalization pathway (e.g., caveolin-driven internalization; data under investigation). As shown in Figure 5, leukolike vectors (LLVs) retain their leukocyte membrane (black arrow, right inset) and are in contact with the cytoplasm, while uncoated nanoparticles are trapped in the endolysosomal compartment (black arrow, left inset). This phenomenon was quantified by measuring the internalization of the specific probe LysoTracker Red. As shown by our group and others [173-176], internalization of nanoparticles induces increased accumulation of this dye within the cells (Figure 6). A higher uptake of LysoTracker Red (or other probes specific for the endosomal compartment) was observed a few hours after internalization of uncoated particles, whereas treatment with LLVs did not induce the same effect (Figure 6A & B). At longer time points after treatment, accumulation of the dye did not differ between cells treated with uncoated particles and LLV-treated cells because the internalization process had returned to the basal level (Figure 6C & D).
In this context, Harashima's group developed a tetra-lamellar multifunctional envelope-type nanodevice [176], inspired by the biology of adenovirus [107,108,177,178]. Thanks to the bioactivity of its surface, the system efficiently negotiated multiple extra- and intracellular barriers to delivery and to diagnostic applications.
Conclusion
A plethora of strategies have been devised to enable synthetic particles (and their payload) to escape endosomal sequestration and target different subcellular compartments. These strategies can be implemented through the nanoformulation of pH-responsive polymers or lipids, or through the conjugation of pH-responsive polymers or peptides on the surface of drug delivery carriers, thereby endowing them with the ability to respond to the surrounding environment. Here we describe several approaches that can be used to enable particles to easily access the cytoplasm, and to target the nucleus, mitochondria and ER. Organelle targeting is usually mediated by peptide sequences that increase the affinity of carriers toward the intracellular compartments (Table 2). Nonetheless, endosomal escape and intracellular targeting are only the final steps of the delivery process. Targeting approaches can be classified in three main categories: primary targeting, defined as accumulation of the delivery system in the tissue of interest; secondary targeting, defined as accumulation in the cell of interest; and tertiary targeting, defined as targeting of specific subcellular compartments. Tertiary targeting is the ultimate challenge of nanomedicine and, so far, strategies that enable particle diffusion in the cytosol have resulted in a deeper understanding of the biological processes that govern intracellular trafficking, but their clinical application remains limited because of the difficulty in designing nanocarriers that combine all the targeting steps.
Future perspective
Enormous effort is still required to integrate in a single technology the various approaches to sequentially overcome all the biological barriers that are encountered in the route to the target. In addition, these technologies must have a high reproducibility and cost-effective standards of production in order to favor their clinical translation. The development of new synthesis techniques as well as the new bio-inspired approaches could play a key role in the clinical translation of technologies designed to achieve cytoplasmic delivery of the payload and organelle targeting.
Summary
This review is devoted to the most recent advances in subcellular targeting and the development of new therapeutic strategies able to specifically target and modulate the function of major subcellular compartments. We also describe the approaches used to access the cell cytoplasm after particle internalization and to achieve organelle targeting, and provide some examples of novel integrated systems designed to sequentially overcome more than one biological barrier. In this context, we review recent data about the new strategies used to enable nanocarriers to cross cellular barriers using cell-penetrating peptides, and to avoid endosomal sequestration through smart polymer modification. Last, we describe recent discoveries in the field of nuclear, mitochondrial and ER targeting. Considering that these organelles are the final target for a wide variety of therapeutic molecules, specific subcellular targeting is the ultimate challenge of nanomedicine, and much effort is being spent to shed light on the processes that govern the intracellular trafficking of nanoparticles.
Table 2.
Nucleus: KKKRKV (SV40 T antigen) [97,104-110]; KRPAATKKAGQAKKKKL (nucleoplasmin); GRKKRRQRRRPQ (HIV-1 TAT) [18,113]; DOPAC-MYIEALDKYAC-COOH.
Mitochondrion: MSVLTPLLLRGLTGSARRLPVPRAKIHWLC [132]; GKRK [140]; D[KLAKLAK]2; Schiller-Szeto (SS) tetrapeptides, SS-02 (Dmt-RFK), SS-20 (FRFK) and SS-31 (R-Dmt-KF) [128].
Endoplasmic reticulum: KDEL [153].
Dmt: 2,6-dimethyltyrosine; DOPAC: 3,4-dihydroxyphenylacetic acid; TAT: transactivator of transcription.
The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed.
No writing assistance was utilized in the production of this manuscript.
Open access
This work is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 License.
Cell-penetrating peptides
• Nanoparticles modified with cell-penetrating peptides (CPPs), for example transactivator of transcription (TAT)-modified nanoparticles, are able to enhance cellular uptake and increase cellular drug accumulation, thereby improving the anticancer properties of the described systems. CPP modifications can favor penetration of the blood-brain barrier.
Surface modification to avoid endosomal sequestration & degradation
• pH-responsive molecules used as constituents of nanocarriers favor the rupture of the endo/lysosomal membrane thereby resulting in the release of the carriers and their payload into the cell.
Nuclear targeting
• The pore size of nuclei is usually smaller than typical drug delivery systems; some nanoparticles, for example C60, are small enough to diffuse through these pores. Another approach to nuclear targeting is to functionalize the nanoparticle with nuclear localization signals. Nuclear targeting can also be accomplished by exploiting the high affinity of acridine for DNA or by applying a positive charge to the particle surface.
Mitochondrial targeting
• Requirements for mitochondrial targeting are lipophilicity and a delocalized positive charge; nitroxy-doxorubicin, Schiller-Szeto peptides, lipophilic cations and DQAsomes meet these requirements. Other strategies are based on modifying nanoparticles with mitochondrial targeting signals, in other words small peptides of 10 to 70 amino acids. Another peptide-based approach is the MITO-Porter system, which was designed for mitochondrial gene delivery.
Endoplasmic reticulum targeting
• Strategies to target the endoplasmic reticulum (ER) are based on conjugating ER localization signals (small peptides) or antibodies against its receptor to nanoparticles. Recent studies show that ER targeting improves the efficacy of vaccines for cancer immunotherapy. The latest biomimetic strategies devised to target the ER are based on liposomes that have the same phospholipid composition as ER vesicles.
Integrated approaches
• New strategies to overcome biological barriers are based on multiple surface modifications that confer several functionalities on the carrier. The main strategies are based on polymers or particles that have buffering properties, and on multistage systems that sequentially overcome various biological barriers. Another strategy is based on biomimetic approaches: the surface of nanocarriers can be modified with a natural membrane coating, which improves their biocompatibility and self-tolerance and locally increases the targeting of inflamed vasculature and its permeability.
References
Papers of special note have been highlighted as • of interest | 2018-04-03T02:42:55.142Z | 2015-07-01T00:00:00.000 | {
"year": 2015,
"sha1": "7236c8da54796cf15e077a689929eacc7ff3189f",
"oa_license": "CCBYNCND",
"oa_url": "https://www.futuremedicine.com/doi/pdf/10.2217/nnm.15.39",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7236c8da54796cf15e077a689929eacc7ff3189f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
260337279 | pes2o/s2orc | v3-fos-license | Population transcriptogenomics highlights impaired metabolism and small population sizes in tree frogs living in the Chernobyl Exclusion Zone
Background: Individual functional modifications shape the ability of wildlife populations to cope with anthropogenic environmental changes. But instead of an adaptive response, human-altered environments can generate a succession of deleterious functional changes leading to the extinction of the population. To study how persistent anthropogenic changes impacted local species' population status, we characterised population structure, genetic diversity and individual response of gene expression in the tree frog Hyla orientalis along a gradient of radioactive contamination around the Chernobyl nuclear power plant. Results: We detected lower effective population size in populations most exposed to ionizing radiation in the Chernobyl Exclusion Zone, which is not compensated by migrations from surrounding areas. We also highlighted a decreased body condition of frogs living in the most contaminated area, a distinctive transcriptomic signature and stop-gained mutations in genes involved in energy metabolism. While the association with dose will remain correlational until further experiments, a body of evidence suggests the direct or indirect involvement of radiation exposure in these changes. Conclusions: Despite ongoing migration and lower total dose rates absorbed than at the time of the accident, our results demonstrate that Hyla orientalis specimens living in the Chernobyl Exclusion Zone are still undergoing deleterious changes, emphasizing the long-term impacts of the nuclear disaster. Supplementary Information: The online version contains supplementary material available at 10.1186/s12915-023-01659-2.
Background
Rapid environmental change caused by human activities poses threats to many wild local species by increasing their vulnerability and notably increasing the risk of population extinction. Individual functional modifications of biological features (i.e. from gene to organism) determine this vulnerability [1]. Here, "function" is referring to a causal relationship between environmental trigger and individual response which impact the interaction between individuals and their environment [2]. These modifications can be detrimental, neutral or adaptive, as well as transmitted or non-transmitted. Functional adaptations in response to an altered environment have often been studied to disentangle plastic and selected processes [3][4][5]. But the interaction between evolutionary processes (irrespective of functional consequences, and adaptive nature) and plastic changes in the emergence of such functional modifications remains largely an open issue.
The current status of species and their ability to persist locally can be first explored by examining the evolutionary processes faced with contemporary environmental change [6]. By investigating population genetic diversity and structure, population genetics contributes towards disentangling the mechanisms that shape these evolutionary processes [7]. Short-term selection and changes of evolutionary potential in response to new abiotic and biotic interactions induced by human activities have often been emphasised for example in the case of urbanisation [8], climate change [9], or pollution [10]. This generally leads to a decrease in genetic diversity on standing variation or by selective sweeps. Other processes like migration can be also greatly modified by human activities and influence gene flow between populations, potentially increasing genetic diversity with population admixture [11] or decreasing genetic diversity by altering connectivity [12]. More broadly, in the case of small population size, random fixation of transmitted variations through genetic drift can lead to decrease in genetic diversity which can in turn affect population vulnerability [13,14]. Also, anthropogenic abiotic changes like pollution can induce de novo mutations which could lead to an increased genetic diversity [15], thereby providing the source of genetic variability upon which evolution acts, but may have deleterious effects by directly impairing protein function [16,17] or through indirect collateral effects [10,18,19]. In parallel with evolutionary processes, phenotypic plasticity interacts with individual functional changes and notably influences adaptive evolution. In small populations the increase of genetic drift limits the genetic potential on which selection can act and thus increases the importance of plastic changes to be able to cope with the altered environment [20]. On the other side, individual functional changes act also as feedback on evolutionary processes. For example, if plastic changes lead to a phenotype close to the fitness optimum, directional selection may only have a limited effect [20]. Combining the study of evolutionary processes (with or without direct individual functional consequences) with more functional-centred effects appears thus essential to estimate properly the vulnerability of populations facing anthropogenic pressures.
Pollutants constitute an interesting case study to analyse the evolutionary processes, phenotypic plasticity and functional responses of exposed organisms as they are examples of rapid and persistent environmental changes [21,22]. In particular, radioactive contaminations after major nuclear accidents like the Chernobyl accident are characterised by a wide range of consequences on wildlife [23][24][25] and long environmental persistence [26]. The short-term effects of high levels of radionuclides on wild organisms are relatively well described [23]. But even if the regulation in the most contaminated places (as for example in Ukraine with the establishment of the Chernobyl Exclusion Zone greatly limiting human activities [27,28]) promotes the presence of a large number of species (including large mammals [29]), we are still unable to describe the long-term vulnerability of populations and ecosystems [30,31]. A good example is the bank voles (Myodes glareolus) living in the Chernobyl Exclusion Zone (CEZ), for which a large number of studies showed that they are still experiencing individual functional modifications along the radioactive gradient more than 30 years after the nuclear power plant accident (e.g. colouration [32], cataracts [33], organ mass [34], DNA damage response [35]). These modifications were associated with decreased populations size [36] and thus question the impact on the vulnerability of populations in contaminated places.
Here, we take the Chernobyl study case to assess the long-term consequences of such a major anthropogenic environmental change and the nature of these consequences. In particular, we attempted to reconcile the study of evolutionary processes with a more individual, functionally centred approach. We focused on the Eastern tree frog Hyla orientalis, a species of particular interest because of its relatively small dispersal radius (about 0.5 km per generation [37]) and its exposure to radionuclides in different components of the environment, as it lives buried in the ground during winter, reproduces and develops in water, and lives as an adult in the vegetation [38]. These characteristics make this species relevant to characterise the effects of ionizing radiation (IR) and to inform on possible non-perceptible effects on other components of the biotic community, as proposed for 'sentinel species' [39,40]. While the total dose rates absorbed by male tree frogs during the breeding period over 3 years (from 2016 to 2018), ranging from 0.07 to 32.40 μSv/h [41], were estimated to be below the no-effect threshold values (i.e., < 42 μGy/h) determined by the International Commission on Radiological Protection (ICRP), a recent population genetics study indicated a singular evolution of CEZ populations since the accident compared to other European populations [42]. Analyses of the specific mitochondrial profile of tree frogs in the CEZ revealed a 100-fold higher rate of mitochondrial substitution and smaller effective population sizes compared to other European populations. At the same time, while several blood parameters were not shown to be dependent on total dose rate [43], the high mitochondrial substitution rate observed in the CEZ [42] pointed towards potential metabolic impairments.
The power of transcriptomic analysis, and especially of the study of gene expression levels, to explore the potential effects of pollutants on individual function has already been emphasised [44][45][46]. But transcriptomic analysis also appears particularly interesting for exploring the potential effects of pollutants by assessing systemic evolutionary processes in the transcribed regions through population genetics analysis, while estimating functional effects via the study of gene expression [47][48][49]. In this study, the de novo transcriptome assembly of H. orientalis allowed the comprehensive transcriptomic analysis of 87 individuals sampled in 2018 across a gradient of dose rates in the CEZ. The molecular changes potentially induced by radioactive contamination were assessed by dose-response modelling, with specific emphasis on the co-regulation of genetic networks with environmental variables, including the body condition index, which was used as an integrative measure of major physiological changes at the individual scale [50]. Expressed single-nucleotide polymorphisms (SNPs) derived from the transcriptomic data were used to analyse the population structure and genetic variation within the Chernobyl area and to assess the intensity of evolutionary processes ongoing inside the CEZ for this species.
Negative correlation between body condition and individual dose of ionizing radiation
To better estimate the current exposure to IR in wild tree frogs, 87 breeding male Eastern tree frogs (Hyla orientalis) were collected in the CEZ and in a non-contaminated area near Slavutych (Fig. 1a and Additional file 1: Table S1), and their individual total dose rate (ITDR) (Fig. 1b) and body condition index (BCI) were estimated. A statistically significant decrease in BCI along the contamination gradient was observed (Chisq = 4.48, p = 0.034), due to a higher frequency of individuals with BCI below 0 in highly contaminated sites compared to other sites (chi-square test, p = 0.0003; 61%, n = 43, in sites A18, C18 and D18 compared to 21%, n = 44, in sites B18, E18, F18, G18 and H18) (Fig. 1c).
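A minimal R sketch of such a test (the paper names only "linear mixed models"; lme4 and the data frame frogs with columns bci, itdr and site are assumptions), where the reported chi-square suggests a likelihood-ratio comparison of nested models:

library(lme4)
# BCI explained by log-dose, with sampling site as a random intercept
m1 <- lmer(bci ~ log10(itdr) + (1 | site), data = frogs, REML = FALSE)
m0 <- lmer(bci ~ 1 + (1 | site), data = frogs, REML = FALSE)
anova(m0, m1)  # likelihood-ratio test of the dose effect (Chisq, p-value)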
Geographically structured population and asymmetrical admixture in the CEZ
The de novo transcriptome assembly of Hyla orientalis was performed using the combined set of 5.9 billion RNA-Seq paired-end reads from 5 tissues (tibia muscle, heart, eye, brain and testis) obtained from three specimens collected near the Slavutych locality. The E90N50 of the assembly was 1690 bp, and 95% of the conserved genes in Tetrapoda were completely assembled (BUSCO) [51]. The reference map of the Hyla orientalis transcriptome was defined as a set of 17,323 non-redundant assembled contigs. Of the 7286 contigs encoding peptides detected by RNA-Seq in the muscle tissue (transcripts per million, TPM > 0.1), the corresponding proteins of 3611 (50%) were also detected by proteomics (Additional file 2). We then generated RNA-Seq data from the tibia muscle of the 87 individuals sampled in the CEZ and Slavutych region, with similar coverage (Additional file 4: Table S2, Additional file 2: Fig. S2a, and Additional file 3: Supplementary Material and Methods). The tibia muscle was dissected from one leg for all individuals as it is an abundant and easily accessible tissue in frogs, deeply innervated and with a high energy demand for locomotion [52]. From these data, we collected 374,102 variants and selected 157,677 SNPs (42.15%) based on stringent quality control criteria to assess genetic diversity. The nucleotide transition/transversion ratio (Ti/Tv) was between 2.0 and 2.2 for all individuals, as expected for this type of data [53]. To estimate the intensity of gene flow within the sampled area, and also to make an initial assessment of the relevance of the markers used, we first examined the population genetic structure. All populations inside and outside the CEZ were moderately differentiated, with G'st values between 0.054 and 0.115. The genetic distances (G'st/(1-G'st)) were significantly and positively correlated with geographical distances, indicative of isolation by distance (Mantel, r = 0.6581, p = 0.011). This isolation by distance remained significant even when controlling for dose rate differences (partial Mantel, r = 0.5915, p = 0.012). Thus, genetic differentiation is on average explained by geographical distances, and not only by the averaged total dose rates (ATDR) (see Materials and Methods for more information about the ATDR). To further describe population genetic structure in the CEZ, we conducted a principal component analysis (PCA) using bi-allelic SNPs.
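These tests could be reproduced along the following lines in R with the vegan package (cited in the Methods), assuming pre-computed distance matrices gdist (pairwise G'st/(1-G'st)), geo (log geographic distances) and dose (pairwise ATDR differences); the object names are hypothetical:

library(vegan)
mantel(gdist, geo, permutations = 9999)                # isolation by distance
mantel.partial(gdist, geo, dose, permutations = 9999)  # IBD controlling for dose rate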
The Slavutych populations (G18 and H18) were well separated from CEZ populations (Additional file 2: Fig. S2b), but only a relatively low proportion of the variation was explained by the two first principal components (PC) (5.3% for PC1 and 3.66% for PC2). To further infer relationships between genetic clusters, we carried out a discriminant analysis of principal components (DAPC) in which populations were implemented a priori. The populations were clearly distributed according to geography, with the Slavutych populations (G18 and H18) more differentiated from CEZ populations, as expected (Fig. 2a). The genetic structure of CEZ populations was also consistent with their geography. Indeed, the two southern populations B18 and F18 were close together and distant from the three central populations A18, C18 and D18, with the south-west population E18 bridging these two groups. We then assessed the probability of affiliation of individuals to each sampled population based on their genotype (Fig. 2b). Individuals displayed high membership to their population defined a priori, with little to moderate mixing between them, with the exception of the two Slavutych populations, which were completely separated from CEZ populations. The three central populations in the CEZ, A18, C18 and D18, presented the highest number of admixed individuals, reflecting their geographical proximity. We also noted a higher proportion of admixed individuals coming from the south (E18 and F18 populations) towards the centre of the CEZ, but not the converse (Fig. 2b). These observations were confirmed by non-supervised k-means clustering of genetic polymorphisms and by admixture analysis based on ancestry coefficient inferences, without a priori on population affiliation (Additional file 2: Fig. S2c and d).
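A sketch of the DAPC step with adegenet (the package named in the Methods), assuming gen is a genind object built from the bi-allelic SNPs with populations set a priori (object name hypothetical); eight PCs are retained as described in the Methods:

library(adegenet)
d <- dapc(gen, pop(gen), n.pca = 8, n.da = 2)  # 8 PCs (a-score criterion), 2 discriminant axes
scatter(d)                                     # populations on the discriminant axes
compoplot(d)                                   # STRUCTURE-like membership barplot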
Functional prediction of SNP outliers correlated with individual total dose rate (ITDR)
We next aimed at describing the potential selective processes ongoing in the CEZ by testing for loci associated with the dose gradient while accounting for the confounding effect of population structure. We detected 21,432 SNPs highly correlated with ITDR (q-value < 0.05), scattered among expressed exons of 5283 different contigs encoding functionally characterised orthologous genes. A total of 4138 non-synonymous variants were detected and associated with genes involved in ATPase activity (GO:0016887, q-value < 10⁻³) and small GTPase mediated signal transduction (GO:0051056, q-value < 10⁻²). Finally, we found that 159 SNPs in 143 different contigs were stop-gained variants predicted to have a functional impact on orthologous proteins with functions in energy metabolism, including IDH3G (Uniprot P51553), ALDH2 (Uniprot P05091) and LDHA (Uniprot P00338), as well as the stress factors AHSA1 (Uniprot O95433) and EIF2AK3 (Uniprot Q9NZJ5) (Additional file 5: Table S3). The number of homozygous stop-gained mutations deviating from Hardy-Weinberg (HW) equilibrium was higher in the highly contaminated populations of the CEZ compared to the others (exact test of HW equilibrium, p < 0.01; n = 8 in sites A18, C18, D18, n = 4 in sites B18, E18, F18, and n = 0 for G18 and H18) (Additional file 6: Table S4).
Negative relationship between genetic diversity indices and the averaged total dose rates (ATDR)
The nucleotide diversity (π) of populations located at the centre of the CEZ (A18, C18 and especially D18) was lower than that observed in the south of the CEZ (B18, E18, F18) and in the Slavutych region (G18 and H18) (Fig. 3a). Furthermore, π and individual heterozygosity were both negatively correlated with the ATDR (respectively, S = 148, rho = −0.760, p = 0.037; F = 9.445, p = 0.003) (Fig. 3a, b). As a decrease in heterozygosity could be a sign of increased inbreeding within the populations in the CEZ, we assessed the inbreeding coefficient (Fis) for each population. Average Fis per population ranged from −0.130 to −0.026, indicating a general excess of heterozygosity. Higher Fis indices were observed in the highly contaminated populations of the CEZ (Fig. 3c), and Fis values were positively correlated with ATDR (S = 12, rho = 0.857, p = 0.011), even when including only loci at HW equilibrium (S = 10, rho = 0.881, p = 0.007). Thus, the populations studied here do not show general signs of inbreeding, despite a decrease in genetic diversity in the most contaminated populations. We next assessed the variability of relatedness inferred by pairwise kinship coefficients between sites. Higher variations in relatedness were observed within the highly contaminated populations of the CEZ (Fig. 3d) (z = 5.73, tau = 0.253, p = 9.93×10⁻⁹), with notably a higher percentage of individuals with relatedness higher than zero in the three most exposed sites (15%) than in the less exposed sites (2%) (Chisq = 41.26, p = 1.33×10⁻¹⁰). Together, these results indicate that individuals sampled in the highly contaminated area of the CEZ do not show signs of higher inbreeding but display a higher degree of relatedness.
(Fig. 3 Population genetics summary and relatedness of Hyla orientalis in the Chernobyl region. a Correlation plot of nucleotide diversity (π) on averaged total dose rate (ATDR), represented as the mean and standard error per sampled population. b Individual heterozygosity on individual total dose rate (ITDR); the linear model is represented by a black line with 95% confidence intervals. c Representation of the distribution of individual heterozygosity. d Degree of relatedness per population.)
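The correlations reported here are plain rank tests; a minimal R sketch, assuming per-population vectors pi_pop, fis_pop and atdr_pop (hypothetical names):

cor.test(pi_pop, atdr_pop, method = "spearman")   # nucleotide diversity vs ATDR
cor.test(fis_pop, atdr_pop, method = "spearman")  # inbreeding coefficient vs ATDR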
Dose-dependent effects in the CEZ assessed by transcriptomics clustering analysis
Individuals were discriminated into two main groups by hierarchical clustering of contig expression (Additional file 2: Fig. S3). To test how ITDR contributes to this observed pattern, we tested the correlation between individual cophenetic distances (as an estimate of similarity between individuals) and ITDR. We found a significant positive correlation between cophenetic distances and ITDR (Mantel, r = 0.121, p = 1×10⁻⁴). Importantly, this correlation remained significant even when controlling for genetic distances (partial Mantel, r = 0.126, p = 1×10⁻⁴) or geographical distances (partial Mantel, r = 0.101, p = 2×10⁻⁴). Thus, these results highlight that the changes in expression patterns observed by transcriptomic analysis are correlated with ITDR, and importantly, this correlation does not depend only on geography or genetics.
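One way to set this up in R, assuming expr is an individuals × contigs expression matrix, itdr an individual dose vector and gdist a genetic distance matrix (all names hypothetical, and the Ward linkage is an assumption):

library(vegan)
hc   <- hclust(dist(expr), method = "ward.D2")   # hierarchical clustering of individuals
coph <- cophenetic(hc)                           # inter-individual cophenetic distances
mantel(coph, dist(log(itdr)), permutations = 9999)
mantel.partial(coph, dist(log(itdr)), gdist, permutations = 9999)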
Impact of long-term exposure to radiocontaminants in the CEZ by transcriptomic analysis
The differential expression analysis of the 17,323 contigs was performed using individuals sampled outside the CEZ as reference. Fold changes were correlated (rho > 0.63) whether using either reference population (G18 or H18) or a lowly contaminated site (F18) as reference, showing that the dose effects were prominent and that the choice of reference site did not impair the identification of differentially expressed contigs (Additional file 2: Fig. S4). Using the most distant site (G18) as reference, we found 477 differentially expressed genes (DEGs) shared across the pairwise comparisons for the highly contaminated sites (A18, C18, D18) and 367 DEGs shared across those for the lowly contaminated sites (B18, E18, F18). The differentially expressed contigs were involved in oxidoreductase activity (GO:0016705, q-value < 1×10⁻²) and glycolysis and gluconeogenesis (hsa00010, q-value < 1×10⁻²) (Additional file 7: Table S5).
We then analysed the transcriptomic changes potentially associated with the environmental changes in the 87 individuals. Several groups of contigs were characterised by relatively higher expression in individuals sampled mainly from sites A18, C18 and D18 (clusters 1 and 2), while a second group of contigs was expressed at low levels in individuals from sampling sites A18, C18 and D18 (clusters 3, 4 and 6) (Fig. 4a). These results show that individuals sampled in highly contaminated sites inside the CEZ have a specific transcriptome signature that discriminates them from individuals originating from lowly contaminated sites in the CEZ or outside the CEZ. We then assessed which biological pathways were deregulated in individuals sampled in the CEZ by analysing the GO biological processes, GO molecular functions and KEGG pathways enriched in the different clusters obtained after hierarchical clustering. Contigs present in the cluster characterised by higher levels of expression in the highly contaminated sites were enriched (among others) in aerobic respiration (GO:0009060, q-value < 10⁻⁶), glycolysis and gluconeogenesis (hsa00010, q-value < 10⁻⁵) and muscle cell development (GO:0055001, q-value = 0.0016). Contigs lowly expressed in highly contaminated individuals in the CEZ were involved in immune response (GO:0002252, q-value < 10⁻⁸) and tumor necrosis factor signalling (GO:0032640, q-value < 10⁻²) (Fig. 4b, c). Importantly, adjusting contig expression by genetic distances led to clusters of contig expression with similar GO term composition (Additional file 2: Fig. S5). A complete list of the biological pathways enriched in each cluster is provided in Additional file 8: Table S6. Gene set enrichment analysis was then performed on the different sets of differentially expressed genes (using G18 as a reference, as before). The differentially expressed genes from the highly contaminated sites (A18, C18 and D18) were enriched in proteins involved in oxidoreduction processes, while differentially expressed genes in the lowly contaminated sites (B18, E18 and F18) were enriched in proteins with functions in lipid metabolism (phosphatidylcholine−sterol O−acyltransferase activator activity). Signal transducer molecules and structural components of the actin cytoskeleton were enriched in all gene sets (Fig. 4d). We then modelled contig expression profiles along the dose gradient [54] and computed benchmark doses (BMD1SD) to rank the sensitivity of the deregulated GO pathways (Fig. 5a). The subset of contigs with dose-response models was notably involved in the response to oxidative stress (GO:0006979, q-value = 0.016), cellular response to lipid (GO:0071396, q-value = 0.012) and muscle system process (GO:0003012, q-value = 0.042), thereby showing a deregulation of contigs involved in these biological pathways as a function of ITDR. The complete list of the deregulated pathways is provided in Additional file 9: Table S7. Ranking of the BMD1SD highlighted that the pathway involved in the reduction of oxidative stress was deregulated at lower ITDR compared to contigs related to muscle contraction (BMD1SD < 10⁻² μGy.h⁻¹ and about 1 μGy.h⁻¹, respectively) (Fig. 5b, c).
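The dose-response modelling step could look as follows with the DRomics R package cited here [54], assuming a count table whose first row holds the ITDR of each sample (file name hypothetical):

library(DRomics)
o <- RNAseqdata("counts_with_itdr.txt", transfo.method = "rlog")  # first row = dose
s <- itemselect(o, select.method = "quadratic", FDR = 0.05)       # dose-responsive contigs
f <- drcfit(s)                                                    # fit dose-response models
b <- bmdcalc(f, z = 1)                                            # benchmark doses (BMD-1SD)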
We then applied a non-supervised machine learning method based on weighted gene co-expression network analysis [55] to identify modules of co-expressed contigs associated with variations of the BCI and ITDR (Fig. 6a). One module was significantly associated with ITDR and enriched in contigs with functions in mitochondrial ATP synthesis coupled electron transport (q-value < 10⁻¹⁹) and aerobic respiration (q-value < 10⁻²⁷). Another module was associated with the body condition index (BCI) and contained contigs involved in extracellular matrix structural constituents (q-value < 10⁻³⁹) (Fig. 6b and Additional file 10: Table S8). The correlation between the module eigengene and ITDR remained significant after controlling for genetic distances (p-value = 0.0017, see Additional file 3: Supplementary Material and Methods). Finally, modules correlated with ITDR or BCI after correcting contig expression for genetic distance were enriched in ATP synthesis (q-value < 10⁻⁴) and oxidative phosphorylation (q-value < 10⁻⁴) (Additional file 2: Fig. S6 and Additional file 11: Table S9). Together, these results demonstrate that the transcriptional regulation of genes involved in several biological processes, such as ATPase activity, hexose metabolism and muscle function, is affected by IR, and that these changes are correlated with a decreased BCI.
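A compact sketch of this step with the WGCNA package [55] in R, assuming expr (samples × contigs) and a numeric trait table traits with ITDR and BCI columns; the soft-thresholding power of 6 is an assumption, not the paper's value:

library(WGCNA)
net <- blockwiseModules(expr, power = 6, TOMType = "signed", minModuleSize = 30)
MEs <- moduleEigengenes(expr, colors = net$colors)$eigengenes
cor(MEs, traits, use = "p")  # module-trait correlations (e.g., with ITDR and BCI)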
Discussion
Tree frog (Hyla orientalis) populations living in highly contaminated sites inside the CEZ showed lower nuclear genetic diversity and higher relatedness, as a result of small population sizes. In addition to the underlying genetic drift process, we highlighted stop-gained mutations and changes in transcriptional profile strongly associated with the individual dose received, which are predicted to have functional consequences for genes involved in energy metabolism and could in turn explain phenotypic changes, including in the body condition index.
The analysis of genetic variants showed a significant negative correlation of average nucleotide diversity and average individual heterozygosity with ITDR, indicative of genetic drift inside the CEZ, likely due to small effective population sizes. Decreased population sizes associated with the contamination of the environment are also corroborated by the higher propensity of sampled individuals to be related in the most contaminated sites. Our data on the inbreeding coefficient did not reveal signs of higher consanguinity inside the CEZ at the level of the genome, but rather high variability in relatedness, with a notably higher degree of relatedness in the most contaminated sites.
By analysing population genetic structure, we highlighted that populations inside and outside the CEZ were genetically differentiated. The Slavutych populations were more differentiated compared to those collected inside the CEZ, pointing towards a genetic specificity of the CEZ populations even though the geographical distance between Slavutych and CEZ populations is low (36 km for the closest and 66 km on average). The presence of a higher number of admixed individuals in the centre of the CEZ indicates directed gene flow towards the most contaminated places, which can reflect past or ongoing migrations from less towards more contaminated sites, either as a consequence of recolonisation after frog population declines or of the decrease of anthropogenic activities in the centre of the CEZ, as proposed for barn swallows [56] and bank voles [57]. The fact that population size is smaller in the most contaminated sites supports the notion that this dynamic is fuelled by higher mortality or lower reproduction [58]. Thus, migration and elevated mutation rates seem insufficient to offset the loss of genetic diversity by bringing new genetic variants into the most contaminated sites of the CEZ. While we cannot exclude that the lower genetic diversity observed in the most contaminated places could be the result of strong population bottlenecks at the time of the accident (corroborated by the massive functional impacts on amphibians observed then) [23], previous simulations carried out on a mitochondrial marker suggest current small effective population sizes [42]. Current small effective population sizes can be indicative of decreased individual viability in the most contaminated places. Using BCI as an integrative trait to evaluate individual performance linked with energetic status [50], and sometimes proposed as a metric of fitness [59,60], we showed a negative correlation of this parameter with ITDR. Thus, exposure to IR is associated with functional modifications in the CEZ. Because of the wide range of mechanisms implicated in BCI variations [61], a non-supervised analysis of individual expression variations was performed in order to identify the main functional pathways associated with exposure to IR and mass decrease. The transcriptomic analysis pointed towards an effect of exposure to IR inside the CEZ, as lowly versus highly contaminated sites were discriminated based on gene expression. The gene expression pattern (observed at the individual level) is explained by ITDR, and this relation remained significant when controlling for geographical or genetic distances. The changes in gene expression observed in the CEZ are thus not solely a consequence of genetic differentiation inside the CEZ but rather the result of ITDR differences, although other environmental characteristics specific to the CEZ cannot be completely excluded. Furthermore, the biological effects detected by the transcriptomic analysis collectively point toward deregulations of mitochondrial activity and muscle functions in frogs living in the CEZ, which fits with the known effects of chronic exposure to IR [62][63][64][65].
The results of the gene expression analysis suggest a substantial role of radioactive contamination in the functional differences between individuals, notably impacting major energetic pathways. The proximate mechanism at the origin of these modifications remains unknown. Indeed, it could be both the effect of plastic changes and an adaptive response on a trait selected over several generations, as gene expression levels are known to be heritable and thus potentially under selection [66,67]. Genetic drift induces a lower range of transmitted variations, which can have functional consequences on which selection can act, and thus renders plasticity particularly valuable if the functional changes bring individuals closer to a fitness optimum [20]. Moreover, the functional response observed at the transcriptomic level appears constrained (as individuals from the most contaminated places show very similar gene expression patterns) and may thus be indicative of an adaptive response. A more in-depth analysis of expression diversity in each site for the different contigs could be a way to evaluate the nature of this response and which functions are involved [68,69]. Although this response seems non-optimal, as it does not compensate for the effective population size decline inferred in the most contaminated places, this does not exclude some of these changes from being subject to selection.
Using RNA-seq data to analyse SNPs represents a promising and cost-effective alternative to whole genome sequencing, especially for non-model species for which no reference genome is available. The main limitation, however, is that SNPs are obtained only from exons that must be expressed at sufficient levels. While we cannot exclude that SNP detection from expressed genes might bias the estimation of genetic parameters, recent analyses show that RNA-seq data can be used with good confidence to detect SNPs [70,71]. It is not yet possible to discriminate genetic variants merely correlated with radiocontamination from loci genuinely under selection, but the many genes involved in stress response and ATPase activity whose variants correlate with ITDR may be indicative of a selective process acting on these functions. Several loci strongly associated with ITDR were found to be stop-gained mutations predicted to affect the function of several proteins involved in mitochondrial energy production (IDH3G, ALDH2, LDHA). Notably, we observed an increased number of individuals with stop-gained mutations in these genes in the CEZ. Because of the predicted deleterious functional impact of these changes, it is most likely that these genetic variants do not represent cases of selection but rather de novo radiation-induced mutations that accumulate in the small populations of frogs through inbreeding. Ultimately, these changes are in line with the transcriptomic data that showed changes in ATP metabolic process and glucose metabolism. These changes in metabolism observed at the molecular level in individuals exposed to higher IR doses were correlated with a decreased BCI. In addition, other biological processes were detected by the transcriptomic analysis, including changes in the expression of genes involved in muscle function and the unfolded-protein stress response. As chronic exposure to relatively low levels of IR was shown to alter the structure of myofibrils [64], it is possible that our results highlight a disruption of myofibers in frog muscle, an organ with a high energy demand and rich in mitochondria. Together, these data collectively point toward a frequent impairment of mitochondrial function, which could cause the degraded body condition of individuals living in the CEZ. Despite congruent functional changes in individuals found in highly contaminated areas of the CEZ, our data do not demonstrate selection of traits favourable to maintaining frog populations in the CEZ. Rather, the most parsimonious hypothesis is that frog populations inside the CEZ are maintained, in part, by migration from less contaminated territories.
More than 30 years after the Chernobyl nuclear power plant accident, numerous studies have shown contemporary functional modifications in wildlife associated with differences in exposure to IR [35,[72][73][74][75]. While generalisation to every exposed species is not possible [31,76,77], and the consequences at the ecosystem scale and the mechanisms involved in these changes remain subject to debate [31], we highlighted the interest of taking an evolutionary perspective to interpret gene expression differences. The functional changes observed in the most contaminated places are expected to influence fitness and thus show the need to study the long-term specific processes taking place in wildlife in polluted areas. Mass loss and body condition in particular are expected to influence reproduction by modifying reproductive investment [78,79]. These fitness costs and the transmission of functional changes through generations should be better addressed in the future, for example with the establishment of field-based reciprocal transplants [80]. We also highlighted the potential key role of energy metabolism in the response to radiation exposure from a body of evidence: changes in body condition, differential expression of genes involved in energy metabolism, and association of dose rates with genetic variants in genes involved in mitochondrial energy production. More generally, we analysed a specific type of response to persistent environmental change, which combines a small effective population size with an increase of deleterious mutations at some loci, directed functional plastic changes and viability loss. While these changes might allow the persistence of populations facing the radiocontaminated environment, this study questions the constraints that an eroded genetic diversity places on possible adaptation, and in particular on 'genetic assimilation', which could lead to the fixation of initially plastic phenotypes [81,82].
Conclusions
The combination of gene expression study, SNP data and population genetics analyses allows us to identify genetic disturbances that were previously inaccessible [42]. In a radio-contaminated environment such as the CEZ, even several decades after the nuclear accident and with total absorbed dose rates that the ICRP has so far estimated to have no deleterious effects on frogs [41,83], individuals inhabiting the CEZ still display signs of significant disturbances such as genetic erosion, disruption of transcriptomic/metabolic pathways and changes in physiological traits. These changes are consistent with altered metabolism and fitness costs resulting from long-term radiation exposure, but the causal relationship with radiation remains to be established. Future studies in radiocontaminated areas should use a combination of "omics" data and population genetics analyses to rigorously investigate possible hidden deleterious effects.
Sampling of Hyla orientalis and dosimetry
The individuals used in the present study correspond to samples collected in Ukraine in 2018, as described before [42,43]. Briefly, calling male Hyla orientalis individuals were collected during the night in May-June 2018 from 8 different wetlands in Ukraine, including 6 sites in the CEZ (A18, B18, C18, D18, E18, F18) and two sites 10 km and 45 km east of the Chernobyl NPP (G18 and H18, respectively). Hereafter, these sites are referred to as populations in the sense of "population sample". The 6 sites inside the CEZ covered a gradient of ambient dose rates ranging from 0.1 to 16.2 μGy.h⁻¹, while dose rates close to background were measured for the 2 sites outside the CEZ (0.04 μGy.h⁻¹) (radiometer MKS-AT6130, Atomtex). The absence of trace metallic contamination (including uranium, thorium and plutonium isotopes) in sites G18 and H18 was confirmed by elemental analysis of water samples (see Elemental Analysis). Individuals were sampled inside the CEZ (n = 73) and in the 2 non-contaminated sites (G18 and H18) (n = 14). Three additional individuals were sampled to produce the reference transcriptome (see Additional Material and Methods). After capture, frogs were kept individually in a wet plastic container overnight, then measured, weighed and dissected as described before [42,43]. The tibia (gastrocnemius) muscle was dissected from one leg for all individuals as it is an abundant and easily accessible tissue in frogs, deeply innervated and with a high energy demand for locomotion. The other leg was used for reconstruction of the internal contamination (see below). All tibia samples used for RNA extraction were immersed in individual tubes with 1 ml of RNAlater (Sigma) and kept in liquid nitrogen. Internal contamination with 90Sr and 137Cs was measured for each individual in bone and leg muscle, respectively, at the International Radioecology Laboratory (IRL) (Slavutych, Ukraine) to reconstruct the total internal dose, as described before [41,42]. The individual total dose rate (ITDR) in μGy.h⁻¹ was reconstructed from both internal and external exposures using EDEN [84], considering both soil and water activities as before. Averaged total dose rates (ATDR) were obtained by computing the mean ITDR per site. All samples were kept deep frozen until their transport to the Institute for Radiological Protection and Nuclear Safety (IRSN) laboratory (Cadarache, France).
RNA extraction and sequencing libraries
The 87 individuals sampled inside and outside the CEZ were subjected to RNA-Seq analysis. Total RNA was extracted from tibia muscle with the Quick-DNA/RNA Miniprep kit (Zymo Research) following the manufacturer's instructions. RNA integrity and concentrations were assessed on RNA Nano Chips (Bioanalyzer 2100, Agilent). All samples had an RNA integrity index > 8 and were used for library preparation with the Stranded mRNA Prep Ligation kit with IDT UD indexes (Illumina) following the manufacturer's instructions. After quality check (size and concentration) on DNA1000 Chips (Bioanalyzer 2100, Agilent), libraries were sequenced on a NovaSeq6000 platform to produce 100- or 150-base paired-end reads using S4 flow cells (Clinical Research Sequencing Platform, Broad Institute, MIT, United States). All fastq files were quality checked with an in-house pipeline relying on fastqc and fastx-toolkit.
Transcriptomics analysis
The de novo assembly of the Hyla orientalis transcriptome was made with Trinity [86] using the complete dataset obtained from five different tissue types (tibia muscle, heart, eye, brain and testis) from 3 additional individuals sampled in a non-contaminated site. Completeness of the Hyla orientalis transcriptome assembly was assessed by BUSCO analysis [51]. We searched for similarities to SwissProt [87] and UniRef90 [88] (E < 1×10⁻³) using blastx and blastp [89] with the assembled contigs and predicted ORFs, respectively. All annotations were collected in an SQLite relational database using Trinotate [90] for annotating putative functional characteristics. The quality of the assembled contigs was confirmed by comparing the sequences of the predicted polypeptides to proteomics data obtained by mass spectrometry from 3 muscles of the same individuals. Proteins were extracted as before [64]. Peptides were analysed using liquid nanochromatography online in front of an Orbitrap Fusion Lumos Tribrid mass spectrometer (Thermo Fisher Scientific) as previously described [91]. For transcript abundance estimation, reads were mapped against the de novo assembled Hyla orientalis transcriptome using Bowtie2 [92] and quantified by RSEM [93]. Differential expression analysis was performed with DESeq2 [94]. Dose-response modelling and BMR (benchmark response) estimation were computed with DRomics [54]. Weighted gene co-expression network analysis was performed with WGCNA [55]. See Additional file 3: Supplementary Material and Methods and Additional file 2: Fig. S1 for details on the methods for the transcriptomic data analysis.
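The differential expression step with DESeq2 [94] could be sketched as follows in R, assuming a counts matrix (contigs × samples, from RSEM) and a coldata table with a site column (names hypothetical), here comparing a highly contaminated site against the G18 reference:

library(DESeq2)
dds <- DESeqDataSetFromMatrix(countData = round(counts), colData = coldata,
                              design = ~ site)
dds <- DESeq(dds)
res <- results(dds, contrast = c("site", "A18", "G18"))  # high-dose site vs reference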
Population genetics summary statistics
Nucleotide diversity (π) for each population is the mean of the nucleotide diversity of each contig, calculated as the average number of nucleotide differences (θπ) divided by the total number of positions (segregating or non-segregating), excluding positions with missing data in any individual. The individual heterozygosity ratio was calculated as the number of heterozygous positions divided by the number of segregating sites used for that individual. π and the individual heterozygosity were computed with DnaSP [98]. Expected (He) and observed (Ho) heterozygosity of each population for each contig were computed with PLINK [99], and the inbreeding coefficient (Fis) as 1 − (Ho/He) [100]. We then analysed the distribution of Fis values for the different loci for each population. To evaluate the contribution of loci with extreme values, we compared Fis between populations solely for loci at Hardy-Weinberg equilibrium.
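For intuition, these quantities can also be computed directly from a genotype matrix in R; a rough sketch, assuming geno codes individuals × SNPs as 0/1/2 minor-allele counts (hypothetical object, genotyped sites used in place of segregating sites):

het_ind <- rowSums(geno == 1, na.rm = TRUE) / rowSums(!is.na(geno))  # individual heterozygosity
p   <- colMeans(geno, na.rm = TRUE) / 2        # per-locus allele frequency
He  <- 2 * p * (1 - p)                         # expected heterozygosity
Ho  <- colMeans(geno == 1, na.rm = TRUE)       # observed heterozygosity
Fis <- 1 - mean(Ho, na.rm = TRUE) / mean(He, na.rm = TRUE)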
Population structure
Genetic variability was computed by pairwise Hedrick G'st analysis [101] between all populations with vcfR v.1.12.0 [102]. Multivariate analyses were performed on bi-allelic SNPs to take the different loci into account. Principal component analysis (PCA) and discriminant analysis of principal components (DAPC) were conducted with the adegenet v.2.1.3 package [103]. For DAPC, eight principal components were retained, representing 21% of the total variance, to keep sufficient power of discrimination between observed and random effects and to avoid overfitting (a-score criterion). We retained and interpreted the two first discriminant functions, which were the most informative. Group memberships of the DAPC were represented with a "STRUCTURE-like" barplot to investigate differentiation between populations and their level of admixture. In addition, ancestry estimation was obtained from the individual SNP data with the admixture algorithm using k = 7 [104]. In order to make inferences about potential effective size differences between populations, we estimated the individual pairwise kinship coefficient [105] for individuals affiliated at more than 95% to their initial population (to exclude potentially recent migrants). The identification of genetic clusters without a priori was made with k-means clustering on all PCs with k = 8, corresponding to the eight sampled populations as the maximum number of clusters.
Identification of loci potentially under selection
To identify loci potentially under selection along the contamination gradient, we used a Latent Factor Mixed Model (LFMM) implemented in the LEA v.2.8.0 package [106]. This model provides a genome-wide ecological association study by fitting a statistical model that includes population structure, which makes it possible to test the association between genetic polymorphism and an environmental variable. Here, by first running non-negative matrix factorisation algorithms (sNMF), 3 clusters were selected to optimise the cross-entropy criterion. LFMMs were then performed with 5 runs (iterations = 6000, burnin = 3000) with ITDR as the environmental variable, and outliers were identified using q-values (FDR, Benjamini-Hochberg) < 0.05. Genomic variant annotation and functional effect prediction were made with SnpEff [107]. Bi-allelic SNP outliers predicted to cause homozygous stop-gained mutations were tested for deviation from HW equilibrium with the R package HardyWeinberg version 1.4.1, using the HWExact test option.
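A minimal sketch of this association scan with LEA in R, assuming genotype and environment files frogs.lfmm and itdr.env (hypothetical file names); the inflation-factor recalibration follows the package's documented procedure:

library(LEA)
proj <- lfmm("frogs.lfmm", "itdr.env", K = 3,
             repetitions = 5, iterations = 6000, burnin = 3000)
z  <- z.scores(proj, K = 3)
zm <- apply(z, 1, median)                                # combine the 5 runs
lambda <- median(zm^2) / qchisq(0.5, df = 1)             # genomic inflation factor
pv <- pchisq(zm^2 / lambda, df = 1, lower.tail = FALSE)  # then FDR-adjust (q < 0.05)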
Statistical analysis of the link between ionizing radiation and body condition, genetic diversity, and expression
Body condition of Eastern tree frogs was estimated with the residuals of the regression of log10(body mass) against log10(snout-vent length) (Ri) [108,109]. Linear mixed models were used to test the link between log(ITDR) and the body condition index, with site as a random factor. Linear models were used to test the link between log(ITDR) and individual heterozygosity. Correlations between genetic diversity indices (π and Fis) and ATDR were tested with non-parametric Spearman's rank tests. To investigate potential changes in the variance of relatedness along the contamination gradient, the correlation between ATDR and the residuals of relatedness for each site, calculated as the absolute deviation of relatedness from the median relatedness of each site, was tested with a non-parametric Kendall's test. At the population level, the correlation between log(geographical distances) (in metres) and genetic distances (G'st/(1−G'st)) was tested using Mantel's test [110]. The dependence of this correlation on pairwise absolute ATDR differences was tested using a partial Mantel test. At the individual level, Mantel tests were used to test the correlation between cophenetic distances (i.e., inter-individual similarity), obtained by the hierarchical clustering of 5735 differentially expressed contigs, and log(ITDR) or log(geographical distances) (see also Additional Material and Methods). The dependence of these correlations on log(geographical distances) and genetic distances, respectively, was tested with partial Mantel tests. Mantel and partial Mantel tests were carried out with the vegan v.2.5-6 package [111], and significance was estimated by permutation tests using 9999 permutations. Thresholds of significance were all set at p < 0.05. All tests were carried out in R version 3.6.1 (R Foundation for Statistical Computing, Vienna, Austria; https://www.R-project.org/).
"year": 2023,
"sha1": "9a208f20c8b2fcc965205b1c90255312eb40f40a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "9a208f20c8b2fcc965205b1c90255312eb40f40a",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Positive equilibria of power law kinetics on networks with independent linkage classes
Studies of the set of positive equilibria ($E_+$) of kinetic systems have focused on mass action, and much less on power law kinetic (PLK) systems, even for PL-RDK systems (PLK systems where two reactions with identical reactant complexes have the same kinetic order vectors). For mass action, reactions with different reactants have different kinetic order rows. A PL-RDK system satisfying this property is called factor span surjective (PL-FSK). In this work, we show that a cycle terminal PL-FSK system with $E_+\ne \varnothing$ that has independent linkage classes (ILC) is a poly-PLP system, i.e., $E_+$ is the disjoint union of log-parametrized sets. The key insight for the extension is that factor span surjectivity induces an isomorphic digraph structure on the kinetic complexes. The result also completes, for ILC networks, the structural analysis of the original complex balanced generalized mass action systems (GMAS) by M\"uller and Regensburger. We also identify a large set of PL-RDK systems where non-emptiness of $E_+$ is a necessary and sufficient condition for non-emptiness of each set of positive equilibria for each linkage class. These results extend those of Boros on mass action systems with ILC. We conclude this paper with two applications of our results. Firstly, we consider absolute complex balancing (ACB), i.e., the property that each positive equilibrium is complex balanced, in poly-PLP systems. Finally, we use the new results to study absolute concentration robustness (ACR) in these systems. In particular, we obtain a species hyperplane containment criterion to determine ACR in the system species.
Introduction
A power law kinetic system $(\mathcal{N}, K)$ consists of a chemical reaction network $\mathcal{N}$ and a kinetics $K$ which assigns to each reaction $q$ a function $K_q$ of the form $K_q(x) = k_q \prod_{i=1}^{m} x_i^{F_{qi}}$ for $q = 1, \ldots, r$, with $k_q \in \mathbb{R}_{>0}$, $F_{qi} \in \mathbb{R}$ and $x \in \Omega$, a subset of $\mathbb{R}^m_{\geq 0}$ containing $\mathbb{R}^m_{>0}$. The $r \times m$ matrix $[F_{qi}]$ is called the system's kinetic order matrix $F$, and each $k_q$ is a rate constant. Mass action systems can be viewed as a subset of the power law kinetic (PLK) systems, with the kinetic order matrix consisting of the stoichiometric coefficients of each reaction's reactant complex. PLK systems in which, as in mass action systems, branching reactions of a reactant complex have identical rows in the kinetic order matrix form the subset of PL-RDK systems (RDK = reactant-determined kinetic orders); its (non-empty) complement is denoted by PL-NDK. A mass action system also has the additional property that reactions with different reactants have different kinetic order rows -- a PL-RDK system with this property is called a factor span surjective (denoted by PL-FSK) system.
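As a quick numerical illustration (not from the paper), a power law rate can be evaluated in R as follows, with all names and values hypothetical; note that negative kinetic orders are allowed:

# K_q(x) = k_q * prod_i x_i^Fmat[q, i]
pl_rate <- function(q, x, k, Fmat) k[q] * prod(x^Fmat[q, ])
Fmat <- matrix(c(0.5, -1), nrow = 1)   # one reaction, two species
pl_rate(1, x = c(4, 2), k = 3, Fmat)   # 3 * 4^0.5 * 2^(-1) = 3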
The study of power law kinetic systems in Chemical Reaction Network Theory (CRNT) was revived by S. Müller and G. Regensburger in 2012 with the novel, more geometric concepts of generalized mass action systems (GMAS) and kinetic deficiency [18]. In a GMAS, the rows of the kinetic order matrix of a PLK system $(\mathcal{N}, K)$ are called kinetic complexes and collected into a set $\tilde{\mathcal{C}}$, which for a weakly reversible network $\mathcal{N}$ is the image of a bijective map $\varphi: \mathcal{C} \to \tilde{\mathcal{C}}$. The requirement that $\varphi$ be a surjective map is equivalent to the system being PL-RDK, and injectivity to its factor span surjectivity (i.e., being PL-FSK). They showed that complex balanced GMAS share many properties with their mass action counterparts, including the log parametrization of their complex balanced equilibria, i.e., their set $Z_+(\mathcal{N}, K) = \{x \in \mathbb{R}^{\mathcal{S}}_{>0} \mid \log(x) - \log(x^*) \in \tilde{S}^{\perp}\}$, where $x^*$ is a given complex balanced equilibrium and $\tilde{S}$ is the system's kinetic order subspace (the kinetic analogue of the stoichiometric subspace $S$). In [19], they expanded the GMAS concept by dropping the injectivity requirement for $\varphi$ and extended their results to complex balanced PL-RDK systems.
In the past decade since the work of Müller and Regensburger, numerous advances have been made in the study of positive equilibria of non-mass action PLK systems (e.g., [22], [17], [23], [3], [10], [9]). However, they have also exclusively focused on the subset of complex balanced equilibria. Even the results of Talabis et al. [22] and Fortun et al. [10] on the set of (all) positive equilibria $E_+(\mathcal{N}, K)$ of weakly reversible PLK systems in their Deficiency Zero and Deficiency One Theorems can be viewed as results on complex balancing, since in these cases $E_+(\mathcal{N}, K) = Z_+(\mathcal{N}, K)$ [15]. To our knowledge, new results have been published only for non-weakly reversible "$T$-rank maximal" kinetic (PL-TIK) systems satisfying the conditions of the Deficiency One Theorem, and for weakly reversible PL-NIK systems (i.e., systems whose kinetic orders are all non-negative) on conservative and concordant networks, as a special case of a result of Shinar and Feinberg [21] on weakly monotonic kinetic systems ($T$ is the augmented matrix of kinetic complexes with the non-reactant columns deleted). The goal of this paper is to contribute two new results in this regard.
A network is cycle terminal if each of its complexes is a reactant complex. Our first main result establishes that any cycle terminal PL-FSK system with $E_+(\mathcal{N}, K) \neq \varnothing$ that has independent linkage classes (ILC) is a poly-PLP system, i.e., $E_+(\mathcal{N}, K)$ is the disjoint union of sets of the form $\{x \in \mathbb{R}^{\mathcal{S}}_{>0} \mid \log(x) - \log(x^*) \in \tilde{S}^{\perp}\}$, where the $x^*$ are given equilibria and the union, when finite, has $|E_+(\mathcal{N}, K) \cap Q|$ terms. This result extends the results of Boros in his PhD thesis [2] on mass action systems with ILC -- in fact, many of our proofs follow his arguments closely. The key insight for the extension is that factor span surjectivity induces an isomorphic digraph structure on the kinetic complexes, and hence a bijection between their decompositions. This result also completes, for ILC networks, the structural analysis of the original complex balanced GMAS by Müller and Regensburger, which focused on the subset $Z_+(\mathcal{N}, K)$ of complex balanced equilibria. Furthermore, it identifies the class of systems which includes the counterexample of Jose et al. [15] for the extension of the theorem of Horn-Jackson on absolutely complex balanced (ACB) systems to complex balanced PL-RDK systems.
It follows from Feinberg's Decomposition Theorem that for a network with ILC and any kinetics, $E_+(\mathcal{N}, K) \neq \varnothing \Rightarrow E_+(\mathcal{L}, K) \neq \varnothing$ for each linkage class $\mathcal{L}$. Our second main result identifies a set of PL-RDK systems for which the condition is also sufficient, i.e., $E_+(\mathcal{L}, K) \neq \varnothing$ for each linkage class $\mathcal{L}$ implies $E_+(\mathcal{N}, K) \neq \varnothing$. This result also extends a result of Boros on mass action systems with ILC. Furthermore, it extends, for networks with ILC, results of Talabis et al. [22] on PL-TIK systems to their superset of $T$-independent PL-RDK systems.
We conclude the paper with two applications of our results. In the first application, we consider absolute complex balancing in poly-PLP systems. A complex balanced kinetic system, i.e., one with $Z_+(\mathcal{N}, K) \neq \varnothing$, is absolutely complex balanced (ACB) if $E_+(\mathcal{N}, K) = Z_+(\mathcal{N}, K)$, i.e., every positive equilibrium is complex balanced. In 1972, F. Horn and R. Jackson derived the following fundamental result: for mass action kinetics, any complex balanced system is absolutely complex balanced. In [15], Jose et al. posed the following question: to which other kinetic systems, besides mass action, does the Horn-Jackson result extend? Using the results of Jose et al., we show that for complex balanced PL-RDK systems which are poly-PLP with flux subspace $\tilde{S}$ (the kinetic order subspace of the system), the multi-PLP systems are precisely those which are not ACB. We then apply our results for cycle terminal PL-FSK systems to show that for these systems, multi-PLP is equivalent to multi-stationarity. This criterion identifies the set of monostationary PL-FSK systems with ILC as a new class of PL-RDK systems to which the Horn-Jackson result extends. In addition, we apply our results to study absolute concentration robustness (ACR) in these systems. The property of ACR in a species was introduced by G. Shinar and M. Feinberg in a well-known paper in the journal Science and denotes the invariance of the species concentration at all positive equilibria. Their results for mass action systems have been extended to power law kinetic systems, and we use the specific structure of the positive equilibria sets of the systems studied to obtain a species hyperplane containment criterion to determine ACR in the system species.
The paper is organized as follows: Section 2 collects the basic concepts and results on reaction networks and kinetic systems needed in the later sections. In Section 3, poly-PLP systems are introduced and the first main result is shown. $T$-independence is discussed in Section 4 and shown to ensure, for networks with ILC, that non-empty linkage class equilibria sets imply a non-empty equilibria set for the whole network. In Section 5 we study absolute complex balancing in poly-PLP systems and identify a new class of poly-PLP PL-RDK systems with the property. Section 6 formulates a general species hyperplane criterion for ACR and applies it to the systems studied. Summary and outlook are provided in Section 7.
Preliminaries
In this section, we provide essential background for our succeeding discussions. We present a number of definitions and useful results on chemical reaction networks and chemical kinetic systems primarily from the works of M. Feinberg [5][6][7][8], F. Horn and R. Jackson [14], and C. Wiuf and E. Feliu [24].
Fundamentals of chemical reaction networks
We start by giving a formal definition of a chemical reaction network, or simply CRN: Definition 2.1. A chemical reaction network is a triple $\mathcal{N} := (\mathcal{S}, \mathcal{C}, \mathcal{R})$ of non-empty finite sets $\mathcal{S}$, $\mathcal{C} \subseteq \mathbb{R}^{\mathcal{S}}_{\geq 0}$, and $\mathcal{R} \subset \mathcal{C} \times \mathcal{C}$, of $m$ species, $n$ complexes, and $r$ reactions, respectively, that satisfies the following properties: i. $y \to y \notin \mathcal{R}$ for each $y \in \mathcal{C}$, and ii. for each $y \in \mathcal{C}$, there exists $y' \in \mathcal{C}$ such that $y \to y' \in \mathcal{R}$ or $y' \to y \in \mathcal{R}$.
Chemical reaction networks can be treated as directed graphs, known as reaction graphs in the literature, where complexes are vertices and reactions are arcs. The (strongly) connected components are precisely the (strong) linkage classes of the CRN. A strong linkage class is a terminal strong linkage class if there is no reaction from a complex in the strong linkage class to a complex outside it.
For a given CRN, let $\ell$, $s\ell$, and $t$ be the number of linkage classes, the number of strong linkage classes, and the number of terminal strong linkage classes, respectively; we have $\ell \leq t \leq s\ell$. A CRN is weakly reversible if $s\ell = \ell$, i.e., each linkage class is strongly connected, and is $t$-minimal if $t = \ell$. The terminal strong linkage classes can be of two kinds: cycles (in the sense of directed graphs) that are not necessarily simple, and singletons, which we call terminal points. We let $n_r$ be the number of reactant complexes of a CRN. Then, $n - n_r$ is the number of terminal points. A CRN is cycle terminal if and only if $n - n_r = 0$.
We then proceed with introducing the following matrices that are essential to our work: Definition 2.2. The molecularity matrix or matrix of complexes $Y$ is an $m \times n$ matrix such that $Y_{ij}$ is the stoichiometric coefficient of species $i$ in complex $j$. The incidence matrix $I_a$ is an $n \times r$ matrix such that $(I_a)_{ij} = -1$ if complex $i$ is the reactant of reaction $j$, $(I_a)_{ij} = 1$ if complex $i$ is the product of reaction $j$, and $(I_a)_{ij} = 0$ otherwise. The stoichiometric matrix $N$ is the $m \times r$ matrix given by the product $Y \cdot I_a$.
We now provide definitions for what we call the stoichiometric subspace and the deficiency of a chemical reaction network.
Definition 2.3. The stoichiometric subspace of a CRN is $S := \mathrm{span}\{y' - y \in \mathbb{R}^{\mathcal{S}} \mid y \to y' \in \mathcal{R}\}$, i.e., the linear span of the reaction vectors over $\mathbb{R}$. The rank of the network is given by $s := \dim S$. For $x \in \mathbb{R}^{\mathcal{S}}_{>0}$, its stoichiometric compatibility class is defined as $(x + S) \cap \mathbb{R}^{\mathcal{S}}_{\geq 0}$. Two vectors $x^*, x^{**} \in \mathbb{R}^m$ are stoichiometrically compatible if $x^* - x^{**}$ is an element of the stoichiometric subspace $S$. Definition 2.4. The deficiency of a CRN is $\delta = n - \ell - s$, where $n$ is the number of complexes, $\ell$ is the number of linkage classes, and $s$ is the rank of the network.
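As a toy illustration (not from the paper), these matrices and the deficiency can be computed in R for the reversible network A ⇌ B:

Y  <- matrix(c(1, 0,
               0, 1), nrow = 2, byrow = TRUE)   # species x complexes (A, B)
Ia <- matrix(c(-1,  1,
                1, -1), nrow = 2, byrow = TRUE) # complexes x reactions (A->B, B->A)
N  <- Y %*% Ia                                  # stoichiometric matrix
s  <- qr(N)$rank                                # rank of the network: 1
delta <- ncol(Y) - 1 - s                        # deficiency = n - l - s = 2 - 1 - 1 = 0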
A decomposition of a chemical reaction network is induced by partitioning the reaction set into subsets. M. Feinberg introduced the so-called independent decomposition of chemical reaction networks, as given in Definition 2.5 (Section 5.4 in [5], Appendix 6.A in [6]). In our work, however, we consider the linkage classes as the subnetworks under network decomposition.
Definition 2.5. A decomposition of a chemical reaction network $\mathcal{N}$ into $k$ subnetworks of the form $\mathcal{N} = \mathcal{N}_1 \cup \cdots \cup \mathcal{N}_k$ is independent if its stoichiometric subspace is equal to the direct sum of the stoichiometric subspaces of its subnetworks, i.e., $S = S_1 \oplus \cdots \oplus S_k$.
Remark 2.6. We emphasize that whenever we write equations like $\delta = \delta_1 + \cdots + \delta_\ell$ or $S = S_1 \oplus \cdots \oplus S_\ell$ as in Proposition 2.7, the $i$th linkage class has deficiency $\delta_i$ and stoichiometric subspace $S_i$. A reaction network that satisfies property (i) or property (ii) in Proposition 2.7 possesses the independent linkage classes property, or simply ILC. Proposition 2.7. [1,2] Let $\mathcal{N}$ be a reaction network. Then the following statements are equivalent: (i) $\delta = \delta_1 + \cdots + \delta_\ell$; (ii) $S = S_1 \oplus \cdots \oplus S_\ell$.
The next two results, taken from Boros [1,2], will be used in the proofs of our main results. The first part of Lemma 2.8 appears in various works in the literature, such as in Corollary 4.14 of [7] and in Section 4 of [14]. On the other hand, the next lemma is a consequence of the famous Farkas' Lemma.
Fundamentals of chemical kinetic systems
We now introduce our definitions of a chemical kinetic system and the species formation rate function. Here, we associate a kinetics to a given chemical reaction network to describe its dynamical properties. Definition 2.10. A kinetics for a reaction network $(\mathcal{S}, \mathcal{C}, \mathcal{R})$ is an assignment to each reaction $y \to y' \in \mathcal{R}$ of a continuously differentiable rate function $K_{y \to y'}: \mathbb{R}^{\mathcal{S}}_{>0} \to \mathbb{R}_{\geq 0}$ such that the following positivity condition holds: $K_{y \to y'}(c) > 0$ if and only if $\mathrm{supp}\, y \subset \mathrm{supp}\, c$. The system $(\mathcal{N}, K)$ is called a chemical kinetic system. Definition 2.11. The species formation rate function (SFRF) of a chemical kinetic system is $f(x) = \sum_{y \to y' \in \mathcal{R}} K_{y \to y'}(x)(y' - y)$. The dynamical system or system of ordinary differential equations (ODEs) of a chemical kinetic system is given by $dx/dt = f(x)$. An equilibrium or a steady state is a zero of $f$. Definition 2.12. The set of positive equilibria of a chemical kinetic system $(\mathcal{N}, K)$ is given by $E_+(\mathcal{N}, K) = \{x \in \mathbb{R}^{\mathcal{S}}_{>0} \mid f(x) = 0\}$. We also denote this set by $E_+$ for simplicity.
A chemical reaction network is said to admit multiple positive equilibria if there exists a set of positive rate constants such that the system of ODEs admits more than one stoichiometrically compatible positive equilibrium.
Definition 2.13. The set of complex balanced equilibria of a chemical kinetic system $(\mathcal{N}, K)$ is given by $Z_+(\mathcal{N}, K) = \{x \in \mathbb{R}^{\mathcal{S}}_{>0} \mid I_a \cdot K(x) = 0\} \subseteq E_+(\mathcal{N}, K)$. A positive vector $c \in \mathbb{R}^m$ is complex balanced if $K(c)$ is contained in $\ker I_a$. A chemical kinetic system is complex balanced if it has a complex balanced equilibrium.
We now consider an important class of chemical kinetics that generalizes the well-known mass action.
Definition 2.14. A kinetics $K$ is a power law kinetics (PLK) if $K_i(x) = k_i \prod_{j=1}^{m} x_j^{F_{ij}}$ for $i = 1, \ldots, r$, where $k_i \in \mathbb{R}_{>0}$ and $F_{ij} \in \mathbb{R}$. The $r \times m$ matrix $F = [F_{ij}]$, containing the kinetic order values, is called the kinetic order matrix, and $k \in \mathbb{R}^r$ is called the rate vector.
If the kinetic order matrix contains the corresponding stoichiometric coefficients of the reactant $y$ for each reaction $y \to y'$ in the network, then the system follows mass action kinetics. Definition 2.15. A PLK system has reactant-determined kinetics (of type PL-RDK) if for any two reactions $i, j$ with identical reactant complexes, the corresponding rows of kinetic orders in $F$ are identical, i.e., $F_{ik} = F_{jk}$ for $k = 1, 2, \ldots, m$. On the other hand, it has non-reactant-determined kinetics (of type PL-NDK) if there exist two reactions with the same reactant complexes whose corresponding rows in $F$ are not identical. Based on Definition 2.14 and following S. Müller and G. Regensburger [19], we present the following definitions: A PL-TIK system is a PL-RDK system whose block matrix of kinetic orders has maximal column rank. The subset of PL-TIK systems with linearly independent $T$-matrix columns is denoted by PL-RLK (reactant set linearly independent power law kinetics).
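A small R sketch of how the PL-RDK and PL-FSK conditions could be checked on a kinetic order matrix, assuming Fmat (r × m) and a vector reactant giving the reactant complex index of each reaction (both names hypothetical):

rows_identical <- function(M) nrow(unique(M)) == 1
# RDK: branching reactions of a reactant share one kinetic order row
is_rdk <- all(unlist(tapply(seq_len(nrow(Fmat)), reactant,
                            function(i) rows_identical(Fmat[i, , drop = FALSE]))))
# FSK: in addition, distinct reactants have distinct kinetic order rows
rep_rows <- Fmat[!duplicated(reactant), , drop = FALSE]
is_fsk <- is_rdk && nrow(unique(rep_rows)) == nrow(rep_rows)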
Positive equilibria of PL-FSK systems on cycle terminal networks with ILC
In this section, we determine the structure of the positive equilibria set of factor span surjective power-law kinetic (PL-FSK) systems on cycle terminal networks with independent linkage classes (ILC). These systems include the complex balanced generalized mass action systems (GMAS), for which Müller and Regensburger determined the structure of the subset of complex balanced equilibria. We show that they are poly-PLP systems, i.e., E_+(N, K) = ⋃_i {x ∈ R^S_{>0} | log(x) − log(x*_i) ∈ (S̃)^⊥}, where the x*_i are given equilibria. Note that the kinetic order subspace S̃ is the kinetic analogue of the stoichiometric subspace S and is generated by the differences of kinetic complexes. Our results partially extend the result of B. Boros on mass action systems with ILC and those of Jose et al. on (mono-)PLP systems.
We now recall the definition of an LP set from Jose et al. [15] and introduce the definition of a poly-PLP system.

Definition 3.1. An LP set is a non-empty subset of R^S_{>0} of the form E(P, x*) = {x ∈ R^S_{>0} | log(x) − log(x*) ∈ P^⊥}, where P is a subspace of R^S, called the LP set's flux subspace, and x* is a given element of R^S_{>0}, called the LP set's reference point. P^⊥ is called the LP set's parameter subspace, and the positive cosets of P are called the LP set's flux classes.
Definition 3.2. A kinetic system (N, K) is a poly-PLP system if its set of positive equilibria E_+(N, K) is the disjoint union of LP sets with flux subspace P_E and reference points {x*_i | i = 1, ..., µ}, where µ := |E_+ ∩ Q| (with Q a flux class), or ∞, is the coset intersection count. Analogously, a system is poly-CLP if its set of complex balanced equilibria Z_+(N, K) is the disjoint union of LP sets. A poly-PLP (poly-CLP) system is called multi-PLP (multi-CLP) if µ ≥ 2, otherwise PLP (CLP). We also denote a poly-PLP (poly-CLP) system by µ-PLP (µ-CLP) if we want to emphasize the value of µ.
An example of a poly-PLP system is given in Example 3.13, where we use our main result to show that it is indeed such a system.
We now introduce the key insight for the extension of the results of Boros in his PhD thesis [1,2] from mass action to a large class of power-law kinetics. Note that when a PL-RDK kinetics is PL-FSK (factor span surjective power-law kinetics), different reactant complexes have different kinetic complexes. Factor span surjectivity for PL-RDK induces an isomorphic digraph structure on the kinetic complexes, as described in Proposition 3.7. We first state a definition of the induced kinetic network: for a cycle terminal PL-RDK system, the induced network Ñ (Definition 3.4) has the kinetic complexes as its vertices, with a kinetic reaction C̃_i → C̃_j for each reaction C_i → C_j in N.

Example 3.6. Consider the example of Jose et al. [15], with underlying network N depicted in Fig. 2 [Figure 2: a chemical reaction network from Jose et al. [15]]. We let the complexes in N be labeled so that C_2 = 3A_2 + 3A_3, C_3 = 4A_1 + A_2 + A_3, and C_4 = 6A_1; the reactions are those displayed in Fig. 2, and the kinetics is defined using the kinetic order matrix of [15]. Note that each complex in N is a reactant complex, and hence the network is cycle terminal. Then, we have the following kinetic complexes: C̃_1 := −A_2 + A_3, C̃_2 := −A_1 − A_2 + A_3, C̃_3 := −2A_2, and C̃_4 := −2A_3. One may write the kinetic complexes by expressing the species as the standard basis vectors of R^m; for instance, −A_2 + A_3 may be written as (0, −1, 1). From these we form the corresponding kinetic reactions.

The following proposition is a key insight for our results in this section. In particular, we need the assumption that a kinetic system is cycle terminal and PL-FSK for the digraph isomorphism between the network N and the induced network Ñ (as defined in Definition 3.4) to proceed with Lemma 3.10 and the succeeding propositions.

Proposition 3.7. Let (N, K) be a cycle terminal PL-FSK system. Then

i. N is isomorphic (as a digraph) to Ñ. Consequently, I_a = Ĩ_a, and Im I_a is isomorphic to Im Ĩ_a.

ii. The isomorphism induces a bijection between the sets of decompositions of N and Ñ.
Before we present the main results, we first give some basic results and definitions that are essential in the latter part of this section. We define π (Definition 3.9 below) in the sense of PL-RDK, which ensures that π is indeed a function. We use Lemma 3.10 to establish Lemma 3.11, which gives a parametrization of a non-empty subset of the set of equilibria E_+ of a cycle terminal PL-FSK system that satisfies the ILC property, i.e., δ = δ_1 + · · · + δ_ℓ, where ℓ is the number of linkage classes of the underlying network, provided of course that E_+ is non-empty.
Definition 3.9. For x, x′ ∈ R^S_{>0} and a reactant complex y, define π_y(x, x′) := ∏_{s∈S} (x_s/x′_s)^{Y_{sy}}.

Lemma 3.10. Let (N, K) be a cycle terminal PL-FSK system, and let the function π_y be given as in Definition 3.9. Then, the following statements are equivalent:

i. π_y(x, x′) = π_{y′}(x, x′) for any y → y′ ∈ R;

ii. ⟨Y_y − Y_{y′}, log(x) − log(x′)⟩ = 0 for any y → y′ ∈ R.

Proof.
The next two results require the assumption that the system is cycle terminal PL-FSK, since we use Lemma 3.11 in their proofs.
Example 3.13. We again consider the reaction network from Jose et al. [15] given in Fig. 2, with the kinetics and T-matrix as in Example 3.6. Since the network has a single linkage class, it has ILC. It is in fact PL-RLK, i.e., the set of column vectors of the T-matrix is linearly independent, and thus the kinetics is factor span surjective. According to Theorem 3.12, the system is poly-PLP.
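The rank claim can be verified numerically if one assumes, as we do here, that the T-matrix columns are the kinetic complexes listed in Example 3.6. Note that four columns in R^3 cannot be linearly independent by themselves; in our reading, the PL-TIK/PL-RLK rank condition refers to the augmented block matrix T̂ obtained by appending the linkage-class indicator rows (a single row of ones here), following [22].

```python
import numpy as np

# Columns: kinetic complexes transcribed from Example 3.6 (an assumption):
# C~_1 = -A2+A3, C~_2 = -A1-A2+A3, C~_3 = -2A2, C~_4 = -2A3.
T = np.array([[ 0., -1.,  0.,  0.],
              [-1., -1., -2.,  0.],
              [ 1.,  1.,  0., -2.]])

# Augment with the linkage-class indicator row (single linkage class here)
# to form the block matrix T-hat used in the PL-TIK/PL-RLK rank condition.
T_hat = np.vstack([T, np.ones((1, 4))])

print(np.linalg.matrix_rank(T_hat))  # 4 => maximal column rank
```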
Proposition 3.14. Suppose that E_+ ≠ ∅. Then

i. for all positive stoichiometric classes P and P′, there exists a bijection between E_+ ∩ P and E_+ ∩ P′; and

ii. if E_+ ∩ P is finite for some positive stoichiometric class P, then

a. E_+ ∩ P′ is finite for each positive stoichiometric class P′, and

b. |E_+ ∩ P′| = |E_+ ∩ P| for each positive stoichiometric class P′.
Proof. The proposition follows from Lemma 2.8 and Theorem 3.12.
A criterion for the existence of positive equilibria for T-independent PL-RDK systems on networks with ILC
In this section, we introduce the concept of T-independence of a PL-RDK system and show that, on a network with ILC, the existence of positive equilibria for each linkage class is a necessary and sufficient condition for the existence of positive equilibria of the whole system. This result extends a result of Boros [1,2] on mass action systems with ILC and partially a result of Talabis et al. [22] on PL-TIK systems, which form a subset of T-independent systems. We first define two important concepts and then proceed with the main result of this section.
Definition 4.1. The Laplacian matrix A_k is an n × n matrix with entries (A_k)_{ij} = k_{ji} for i ≠ j and (A_k)_{jj} = −Σ_{i≠j} k_{ji}, where k_{ji} is the label (often called the rate constant) associated to the reaction from complex j to complex i, and k_{ji} = 0 if there is no such reaction.
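Since the displayed formula of Definition 4.1 did not survive extraction, the sketch below adopts the standard CRNT convention, in which (A_k)_{ij} = k_{ji} for i ≠ j and each column sums to zero; the function name and the toy reversible pair are our own.

```python
import numpy as np

def laplacian(n, rates):
    """Assemble the n x n Laplacian A_k of Definition 4.1 from a dictionary
    rates[(j, i)] = k_ji for each reaction from complex j to complex i
    (0-indexed); absent reactions contribute k_ji = 0.
    """
    A = np.zeros((n, n))
    for (j, i), k_ji in rates.items():
        A[i, j] += k_ji   # off-diagonal entry (A_k)_ij = k_ji
        A[j, j] -= k_ji   # diagonal entry makes each column sum to zero
    return A

# Reversible pair C_1 <-> C_2 with rate constants 2 and 3.
print(laplacian(2, {(0, 1): 2.0, (1, 0): 3.0}))
# [[-2.  3.]
#  [ 2. -3.]]
```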
We can see from the first condition in Definition 4.1 that k_{ji} = 0 when the reaction j → i is not present in the network.

Let (N, K) be a PL-RDK system that satisfies δ = δ_1 + · · · + δ_ℓ and T = T_1 ⊕ · · · ⊕ T_ℓ (which we call T-independence). Then the SFRF factorizes across linkage classes, where ψ^i_neg, Y^i and A^i_{k,neg} are the factor map (neglecting non-reactant complexes), the molecularity matrix and the Laplacian matrix (without columns for non-reactant complexes) for the ith linkage class, respectively. Now, log ψ^i_neg(x) = (T^i)^T log(x), and we denote the vector in R^{n_i} with all coordinates equal to 1 by 1_i. At this point, E^i_+ ≠ ∅ for each i = 1, ..., ℓ if and only if Condition (4.3) holds. Hence, by the assumption T = T_1 ⊕ · · · ⊕ T_ℓ and using Lemma 2.9, there exist u ∈ R^m and w ∈ R^ℓ solving (4.4). Let x ∈ R^m_{>0} and γ_1, ..., γ_ℓ ∈ R_{>0} be such that log(x) = u and −log(γ_i) = w_i for each i = 1, ..., ℓ. Then Equation (4.4) yields, for all i = 1, ..., ℓ and for all y ∈ C_i, the equilibrium conditions for the ith linkage class, and this proves that x ∈ E_+. Finally, if a PL-RDK system satisfies T-independence and the ILC property holds for the underlying network, i.e., δ = δ_1 + · · · + δ_ℓ, then the conclusion of Theorem 4.3 is true for the given system.
An application to the Extension Problem for the Horn-Jackson ACB Theorem
In this Section, we recall the Extension Problem for the Horn-Jackson ACB Theorem and identify a large class of PL-RDK systems to which the extension does not hold. We further apply our previous results to obtain a simple criterion for weakly reversible PL-FSK systems with ILC to be in the complement of this class, leading to the identification of a new class of ACB PL-RDK systems.
The Extension Problem for the Horn-Jackson ACB Theorem and multi-PLP systems
A complex balanced kinetic system, i.e., one with Z_+(N, K) ≠ ∅, is absolutely complex balanced (ACB) if E_+(N, K) = Z_+(N, K), i.e., every positive equilibrium is complex balanced. Two ACB Theorems were foundational in modern CRNT: in 1972, M. Feinberg showed that any complex balanced deficiency zero kinetic system is absolutely complex balanced (Feinberg ACB Theorem). In the same year, F. Horn and R. Jackson derived an extension to positive deficiency systems, showing that any complex balanced mass action system, regardless of deficiency, is absolutely complex balanced (Horn-Jackson ACB Theorem). The extension is partial in the sense that it is limited to mass action kinetics. Fifty years later, Jose et al. [15] posed the Extension Problem for the Horn-Jackson ACB Theorem: for which kinetic sets, besides mass action, is every complex balanced system absolutely complex balanced? In view of the Feinberg ACB Theorem, one of course only needs to consider systems with positive deficiency. A natural candidate for an extension is the set of complex balanced PL-RDK systems, for two reasons: 1. In his 1979 Wisconsin Lecture Notes, M. Feinberg derived the equivalence of the ACB property with the log parametrizability of Z_+(N, K) (denoted as CLP) for mass action systems.
2. S. Müller and G. Regensburger in 2014 in turn showed that any complex balanced PL-RDK system is a CLP system with flux subspace S̃.
Jose et al. extended the Horn-Jackson result to weakly reversible PL-TIK systems whose linkage classes are independent (ILC) and have deficiencies 0 or 1. These are precisely the weakly reversible systems which satisfy the conditions of the Deficiency One Theorem for PL-TIK systems [22]. They also constructed the system in Example 3.13 and showed that, though CLP, it is not a PLP system. From this, they could conclude that it is not absolutely complex balanced, thus providing a counterexample to a general extension of the Horn-Jackson ACB Theorem to PL-RDK systems (or even just PL-TIK systems).
The following Proposition and Corollary identify a large class of PL-RDK systems to which the Horn-Jackson ACB Theorem does not extend.

Proposition 5.1. Let (N, K) be a complex balanced PL-RDK system. If it is poly-PLP with flux subspace S̃, then it is non-ACB ⟺ it is a multi-PLP system.
Proof. By Proposition 4 of [19], it follows that the system is a CLP system with flux subspace S̃. By Theorem 4 of [15], the system is non-ACB if and only if it is not a PLP system with flux subspace S̃. Since the system is poly-PLP, the latter is equivalent to the system being multi-PLP.
Corollary 5.2. Let (N, K) be a weakly reversible multi-PLP system with kinetic deficiency δ̃ = 0. Then it is complex balanced, but not absolutely complex balanced.
Proof. Since the system is weakly reversible, PL-RDK, and has zero kinetic deficiency, it follows from Theorem 1 of [19] that the system is complex balanced. On the other hand, since the system is multi-PLP, by the previous Proposition it is not absolutely complex balanced.

Example 5.3. Since the system from Jose et al. [15] in Example 3.13 is a PL-TIK system, it has zero kinetic deficiency [21]. According to Example 3.13, it is poly-PLP with flux subspace S̃, but the computations of Jose et al. show that it is not PLP. Hence, it must be multi-PLP, and therefore it belongs to the class of PL-RDK systems specified above.
Remark 5.4. Weakly reversible multi-PLP PL-RDK systems with δ̃ > 0 which satisfy the complex balancing criterion in Theorem 1 (Statement ii) of [19] constitute a further class of non-ACB systems.
A criterion for multi-PLP for cycle terminal PL-FSK systems with ILC
We now apply our previous results to derive a simple criterion and computational procedure to determine when a cycle terminal PL-FSK system with ILC is multi-PLP.

Proposition 5.5. A cycle terminal PL-FSK system with ILC is multi-PLP if and only if it is multistationary.

Proof. For a given stoichiometric class P and x^{*,i} ∈ E_+ ∩ P, we have a map {x^{*,i}} → {Q(x^{*,i})}. Clearly, the map is surjective. According to Theorem 3.12, in particular the last statement explaining the "star union", it is also injective. Hence, when finite, |E_+ ∩ P| = |{Q(x^{*,i})}| = µ. As remarked before, since the Q(x^{*,i}) are LP sets with flux subspace S̃, |E_+ ∩ Q| = µ. This establishes the equivalence of multi-PLP and multistationarity for cycle terminal PL-FSK systems with ILC.

Corollary 5.6. A monostationary cycle terminal PL-FSK system with ILC is absolutely complex balanced.

Proof. Again, by Proposition 4 of [19] it is a CLP system with flux subspace S̃, and monostationarity implies that it is PLP with the same flux subspace. By Theorem 4 of [15], it is absolutely complex balanced.
Absolute concentration robustness in poly-PLP systems
The concept of absolute concentration robustness (ACR) in a network's species was introduced by G. Shinar and M. Feinberg in a well-known paper in the journal Science in 2010 [20]. ACR denotes the invariance of a species' concentration at all positive equilibria of the system. Shinar and Feinberg also provided a remarkable sufficient condition, inspired by experimental observations in subsystems of E. coli, for deficiency one mass action systems. This condition was extended to power law kinetic systems of low deficiency (i.e., δ = 0 or 1) [10,11], subsets of poly-PL kinetic systems [16], and Hill-type kinetic systems [13]. Independent decompositions allowed the extension of the Shinar-Feinberg criterion to larger systems and those with higher deficiency.
In this Section, we first formulate a general necessary and sufficient condition, called an ACR species hyperplane criterion, for ACR in a species for any kinetic system. We then show that when applied to the systems we have studied in the previous sections, it leads to a relatively simple procedure for determining ACR in the species of the systems.
A general species hyperplane criterion for ACR in kinetic systems
Mathematically, the property of ACR in a species S is equivalent to the statement that all positive equilibria of a kinetic system lie in a hyperplane x_S = c for a positive constant c (the so-called "ACR hyperplane"). Clearly, this is equivalent to the statement that for any two equilibria x* and x**, the difference x* − x** lies in the hyperplane x_S = 0. In what follows, let (N, K) be a kinetic system with N = (S, C, R), and let m be the number of species.
Definition 6.1. For any species S, the (m − 1)-dimensional subspace H_S := {x ∈ R^S | x_S = 0} is called the species hyperplane of S.
For U ⊆ R containing R_{>0}, let φ : U → R be an injective map, i.e., φ : U → Im φ is a bijection. By component-wise application (m times), we obtain a bijection U^S → R^S, which we also denote by φ. We formulate our concepts for any subset Y of E_+(N, K), although we are mainly interested in Y = E_+(N, K).

Definition 6.2. For a subset Y of E_+(N, K), the set {φ(x) − φ(x′) | x, x′ ∈ Y} is called the difference set of φ-transformed equilibria in Y, and its span ∆_φ Y the difference space of φ-transformed equilibria in Y.
Recall that a system has concentration robustness in a species S over a subset Y of positive equilibria if the concentration value for S is invariant over all equilibria in Y . Most studied has been absolute concentration robustness where Y = E + (N , K), and balanced concentration robustness when Y = Z + (N , K).
The following Proposition formulates the general species hyperplane criterion for concentration robustness over a subset Y of positive equilibria:

Proposition 6.3. Let (N, K) be a kinetic system and Y a subset of E_+(N, K). Then

i. (N, K) has concentration robustness in a species S over Y if and only if ∆_φ Y is contained in the species hyperplane H_S; and

ii. if m_ACR (m_BCR) is the number of species with ACR (BCR), then m_ACR ≤ m − dim ∆_φ E_+ (and m_BCR ≤ m − dim ∆_φ Z_+).

Proof. Statement (i) follows directly from the definitions, i.e., concentration robustness in S over Y means that every difference of φ-transformed equilibria in Y has vanishing S-coordinate. The claims in (ii) follow from (i), since ∆_φ E_+ is contained in H_S for each ACR species S. Hence, dim ∆_φ E_+ ≤ m − m_ACR, and analogously for Z_+ and BCR.
Corollary 6.4. If (N, K) has ACR in at least one species, then ∆_φ E_+ does not contain any positive vector.

Remark 6.5. Note that any injective map U^S → R^S leads to an ACR hyperplane criterion. The key question for the choice of the map φ is the tractability of computing ∆_φ E_+.

Example 6.6. For monostationary systems [4], a natural choice is U = R and φ = id, so that Im φ = R.
Concentration robustness in poly-PLP systems
We now apply the general species hyperplane criterion to the logarithm function.
Definition 6.7. The subspace P_{E*} := (span{log(x*_i) − log(x*_j) | i, j = 1, ..., µ})^⊥ is called the reference equilibria flux subspace.
The ACR species hyperplane criterion for poly-PLP systems is the following:

Theorem 6.8. Let (N, K) be a poly-PLP system with flux subspace P_E, reference equilibria {x*_i} and reference equilibria flux subspace P_{E*}. Then

i. it has ACR in a species S if and only if P_E^⊥ + (P_{E*})^⊥ = (P_E ∩ P_{E*})^⊥ is contained in the hyperplane {x | x_S = 0}; and

ii. if m_ACR is the number of species with the ACR property, then m_ACR ≤ dim(P_E ∩ P_{E*}) ≤ min{dim(P_E), dim(P_{E*})}.

[Figure 3: a mass action system first studied by F. Horn and R. Jackson [14].]
Proof.
i. We only need to show that ∆_log E_+ = P_E^⊥ + (P_{E*})^⊥. We denote the set {x ∈ R^S_{>0} | log(x) − log(x*_i) ∈ P_E^⊥} by E^i. If x ∈ E^i and x′ ∈ E^j, then log(x) − log(x′) = (log(x*_i) − log(x*_j)) + (p_i − p_j) ∈ (P_{E*})^⊥ + P_E^⊥. Hence, ∆_log E_+ ⊆ P_E^⊥ + (P_{E*})^⊥. Conversely, let p ∈ P_E^⊥. For each pair (i, j), since the system is poly-PLP, there exists an x_i ∈ E^i such that log(x_i) = log(x*_i) + p/2, and analogously an x_j ∈ E^j such that log(x_j) = log(x*_j) − p/2. Hence, (log(x*_i) − log(x*_j)) + p = log(x_i) − log(x_j), implying P_E^⊥ + (P_{E*})^⊥ ⊆ ∆_log E_+. Applying the general species hyperplane criterion delivers the claims.
ii. (P_E ∩ P_{E*})^⊥ is contained in ⋂_S {x | x_S = 0}, where the intersection runs over all ACR species S, so that m − dim(P_E ∩ P_{E*}) ≤ m − m_ACR, which leads to the claim.
Theorem 6.8 suggests a way to determine ACR in a poly-PLP system: compute bases of P_E^⊥ and (P_{E*})^⊥, respectively, and then observe which species coordinates are zero in all basis vectors of the two spaces. These are precisely the ACR species.

Remark 6.9. Note that if µ = |E_+ ∩ Q| is finite, then (P_{E*})^⊥ is spanned by the finitely many difference vectors log(x*_i) − log(x*_j), i, j = 1, ..., µ, so that the criterion of Theorem 6.8 can be checked for any species S by inspecting a finite set of basis vectors.

Example 6.10. If (N, K) is a (mono-)PLP system, then (P_{E*})^⊥ = 0, and we recover the results of [16], including a simple computational procedure for the species hyperplane criterion. In that paper, the bijection L_{x*}(x) := log(x) − log(x*) was used to derive the results.
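This procedure is a few lines of linear algebra. The following sketch is our own illustration on a hypothetical 3-species example; it assumes bases of P_E^⊥ and (P_{E*})^⊥ have already been computed.

```python
import numpy as np

def acr_species(basis_PE_perp, basis_PEstar_perp, tol=1e-10):
    """Candidate ACR species of a poly-PLP system (Theorem 6.8): a species S
    has ACR iff its coordinate vanishes on every basis vector of
    P_E^perp and of (P_E*)^perp.

    basis_PE_perp, basis_PEstar_perp: arrays of shape (k, m) whose rows are
    basis vectors; m is the number of species.
    """
    vectors = np.vstack([basis_PE_perp, basis_PEstar_perp])
    return [s for s in range(vectors.shape[1])
            if np.all(np.abs(vectors[:, s]) < tol)]

# Hypothetical 3-species example: both subspaces' basis vectors have a zero
# third coordinate, so species index 2 has ACR.
print(acr_species(np.array([[1.0, -1.0, 0.0]]),
                  np.array([[2.0, 1.0, 0.0]])))  # [2]
```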
Example 6.11. We now refer to Fig. 3. The mass action system was first studied by F. Horn and R. Jackson [14]. Note that ε > 0 is a parameter denoting a rate constant.
The stoichiometric subspace is S = span{(1, −1)}, and hence δ = 4 − 1 − 1 = 2. Since ℓ = 1, the linkage class decomposition is trivially independent, so the system is poly-PLP. In [2], B. Boros computed the positive equilibria sets and showed that the system is tri-PLP if 0 < ε < 1/6 and mono-PLP if ε > 1/6. However, for the ACR analysis we do not need the equilibria computations: the network is conservative (the vector (1, 1) is clearly in S^⊥), so Corollary 6.4 implies that the system does not have ACR in any species.
| 2021-11-16T02:16:21.204Z | 2021-11-13T00:00:00.000 | {
"year": 2021,
"sha1": "3d62f7cf9437511a4d692746e1a1b544ea6ffb2b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3d62f7cf9437511a4d692746e1a1b544ea6ffb2b",
"s2fieldsofstudy": [
"Mathematics",
"Chemistry"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
204800736 | pes2o/s2orc | v3-fos-license | Ringdown overtones, black hole spectroscopy and no-hair theorem tests
Swetha Bhagwat, Xisco Jiménez Forteza, Paolo Pani, Valeria Ferrari
1 Dipartimento di Fisica, “Sapienza” Università di Roma & Sezione INFN Roma1, Piazzale Aldo Moro 5, 00185, Roma, Italy
2 Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Callinstraße 38, 30167 Hannover, Germany
3 Scuola Superiore di Studi Avanzati Sapienza, Viale Regina Elena 291, 00161, Roma, Italy
4 INFN Sez. di Napoli, Compl. Univ. Monte S. Angelo Ed. G, Via Cinthia, I-80126 Napoli, Italy
Validating the black-hole no-hair theorem with gravitational-wave observations of compact binary coalescences provides a compelling argument that the remnant object is indeed a black hole as described by the general theory of relativity. This requires performing a spectroscopic analysis of the post-merger signal and resolving the frequencies of either different angular modes or overtones (of the same angular mode). For a nearly-equal-mass binary black-hole system, only the dominant angular mode (l = m = 2) is sufficiently excited, and the overtones are instrumental to perform this test. Here we investigate the robustness of modelling the post-merger signal of a binary black hole coalescence as a superposition of overtones. Further, we study the bias expected in the recovered frequencies as a function of the start time of a spectroscopic analysis and provide a computationally cheap procedure to choose it based on the interplay between the expected statistical error due to the detector noise and the systematic errors due to waveform modelling. Moreover, since the overtone frequencies are closely spaced, we find that resolving the overtones is particularly challenging and requires a loud ringdown signal. Rayleigh's resolvability criterion suggests that - in an optimistic scenario - a ringdown signal-to-noise ratio larger than ∼ 30 (possibly achievable with LIGO at design sensitivity and routinely with future interferometers such as Einstein Telescope, Cosmic Explorer, and LISA) is necessary to resolve the overtone frequencies. We then conclude by discussing some conceptual issues associated with black-hole spectroscopy with overtones.
I. INTRODUCTION
At a sufficiently late time after the merger, the evolution of a binary black hole (BBH) spacetime can be described as a perturbation on the spacetime of the remnant black hole (BH). The gravitational-wave (GW) signal produced during these final stages is called the 'ringdown' (RD)¹. It comprises a superposition of damped sinusoids with frequencies and damping times corresponding to the quasi-normal-mode (QNM) spectrum [1-8] of the remnant BH,

h = Σ_{lmn} A_{lmn} e^{−iω_{lmn}t} e^{−t/τ_{lmn}} Y_{lm},   (1)

where the QNMs are indexed by a set of three integers: (l, m) describe the angular dependence of the modes and are decomposed on the spin-weighted s = 2 spheroidal harmonics Y_{lm}, whereas n is the overtone index of a given angular mode, with n = 0 being the fundamental mode. For a given QNM, the complex amplitude 𝒜_{lmn} = A_{lmn} e^{iφ_{lmn}} depends on the perturbation conditions set up during the inspiral-plunge-merger phase of the BBH evolution. The QNM frequencies ω_{lmn} and the damping times τ_{lmn} are characteristic of the final BH and are parametrized uniquely by its mass (M_f) and spin (a_f).

¹ Throughout this paper, RD is assumed to be the part of the BBH waveform describable by linear perturbation theory of the remnant BH. Note that in the literature, RD is also sometimes used to refer to the full post-merger signal, which contrasts with the usage we choose here.
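As a concrete (purely illustrative) rendering of Eq. (1) restricted to the l = m = 2 overtones, the sketch below generates the complex RD strain for given amplitudes, phases, frequencies and damping times; the numbers used are not GR-QNM values, and the angular factor Y_lm is dropped.

```python
import numpy as np

def ringdown(t, amps, phases, freqs, taus):
    """Complex ringdown strain h(t) = sum_n A_n e^{i phi_n}
    e^{-i omega_n t} e^{-t/tau_n} for the l = m = 2 overtones, with
    omega_n = 2*pi*f_n; the angular factor Y_lm is omitted."""
    t = np.asarray(t)[:, None]
    modes = amps * np.exp(1j * phases) \
        * np.exp(-1j * 2 * np.pi * freqs * t) * np.exp(-t / taus)
    return modes.sum(axis=1)

# Illustrative (not GR) numbers for a 2-tone model: f in Hz, tau in s.
t = np.linspace(0.0, 0.05, 2048)
h = ringdown(t, amps=np.array([1.0, 3.0]), phases=np.array([0.0, 2.0]),
             freqs=np.array([250.0, 245.0]), taus=np.array([4e-3, 1.3e-3]))
print(h[:2])
```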
The BH uniqueness [9-12] and no-hair theorems [13-15] state that a BH spacetime in general relativity (GR) is uniquely defined by at most three parameters: the mass, the angular momentum and the electric charge, the latter thought to be negligible for astrophysical BHs. Every aspect of a BH spacetime - including its perturbative dynamics and therefore the QNM spectrum - is thus completely parametrized by its mass and spin. Consequently, the countably infinite number of QNMs of a Kerr BH are uniquely related to each other. This presents a valuable opportunity to perform an observational validation - in the form of multiple null-hypothesis tests - of the no-hair theorem with BBH RD signals [16-18].
The dominant QNM excitation in a BBH merger is the l = m = 2, n = 0 QNM, and it is the easiest to detect in the GW data. To perform a no-hair theorem test, one needs to obtain independent measurements of at least three QNM parameters. In particular, one can measure the fundamental l = m = 2 mode along with either

• another angular mode, namely the l = m = 3 or the l = 2, m = 1 mode, both with n = 0; or

• the first overtone (n = 1) of the l = m = 2 mode.
Role of the overtones for BH spectroscopy and no-hair theorem tests

Although the role of overtones in RD modelling has been studied for a long time [24,25], they have been considered for performing the no-hair theorem test only recently [26,27]. Overtones have become particularly important because a substantial fraction of the BBHs detected by the LIGO-Virgo detectors comprise near-equal-mass systems [28]. This poses a predicament for using angular modes to perform the no-hair theorem tests, as these systems do not sufficiently excite angular modes other than the dominant one (e.g., the l = m = 3 or l = 2, m = 1 modes are suppressed) [29,30]. For these systems, performing a no-hair theorem test using the overtones of the l = m = 2 mode seems promising.
The LIGO-Virgo Collaboration performed a GR consistency test for the GW150914 event by comparing the full inspiral-merger-RD waveform to the RD of the remnant BH modelled by a single dominant mode [17]. Recent works like Refs. [26,27] quantified the contribution of additional overtones to the RD signal of the GW150914 event. Adding at least one overtone is necessary to obtain an independent estimate of the remnant BH's mass and spin from the RD of the GW150914 system. It was found that these estimates were consistent (at the ∼ 20% level) with the values predicted by numerical relativity (NR) simulations corresponding to the estimated BBH parameters of a full inspiral-merger-RD waveform [26,27] (see also [29,31]).
However, going beyond consistency tests demands robust BH spectroscopy, i.e., measuring several QNMs from a single BBH RD. An observational validation of the no-hair theorem requires that the QNM frequencies and the damping times be estimated by performing a Bayesian parameter estimation (PE) on all the RD model parameters² (i.e., {A_n, φ_n, ω_n, τ_n}) for a BBH RD event. It is required that: a) the obtained estimates of the frequencies and damping times are consistent with the predicted GR-QNM spectrum; and b) the QNM frequency spectrum can be resolved, i.e., the separation of two estimated frequencies is larger than the measurement errors, for the two frequencies to be distinguishable.
We consider the RD waveforms produced by NR simulations as a gold standard and compare them with analytical RD models. Accurate modelling of the post-merger³ allows for a better estimation of the mass and the spin of the remnant BH (thus allowing for more accurate inspiral-merger-RD consistency tests [17,26,27]).
Details of the fitting procedure and some related discussion are presented in Sec. II. Including up to n = 7 overtones in the RD waveform is necessary to model the post-merger accurately. However, an 8-tone⁴ RD model has 4 × 8 = 32 independent model parameters, which makes performing a Bayesian PE infeasible. The simplest Bayesian PE using overtones can be performed with a minimal RD model comprising a fundamental mode and its first overtone. In this paper, we study this model and refer to it as the 2-tone RD model henceforth.
The 2-tone model is insufficient to describe the entire post-merger accurately [26,27,31], and we discuss it in Sec. III. It is important to start the analysis from an appropriate time t_0 after the merger, such that the recovered QNMs do not have a bias arising from the modelling inaccuracies. In Secs. III and IV we address this issue and provide a computationally cheap recipe to choose the start time for a spectroscopic analysis of a BBH RD using overtones⁵.
Performing the no-hair theorem test with overtones is challenging, both conceptually and from an implementation point of view. Although the addition of overtones improves the accuracy of RD waveform models, whether they can be used for performing a direct no-hair theorem test must be investigated carefully, particularly because the overtone frequencies are closely spaced and a high signal-to-noise ratio (SNR) is required to resolve them. In Sec. V we provide an estimate of the minimum SNR required to resolve the overtones of a BBH system based on Rayleigh's resolvability criterion (the latter providing an optimistic lower bound for the minimum SNR [29]). Finally, in Sec. VI we conclude by discussing some conceptual issues that must be considered while performing BH spectroscopy with overtones.
II. FITTING THE RD WITH OVERTONES
Estimating the amplitudes and the phases of QNMs in a BBH RD from the source frame dynamics is a highly nontrivial problem. In practice, a fit must be performed to calibrate the amplitudes and the phases of a multi-tone RD model against an NR-RD. To validate our fitting procedure and to check the reproducibility of the best fit values obtained for the amplitudes and the phases of an 8-tone RD model, we first compare our fitting results with those in Ref. [27]. This is particularly important because the overtones are short-lived and therefore the fits could be unstable and/or irreproducible. We consider two nearly non-spinning BBH NR simulations from the SXS catalogue [32] as our representative systems: (i) SXS:0305, corresponding to mass ratio q = 1.2, and (ii) SXS:1220, corresponding to q = 4. The spins of the remnant BHs are a_f^{SXS:0305} = 0.69 and a_f^{SXS:1220} = 0.47, respectively. The NR simulations scale with the total mass of the system, which we normalize to unity in this section. We fix the frequency spectrum of the damped sinusoids in Eq. (1) to the GR-QNM spectrum of the remnant BH for various overtones of the l = m = 2 angular mode. We then use a complex linear least-squares fit based on χ² minimisation to obtain the best fit values of the amplitudes and phases for our RD models. χ² is defined as

χ² = Σ_i |h_NR(t_i) − h̃(λ)_i|²,   (2)

where h̃(λ)_i is the model parameterized by λ = {A_{lmn}; ω_{lmn}, τ_{lmn}, t_0} and evaluated at each time step t = t_i. Throughout this study, while we fit the RD at different start times t = t_0, we quote the amplitude extrapolated back in time to t = 0 for ease of comparison. This is done by rescaling the amplitude as A_{lmn} → A_{lmn} e^{(iω_{lmn}+1/τ_{lmn})t_0}. Further, for the sake of simplicity, we assume that all overtones are excited simultaneously; we emphasise that this is a strong assumption in Sec. VI. With these assumptions, we perform the complex linear RD fits, and in Table I we present the best-fit values obtained for the amplitudes and the phases of the QNMs. We confirm that the best fit amplitude values corresponding to SXS:0305 are consistent with those presented in Table 1 of Ref. [27].

[Table I: the best-fit amplitudes and phases of an 8-tone RD model. We perform the fits at the peak of the BBH waveform, t_0 = 0, for SXS:0305 and SXS:1220 and provide the values of the best fit amplitudes and phases. For SXS:0305 these values are comparable to those presented in Ref. [27].]

³ [...] progenitor-BHs. Nonetheless, in this paper we refer to the time at the peak of the waveform as the time of the merger.

⁴ We define a p-tone RD model as that including the n = 0, 1, 2, ..., p−1 overtones. Thus, an 8-tone model includes overtones from n = 0 to n = 7, whereas a 2-tone model includes only the fundamental mode and the n = 1 overtone.

⁵ Our prescription can be easily extended to BH spectroscopy with different angular modes.
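Because the frequency spectrum is held fixed, the fit is linear in the complex coefficients c_n = A_n e^{iφ_n} and can be solved in closed form. The sketch below (our own, demonstrated on synthetic data rather than the NR waveforms actually used in this section) illustrates the idea.

```python
import numpy as np

def fit_qnm_amplitudes(t, h_nr, omegas, taus):
    """Complex linear least-squares fit of a multi-tone RD model with the
    frequency spectrum held fixed: h(t) ~= sum_n c_n e^{-i omega_n t - t/tau_n},
    with c_n = A_n e^{i phi_n} the free (complex) coefficients.
    """
    basis = np.exp(-1j * np.outer(t, omegas) - np.outer(t, 1.0 / taus))
    c, *_ = np.linalg.lstsq(basis, h_nr, rcond=None)
    return np.abs(c), np.angle(c)   # best-fit amplitudes A_n and phases phi_n

# Round trip on synthetic data (illustrative, not NR, numbers):
omegas, taus = np.array([1.0, 0.9]), np.array([10.0, 3.0])
t = np.linspace(0.0, 50.0, 500)
h = 1.0 * np.exp(-1j * omegas[0] * t - t / taus[0]) \
    + 3.0 * np.exp(1j * 0.5) * np.exp(-1j * omegas[1] * t - t / taus[1])
print(fit_qnm_amplitudes(t, h, omegas, taus))  # ~([1., 3.], [0., 0.5])
```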
From Table I, we note that the largest excitation amplitude in both NR simulations considered here corresponds to the n = 4 overtone⁶. The amplitudes of the overtones higher than n = 4 decrease monotonically. Although not conclusive, this hints at the possibility that only a finite number of overtones are excited significantly in a BBH RD.
To quantify the accuracy of our fits, we use the mismatch defined as

M = 1 − C,   (3)

where C is the 'averaged match' defined as

C = ⟨h_NR, h_M⟩ / √(⟨h_NR, h_NR⟩⟨h_M, h_M⟩).   (4)

⁶ We observe that the magnitude of the best fit amplitude first increases, reaches a maximum at n = 4 and then decreases. Here is a speculative explanation for this phenomenon. The first overtones to be excited are the high-n ones, which are excited by sources relatively far from the horizon and therefore their excitation factor is small. On the other hand, low-n overtones are excited by sources near the horizon, so part of the ringdown energy is absorbed and never reaches infinity. The largest excitation occurs for intermediate-n overtones, which are not too close nor too distant from the horizon.

[Figure 2: mismatch of RD models with non-GR-QNM spectrum for SXS:0305. We computed the mismatch (similar to Fig. 1) of the best-fit waveforms corresponding to the 2-tone, 4-tone and 8-tone RD models (overtones up to n = 1, 3, 7, respectively; see legend) with the NR-RD. The shaded bands show the mismatch as a function of time for 100 realisations of the RD models with non-GR-QNM spectra. The frequencies of the overtones (n > 0) are randomly drawn within 2% of the GR-QNM frequencies. The solid curves correspond to the RD models with the GR-QNM spectrum. Note that all three solid curves lie within their respective shaded bands, demonstrating that some non-GR-QNM spectra produce a similar (or better) mismatch.]
Here h_NR represents the NR-RD, while h_M is a p-tone RD model (obtained by truncating Eq. (1) at n = p − 1), and t_0 is the time after the peak of the GW waveform at which we begin our fits.
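The following is a minimal sketch of Eqs. (3)-(4) with a discrete time-domain inner product; the precise form of the averaged match C used in the paper (e.g., any averaging over polarizations) may differ in detail from this stand-in.

```python
import numpy as np

def mismatch(h1, h2):
    """Mismatch M = 1 - |<h1, h2>| / sqrt(<h1, h1><h2, h2>) with a discrete
    time-domain inner product <a, b> = sum_i a_i * conj(b_i). Taking the
    modulus of the overlap is our choice; the paper's 'averaged match' C
    may differ in detail."""
    inner = lambda a, b: np.sum(a * np.conj(b))
    return 1.0 - np.abs(inner(h1, h2)) / np.sqrt(
        np.abs(inner(h1, h1)) * np.abs(inner(h2, h2)))

# Identical waveforms give zero mismatch (up to round-off).
t = np.linspace(0.0, 1.0, 100)
h = np.exp(-1j * 10.0 * t - t)
print(mismatch(h, h))  # ~0.0
```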
In Fig. 1 we show the behaviour of M as a function of t_0 for the p-tone RD models. The top panel shows an overall agreement with Fig. 1 of [27], validating our fitting procedure and confirming the reproducibility of the best-fit amplitude and phase values for SXS:0305. Further, we observe that the behaviours of the mismatch for SXS:0305 and SXS:1220 are qualitatively similar. We find that in both cases the entire post-merger is reproduced accurately (M ∼ 10⁻⁶) by an 8-tone RD model. However, we remark that although with an 8-tone RD model the post-merger signal is accurately replicated [26,27], to interpret the tones as actual BH QNMs one needs to verify that the choice of the spectrum is uniquely specified by GR and that a different choice of the frequency spectrum cannot reproduce the post-merger signal with a similar accuracy. This is crucial to avoid the risk of misinterpreting the fit, especially since the number of parameters of the model is large. Therefore, in Fig. 2 we investigate whether frequencies inconsistent with the GR-QNM spectrum can reproduce the SXS:0305 NR-RD accurately. We fix the frequency and damping time of the fundamental mode in a p-tone RD model, because the frequency of the fundamental mode is required to reproduce the late stages of the RD. However, we allow the frequencies of all the remaining p − 1 overtones to vary randomly within ±2% of their GR-QNM values⁷. We fit for the amplitudes and the phases of the overtones to the NR-RD.
We draw 100 realisations of non-GR-QNM frequency spectra and repeat this analysis on each of them. The shaded bands of Fig. 2 show the mismatch of the 100 realisations of frequency spectra for the 2-tone, 4-tone and 8-tone RD models. The solid lines in the figure correspond to the p-tone RD models (with overtones up to n = 1, 3, 7, respectively) using the GR-QNM frequency spectrum.
We find that the mismatch obtained with a GR-QNM spectrum lies within the shaded bands, showing that non-GR-QNM spectra can reproduce an NR-RD to a similar accuracy as the GR-QNMs. In fact, for each model there exist a few realisations of the non-GR-QNM spectrum that reproduce the NR-RD to a slightly better accuracy than the GR-QNM spectrum. This casts some doubt on the interpretation of the damped sinusoids as BH QNMs⁸.
Moreover, we also find that the best fit amplitudes A n (particularly, for the higher n overtones) are extremely sensitive to small changes in the overtone frequencies. This is shown in Appendix B (cf. Fig. 18).
An 8-tone RD model is impractical for a Bayesian PE because one needs to estimate 8 × 4 = 32 parameters from a few cycles of RD. Therefore, in the rest of this paper, we concentrate on a more feasible 2-tone RD model - including only the fundamental mode and the n = 1 overtone - which is the minimal complexity of an RD model required to perform a no-hair theorem test. It must be emphasised that a 2-tone RD model is insufficient to capture the morphology of the NR-RD close to the peak of the waveform [31], and one is forced to choose an appropriate RD start time t_0. We elaborate on this in Sec. III.
Finally, we investigate the robustness of the fits for a 2-tone RD model by studying the behaviour of the best-fit amplitudes A_{0,1} with the start time t_0 (specifically, t_0/M ∈ [0, 30]) for SXS:0305 and SXS:1220. In Fig. 3, the markers correspond to the values of the amplitudes A_{0,1} obtained by fitting a 2-tone RD model using the GR-QNM spectrum. For both systems, the best-fit amplitude A_0 (shifted back to the merger time) does not vary considerably with the start time of the fit. This indicates that the dominant n = 0 mode is robust to variations of t_0. However, the best fit value of A_1 increases significantly for larger values of t_0, alluding to an instability towards the choice of start time of the fits. In particular, for SXS:0305, A_1 ∼ 1 for t_0 = 0, while A_1 ∼ 4 for t_0 = 25M.

⁷ Note that the difference in frequencies between the fundamental mode and the n = 1 overtone for an SXS:0305-like system is ∼ 2.2% of the l = m = 2, n = 0 QNM frequency.

⁸ The fact that some effective-one-body waveform models use pseudo-modes (i.e., modes which are not QNMs) to reproduce the early RD [25] provides a further hint that these damped sinusoids may just be basis functions producing reliable fits.

[Figure 3: the best fit values of the amplitudes A_0 (green dotted markers) and A_1 (red squared markers) for the 2-tone RD model as a function of the start time of the fit, t_0. For ease of comparison, we report the values of the amplitudes at the merger time, obtained by extrapolating the best fit amplitude at t_0 back to t = 0. Ideally, when the best fit amplitudes are robust to the start time of the fits, one expects the markers to line up at a constant value. We see that while the best-fit amplitude A_0 is robust to the choice of t_0, A_1 increases, indicating an instability towards the choice of start time.]
III. DETERMINING THE RD START TIME FOR BH SPECTROSCOPY
Whether one interprets the damped sinusoids used to fit the RD as physical BH QNMs or merely as basis functions that reproduce the NR-RD, one can test the no-hair theorem by performing a Bayesian PE on all the RD model parameters {A_i, φ_i, ω_i, τ_i} (where i labels the different modes/overtones) of an observed GW event. Such a test of the no-hair theorem requires: a) estimation of frequencies and damping times that is consistent with GR predictions, and b) that the observed frequency spectrum is resolvable. A crucial issue in performing this test is the choice of the start time of the RD analysis after the peak of the GW strain amplitude.
At sufficiently late times after the peak of the waveform, one expects the 2-tone RD model to accurately describe the NR-RD. However, there is a trade-off: on the one hand, performing BH spectroscopy at late times in the RD leads to large uncertainties in the estimation of the frequencies due to the lack of SNR; on the other hand, starting the spectroscopic analysis close to the peak amplitude of the waveform leads to biased estimates of the QNM frequencies due to the inaccuracy of the 2-tone RD model in describing the early RD, e.g. due to missing overtones and possible nonlinear effects in the model. In Sec. III A we estimate the expected uncertainty in the recovery of the QNM frequencies as a function of SNR using a Fisher matrix framework. Subsequently, in Sec. III B we study the effect of modelling inaccuracies on the recovery of the QNM frequencies.
We argue that the optimal time to start a spectroscopic analysis of a BBH RD is the time (after the peak of the strain amplitude) at which the expected bias in the recovery of the QNM frequencies due to modelling error is equal to (or less than) the statistical uncertainty in their recovery. We illustrate the interplay of these two factors in Sec. III C and provide a computationally inexpensive prescription for finding the optimal start time for a spectroscopic analysis of the RD in Sec. IV. Again, we study this problem quantitatively for two representative systems [32]: i) SXS:0305 and ii) SXS:1220.
A. On the statistical error
The SNR ρ of an observed event dictates the uncertainty in the PE due to the detector noise; we refer to this uncertainty as the statistical error in the estimation of the parameter.
For a fixed RD start time, adding overtones can markedly increase or decrease the SNR of the RD signal. This is illustrated in Fig. 4, where we plot the SNR (normalized to the single-mode case) as a function of the amplitude ratio between the two overtones and for various choices of the phase difference δφ. We see that the n = 1 overtone can notably contribute to the SNR when the amplitude ratio is ≥ 1 and δφ ∼ 0. Furthermore, the relative phase difference between the overtones plays a crucial role in the RD SNR. While a counter-phase overtone decreases the overall SNR significantly, an in-phase overtone increases the SNR by a few times (for a given amplitude ratio). This emphasises the importance of including overtones with correct relative phases in any RD analysis.
Next, we compute the expected statistical error σ on the recovery of the QNM frequencies using a Fisher-information-matrix framework. In the large-ρ limit, the product of the SNR ρ and the expected Fisher matrix spread σ_{f_i} in the estimation of the frequency of the i-th QNM is a constant, i.e.,

ρ σ_{f_i} = κ_i,   (5)

where κ_i depends only on the intrinsic parameters of the BBH system [33]. Analytical expressions for ρ and σ_{f_i} are presented in Appendix A, where we extend the previous analysis of Ref. [33] to a 2-tone RD model. Following Ref. [33], in the Fisher-matrix computation we assume a circularly polarized RD signal with two overtones, i.e., A^+_i = A^×_i. We set the absolute phase of the fundamental mode to zero. However, unlike in Ref. [33], we do not neglect the phases of the n = 0, 1 tones. The final analytical expression for σ_{f_i} is cumbersome and is provided in the supplemental Mathematica notebook [34].
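While the analytical expressions are cumbersome, the scaling in Eq. (5) is easy to reproduce with a numerical Fisher matrix in a white-noise approximation. The sketch below, with finite-difference derivatives and a flat noise spectrum, is our own stand-in for the analytic computation of Appendix A.

```python
import numpy as np

def fisher_sigma(model, theta, t, eps=1e-6):
    """White-noise Fisher-matrix spreads for a time-domain template:
    Gamma_ij = Re sum_t d_i h* d_j h (flat noise spectrum absorbed into the
    SNR normalization); sigma_i = sqrt((Gamma^{-1})_ii). Derivatives are
    central finite differences. A sketch, not the paper's analytic result."""
    theta = np.asarray(theta, float)
    derivs = []
    for i in range(theta.size):
        dth = np.zeros_like(theta); dth[i] = eps * max(1.0, abs(theta[i]))
        derivs.append((model(theta + dth, t) - model(theta - dth, t))
                      / (2 * dth[i]))
    gamma = np.real(np.array([[np.sum(di * np.conj(dj)) for dj in derivs]
                              for di in derivs]))
    return np.sqrt(np.diag(np.linalg.inv(gamma)))

# One damped sinusoid, theta = (A, f, tau); sigma_f scales as 1/A, i.e. 1/SNR.
model = lambda th, t: th[0] * np.exp(-2j * np.pi * th[1] * t - t / th[2])
t = np.linspace(0.0, 0.05, 4096)
print(fisher_sigma(model, [1.0, 250.0, 4e-3], t))
```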
In Fig. 5 we study the behaviour of ρσ_{f_0} for a 1-tone RD model as a function of the system's parameters. For simplicity we set the phase of the fundamental mode to zero, so that κ_0 = κ_0(f_0, τ_0) in this case. For a given value of ρ, we find that systems with similar κ_0 yield similar uncertainties in the recovery of the QNM frequency. We also observe that the value of κ_0 depends strongly on the damping time but only weakly on the frequency.
In general, adding extra parameters to a model leads to an increase in the spread of the recovered dominant frequency σ_{f_0} due to the correlations between the parameters (see Appendix B for further discussion). The extent to which the uncertainty in the recovery of f_0 changes between a 1-tone and a 2-tone RD model depends on the intrinsic parameters of the BBH system. For simplicity, we set the phase of the dominant mode to zero⁹, but keep the relative phase difference δφ between the n = 1 and n = 0 tones; therefore, in this case κ_i = κ_i(f_0, τ_0, f_1, τ_1, A_1/A_0, δφ). We calculate κ_i for a 2-tone RD model calibrated to SXS:0305 and SXS:1220 using an analytical Fisher matrix calculation. We find a strong dependence on the amplitude ratio and the relative phase difference between the two overtones. For fixed values of ρ and of the final mass and spin of the BH remnant, we find that σ_{f_0} is maximum when the relative phase difference δφ is nπ/2 (where n is an odd integer) and minimum when δφ is an integer multiple of π.

⁹ We checked that this assumption does not change the final result while simplifying the final expression substantially.

[Figure 5 caption: notice that the value of κ_0 is largely determined by the damping time τ_0 of the mode and has a much weaker dependence on the frequency. From this plot one can read the value of κ_0 for a given BBH system for a quick estimate of the expected uncertainty in recovering the frequency for a given RD SNR.]
Using these values of κ_i, we study the variation of σ_{f_i} as a function of the RD SNR ρ in Fig. 6. We plot the relationship between σ_{f_i} of the recovered QNM frequencies and the signal strength ρ for the two systems, SXS:0305 and SXS:1220, scaled to M_f = 68.5 M_⊙. As expected, the uncertainty σ_{f_1} in the recovery of the QNM frequency of the n = 1 overtone is much larger than σ_{f_0} for a given value of ρ. Comparing the left panels (1-tone) to the right panels (2-tone) of Fig. 6, we see that for SXS:0305, including the n = 1 overtone in the RD model does not change the statistical error on the fundamental frequency significantly, unlike for SXS:1220. This is also seen in Table II, where we present the values of κ_i obtained for the 1-tone and the 2-tone RD models. The uncertainty in the frequency estimate of the fundamental mode using a 2-tone RD model broadens and - depending on the parameters of the BBH system - this may significantly worsen the estimation of f_0. This must be considered when performing an inspiral-merger-RD consistency test, which can be performed with a single QNM.
Here we highlight that the σ_{f_i} predicted by the Fisher matrix sets a lower bound on the uncertainty in the estimation of the frequency f_i. It gives an accurate estimate of the uncertainty only in the high-SNR limit and for Gaussian noise. In reality, when performing a Bayesian PE on the GW data from the current LIGO/Virgo detectors, both of these assumptions may be violated. We compare the 90% credible interval presented in Fig. 5 of [17] for the GW150914 signal to the Fisher matrix calculations using the approximate relation ∆^{90%} ≈ 1.64 × σ^{Fisher} ¹⁰ [35-37]. We see that the Fisher matrix underestimates the errors considerably¹¹. For instance, at a start time t_0 ∼ 10M, ∆^{90%} f_0 ∼ 18 Hz [17], whereas the Fisher matrix estimates ∆^{90%} f_0 ∼ 10.3 Hz. Therefore, in the rest of the paper we choose to consider 3σ^{Fisher}_{f_i} as our estimate for the uncertainty in the recovery of the frequencies, to account for the larger spread seen in PE.
B. On modelling systematic errors
The statistical uncertainty decreases as the SNR of the signal increases, and pushing the start time of the analysis closer to the merger increases the SNR in the RD exponentially. However, describing a BBH RD close to the merger with a 2-tone RD model introduces inaccuracies, leading to biased estimates of the QNM frequencies. In this section, we study the biases in the recovered QNM frequencies as a function of the start time for a spectroscopic analysis of RDs using the 1-tone and 2-tone RD models. We use the mismatch [defined in Eq. (3)] as a measure to quantify the inaccuracies of the RD model. This is purely a mathematical measure and does not include any statistical errors due to noise in the detector. Consequently, the mismatch quantifies the bias in the recovered QNM frequencies occurring solely due to modelling inaccuracies as a function of the start time of the fit.
We consider an n-tone RD model, allowing the frequencies of the damped sinusoids to deviate from the QNM frequencies by a fractional amount α_i. We refer to this model as the modified n-tone RD model; it is given by

h = Σ_n A_n e^{iφ_n} e^{−iω_n(1+α_n/100)t} e^{−t/τ_n},   (6)

where n is the overtone index and α_i is the relative (percent) deviation of the frequencies from the GR-QNM spectrum, ω_n ≡ ω_{lmn}. Ideally, should the above n-tone RD model match the NR-RD perfectly, we would expect α_n = 0, i.e., zero bias in the recovered QNM frequencies. A non-vanishing value of α_n quantifies the systematic error of a 2-tone model in reproducing an NR-RD.

¹⁰ The n−σ spread in Gaussian noise is given by ∆p ≈ n × σ^{Fisher}, where p is the probability of falling inside the n−σ confidence region: e.g., n = 1.64 for p = 90% and n = 3 for p = 99.7%.

¹¹ We also compared the Fisher matrix computation to a PE performed on a 1-tone RD injection in zero noise. At a signal strength of ρ = 15, the Bayesian PE (with flat uninformative priors) estimates ∆^{90%} f_0 ∼ 8 Hz, while the Fisher matrix calculation predicts ∆^{90%} f_0 ∼ 6 Hz. Therefore, the Fisher matrix calculation agrees with the Bayesian PE in a zero-noise realisation up to ∼ 0.8% of the dominant-mode frequency.
We first consider the 1-tone RD model comprising only the dominant l = m = 2, n = 0 mode. The issue of the start time for a spectroscopic analysis using a 1-tone RD model has been studied in the past using various techniques [17,24,38-40], and we revisit this question using a mismatch analysis. We use the 1-tone RD model given by Eq. (6) and fit for the amplitude and phase of the n = 0 tone as a function of α_0. In Fig. 7 we show the mismatch as a function of α_0 for various choices of the start time t_0. Each point on the curves in Fig. 7 corresponds to a 1-tone RD model with the frequency of the damped sinusoid set to ω = ω_0(1 + α_0/100). The different curves in this figure correspond to various choices of t_0, and the minimum of each curve gives the value of α_0 that best models the NR-RD. Therefore, the value of α_0 that minimises the mismatch quantifies the expected bias in the recovery of the frequency.
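The bias extraction behind Fig. 7 amounts to a one-dimensional scan over α_0, refitting the complex amplitude at each value. The sketch below is our own illustration on synthetic data and uses a squared-residual proxy in place of the mismatch.

```python
import numpy as np

def bias_alpha0(t, h_nr, omega0, tau0, alphas=np.linspace(-20, 5, 251)):
    """Scan the fractional frequency deviation alpha_0 (in percent) of a
    1-tone model, refitting the complex amplitude at each alpha_0, and
    return the deviation minimizing a squared-residual proxy for the
    mismatch -- the estimated bias."""
    best = (np.inf, 0.0)
    for a in alphas:
        b = np.exp(-1j * omega0 * (1 + a / 100.0) * t - t / tau0)
        c = np.vdot(b, h_nr) / np.vdot(b, b)      # least-squares amplitude
        res = h_nr - c * b
        mm = np.real(np.vdot(res, res)) / np.real(np.vdot(h_nr, h_nr))
        if mm < best[0]:
            best = (mm, a)
    return best[1]

# Synthetic check: inject a tone 5% below omega0; the scan recovers ~ -5.
omega0, tau0 = 1.0, 10.0
t = np.linspace(0.0, 60.0, 600)
h = np.exp(-1j * omega0 * 0.95 * t - t / tau0)
print(bias_alpha0(t, h, omega0, tau0))  # ~ -5.0
```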
From Fig. 7, we note that the mismatch curve for a start time t_0 = t_peak has a bias corresponding to α_0 = 15 (resp. 12) for SXS:0305 (resp. SXS:1220). We observe that as t_0 is pushed to later times, the mismatch between the RD model and the NR-RD decreases, indicating that the RD model is more accurate at later stages in the RD. As expected, we see that at later times both the bias and the mismatch decrease. Note that the minima of the mismatch curves occur at negative values of α_0 because the instantaneous frequency in a BBH RD increases at early times and reaches the QNM frequency of the n = 0 tone only at late times.
We conduct a similar study for a 2-tone RD model and present the results in Figs. 8 and 9. Figure 8 corresponds to fitting the NR-RD of SXS:0305, while Fig. 9 corresponds to SXS:1220. Each panel corresponds to a different choice of start time and shows 50 level contours of the mismatch, with the colour-bars indicating the value of M. For each start time, the red cross indicates the values of α_0 and α_1 that correspond to the minimum mismatch. Therefore, the distance¹² of the red cross from the axes is the bias in the estimated parameters. If the 2-tone RD model were to match the NR-RD perfectly, the red cross would coincide with the intersection of the axes (i.e., α_0 = α_1 = 0). From Figs. 8 and 9 we confirm that a larger value of t_0 produces a smaller bias. Furthermore, we see that for a later choice of the start time (e.g., t_0 = 15M), the contours are nearly vertical, indicating insensitivity towards α_1. The importance of the subdominant overtone decreases with time as a consequence of its short damping time (compared to the fundamental mode). While a large value of t_0 ensures an unbiased estimation of the subdominant overtone frequency, one would require a large SNR to estimate the frequencies accurately from the RD signal at such late start times.

¹² Note that the sensitivity towards f_0 and f_1 is captured by the projection of the mismatch ambiguity contours onto the respective axes, while the bias is captured by the distance of the red cross to the axes.
Further, we note that the mismatch of the 2-tone model is better than the fundamental mode only model, implying that the former provides a better representation of the RD signal. Also, from Figs. 7, 8 and 9 we find that for q = 4 the 2-tone model becomes reliable for an 13 From the perspective of PE, however, it is essential to investigate the consequences of adding new degrees of freedom in the model. earlier value of t 0 compared to the q = 1.2 case. This is in agreement with predictions from the perturbation theory, which is valid description even at earlier times if one of the bodies in a BBH system is much smaller than the other. We plot the bias obtained as a function of the start time for α 0 and α 1 in the two panels in Fig. 10. We notice that for SXS:1220 the bias in α 1 does not approach α 1 = 0 monotonically for late times. Therefore, to investigate this we perform this analysis for 2 more system corresponding to q = 2, 3 using SXS:0169 and SXS:0030 respectively. First, we note that the magnitude of bias α min 0 (t) as a function of the start time in the recovery of n = 0 tone's frequency is similar across the 4 BBH systems with q ∼ {1, 2, 3, 4}, However, the bias in the recovery of frequency of the n = 1 tone shows a strong and systematic dependence on the mass ratio of the BBH system. Larger the mass ratio, smaller is the bias obtained for a given start time of fitting. We see that for SXS:1220 the bias is small even when we start the fit from t 0 = t peak and fluctuation in α 1 for SXS:1220 seems to arise from numerical noise.
C. Determining the RD start time
As discussed, starting the spectroscopic analysis of BH RD close to the merger leads to a biased estimation of the QNM frequencies while starting the analysis too late depletes the SNR and results in a large statistical error on the recovered QNM frequencies. We define the optimal time to start the analysis as the time at which the expected statistical spread is comparable to the bias. In this section, we use the results presented in Sec. III B on the expected bias and the estimate of statistical error computed in Sec. III A to provide a computationally . The x-axis shows the relative percentage deviation of frequency from the GR-QNM spectra. The value of α0 that corresponds to a minimum mismatch provides the estimate of bias incurred due to the inaccuracies of 1-tone RD model to describe the NR-RD for a given start time. Note that the value of mismatch corresponding to α minimum is extremely small for large values of t0.
cheap prescription for choosing an optimal start time for a BH spectroscopic analysis. We illustrate this procedure on NR-RDs corresponding to SXS:0305 and SXS:1220 waveforms. First, we demonstrate this procedure on the 1-tone RD model in Fig. 11. We compute ρ as a function of start time t 0 numerically and normalise it such that the SNR for a start time t 0 = 10M is 8.5. Using ρ(t) in Eqs. 5 we then estimate the expected spread in the 3σ credible interval as a function of the start time.
Next, we estimate the bias as a function of start time using Fig. 7. Finally, in Fig. 11 we plot the expected spread in the 90% credible interval for a RD and the expected bias as a function of the start time. The optimal time to start the spectroscopic analysis in RD is when the expected spread in the 90% credible interval is equal to the expected bias.
To check the sensitivity of the optimal start time on the intrinsic parameters of the BBH, we apply the same procedure on SXS:1220 which corresponds to a q = 4 BBH system. For this system, we find that the optimal time to start the analysis is as early as 5M after the peak of the waveform. Therefore, the optimal start time depends on both the system's intrinsic parameters as well on the SNR contained in the BBH RD. As a general rule of thumb, one expects that with increasing mass ratio, the post-merger relaxes to a quasi-linear configuration quicker, and therefore, the QNM description is accurate. Consequently, the optimal start time for a given SNR is closer to the waveform peak for higher mass ratio BBH systems. This picture is consistent with Fig. 11. We implement a similar procedure for a 2-tone RD model. The left panel of Fig. 12 corresponds to the analysis for a GW150914-like signal. Again, the optimal start time for a spectroscopic analysis is when the estimated biases in the recovered QNM frequencies of n = 0 and n = 1 are smaller than or equal to their estimated credible interval, i.e., where t opt fi is the time at which the bias in f i equals the spread ∆f i . From Fig. 12 we find that the optimal start time for performing a 2-tone analysis is t 0 ∼ 4M for a GW150914-like system. Note that this value of t 0 is smaller than in the single-mode case, showing that the inclusion of an overtone allows to start the RD analysis earlier. In the right panels of Fig. 12, we note that for SXS:1220 one can start the analysis at the peak of the waveform, i.e. t 0 = 0. If the SNR in the RD increases, the expected spread in ∆f i decreases while the bias is independent of the SNR; therefore, for a higher SNR RD a later optimal time is expected. From the bottomleft panel of Fig. 12, for an equal mass BBH merger like GW150914 with ρ = 5ρ GW150914 we predict that the optimal time to start a spectroscopic analysis with a 2-tone RD model is at t 0 ∼ 14M .
IV. PRESCRIPTION FOR CHOOSING THE START TIME
Here we provide a summary of the prescription for finding the optimal RD start time after the peak of the strain amplitude in order to perform a reliable spectroscopic analysis of the RD. We stress that this procedure applies to any multi-mode (including single-mode) RD analysis, both for overtones and different angular modes.
1. For each GW event, produce an NR/NR-calibrated inspiral-merger-RD waveform corresponding to the maximum-likelihood parameter values obtained by PE with a full inspiral-merger-RD waveform approximant.
2. From this reference inspiral-merger-RD signal, compute the SNR contained in the post-merger as a function of the start time, ρ(t0). Using ρ(t0), compute the statistical uncertainty in recovering the frequencies, σ_fi(t0), through a Fisher-matrix formalism (see Eq. 5).
3. To assess the systematic bias incurred due to modelling inaccuracies, for different start times t0 in the NR-RD obtained in step 1, compute mismatch ambiguity contour plots (similar to Fig. 8) using a multi-tone (or multi-mode) RD model that allows for frequency deviations α_i from the GR-QNM spectrum, as in Eq. 6. The point of minimum mismatch α_i^min in the α0-α1 plane gives an estimate of the bias in the recovered frequencies as a function of t0. Thus, α_i^min(t0) is the estimate of the bias as a function of the start time.
4. The optimal time to start a spectroscopic analysis is the time t0 at which the largest of the biases is equal to the spread of the corresponding frequency. Therefore, the optimal time t0^opt is the earliest time such that this relation holds for all overtones in the RD model; a minimal code sketch of this prescription is given below. This procedure applies to a generic RD model with any number of modes/overtones.
[Figure 11 caption: The amplitude of the NR-RD is scaled such that at t0 = 10M the RD has an SNR ρ ≈ 8.5. For SXS:0305 (resp. SXS:1220) we see that the optimal time to start an RD analysis is t0 ∼ 8M (resp. t0 ∼ 5M).]
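The full prescription can be condensed into a few lines once the spread Δf_i(t0) (step 2) and the bias estimates α_i^min(t0) (step 3) are tabulated on a common grid of start times. The sketch below is a minimal illustration under that assumption; the toy input curves are placeholders standing in for the actual Fisher and mismatch computations.

```python
import numpy as np

def optimal_start_time(t0_grid, spread, bias):
    """Earliest start time at which, for every tone i, the estimated bias
    in f_i is no larger than the statistical spread Delta f_i.
    spread, bias: arrays of shape (n_tones, len(t0_grid))."""
    ok = np.all(np.abs(bias) <= spread, axis=0)   # condition holds for all tones
    idx = np.argmax(ok)                           # index of the first True entry
    if not ok[idx]:
        raise ValueError("bias never falls below the statistical spread")
    return t0_grid[idx]

# Toy inputs standing in for steps 2 and 3 of the prescription:
t0_grid = np.linspace(0.0, 20.0, 201)
spread = np.vstack([0.02 * np.exp(t0_grid / 15.0),   # Delta f_0(t0), grows as SNR depletes
                    0.05 * np.exp(t0_grid / 10.0)])  # Delta f_1(t0)
bias = np.vstack([0.10 * np.exp(-t0_grid / 3.0),     # |alpha_0^min(t0)|, decays with t0
                  0.20 * np.exp(-t0_grid / 2.0)])    # |alpha_1^min(t0)|

print("optimal t0 =", optimal_start_time(t0_grid, spread, bias), "M")
```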
V. MINIMUM SNR FOR RESOLVABILITY OF THE OVERTONE FREQUENCIES
In this section we discuss another subtlety in the modelling of the RD with overtones. We assume that the 2-tone model is accurate and investigate whether we can resolve the two overtone frequencies from one another. Resolvability is particularly challenging for overtones because their QNMs are spaced much more closely in frequency than the angular modes. In Fig. 13 we illustrate this with an example: for a BH with mass M_f = 68.5 M_⊙ and spin a_f = 0.69, the difference between the frequencies of the n = 0 and n = 1 overtones is ∼ 5 Hz, i.e., f0 − f1 ≤ 2% of the fundamental QNM frequency (Footnote 14). The smaller the difference between the frequencies, the harder it is to resolve them. To address the minimum SNR required to resolve the two overtone frequencies, we use Rayleigh's resolvability criterion [29], demanding

max(σ_f0, σ_f1) < |f0 − f1|,  i.e.  ρ > ρ_crit ≡ max(ρσ_f0, ρσ_f1)/|f0 − f1|,

where ρσ_fi is evaluated with a Fisher-matrix formalism. Note that since the overtones can have significant overlap with each other, we include the cross-terms in our calculations, unlike for the case of angular modes [41] (Footnote 15). The details of the Fisher-matrix calculations are outlined in Appendix A.
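Assuming the standard form of the criterion given above, the critical SNR is a one-line computation. In the sketch below, the frequencies mimic the ∼ 5 Hz gap of a GW150914-like remnant, and the ρσ_f products are placeholders chosen only so that the output reproduces the ρ_crit ∼ 30 scale quoted in the text.

```python
def critical_snr(f0, f1, rho_sigma_f0, rho_sigma_f1):
    """Minimum SNR for the two tone frequencies to satisfy Rayleigh's
    criterion sigma_f0, sigma_f1 < |f1 - f0|, using the Fisher-matrix
    fact that the product rho*sigma_f is independent of the SNR."""
    return max(rho_sigma_f0, rho_sigma_f1) / abs(f1 - f0)

# Illustrative numbers: a ~5 Hz overtone gap; rho*sigma_f values are placeholders.
print(critical_snr(f0=250.0, f1=245.0, rho_sigma_f0=75.0, rho_sigma_f1=150.0))
# -> 30.0, i.e. rho_crit ~ 30
```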
In Fig. 14 we show the minimum SNR ρ_crit required to resolve the n = 0 and n = 1 overtones for a BH with mass M_f = 68.5 M_⊙, for two values of the spin: a_f = 0.69 and a_f = 0.48. The relative phase difference between the n = 0 and n = 1 overtones is set to the best-fit value of δφ obtained by fitting the NR-RD of SXS:0305 and SXS:1220, respectively (using the same procedure described in the previous sections). Regardless of the amplitude ratio between the overtones, Rayleigh's resolvability criterion yields a minimum SNR ≥ 30 for a comparable-mass BH binary. Further, with the best-fit values of the amplitudes and phase calibrated against SXS:0305 and SXS:1220 (scaled to M_f = 68.6 M_⊙), we obtain a minimum SNR ρ_min ∼ 30 and ρ_min ∼ 64, respectively. Note that including the cross terms in the Fisher-matrix calculation has a significant effect on the estimate of the minimum SNR required for overtone resolvability. We find that, depending on the BBH system, the cross terms can either increase (as in the case of SXS:0305) or decrease (as in the case of SXS:1220) the minimum SNR by roughly a factor of 2.
The value of ρ_crit depends crucially on the amplitude ratio and the relative phase difference δφ between the overtones. We illustrate this for a BH with M_f = 68.5 M_⊙ and a_f = 0.69 in Fig. 15.
[Figure 13 caption: Frequency difference between overtones. This figure illustrates the difference in frequencies of the first few overtones as a function of M_f for a BH with spin a_f = 0.69. The legend Δn = p − q denotes the difference in frequency between the p-th overtone and the q-th overtone. Note that for a GW150914-like system the frequency difference between the n = 0 and n = 1 overtones is only ∼ 5 Hz, which poses challenges for resolvability.]
We note that while the minimum SNR for resolvability is about ρ ∼ 28 for an optimal combination of A1/A0 and δφ, it is as large as ρ ∼ 100 over a large region of parameter space. When the tones have a relative phase difference δφ = π, the SNR required to resolve the overtone frequencies is ∼ 4-5 times smaller than for δφ = 0, 2π. Further, it is worth mentioning that our results are based on a Fisher-matrix estimation of the statistical errors for a 2-tone model. In a realistic PE we expect the errors to be larger; therefore the minimum SNR required for resolvability must be considered a lower bound. Finally, for this study the spin of the BH is set to a_f = 0.69. The relative difference in the QNM frequencies between overtones decreases for a highly spinning BH, so the resolvability of overtones becomes more demanding if the final BH spins rapidly.
VI. DISCUSSION
In the previous sections we focused on investigating the technical aspects of performing no-hair theorem tests and BH spectroscopy with overtones. In this section, we discuss two conceptual issues with the interpretation of the frequency spectrum of the RD fits as QNMs of the BH. We conclude this section by comparing the RD analysis done with angular modes and with overtones.
[Figure 15 caption: Here we show the minimum SNR required to resolve the overtone frequencies as a function of the relative phase difference δφ and the amplitude ratio between the overtones. We find that the minimum SNR needed to resolve the n = 1 from the n = 0 QNM frequency depends strongly on the relative phase difference between them.]
Much of the difficulty in using overtones to test the no-hair theorem is probably due to the fact that overtones decay rapidly compared to the angular modes. The half-life of the overtones in the RD is less than half a wave-cycle, as illustrated in Fig. 16 and in Table III. Figure 16 shows the RD waveform (both polarizations and overall amplitude) for simulation SXS:0305, starting from the peak of the GW waveform amplitude. Assuming the RD description begins at the peak of the waveform [27] and assuming that all overtones start simultaneously, the vertical lines in Fig. 16 mark the half-lives of the first few overtones (specifically, up to n = 6). Notice that beyond the n = 1 overtone, the tones decay within a fraction of a wave cycle.
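The half-life bookkeeping behind this statement is simple arithmetic: t_half = τ ln 2, and the number of wave cycles elapsed in one half-life is f t_half. The complex frequencies below are rough illustrative Kerr l = m = 2 values for spin a ∼ 0.69, not the entries of Table III.

```python
import math

# Rough illustrative Kerr l = m = 2 QNM values (M * omega) for spin a ~ 0.69;
# these are approximate numbers, not taken from the paper.
modes = {0: complex(0.53, -0.081), 1: complex(0.52, -0.25), 2: complex(0.50, -0.43)}

for n, Momega in modes.items():
    tau = 1.0 / abs(Momega.imag)                      # damping time, units of M
    t_half = tau * math.log(2.0)                      # amplitude half-life
    cycles = Momega.real / (2.0 * math.pi) * t_half   # wave cycles per half-life
    print(f"n={n}: half-life = {t_half:5.2f} M  ({cycles:.2f} cycles)")
```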
The short lifetime of the overtones (which typically contribute for less than a cycle) raises a crucial concern for modelling the RD with overtones. The models used in this paper, as well as in previous work [26,27], make the simplifying assumption that all the overtones are excited simultaneously. This, however, is a strong assumption, and there is no first-principles reasoning (for instance, from BH perturbation theory) as to why this must be the case. This assumption has been made widely in the literature when modelling the RD with angular modes, but it is better justified in that context because the angular modes live longer: for a multimode RD analysis one can pick a start time late enough that all considered modes have been excited. In contrast, the overtones are short-lived, posing the risk that, if the overtones do not start simultaneously, some of them may have decayed to very small amplitudes while other overtones are still being excited. This leads to several complications in analysing and interpreting an overtone-based RD analysis. Moreover, from a technical standpoint, naively allowing different modes to start at different times introduces unphysical discontinuities in the waveform or in its time derivatives.
B. On the presence of nonlinearity in the source frame
By fitting the NR waveforms with a superposition of QNM overtones, recent papers [26,27] showed that one can model the RD arbitrarily close to the peak amplitude of the waveform. However, this implies that the waveform close to the peak is fully described by linear perturbation theory, i.e., without taking the non-linearities of the theory into account.
On the other hand, the presence of non-linearities in the source frame, particularly within a sphere of radius about 5M around the final remnant BH of a BBH spacetime, has been seen in previous work [38]. By studying the source-frame evolution, it was concluded that one needs to wait for ∼ 16M after the peak of the waveform to start the perturbative analysis in the asymptotic frame. Note that the analysis in [38] characterizes the Kerr isometry in the source frame close to the BH and therefore does not rely on the QNM decomposition into modes and overtones performed in the asymptotic frame. While the results in [38] are not susceptible to the risk of overfitting or to assumptions like the simultaneous excitation of all overtones, it must be noted that there are error bars in mapping the gauges close to the BH to those at asymptotic infinity.
However, these two results seem mutually inconsistent and raise the question of whether we are interpreting the results of an RD fit with overtones correctly. It is peculiar that the non-linearity observed in the source-frame dynamics close to the BH does not give rise to features distinct from the predictions of BH linear perturbation theory at asymptotic infinity. Although addressing this issue is beyond the scope of this work, we emphasize that it must be investigated carefully in a future study.
C. Comparison of overtones and angular modes
Resolving the overtones requires a higher SNR than resolving the angular modes, due to the smaller frequency separation (see Footnote 14). However, for nearly-equal-mass BBHs the angular modes other than the dominant l = m = 2 QNM are weakly excited. In such cases, overtones provide an opportunity to perform no-hair theorem tests. An interesting problem concerns identifying the BBH systems for which BH spectroscopy with angular modes is better than with overtones (or possibly identifying the optimal combination of angular modes and overtones to be included in the RD template). While our Fisher-matrix analysis indicates that the minimum SNR for resolvability of two overtones (ρ_min ∼ 30) is marginally within the capability of LIGO/Virgo at design sensitivity, a more realistic Bayesian PE in the presence of detector noise may require a louder RD signal (ρ ∼ 100) to perform BH spectroscopy with overtones. These values of SNR in the RD will be achievable with future GW detectors like LISA [43] and third-generation (3G) ground-based GW interferometers (the Einstein Telescope and the Cosmic Explorer [44][45][46]). LISA is a proposed space-based GW detector sensitive to super-massive BBH coalescences, for which the BBH population is expected to have a wide range of progenitor mass ratios and spins. This is to be contrasted with the ground-based GW detectors, which are sensitive to stellar-origin BHs whose population is largely comprised of binary BHs of comparable mass. A large fraction of the RD signals seen by LISA are expected to be generated by unequal-mass binaries; for these systems the angular modes are sufficiently excited to allow for accurate BH spectroscopy. On the other hand, the 3G GW detectors are likely to detect nearly-equal-mass BBH coalescences at a higher SNR, and therefore the overtones can be instrumental in performing a no-hair theorem test with ground-based detectors.
Finally, in the case of neutron-star-BH coalescences the mass ratio departs significantly from unity, and we therefore expect the angular modes to be sufficiently excited. It would be interesting to study the QNM excitation in these systems and check the role of overtones in the spectroscopy of a neutron-star-BH merger.
D. Future work
In this study we have considered non-spinning BBH systems with mass ratios q = 1.2, 4. In the future we plan to perform a more systematic study in order to understand the role of overtones for different mass-ratio systems. Also, the initial spins of the BBH system can have a significant influence on the excitation factors, as shown in Refs. [21,31].
Furthermore, to quantify biases we have considered the systematic errors on the QNM frequencies without investigating possible systematics in the recovery of the damping times. Our prescription for choosing the optimal RD start time can easily be extended to include biases in the damping times. Adding more modes, or a mixture of angular modes and overtones, to the RD model is another interesting extension of this work. Finally, since the SNR required for overtone resolvability is expected to be higher than predicted by the Fisher-matrix estimates (optimistically ρ ∼ 30), BBH RD events that allow for a spectroscopic analysis with overtones may be beyond the reach of the LIGO/Virgo detectors at their current sensitivity. Therefore, it would be pragmatic to explore the possibility of stacking multiple BBH events and validating the BH QNM spectrum statistically using the RD overtones across the detected GW events.
In conclusion, we reiterate that for equal-mass BBH systems, where the angular modes are not sufficiently excited, overtones provide a promising prospect for performing no-hair theorem tests. However, to resolve the overtone frequencies from a single BBH observation (and therefore perform actual BH spectroscopy) we estimate that an RD SNR of ∼ 30 is required. This is an optimistic lower bound, and it is likely that resolving two overtones in the GW data using a fully Bayesian PE requires a much higher SNR. On the other hand, since population studies indicate a large probability for equal-mass stellar-origin BBHs [47], developing a robust overtone analysis framework is extremely valuable for the current and next-generation ground-based detectors. We thank John Veitch and others for interesting discussions. We acknowledge support provided under the European Union's H2020 ERC Starting Grant agreement no. DarkGRA-757480 and by the Amaldi Research Center funded by the MIUR program "Dipartimento di Eccellenza" (CUP: B81I18001170001).
Appendix A: 2-mode error analysis with Fisher matrix
In this appendix we extend some analytical results presented in Ref. [33] on a multimode error analysis of the RD using the Fisher information matrix. Here we provide the main results and refer the reader to Ref. [33] for details.
We write the RD waveform in the time domain as

h_+(t) = Σ_i A_i^(+) e^(−t/τ_i) Y_i cos(ω_i t + φ_i^(+)),   h_×(t) = Σ_i A_i^(×) e^(−t/τ_i) Y_i sin(ω_i t + φ_i^(×)),

where ω_i = 2πf_i is the angular frequency of the i-th tone, τ_i = Q_i/(π f_i) is the corresponding damping time (with Q_i being the quality factor) and Y_i is the spheroidal harmonic corresponding to the angular numbers of the i-th mode. The measured waveform is h = h_+ F_+ + h_× F_×, where F_+,× are the detector's antenna pattern functions. As in Ref. [33], we average over the pattern angles. We assume A_i^(×) = 0 (Footnote 16) and φ_× = φ. Therefore, the templates have seven parameters: A_0, A_1, f_0, f_1, Q_0, Q_1, and φ. Note that we keep f_i and Q_i as independent parameters in the RD model, without assuming the GR relations that allow one to write them in terms of the BH mass and spin only.
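A minimal implementation of this template, under the same assumptions (real, angle-averaged harmonic factors); the two-tone parameter values in the usage line are placeholders.

```python
import numpy as np

def ringdown(t, A, f, Q, phi, Y=None):
    """Multi-tone ringdown template:
    h(t) = sum_i A_i Y_i exp(-t/tau_i) cos(omega_i t + phi_i),
    with omega_i = 2*pi*f_i and tau_i = Q_i / (pi * f_i)."""
    A, f, Q, phi = (np.asarray(x, dtype=float) for x in (A, f, Q, phi))
    Y = np.ones_like(A) if Y is None else np.asarray(Y, dtype=float)
    tau = Q / (np.pi * f)
    omega = 2.0 * np.pi * f
    h = (A * Y)[:, None] * np.exp(-t[None, :] / tau[:, None]) \
        * np.cos(omega[:, None] * t[None, :] + phi[:, None])
    return h.sum(axis=0)

# Placeholder 2-tone example (parameters A0, A1, f0, f1, Q0, Q1, phases):
t = np.linspace(0.0, 0.05, 2048)                     # seconds after the peak
hp = ringdown(t, A=[1.0, 1.2], f=[250.0, 245.0], Q=[3.2, 1.1], phi=[0.0, 0.0])
```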
If we consider two angular modes (i.e., modes with the same overtone number n but different angular numbers (l, m)), the orthogonality conditions of the spheroidal harmonics imply that the angular scalar product between two modes is practically zero [33]. However, if the two modes correspond to different overtones with the same angular numbers, their angular scalar product is almost unity. In the following calculation we consider both cases.
We define the Fisher matrix Γ_pq = (h̃_,p , h̃_,q), where h̃ is the Fourier transform of the time-domain RD waveform, h̃_,p is its derivative with respect to the p-th parameter, and the scalar product is defined as in Ref. [33]. Using standard techniques [48], we compute the covariance matrix and the (normalized) expected errors in the recovery of the model parameters. In particular, we focus on the combination ρσ_fi.
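A numerical sketch of this construction, assuming a white-noise scalar product (so that, by Parseval's theorem, the frequency-domain integral reduces to a time-domain one with the noise level set to unity) and central finite differences for the parameter derivatives; the one-tone template in the usage example is a placeholder.

```python
import numpy as np

def fisher_matrix(template, theta, t, eps=1e-6):
    """Gamma_pq = (dh/dp, dh/dq), with the scalar product approximated by a
    white-noise, unit-PSD time-domain integral (a, b) = 2 * int a(t) b(t) dt,
    and central finite differences for the parameter derivatives."""
    theta = np.asarray(theta, dtype=float)
    derivs = []
    for p in range(theta.size):
        dp = eps * max(1.0, abs(theta[p]))
        up, dn = theta.copy(), theta.copy()
        up[p] += dp
        dn[p] -= dp
        derivs.append((template(t, up) - template(t, dn)) / (2.0 * dp))
    derivs = np.array(derivs)
    return 2.0 * np.trapz(derivs[:, None, :] * derivs[None, :, :], t, axis=-1)

def sigma(gamma):
    """1-sigma statistical errors: square roots of the covariance diagonal."""
    return np.sqrt(np.diag(np.linalg.inv(gamma)))

# Usage with a placeholder 1-tone template, parameters (A, f, Q, phi):
tmpl = lambda t, th: th[0] * np.exp(-np.pi * th[1] * t / th[2]) \
                     * np.cos(2.0 * np.pi * th[1] * t + th[3])
t = np.linspace(0.0, 0.05, 4096)
print(sigma(fisher_matrix(tmpl, [1.0, 250.0, 3.2, 0.0], t)))
```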
Next, in the case of two overtones of the same angular mode, the additional cross-terms in the computation of ρ and σ_f are non-negligible. The final result for two different overtones is cumbersome and is provided in a supplemental Mathematica notebook [34]. In Fig. 17 we present the correlation matrix corr(A_n, A_n') = r_nn' for the amplitudes A_n of the 8-tone RD model used in this study. The correlation matrix corresponds to the complex fits performed from t = t0 = 0.
Note that the correlation factors r_nn' of the amplitudes of overtones with n ≥ 2 are increasingly large: the higher the n, the larger the correlation with the neighbouring values of n. Disregarding the autocorrelation factors, for a given overtone amplitude A_n the highest correlation occurs with its closest neighbours A_n±1, i.e., with the elements located at the cells n±1 off the diagonal. In particular, the line joining the cells r_nn±1 increases in intensity from r_32 = r_23 ≈ 0.97 to r_67 = r_76 ≈ 0.99. We note a strong correlation between the amplitudes of higher overtones. This hints at an instability of the best-fit amplitude values under the addition of an overtone, casting doubt on whether the damped sinusoids used to model the NR RD can be interpreted as QNMs of a BH.
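The correlation factors quoted here are standard normalised covariances. A minimal sketch of how they follow from the covariance matrix of an amplitude fit; the covariance below is a random placeholder standing in for the output of an actual 8-tone fit.

```python
import numpy as np

def correlation_matrix(cov):
    """r_nm = cov_nm / (sigma_n * sigma_m): normalised covariance of the
    fitted tone amplitudes A_n."""
    s = np.sqrt(np.diag(cov))
    return cov / np.outer(s, s)

# Placeholder covariance standing in for an 8-tone amplitude fit:
rng = np.random.default_rng(0)
m = rng.normal(size=(8, 8))
cov = m @ m.T                      # random symmetric positive-definite matrix
r = correlation_matrix(cov)
print(np.round(r[2, 3], 2))        # e.g. the r_23 element discussed in the text
```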
In Sect. II we studied the effect on the mismatch produced by varying the standard GR-QNM spectrum by 2% over 100 realisations for the 8-tone RD model. Similarly, we want to quantify whether this artificial modification of the GR-QNM spectrum affects the values of the overtone amplitudes A_n. To this end, in Fig. 18 we show the dispersion ratio σ_n/Ā_n as a function of the starting time t0 obtained from the 100 RD waveforms, where σ_n and Ā_n are the standard deviation and the average amplitude of each tone, respectively. σ_n/Ā_n provides an estimate of the variation in the amplitude of the tones A_n sourced by modifying the GR-QNM spectrum. Notice that σ_n/Ā_n is lower than 1% for the tones with index n = 0, 1 and t0 > 0, and we do not observe a significant increase of this ratio with time. On the other hand, the amplitudes with n > 1 broaden significantly for the set of modified QNM frequencies, with the broadening larger for increasing n and for start times near t0 ∼ 0. These values reach a maximum spread of about 75%. This indicates that the best-fit values of the amplitudes of the higher overtones are sensitive to small changes in the frequency spectrum.
[Figure 18 caption: Notice that this ratio is below 1% for the n = 0, 1 tones while it reaches about 75% for n > 1 and times near t0 ∼ 0.] | 2019-10-19T06:01:56.000Z | 2019-10-19T00:00:00.000 | {
"year": 2019,
"sha1": "fa70749ee90ee5ec8ee3ea929c3a5b9f9bbeb505",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1910.08708",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1367bdd6b6349bb93f1689543df67599a5b8272c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
250692755 | pes2o/s2orc | v3-fos-license | Structural transformations and their relation to the optoelectronic properties of chromium oxide thin films
The crystallization behavior of chemically vapor deposited chromium oxide films was characterized by X-ray diffractometry, and the vibrational properties of the films were analyzed by Fourier transform infrared (FTIR) spectroscopy as a function of the annealing temperature and the technological conditions. The effect of the oxygen content in the CVD reactor on the optical characteristics (complex refractive index, optical band gap) was also considered.
Introduction
Chromium oxide thin films are widely applied in catalysis, in solar thermal energy collectors and, as black matrix films, in liquid crystal displays [1,2]. Another attractive use of Cr2O3 thin films is as an electrochromic material, although the chromium oxide system, belonging to the family of transition metal oxides, has been studied only rarely in this respect. Many crystalline modifications of Cr oxide exist, such as Cr2O3 (corundum), CrO2 (rutile), Cr5O12 (three-dimensional framework), Cr2O5 and CrO3 (unconnected strings of CrO4 tetrahedra). The only stable bulk oxide, Cr2O3, is a magnetic dielectric with the corundum structure [1]. In the Cr2O3 corundum structure, the oxygen atoms form a hexagonal close-packed array. The metal atoms occupy two thirds of the octahedral interstices between two oxygen layers, and the chromium atoms form graphite-like layers parallel to the oxygen layers. Cr2O3 has a rhombohedral primitive cell in which the Cr atoms are coordinated by oxygen atoms from two adjacent oxygen layers. They form distorted CrO6 octahedra that are linked by faces, edges or corners in such a way that each oxygen atom is linked to four Cr atoms.
In this paper we present results on the structural transformation of chromium oxide films caused by thermal annealing. The films were prepared by low temperature atmospheric pressure chemical vapor deposition (APCVD) and their structure and vibrational properties were studied.
Experimental details
Thin Cr oxide films were deposited by a low-temperature carbonyl CVD process at atmospheric pressure. A deposition temperature of 200 °C was found to be optimal, leading to films with good adhesion to the Si, ordinary glass and conductive glass substrates used. The Cr(CO)6 powder was placed in a sublimator and heated up to 70 °C. The vapors were carried to the reactor by an Ar gas flow; the reactive gas (oxygen) entered the reactor chamber through a separate line. Since the Ar to O2 gas flow ratio is a specific parameter affecting the film structure and composition, it was varied over the values 1/4, 1/20 and 1/32. The film thickness was about 0.65 μm. After deposition, some of the samples were additionally treated at 500 °C in air for one hour.
The crystalline phases in the films were determined with a large-angle (2Θ = 0-90°) Philips X'Pert X-ray diffractometer (XRD) using Cu radiation (λ = 0.154056 nm). The Bragg peaks in the XRD spectra were identified using the ASTM database [3]. Assuming a spherical shape of the crystallites, their average grain size was deduced from the Bragg peak broadening using Scherrer's formula [4]. FTIR spectroscopy was performed in the spectral region of 350-1600 cm−1 with a Shimadzu IRPrestige-21 FTIR spectrophotometer. The samples studied were deposited on Si substrates, and a bare Si wafer was used as the background. The spectroscopic ellipsometry (SE) measurements were carried out after each technological step on a Rudolph 436 ellipsometer in the visible spectral range at an incidence angle of 50°. The optical band gap energy E_og was derived from the absorption coefficient α (α = 4πk/λ) by constructing Tauc plots of (αhν)^1/2 versus photon energy hν. The linear part of the Tauc plot was extrapolated to α = 0; the intersection with the energy axis gave the optical band gap E_og of the films.
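The two quantitative steps described above reduce to a short computation: Scherrer's formula for the grain size and a linear Tauc extrapolation for E_og. A minimal sketch follows; the 2Θ and FWHM values in the usage line are illustrative, chosen only to land near the 58-61 nm grain size reported below.

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.154056, K=0.9):
    """Average crystallite size (nm) from Bragg-peak broadening; K is the
    shape factor (~0.9 for spherical grains) and fwhm is the peak width in
    degrees (instrumental broadening assumed already subtracted)."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return K * wavelength_nm / (beta * np.cos(theta))

def tauc_gap(hnu_eV, k, fit_window):
    """Optical band gap (eV) from a Tauc plot of (alpha*h*nu)^(1/2) vs h*nu:
    fit the linear part and intersect it with the energy axis."""
    wavelength_m = 1239.84 / hnu_eV * 1e-9      # photon wavelength from energy
    alpha = 4.0 * np.pi * k / wavelength_m      # absorption coefficient, 1/m
    y = np.sqrt(alpha * hnu_eV)
    lo, hi = fit_window
    m = (hnu_eV >= lo) & (hnu_eV <= hi)
    slope, intercept = np.polyfit(hnu_eV[m], y[m], 1)
    return -intercept / slope

# Illustrative: a peak near 2-theta = 36.2 deg with 0.14 deg FWHM
print(scherrer_size(36.2, 0.14))                # ~60 nm, cf. 58-61 nm below
```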
Results and discussion
All as-deposited films were amorphous, but some nuclei of a crystalline phase could be detected in the XRD spectra. Although the XRD data analysis was difficult due to the small amount of film material and the presence of strong Bragg peaks originating from the Si substrate, peaks of very low intensity could be detected within the broad band of the amorphous phase, corresponding to the strongest peaks of Cr2O3, (104) and (111). This is demonstrated in figure 1, where the XRD spectrum of a chromium oxide film deposited at an Ar/O2 gas flow ratio of 1/32 is given.
After annealing at 500 °C, the XRD spectra revealed a fully crystallized film of stoichiometric Cr2O3. This is illustrated in figure 2, where the typical XRD spectrum of a film deposited at a gas flow ratio of Ar/O2 = 1/32 is given and the corresponding crystallographic planes are identified. Although the crystallites are randomly oriented, a certain texture along the (110) crystallographic direction can be seen. The lattice parameters agree with the values reported in [5] (a = 4.939 Å; c = 13.627 Å) and those cited in [3], namely a = 4.9587 Å; c = 13.594 Å (JCPDS 38-1479). The crystallites were grown in different crystallographic planes; a larger average crystallite size of 58-61 nm was observed in the preferential (110) crystallographic direction. The IR spectroscopic analysis was performed on the films deposited at gas flow ratios of 1/4 and 1/20. Figures 3 and 4 present the FTIR spectra of the Cr2O3 films before and after annealing. In the spectra of the as-deposited films, a few absorption bands appear, with a tendency for the number of bands to increase with decreasing oxygen content in the deposition ambient. The FTIR bands around 300 and 440 cm−1 are assigned to Cr-O bonds, while the IR peak at 401 cm−1 is a signature of crystalline Cr2O3 [6]. The latter is in good agreement with the XRD observation (figure 1) and suggests that some crystallites are formed during deposition. The IR bands around 600 cm−1 are associated with amorphous Cr2O3 [7]. In figure 3, for the film grown at a gas ratio of 1/20, a strong, broad absorption band at 540 cm−1 and a few superimposed weak bands, namely double peaks around 361 and 372 cm−1 and peaks at 385, 403 and 609 cm−1, are observed. It must be pointed out that no distinctive bands are detected above 610 cm−1. The 545 cm−1 band is assigned to the Cr-O bond [6]. For films deposited at 1/4, besides the broad band centered at 519 cm−1, a multitude of bands appear: weak bands at 355, 374, 401, 670 and 870 cm−1 and well-pronounced bands at 800, 1026 and 1099 cm−1. Apparently, these films are substoichiometric and, therefore, chemical bonds of Cr and O atoms in different configurations exist in the oxide network. The IR band detected at 800 cm−1 in the spectrum of the film deposited at Ar/O2 = 1/4 can be due to Cr-O bonds in CrO2 or CrO3 phases [6]. The well-pronounced peaks at 1026 and 1099 cm−1 are attributed to chromyl (Cr=O) vibrations [7]. After annealing at 500 °C, the two IR spectra become similar in the 300-700 cm−1 region (figure 4). Above that region, the spectrum of the film deposited at Ar/O2 = 1/20 becomes featureless, while in the IR spectrum of the Cr2O3 film (1/4) strong bands appear at 800, 1024 and 1096 cm−1. In both spectra, the well-expressed band at 401 cm−1 confirms the crystalline oxide phase, as detected in the XRD pattern (figure 2). In addition, the band at 615.3 cm−1 is characteristic of crystalline Cr2O3 [6]. The strong 546 cm−1 band, assigned to the Cr-O bond [7], indicates that the films are additionally oxidized to stoichiometric Cr2O3, as proved by the XRD data. The chromyl Cr=O double bonds are still present in the film deposited at a 1/4 gas flow ratio, as manifested by the corresponding bands at 1026 and 1099 cm−1 [7].
The dispersion spectra of the as-deposited and annealed films are summarized in figure 5. The values of the refractive index n are smaller than those reported for Cr2O3 [11], suggesting that the films are rather porous. A clear tendency of the n values to increase with increasing oxygen content during deposition is observed. This can be explained by better oxidation of the Cr atoms rather than by film densification, since the IR analysis clearly revealed a substoichiometric oxide structure at the lower oxygen contents (figure 3). In the range of 450-820 nm the values of the extinction coefficient k are close to each other, as seen in figure 5. The features observed in this region could be connected with the development of defects due to the partial crystallization taking place in the structure (figure 1). Approaching the absorption edge, the k values increase; the higher the oxygen content, the stronger the absorption. After annealing at 500 °C, the values of the refractive index increase over the whole spectral range studied (lying within 2.2-2.4 in the 820-280 nm range), revealing a more stoichiometric structure. The crystallized films are more transparent, which is reflected in the considerably decreased extinction coefficient values (below 0.3) in the 450-820 nm spectral range. Using Tauc's equation, the optical band gap E_og of the films was estimated with an accuracy of ±0.05 eV. The linear relationship resulted in a better fit when a direct electron transition model was applied. Similar results have been reported elsewhere [3].
The E_og values are given in the inset of figure 6. For the as-deposited films, the E_og values are within 3.18-3.26 eV and tend to increase as the oxygen content in the reactor is increased. These values are characteristic of as-deposited chromium oxide films with an amorphous structure and are in good agreement with data reported earlier [11]. After annealing, the optical band gap energy increases to 3.32-3.38 eV, which correlates well with the data reported for CVD Cr2O3 films in the crystalline state [11]. The observed change of the E_og values with the Ar/O2 ratio is within the calculation error.
[Figure 5 caption: Spectra of the n and k values (versus photon energy, in eV) for films deposited at the gas flow ratios indicated. The empty and full symbols are for films in the as-deposited and annealed states, respectively.]
Conclusion
The XRD patterns of the CVD chromium oxide films clearly show that some crystallites already develop in the amorphous oxide matrix during deposition. Annealing at 500 °C yields a fully crystallized and stoichiometric Cr2O3 film with crystallites growing larger in the preferential direction. The FTIR analysis supports the XRD results and shows that the oxygen content clearly influences the vibrational properties of the CVD-prepared chromium oxide films. | 2022-06-28T00:04:38.243Z | 2008-01-01T00:00:00.000 | {
"year": 2008,
"sha1": "207ed9c24357aba8f5812d792e003adb69fcdb1a",
"oa_license": null,
"oa_url": "http://iopscience.iop.org/article/10.1088/1742-6596/113/1/012030/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "207ed9c24357aba8f5812d792e003adb69fcdb1a",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
261531316 | pes2o/s2orc | v3-fos-license | Experimental method for perching flapping-wing aerial robots
In this work, we present an experimental setup and guide to enable the perching of large flapping-wing robots. The combination of forward flight, limited payload, and flight oscillations imposes challenging conditions for localized perching. The described method details the different operations that are concurrently performed within the 4-second perching flight. We validate this experiment with a 700 g ornithopter and demonstrate the first autonomous perching flight of a flapping-wing robot on a branch. This work paves the way towards the application of flapping-wing robots for long-range missions, bird observation, manipulation, and outdoor flight.
I. INTRODUCTION
Flapping-wing flight offers significant benefits over rotor-based propulsion, ranging from noise reduction and increased safety to potentially higher maneuverability and efficiency [1]. While these traits could enable safe flight around structures, significant challenges remain in landing large flapping-wing robots on perches and branches. Indeed, the combination of the forward-flight requirement and the vertical center-of-mass oscillations arising in large flapping-wing robots vastly increases the difficulty of perching on objects. Achieving this task, as we often see birds do seamlessly every day, is complex in terms of the control required in stall conditions and in terms of the grasping appendage capability. In a robot, this requires a vehicle capable of carrying a claw system that can hold the robot upright for manipulation, a vision system to identify the branch, and a control algorithm that can reliably bring the robot there with minimal remaining velocity. We propose to solve the perching maneuver in a controlled context as a significant step towards vision-based outdoor perching. Here, we present a flight experiment demonstrating autonomous perching of flapping-wing robots in controlled conditions. Complete information about the design, modeling and resulting flight performance of the robotic bird can be found in [2].
II. EXPERIMENT
Demonstration and validation of this experimental method is performed with a large 700 g ornithopter with a 150 cm wingspan, whose design stems from a previous robot class described in [3]. Successful perching flights were achieved through a series of flight phases, visually represented in Fig. 1 and described in this work.
A. Reaching flight velocity
Large ornithopters are not capable of hovering in windless conditions due to unfavorable scaling. Such robots can only sustain flight with forward velocity and therefore require a procedure to reach this velocity initially. While this velocity can certainly be achieved via a hand throw, this results in a wide range of initial conditions. We instead propose an automated launcher system, which offers sufficient precision for repeatable flights in space-constrained, indoor conditions, see Fig. 1.A. The system features a 2 m long double rail which can accelerate a robot to 10 m/s. In our validation scenario, a speed of 4 m/s is sufficient to initiate flight without losing altitude and without excessive speed. The robotic birds are held at a perpendicular offset of 50 cm, sufficient to avoid any collisions between the wing, tail and structure. The robot is mounted on the launcher at the trailing edge, to remain as close as possible to the center of mass. A 20 degree angle-of-attack at launch is selected. Results show that launches are consistent, both in direction and speed, allowing us to repeatedly perform controlled flights to a point location at a 14 m distance from the launcher.
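As a back-of-envelope check, assuming constant acceleration along the full 2 m rail, the required acceleration and the time spent on the rail follow from v² = 2as; a minimal sketch:

```python
def rail_requirements(v_launch, rail_length=2.0):
    """Constant-acceleration estimate: v^2 = 2*a*s along the rail."""
    a = v_launch ** 2 / (2.0 * rail_length)  # required acceleration, m/s^2
    t = v_launch / a                         # time spent on the rail, s
    return a, t

for v in (4.0, 10.0):                        # speed used here / rail maximum
    a, t = rail_requirements(v)
    print(f"v = {v:4.1f} m/s -> a = {a:5.1f} m/s^2, t = {t:.2f} s")
```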
B. Controlled flapping flight
At the end of the rail, the robot's position and attitude are captured by the motion capture system and streamed wirelessly to the robot's flight companion computer. The ornithopter enters a controlled flapping flight phase, see Fig. 1.B. This is handled by the onboard triple-loop pitch-yaw-altitude controller. This flight control system needs to maintain stable flight with the appendage payload, the misalignment compensation system and all the onboard electronics required for flight and perching. The forward flight velocity is a key parameter for perching. It should be kept low to minimize impact forces and increase the reaction time before impact. To achieve this, the pitch angle of the robot needs to be high. Experiments show that pitch angles above 40° result in insufficient speed and a loss of altitude.
C. Branch approach
The claw dimension is smaller than the flight position accuracy. As this is insufficient to reliably touch the target, we propose a correction method. The leg of the robotic bird is articulated at the elbow level, permitting semi-vertical motion of the claws. When the robot gets to within 150 cm of the branch, an optical detection system is enabled, see Fig. 1.C. The line detection system located on the claw feeds the relative position of the branch to the leg microcontroller.
The leg servo then corrects the angle of the leg to align it with the branch, at 50 Hz; a minimal sketch of such a loop is given below. During this approach phase, the flapping motion is maintained and the flight controller remains active. The perching target, an 80 cm long black-painted branch, is held at a 2 m height at its center by a 45° aluminum profile which reaches through the safety net.
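A minimal sketch of such a correction loop. The sensor and servo interfaces, the proportional gain, and the angle limits are hypothetical placeholders; the text specifies only that the servo realigns the leg with the detected branch at 50 Hz.

```python
import time

# Hypothetical hardware interfaces: read_branch_offset() returns the vertical
# offset (cm) of the branch in the line detector's view, or None if not seen;
# set_leg_angle() drives the elbow servo. Gain and limits are assumptions.
KP = 0.8                          # deg of leg angle per cm of branch offset
LEG_MIN, LEG_MAX = -30.0, 30.0    # assumed mechanical limits of the leg, deg

def alignment_loop(read_branch_offset, set_leg_angle, rate_hz=50.0):
    """Proportional leg correction: steer the claw toward the detected branch."""
    period = 1.0 / rate_hz
    while True:
        offset_cm = read_branch_offset()
        if offset_cm is not None:             # hold the last angle if branch not seen
            angle = max(LEG_MIN, min(LEG_MAX, KP * offset_cm))
            set_leg_angle(angle)
        time.sleep(period)
```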
D. Perching
At 20 cm from the branch, the flapping is stopped. At this point, the forward velocity of the robot is between 2.5 and 3 m/s. Grasping is made possible thanks to a pair of bistable claws which pivot from the open to the closed state within 25 ms and with a 2 Nm torque. A combination of soft silicone pads and spikes provides friction forces sufficient to compensate the torque resulting from center-of-mass offsets. Experiments have demonstrated that we can reach the branch location repeatedly, within the tolerance range of the leg-claw system. The low forward flight velocity shortly before landing ensures that forces remain below 150 N, significantly reducing the likelihood of damage. We report that branch perching was achieved in 6 out of 9 perching flight tests, and that no damage occurred in any of the successful flights. The results were validated with a second robot which perched without re-tuning. | 2023-09-06T06:42:41.769Z | 2023-09-04T00:00:00.000 | {
"year": 2023,
"sha1": "786be249dccd145d1801c390b5b8fb39bbfc584e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "786be249dccd145d1801c390b5b8fb39bbfc584e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
215780485 | pes2o/s2orc | v3-fos-license | How healthcare professionals respond to parents with religious objections to vaccination: a qualitative study
Background In recent years healthcare professionals have faced increasing concerns about the value of childhood vaccination, and many find it difficult to deal with parents who object to vaccination. In general, healthcare professionals are advised to listen respectfully to the objections of parents, provide honest information, and attempt to correct any misperceptions regarding vaccination. Religious objections are one of the possible reasons for refusing vaccination. Although religious objections have a long history, little is known about the way healthcare professionals deal with these specific objections. The aim of this study is to gain insight into how healthcare professionals respond to parents with religious objections to the vaccination of their children. Methods A qualitative interview study was conducted with healthcare professionals (HCPs) in the Netherlands who had ample experience with religious objections to vaccination. Purposeful sampling was applied in order to include HCPs with different professional and religious backgrounds. Data saturation was reached after 22 interviews, with 7 child health clinic doctors, 5 child health clinic nurses and 10 general practitioners. The interviews were thematically analyzed. Two analysts coded, reviewed, discussed, and refined the coding of the transcripts until consensus was reached. Emerging concepts were assessed using the constant comparative method from grounded theory. Results Three manners of responding to religious objections to vaccination were identified: providing medical information, discussing the decision-making process, and adopting an authoritarian stance. All of the HCPs provided the parents with medical information. In addition, some HCPs discussed the decision-making process: they verified how the decision was made and whether the possible consequences were realized. Sometimes they also discussed religious considerations. Whether the decision-making process was discussed depended on the willingness of the parents to engage in such a discussion and on the religious background, attitudes, and communication skills of the HCPs. Only in cases of tetanus post-exposure prophylaxis did general practitioners report adopting an authoritarian stance. Conclusion Given that the provision of medical information is generally not decisive for parents with religious objections to vaccination, we recommend that HCPs discuss the vaccination decision-making process with these parents, rather than provide them with extra medical information.
Background
Vaccination programs have successfully controlled many infectious diseases. In recent years, however, healthcare professionals (HCPs) have faced increasing concerns about the value of childhood vaccination [1]. Parental decision making with regard to vaccination is complex. Medical, psychological, social, and cultural aspects can play a role [2][3][4]. Moreover, the medical information provided and trust in the HCP can play a role as well [5][6][7].
Although not all HCPs recommend childhood vaccinations according to the national immunization schedule [8,9], most are convinced of the value of vaccination and many find it difficult to deal with parents who object to vaccination.
Religious objections are one of the possible reasons for refusing vaccination. In the Netherlands, an orthodox Protestant minority of about 250,000 members has religious objections to vaccination. Forty percent of them have been found not to be vaccinated at all [10]. Epidemics of polio, measles, rubella, and mumps have broken out among this group and spread to their relatives in Canada [11][12][13][14]. Orthodox Protestant objections to vaccination focus on the necessity of trust in divine providence. On biblical grounds, arguments for vaccination are put forward as well: vaccination may be considered a gift of God to be used in gratitude [15]. Orthodox Protestant churches leave it up to parents to decide whether to have their children vaccinated or not.
During the polio epidemic of 1978, Veenman and Jansma identified religious objections, family tradition, and fear of possible side-effects as the major reasons among orthodox Protestants for not being vaccinated [16]. More recently, we performed a study on vaccination decision-making among orthodox Protestant parents and found that vaccinating as well as non-vaccinating parents predominantly used religious arguments to justify their decision. If side-effects of vaccination were mentioned, they often had a religious connotation. Non-vaccinating parents, who primarily refused vaccination because of interference with divine providence, also mentioned that man is not allowed to cause disease in a healthy body given by God. On the other hand, orthodox Protestant parents who broke with tradition and participated in the National Immunization Program (NIP) interpreted side-effects as a sign from God that they had made the wrong choice [17].
In the Netherlands, all children are offered vaccination free of charge via local child health clinics (CHCs) as part of a NIP (see Table 1). CHC staff consists of trained CHC doctors and trained CHC nurses, who also monitor the children's growth and development [18]. During the standard home visit for every newborn baby, the CHC nurse provides the parents with vaccination information and registers whether the child will participate in the NIP or not. If the parents are unsure, the topic is addressed by the CHC doctor during the first consultation at the CHC. Participation in the NIP is on a voluntary basis; vaccination coverage is nevertheless high: more than 95% in 2-year-olds [19].
General practitioners (GPs) or family physicians are not involved in the NIP. Other medical care, however, primarily involves GPs. People are listed with a GP, who provides general medical care and coordinates access to specialists and hospital care [20]. The GPs conduct an Influenza Immunization Program, focused on adults and children with a medical indication, such as chronic heart or lung disease. Like the NIP, this Influenza Immunization Program is offered free of charge.
Religious objections to vaccination have a long history; nevertheless, little is known about the way HCPs deal with these specific objections. The American Academy of Pediatrics issued a guideline, "Responding to parental refusals of immunization of children", which advises to listen respectfully to all objections, provide honest information, and attempt to correct any misperceptions [21]. A Dutch brochure on objections to vaccination gives largely the same advice [22]. Few papers have been published on the actual responses of healthcare professionals to parents with objections to vaccination [23,24]. The objections in these studies concerned vaccine safety, and HCPs responded to them by trying to convince the parents of the medical benefits of vaccination. The response of healthcare professionals to parents with religious objections to vaccination has, to our knowledge, never been studied.
A qualitative study was therefore undertaken to gain insight into the responses of HCPs to parents with these objections.
Study design
Because of the explorative character of our study we chose a qualitative research design using a grounded theory approach [25]. Semi-structured interviews were undertaken with HCPs who had ample experience with orthodox Protestant parents. According to the grounded theory approach, inclusion of participants was continued until data saturation was reached, coding of the data was entirely based on their content, and emerging concepts were assessed for consistency by reviewing previously coded interviews.
Study population
The study population was composed via purposeful sampling. Participants were recruited in villages with large orthodox Protestant populations. The selection of these villages was based on the results of a previous study [26]. HCPs with different professional backgrounds (CHC doctors, CHC nurses and GPs) and different religious backgrounds were approached by the researchers and invited to participate. HCPs who had little or no experience with orthodox Protestants were excluded. Inclusion of participants was continued until data saturation was reached; this was after 22 interviews. We had initially approached 6 more HCPs: 5 of them were excluded because they were not seeing orthodox Protestant patients, and 1 HCP could not be interviewed due to logistical problems. Thus, in the end, 22 HCPs who all had ample experience with religious objections to vaccination were interviewed: 7 CHC doctors, 5 CHC nurses and 10 GPs. Six of them were members of orthodox Protestant churches. Participant characteristics are summarized in Table 2.
Data collection
Two interviewers (GvIJ and WLMR) visited the HCPs between January 2009 and June 2010 at their practices to interview them with regard to a number of vaccination topics (see Table 3). The topic list was constructed on the basis of an exploratory meeting with key persons from the orthodox Protestant community, the NIP and CHCs who were represented in the advisory committee of the project. The interviews lasted an average of 30 minutes (range 20-45 minutes). Because vaccination is a particularly sensitive subject among orthodox Protestant parents, observation of consultations was not feasible.
Analysis
The interviews were recorded and transcribed verbatim. The transcripts were thematically analyzed using the qualitative software program Atlas.ti 6.0. Two analysts (WvA and WLMR) independently coded the transcripts and subsequently reviewed, discussed, and refined the coding schemes until consensus was reached. All transcripts were coded and discussed by both analysts. Emerging concepts were assessed using the constant comparative method from grounded theory: previously analyzed interviews were reviewed in order to check if their content fitted into the concept [25].
Results
HCPs reported three different manners of responding to religious objections to vaccination: provision of medical information, discussion of the vaccination decision-making process, and adoption of an authoritarian stance. These manners of responding are described in greater detail below. The manner of responding that was applied depended on characteristics of the child, the child's parents, and the HCP him/herself. For each manner of responding, the determinants are described.
Provision of medical information
All HCPs reported responding to religious objections to vaccination predominantly with medical information. They stated that the provision of medical information was their most important contribution to vaccination decision-making; nevertheless, many of them found providing such information not very rewarding.
Respondent 2, CHC doctor
They think measles is not that serious, it's just a childhood disease. But measles can be really serious and I try to explain that, that it may have serious complications.
CHC doctors and nurses reported providing information on the severity of the vaccine-preventable diseases, the benefits of vaccination, and its possible side-effects. They said they attempted to correct any misperceptions and offered later vaccination to parents who were in doubt. The extent of the information that was provided was determined by a characteristic of the child: being firstborn or not. When the child was not a firstborn, most CHC doctors and nurses simply asked if the parents still objected to vaccination. Orthodox Protestant families are large, and CHC doctors and nurses reported not wanting to "bother" the parents too much for fear that they would stop coming to the CHCs for monitoring.
The same was found for the GPs offering influenza vaccination. All of the GPs provided patients with medical information about the vaccination. Moreover, the specific risks of influenza for the patient with a particular medical condition were explained. After repeated vaccination refusal, the GPs generally reported stopping discussion with the patients.
Discussion of the decision-making process
In addition to the provision of medical information, the HCPs sometimes discussed the vaccination decision-making process itself. That is, the HCPs verified just how the decision not to vaccinate was made and whether or not the possible consequences of non-vaccination were realized. Some HCPs said they also briefly discussed religious considerations, while others suggested that the parents read a booklet on the religious arguments for and against vaccination published by some orthodox Protestant ministers [15]. Still others systematically discussed all religious arguments for and against vaccination.
Respondent 8, CHC nurse
If they are in doubt I'll discuss that. But I cannot advise them in religious matters. I ask them what they want to know and send them some information.
Whether HCPs discussed the vaccination decision-making process or not depended first and foremost upon the willingness of the parents to engage in such a discussion.
In addition, discussion of the decision-making process depended on HCP-related factors: their religious backgrounds, their attitudes towards religious objections to vaccination and their communication skills (see Table 4).
Especially an orthodox Protestant background, or at least proper knowledge of the orthodox Protestant religion, was reported to be important. The orthodox Protestant GPs reported being consulted, a few times a year, by orthodox Protestant parents for advice on whether to have their children vaccinated or not. In these cases, most of the GPs reported systematically discussing both the medical and religious arguments for and against vaccination with the parents.
Respondent 14, GP
I try to find out how they feel about vaccination and why they came to me to talk about it. Apparently they're not sure what to do. They like to hear the arguments for and against, and I know the medical arguments. But I also know the religious arguments and these arguments are discussed as well. In fact, it's more pastoral than medical.
Respondent 16, GP
They sometimes ask: "What should I do?" That's difficult, I don't answer such a question. They have to decide themselves. I give them some material, on which they can base their choice. I show them the pros and cons, medically but also religiously. In the Bible there are arguments for and against vaccination, but it's up to them to weigh these arguments.
Respondent 22, GP
I try to tell the whole story, objectively. About the smallpox vaccination in the past and modern vaccines today. I also mention that it's important that they can account for their choice. If you don't feel right about it, you should ask yourself if you should continue in that direction or reconsider your choice.
One of the orthodox Protestant GPs even reported using religious considerations to support the final decision making in a couple in which the partners disagreed.
Respondent 14, GP
If one does and the other doesn't, then I point to their vows; they are married; they promised in the church that the wife would follow the husband.... If they cannot figure things out themselves, then the husband as head of the family should make the decision and the wife follow him on this. This sometimes works.
Although all orthodox Protestant GPs realized that they could set an example, none of them revealed the vaccination status of their own children to their patients. They reported that they always left the final decision on taking part in the NIP up to the parents.
Apart from the religious background of the HCPs, their attitude to religious objections to vaccination seemed to be important. Some of the HCPs lacked affinity with the dilemmas of the orthodox Protestant parents, while others tried to understand their position.
Finally, some HCPs reported that they were willing to discuss the vaccination decision-making process but felt they lacked the skills to do so.
Authoritarian stance
The third manner of responding to religious objections to vaccination, described by the HCPs, was to adopt an authoritarian stance and tell the parents what they must do in their child's best interest. In cases of tetanus post-exposure prophylaxis, almost all of the GPs reported using their medical authority to make parents comply with the immunization regimen prescribed for unvaccinated individuals.
Respondent 17, GP
Tetanus is something that you would not wish upon your worst enemy. If your kid should come down with this, you would never forgive yourself. So I say: "The wound will be cleaned and now a shot because you've never been vaccinated and you've got dirt in your system" and that is usually swallowed more or less without a problem.
Respondent 15, GP
I try anything I can think up to persuade them. Only once I didn't succeed.
Respondent 16, GP
They have to take it. Well... of course they are not obliged to. But I explain to them that the risk is really high in such a situation. And if I advise them to take a shot, they do so.
Respondent 19, GP
I sometimes say: "If you get any problems, just tell them that the doctor said that you had to take it."
The GPs adopted this authoritarian stance only in cases of tetanus post-exposure prophylaxis. Given that almost all of the GPs adopted an authoritarian stance in these cases, while none of them did so in any other cases, this approach seemed to be completely dependent on the child running a high risk of serious disease.
Discussion
We identified three manners of responding to parents with religious objections to vaccination: the provision of medical information, the discussion of the vaccination decision-making process, and adoption of an authoritarian stance. The manner of responding was shown to depend on characteristics of the child, the willingness of the parents to engage in a discussion of the vaccination decision, and some personal characteristics of the HCPs themselves.
The three manners of responding to religious objections to vaccination resemble recent models of medical decision making (in the context of the doctor-patient relationship) in which the informative, the shared decision-making, and the paternalistic approaches are distinguished [27][28][29]. There is, however, a major difference: while providing medical information on vaccination fits the informative approach, and the adoption of an authoritarian stance on tetanus post-exposure prophylaxis fits the paternalistic approach, discussing the vaccination decision-making process cannot be considered shared decision-making. Shared decision making means that the patient's preferences are taken into account in a final decision that is endorsed by both the doctor and the patient. A prerequisite for shared decision making is that the available options be medically equivalent [30]. This is questionable in cases of vaccination and simply untenable in cases of tetanus post-exposure prophylaxis, where refusal carries a high risk of an adverse outcome. Thus, the aim of discussing the vaccination decision-making process with parents who refuse vaccination is to help them make a well-considered decision; that is not necessarily a decision endorsed by the HCP.
All HCPs primarily responded by providing medical information and correcting any misconceptions regarding vaccination. They considered the provision of medical information a key competence of HCPs and their most important contribution to the acceptance of vaccination. Orthodox Protestant youngsters, however, are more interested in religious aspects of vaccination than in medical aspects [31], and orthodox Protestant parents predominantly use religious arguments to justify their decision on vaccination [17]. Therefore, the influence of medical information on parents' final decisions is expected to be limited, as was noticed by some HCPs in the present study.
The discussion of the decision-making process required, in addition to parental willingness to engage in such a discussion, some religious knowledge, a positive attitude towards parents with religious objections to vaccination, and adequate communication skills.
The religious background of HCPs influences their attention to religious considerations in general clinical practice. In a recent study in the USA, older pediatricians with Christian backgrounds paid more attention to religious considerations than younger, non-religious pediatricians [32]. Similarly, GPs with a Protestant background in the Netherlands have been found to pay more attention to religious considerations in their practice than GPs with a Catholic background [33]. While, to our knowledge, the present study is the first to focus on the influence of religious background on vaccination discussions, our finding that in particular the HCPs with an (orthodox) Protestant background discussed the decision-making process and the religious considerations involved is in line with these studies. Extra education on the religious aspects of vaccination and training in communication skills could possibly make it easier for the other HCPs to discuss the decision-making process with orthodox Protestant parents; the effects of such discussions should, however, be evaluated.
Limitations
The data for this study were collected via interviews with HCPs and are, by definition, subjective. However, the findings are in line with the results of previous studies among orthodox Protestants [17,31]. Because of the sensitive character of the subject of vaccination among orthodox Protestants, observation of the HCPs during real-life consultations was not feasible. In other research, the responses of HCPs to simulated patients presenting with standardized problem scenarios were observed [20,21]. Although this method seems more objective, the trust between patient and HCP could not be taken into account in these studies. We found that orthodox Protestant parents sometimes preferred to consult their orthodox Protestant GPs with doubts about vaccination, instead of the CHC doctors and nurses who provide the vaccinations. This stresses the importance of trust: perfectly provided medical information is not enough to make orthodox Protestant parents change their minds.
Another possible limitation of the present study is that the gender distribution of the participants was not balanced: all of the CHC doctors and nurses were female and all of the GPs were male. However, 94% of CHC doctors and the vast majority of CHC nurses in the Netherlands are female, while only a third of the GPs in the Netherlands are female [34,35]. Our study population is therefore fairly representative. We tried to include female GPs in our study but, when approached, the female GPs referred us to male colleagues, as the latter saw the orthodox Protestant patients. Given that perceived similarity of values between doctor and patient is an important factor in patient satisfaction [36], it is not surprising that orthodox Protestant patients, who generally have conservative views with regard to gender roles and expect women to stay at home and care for the children [37], tend to choose a male GP, if possible one with their own religious background. The adoption of an authoritarian stance in cases of tetanus post-exposure prophylaxis, which was only seen among the GPs and thus the male participants in this study, could therefore be a gender effect; this seems unlikely, however, in light of the extenuating circumstances created by tetanus exposure.
Conclusions
In this study, we identified three manners in which HCPs respond to parents with religious objections to vaccination: provision of medical information, discussion of the vaccination decision-making process, and adoption of an authoritarian stance. The choice of approach depends on the medical condition of the child, the willingness of the parents to engage in discussion, and the personal characteristics of the HCPs themselves. Given that for parents with religious objections to vaccination
"year": 2012,
"sha1": "7c4479e408179bd7fef88e16c948a3a76b4fa5b1",
"oa_license": "CCBY",
"oa_url": "https://bmchealthservres.biomedcentral.com/track/pdf/10.1186/1472-6963-12-231",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7f2e3e292dce3085b7c479d948b1320c79dd16a7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A case study and implication: particle finite element modelling of the 2010 Saint-Jude sensitive clay landslide
Modelling of landslides in sensitive clays has long been recognised as a challenge. The strength reduction of sensitive clays undergoing plastic deformation makes the failure proceed in a progressive manner, such that a small slope failure may lead to a series of retrogressive failures and thus to an unexpected catastrophic landslide. The clay in the entire process may mimic both solid-like (when it is intact) and fluid-like (when fully remoulded, especially for quick clays) behaviours. Therefore, a successful numerical prediction of landslides in sensitive clays requires not only a robust numerical approach capable of handling extreme material deformation but also a sophisticated constitutive model to describe the complex clay behaviour. In this paper, the particle finite element method (PFEM) associated with an elastoviscoplastic model with strain softening is adopted for the reconstruction of the 2010 Saint-Jude landslide, Quebec, Canada, and detailed comparisons between the simulation results and available data are carried out. It is shown that the present computational framework is capable of quantitatively reproducing the multiple rotational retrogressive failure process, the final run-out distance and the retrogression distance of the Saint-Jude landslide. Furthermore, the failure mechanism and the kinematics of the Saint-Jude landslide and the influence of the clay viscosity are investigated numerically, and their implications for real landslides in sensitive clays are discussed.
Introduction
A retrogressive landslide is a type of geohazard that usually occurs in sensitive clays. A distinctive feature of this kind of landslide lies in the observed fact that an initial small local slope failure may lead to a catastrophic landslide proceeding rapidly in a retrogressive manner and eventually eroding seemingly stable areas that are kilometres away from the location of the initial failure. Representative retrogressive landslides include multiple retrogressive landslides (also called earthflows), translational progressive landslides and spreads (Locat et al. 2011a; Cruden and Varnes 1996).
A multiple retrogressive landslide, as its name implies, involves a series of failures occurring sequentially (Locat et al. 2011a; Mitchell and Markell 1974; Quinn et al. 2011). Specifically, the landslide develops in such a manner that the migration of mass from the initial slide results in a new backscarp, which may itself be unstable. Failures of the newly generated backscarps occur until a final stable scarp is formed. A translational progressive landslide develops in a very different way: a shear surface parallel to the ground surface develops first, and then the soil mass above the formed failure surface moves downhill (Bernander et al. 2016; Wang 2019). In a spread, a shear surface along the ground surface forms as in a translational progressive landslide. However, instead of moving downhill, the soil mass above the failure surface undergoes extension and dislocation, producing blocks of intact clay called horsts and grabens (Locat et al. 2013; Geertsema et al. 2018; Locat et al. 2014).
Although computer simulation has been regarded as a powerful tool for slope stability analysis, the numerical modelling of a retrogressive landslide is still a challenging task. The conventional numerical approaches for slope stability analyses, such as the limit equilibrium method and the finite element strength reduction method, fail for two reasons. First, these approaches concern only the initiation of the landslide, i.e. the failure of the original slope, whereas for a retrogressive landslide, specifically for an earthflow, a series of collapses are triggered after the initiation of the landslide, resulting in a large retrogression distance that neither the limit equilibrium method nor the finite element strength reduction method can predict. In addition, the strain softening feature of sensitive clay, which plays a key role in the forming of shear bands and consequently in the failure mode, cannot be captured by the mentioned conventional methods because of their assumption of perfectly plastic soil behaviour. Indeed, a reliable numerical prediction of a retrogressive landslide and its consequences requires not only a sophisticated constitutive model for representing the complex behaviour of sensitive clay but also a robust numerical approach capable of simulating the entire process, in which large soil deformations occur and new free surfaces (e.g. new backscarps) emerge.
So far, numerous efforts have been devoted to reproducing the complete process of a retrogressive landslide. Recent contributions in this regard were made in (Dey et al. 2016; Wang et al. 2016; Dey et al. 2015), where the coupled Eulerian-Lagrangian (CEL) method available in the commercial software Abaqus and the material point method (MPM) were adopted for tackling the induced large deformations and the emergence of new free surfaces in the post-failure stage. In these cases, a rate-independent model in the form of the von Mises yield criterion with strain softening was applied to capture the progressive failure behaviour of sensitive clay. Instead, the particle finite element method (PFEM), which is a combination of a particle approach and the finite element method, was employed to analyse retrogressive landslides in (Zhang et al. 2017) by adopting a rate-dependent elastoviscoplastic model constructed as a mixture of the Tresca model (with strain softening) and the Bingham model. More specifically, the Tresca yield criterion with strain softening was used to capture the solid-like behaviour of sensitive clays. On the other hand, the Bingham plastic rheology was used since it is able to mimic the fluid-like behaviour (e.g. non-Newtonian flow) which is a typical feature of high-sensitivity clays, such as quick clays, when they are fully remoulded. Some key characteristics of retrogressive landslides have been reproduced successfully in those works (Dey et al. 2016; Wang et al. 2016; Dey et al. 2015; Zhang et al. 2017), including the development of progressive failure, multiple collapses, and the formation of horsts and grabens. Despite that, these works focused on qualitative studies of the mechanism of retrogressive landslides based on virtual landslide models. To the authors' best knowledge, there have so far been very few quantitative studies of retrogressive landslides focusing on the comparison of numerical results with physical model tests or with real landslide cases in sensitive clay. In this paper, a case study on a retrogressive landslide that occurred in 2010 in Saint-Jude, Quebec, Canada, is carried out numerically based on the computational framework of the PFEM associated with an elastoviscoplastic constitutive model with strain softening.
The idea of the PFEM is that the Lagrangian finite element method is adopted for incremental analyses at each time step, whereas the element nodes, after each incremental step, are considered free particles that may move freely and even separate from the domain they originally belonged to. Computational domains and the associated meshes are reconstructed from the particles when the material undergoes large deformations. In other words, when modelling landslide problems, the PFEM is able not only to capture the free surface evolution (including the sliding mass movement and the emergence of new backscarps after slope failure) directly, owing to the Lagrangian description, but also to circumvent mesh distortion issues, particularly in zones of strong shear deformation. The objective of this work is twofold: (a) to assess the performance of the computational framework in reproducing a real retrogressive landslide through detailed comparison between simulation results and data from site investigations, and (b) to further analyse the mechanism and evolution of the Saint-Jude landslide through numerical simulation, with the eventual goal of improving the understanding of real retrogressive landslides.
The Saint-Jude landslide
The Saint-Jude landslide took place in a deposit of sensitive Champlain Sea clay (Lafleur and Lefebvre 1980) along the Salvail River on 10 May 2010 (Fig. 1). The landslide, which caused the death of four people, had a retrogression distance of 80 m (defined as the distance from the initial crest of the slope to the final stable backscarp) and affected a section about 275 m long parallel to the Salvail River, involving a total of 520,000 m³ of soil mass (Locat et al. 2017). Both on-site and laboratory testing were carried out to obtain information on the stratigraphy and the geotechnical properties of the clays. The topographies of the slope before and after failure are taken from the digital elevation model built from aerial photographs. The chosen cross-section C-C′ before and after failure is illustrated in Fig. 2 for comparison.
The exact cause of the Saint-Jude landslide is unknown. Slope stability analyses were performed in (Locat et al. 2017; Locat et al. 2011b) along the transect C-C′. In (Locat et al. 2011b), it was shown that the initial failure based on drained analysis is very local, whereas a large amount of clay would be disturbed if the slope failed under undrained conditions. These different failure modes lead to a substantial discrepancy in the factor of safety (FoS) determined under undrained and drained conditions. According to (Locat et al. 2017; Locat et al. 2011b), the analysis under undrained conditions using the limit equilibrium method gives a FoS of 2.16, whereas the FoS from drained analysis is close to one (more precisely 0.99 with the Bishop method and 1.03 with the Morgenstern-Price method). For detailed slope stability analyses under drained and undrained conditions, including the material parameters used, we refer the reader to (Locat et al. 2011b). It is believed that the slope failure could have been initiated by small-magnitude triggering events such as river erosion and the high artesian pore pressure under the river. For instance, one scenario could be that river erosion removed material at the slope toe, leading to an immediate increase of the total stress in the potential shear band zone in the slope, while the high artesian pore pressure may have increased the pore water pressure in the slope. Both of these mechanisms may result in a decrease of the FoS. The critical slip surface for the initial slope determined from the drained analysis is illustrated in Fig. 2(a) (see the dashed blue curve).
The failure surface of the landslide was determined using cone penetration tests with pore water pressure measurement (CPTU tests) at a series of locations (see, for example, the black dashed lines in Fig. 2(b), (c)). More specifically, from the CPTU profiles one can derive the depth of the failure surface at each location (see e.g. the black points in Fig. 2(a)) and then build the entire failure surface by a suitable interpolation. The reader is referred to (Locat et al. 2017; Locat et al. 2011b) for more details on the obtained CPTU profiles and the procedure used to determine failure surfaces. According to the report (Locat et al. 2017), the debris of the retrogressive landslide can be divided into four zones (Fig. 2). The first zone occupies about 22% of the total landslide area; here the soil mass comes from the initial bed and banks of the Salvail River that were pushed onto the opposite side of the river. Settled on top of the initial position of the Salvail River, zone 2 covers about 20% of the landslide area and is less fissured than zone 1. The ground surface of this zone is roughly horizontal, with trees slightly inclined towards the backscarp. Blocks such as grabens and horsts were observed in both zone 3 (24% of the landslide area) and zone 4 (34% of the landslide area). While the stratigraphy in zone 3 indicates that blocks did not rotate during the landslide motion, horsts in zone 4 were inclined between 0° and 17° to the horizontal. The sides of the clay ridges formed by horsts in zone 4 are inclined at around 45° to 80°.
As indicated in (Locat et al. 2017), the limit equilibrium method cannot explain the entire event. A better explanation of this geohazard and the associated failure mechanism requires more advanced calculation approaches. In this paper, the PFEM and the elastoviscoplastic constitutive model with strain softening introduced in (Zhang et al. 2017) are adopted for the back analysis of the Saint-Jude landslide.
Problem description
The aerial view and on-site investigation showed that the final profiles of three sections A-A′, B-B′ and C-C′, all along the landslide direction, after the landslide are very similar to each other. Therefore, in our study, the landslide is considered a plane strain problem, which is in line with the stability analysis performed in (Locat et al. 2017;Locat et al. 2011b). The numerical model in this study is constructed according to the cross-section C-C′ (Figs. 1(b) and 2(b)) whose critical slip surface (i.e. the blue dashed curve in Fig. 2(a)) determined using the limit equilibrium method under drained conditions was given in (Locat et al. 2017).
Landslides 17 & (2020) Governing equations As the landslide process lasted only a few minutes and the permeability of clay is very low (Locat et al. 2017), the simulation is conducted by assuming undrained conditions, which is not in contrast with the fact that drained conditions were invoked to explain the slope instability, as discussed later on. The following governing equations have to be fulfilled on the volume Ω and the boundaries Γ: (a) The momentum conservation equation (c) The boundary conditions is the body force of clay ρ is the density of clay u is the displacement field ε e and ε vp are the elastic strain and the viscoplastic strain, respectively, providing the total strain ε = ε e + ε vp ℂ is the elastic compliance matrix η is the viscosity coefficient λ is the non-negative plastic multiplier κ is the equivalent deviatoric plastic strain u and t are the prescribed displacements and external tractions on boundaries N consists of components of the outward normal to the boundary Γ t ∇ T is the transposed gradient operator.
The superposed dot represents differentiation with respect to time.
Following (Zhang et al. 2017), the Tresca yield criterion is used, in which the strain softening behaviour of sensitive clay is taken into account by reducing the undrained shear strength with the equivalent deviatoric plastic strain according to a two-segment linear function, as shown in Fig. 3. The equivalent deviatoric plastic strain is defined as κ = √(0.5 e^p_ij e^p_ij) according to (Troncone 2005), with e^p_ij being the deviatoric plastic strain tensor. The present elastoviscoplastic model reduces to the classical rate-independent elastoplastic model in the case of η = 0. The interested reader is referred to (Zhang et al. 2017) for more details on the abovementioned governing equations.
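To make the two-segment softening law concrete, the sketch below implements an undrained shear strength that decays linearly from a peak to a residual value as κ grows and stays constant beyond a reference softening strain. The parameter values are those quoted in the clay-properties section below; the exact piecewise-linear shape is an assumption consistent with Fig. 3, not the authors' code.

```python
import numpy as np

def undrained_strength(kappa, s_peak=45.0, s_res=1.6, kappa_ref=1.6):
    """Two-segment linear strain-softening law for s_u (kPa).

    Strength drops linearly from s_peak at kappa = 0 to s_res at
    kappa = kappa_ref and stays at s_res for larger plastic strain.
    Parameter values follow the Saint-Jude clay properties quoted
    below; the piecewise-linear shape is assumed from Fig. 3.
    """
    kappa = np.asarray(kappa, dtype=float)
    return s_peak - (s_peak - s_res) * np.clip(kappa / kappa_ref, 0.0, 1.0)

# Strength at increasing levels of remoulding
print(undrained_strength([0.0, 0.8, 1.6, 3.0]))  # [45.0  23.3  1.6  1.6]
```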
Clay properties
According to (Locat et al. 2017; Locat et al. 2011b), soils at elevations below 2 m (see the vertical axis of Fig. 2(b)) are much stiffer than the soils above. Therefore, only soil layers at an elevation of 2 m or more are considered in our simulation, including the top crust (a dense sandy layer) and the underlying deposit of firm grey sensitive clay (very uniform, with some silt). In the simulation, the crust is considered non-sensitive with a shear strength of 75 kPa, which is in line with the parameter used for the undrained analysis in (Locat et al. 2011b). Field vane testing (Locat et al. 2011b) showed that the intact shear strength increases linearly with depth in the deposit from 25 to 65 kPa. Thus, the peak shear strength used for the sensitive clay in our simulation is the average value (45 kPa), with the residual shear strength being 1.6 kPa, which is the strength of the clay near the bottom of the deposit when it is fully remoulded. According to (Locat et al. 2017; Locat et al. 2011b), the unit weights of the crust and the sensitive clay are 18.6 and 16 kN/m³, respectively, and the Poisson's ratio for both the crust and the sensitive clay is 0.49 to constrain the volume change (undrained conditions). The values of the reference equivalent deviatoric plastic strain and the viscosity coefficient were not reported in (Locat et al. 2017; Locat et al. 2011b). In our simulation, the reference equivalent deviatoric plastic strain is 1.6, calculated from the lab testing data for Canadian sensitive clay reported in (Quinn et al. 2011). The viscosity coefficient for the crust and for the deposit of sensitive clay is 1000 Pa·s, which is within the range 100-1499 Pa·s suggested for case studies of landslides (Edgers and Karlsrud 1982; Johnson and Rodine 1984) and in line with the viscosity used to reproduce the Storegga slide in a marine sensitive clay (Gauer et al. 2005).
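The viscous part of the model can be illustrated in the same spirit. Under simple shear, the Bingham idealisation adds a rate-dependent contribution η·γ̇ on top of the (softened) yield strength; the additive form below is the textbook Bingham law and is assumed here purely for illustration, with η = 1000 Pa·s from the back analysis.

```python
def bingham_shear_stress(s_u_kpa, gamma_dot, eta=1000.0):
    """Bingham-type shear resistance of yielded clay (Pa).

    tau = s_u + eta * gamma_dot, with the (softened) yield strength
    s_u in kPa and the shear strain rate gamma_dot in 1/s. The
    additive form is the textbook Bingham idealisation, assumed here
    for illustration; eta = 1000 Pa*s is the back-analysed viscosity.
    """
    return s_u_kpa * 1.0e3 + eta * gamma_dot

# Fully remoulded clay (s_u = 1.6 kPa) sheared at 1 1/s:
print(bingham_shear_stress(1.6, 1.0))  # 2600.0 Pa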
Solution scheme: PFEM
A variety of numerical techniques are available for solving the above governing equations. In this work, the set of equations is solved using the finite element method (FEM) in a mathematical programming framework, as suggested in (Zhang et al. 2017). In this computational framework, the governing equations after finite element discretization are reformulated, based on the Hellinger-Reissner variational principle, as an equivalent second-order cone programming problem, which is then solved by employing an advanced optimization algorithm, the primal-dual interior point method. For more details of the solution scheme, readers are referred to (Zhang et al. 2017), in which the correctness and robustness of the solution scheme have been illustrated.
Since the FEM used is based on a Lagrangian description, it fails in modelling the entire failure process of the retrogressive landslide. This is because the clay undergoes very large deformations that result in severe mesh distortion if a Lagrangian FEM is used. Moreover, the change of geometry may lead to a complex free surface evolution, for example to the generation of new backscarps during collapse, which cannot be captured by the traditional Lagrangian FEM. To tackle these issues, the version of the PFEM introduced in (Zhang et al. 2017) is also adopted in this work. The PFEM is a mixture of the Lagrangian FEM and the particle approach. The basic idea behind the PFEM is that mesh nodes are regarded as free particles that can move freely, while the governing equations are still discretized and solved using the Lagrangian FEM incrementally (Zhang et al. 2017; Oñate et al. 2004). For the problem at hand, the basic analysis steps of the PFEM in a typical time interval [t_n, t_n+1] are as follows (see also Fig. 4; a minimal sketch of step (c) is given after this list):
(a) solve the governing equations on mesh M_n using the Lagrangian FEM;
(b) update the mesh nodes to their new positions according to the computed incremental displacements and erase the mesh topology, leaving behind a cloud of free particles C_n+1;
(c) identify the computational domain Ω_n+1 based on C_n+1 using the alpha-shape method;
(d) discretise the domain Ω_n+1 to obtain the new mesh M_n+1;
(e) map the state variables, such as velocities, accelerations, stresses and strains, from the old mesh to the new mesh, and then step to the next time interval.
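Step (c) is the geometric heart of the remeshing loop. A minimal sketch of the alpha-shape idea, filtering a Delaunay triangulation of the particle cloud by circumradius, is given below; the threshold criterion is an assumption for illustration and not necessarily the exact variant used in the paper.

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_triangles(points, alpha):
    """Identify the computational domain from a particle cloud (step (c)).

    A Delaunay triangulation is built over the particles; triangles
    whose circumradius exceeds `alpha` are discarded, so the union of
    the kept triangles approximates the (possibly non-convex, possibly
    separated) domain occupied by the material.
    """
    tri = Delaunay(points)
    kept = []
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        la, lb, lc = (np.linalg.norm(b - c), np.linalg.norm(a - c),
                      np.linalg.norm(a - b))
        s = 0.5 * (la + lb + lc)
        area = max(np.sqrt(max(s*(s-la)*(s-lb)*(s-lc), 0.0)), 1e-12)
        circumradius = la * lb * lc / (4.0 * area)
        if circumradius <= alpha:
            kept.append(simplex)
    return np.array(kept)

# Example: 2D particle cloud; alpha is typically tied to the mesh size h
pts = np.random.rand(200, 2)
print(len(alpha_shape_triangles(pts, alpha=0.15)), "triangles kept")
```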
A key feature of the PFEM lies in its inheritance not only of the capability of a particle approach for handling general large deformation problems but also of the solid mathematical foundation of the FEM. To date, the PFEM has been used successfully for modelling landslides and related large deformation problems such as granular flows (Dávalos et al. 2015; Zhang et al. 2014; Cante et al. 2014; Zhang et al. 2016), debris flows and their interaction with barriers (Franci and Zhang 2018), a flow-like landslide (Zhang et al. 2015), landslide-generated waves (Salazar et al. 2016; Cremonesi et al. 2011), fluid-soil-structure interactions (Oñate et al. 2011), and submarine landslides and their impact on nearby subsea pipelines (Zhang et al. 2019), just to mention a few. The possibility of using the PFEM for modelling retrogressive landslides has been explored in (Zhang et al. 2017). In this work, it is used to study the 2010 Saint-Jude retrogressive landslide qualitatively and quantitatively.
Simulation results and discussions
To trigger the landslide, the clay enveloped by the critical slip surface determined from the drained slope stability analysis performed in (Locat et al. 2017; Locat et al. 2011b) is set to be fully remoulded. This critical slip surface is selected because the FoS from the drained analysis (implying that the excess pore water pressure is zero) is much lower than that from the undrained analysis (where the clay is impermeable and the excess pore water pressure cannot dissipate). However, after the landslide is triggered, the time interval from mass transport to final deposition is very short, and therefore, given that the clay permeability is very low, the effect of water seepage in the clay can be neglected. This explains why the post-failure analysis in this work is modelled under undrained conditions. In our simulation, the profile of the deposit is set to be sufficiently long to remove the boundary effect. The bottom of the sensitive clay is fully fixed. A total of 17,773 mixed triangular finite elements (36,224 mesh nodes) are used to discretize the initial domain of the deposit. Mixed elements have been introduced in (Zhang et al. 2017); the term "mixed" means that in each element displacements and stresses are treated as independent fields: quadratic shape functions are used for displacements whereas linear shape functions are used for stresses. The time step adopted in the simulation is Δt = 0.05 s.
Failure evolution and its implication
The complete failure process of the Saint-Jude landslide from our simulation is shown in Fig. 5, where the contour represents the distribution of the equivalent deviatoric plastic strain κ. Overall, according to the simulation results, the Saint-Jude landslide takes the form of a multiple rotational retrogressive landslide. The mass involved in the failure of the initial slope flows into the river, leading to the yielding of the clay at the bottom of the newly generated backscarp (Fig. 5(a)). The accumulation of plastic deformation in the backscarp at this stage is relatively slow, since the clay from the first failure is stored in the river just in front of the new backscarp. Nevertheless, the failure band still develops gradually from the bottom of the deposit to its top surface, resulting in the first retrogressive failure (RF), as shown in Fig. 5(b). A relatively large amount of clay, more than twice as much as that involved in the failure of the initial slope, is involved in this failure. The gravitational potential energy of the clay is transformed into kinetic energy, and thus the clay from the first RF pushes the remoulded clay in front of it to move further forward. An apparent strength reduction is observed for the clay in the failure band, while the clay outside the failure band is almost intact and migrates with little deformation. The migration of the clay from the first failure results in a crater in front of the backscarp (Fig. 5(c)), and similarly the second RF is then triggered, behaving as a rotational failure at the early stage (Fig. 5(c)). The amount of clay disturbed by the second RF is even larger. As can be seen in Fig. 5(c), the shear band of the failure at this time is much longer, which is, to a large extent, due to the increase of the deposit thickness at the distance of 140 m. During the migration of the clay involved in the second RF, the stress concentration at the position where the thickness increases induces two additional shear bands propagating downwards, making the second RF evolve from a rotational collapse into a spread (Fig. 5(d), (e)). The two additional shear bands divide the block in zone 3 into grabens and horsts (see the triangles in Fig. 5(f)). These simulated phenomena have two implications: (1) the evolution of a rotational failure into a spread is possible in a thick sensitive clay layer during the migration process, and (2) a retrogressive landslide can be the combination of a multiple retrogressive failure and a spread. While the clay from the second RF migrates, the third RF occurs in the mode of a rotational failure (Fig. 5(e)), resulting in the final stable backscarp shown in Fig. 5(f). Notably, the block involved in the third failure also breaks due to a newly emerging shear band, resulting in two parts in the shape of a graben and a horst. It is thus concluded that not only a spread but also a rotational retrogressive failure can give rise to grabens and horsts. Figure 5(f) shows that grabens and horsts can be observed in both zone 3 and zone 4, which agrees with the observations reported in (Locat et al. 2017).
Site investigations also show that blocks in zone 3 did not rotate during the landslide, whereas horsts in zone 4 were inclined between 0° and 17° with respect to the horizontal. These observations can be explained by our simulation, in which the migration of the clay in zone 3 takes the form of a spread, where horsts usually undergo very limited rotation, whereas zone 4 results from a typical rotational retrogressive failure, in which blocks rotate considerably during migration.
The final deposit profile of the Saint-Jude landslide (red dashed line) and the failure surface of the landslide determined by on-site testing (black circles) are also depicted in Fig. 5(f) for comparison. As can be seen, the final run-out distance and the retrogression distance from our simulation are 55 m and 145 m (80 m if measured from the crest of the initial slope), respectively, which agrees well with the observation data reported in (Locat et al. 2017). A satisfactory agreement in terms of the shape of the final deposit is also achieved. The simulation results show that the entire landslide area is formed by four zones, which is consistent with the observations reported in (Locat et al. 2017). Moreover, our simulation results indicate that each zone of the landslide area originates from an individual failure: the first zone is from the initial failure, the second zone is from the first RF, and so on. Furthermore, the failure surface from our simulation matches the one determined from the CPTU tests. In conclusion, the qualitative and quantitative agreements validate the adopted computational framework for modelling real retrogressive landslides in sensitive clay.
Failure mechanism
Shear stress fields τ_xy in the deposit at different instants are illustrated in Fig. 6. These time instants refer to the moments when a new retrogressive failure (see the red curves) is triggered. The black curves in Fig. 6 represent the shear stress τ_xy at the bottom of the deposit (namely at the elevation of 2 m). It is worth noting that, since the Tresca yield function is used, both the shear stress τ_xy and the stress difference (σ_xx − σ_yy) contribute to the yielding of the clay. Nevertheless, only the shear stress field is discussed here, since our simulation results show that |σ_xx − σ_yy| is much lower in magnitude than |τ_xy|.
As seen in Fig. 6(a), when a new backscarp (the red curve in Fig. 6(a)) is created owing to the removal of clays involved in the failure of the initial slope, the shear stress at the bottom of the sensitive clays, represented by the solid black curve, behind the new scarp increases to a very high value. The clay there yields with its strength being further reduced, which results in the emergence of a new shear band. The shear band propagates from the bottom of the clay layer to the top surface of the deposit, which triggers the first retrogressive failure (RF) as shown in Fig. 6(b). The second and third sequential collapses (Fig. 6(c), (d)) follow the same failure mechanism.
When a new RF is triggered, the shear stress at the bottom of the deposit to the left of the new RF surface is very low, because the clay there has undergone large plastic deformation and consequently possesses a very low shear strength. On the other hand, the shear stress on the right-hand side of the new RF surface is much higher (slightly below 45 kPa). This is because the clay in this area is only slightly disturbed at that moment; its shear strength is still high, and thus it can sustain a relatively large shear stress.
It is also notable that the curves of the shear stress at the bottom of the clay deposit fluctuate somewhat in the area in front of the new RF surface (e.g. at distances between 50 and 150 m at t = 65 s, as shown in Fig. 6(c), particularly at distances of 70 m and 100 m). This fluctuation stems partly from dynamic effects and partly from the existence of some intact or partially remoulded clay in these areas, which can still sustain a high shear stress. Indeed, partially remoulded clay is observed at distances of around 70 m and 100 m at t = 65 s in Fig. 5(c). As a result of the continuous migration of the clay, its strength is further reduced, and thus the fluctuation of the shear stress in these areas becomes much smaller at later times, for example at t = 76 s and 87 s, as shown in Fig. 6(d) and (e), respectively.
Kinematics
The velocity of clay in the landslide process is depicted in Fig. 7 where the meshes are also plotted. As seen, meshes of good quality are maintained in the simulation. The moving direction of the clay is generally parallel to the basal failure surface of the landslide, given by the solid black curves.
Due to the rotational RF, the gravitational potential energy of the mobilised clay is transformed into kinetic energy, and thus a relatively high speed is observed for the clay involved in the new RF. For the Saint-Jude landslide, the mass involved in a former failure is almost static, or possesses a very low migration speed, when a new RF is triggered. Thus, the mass mobilised by the new failure usually possesses the maximum migration speed and pushes the mass in front of it forward (Fig. 7(b), (c), (f)). However, a huge amount of mass is disturbed in the second RF, which releases a large amount of gravitational potential energy; the wave soon propagates to the leading front of the landslide, causing the front to attain the maximum migration speed.
Additionally, as the mass from the first RF is still almost static at t = 65 s (Fig. 7(c)), the block disturbed in the second RF somewhat ploughs into the mass in front, which can also be identified in Fig. 5(d) (at a distance of around 90 m). Moreover, some small undisturbed areas are observed below the new failure surface when the second and third RFs occur, for example area A in Fig. 7(c) and area B in Fig. 7(f), because these RFs are initially of a rotational failure type. The clay velocity in these areas is zero when the failure is triggered. Soon after, area A is eroded owing to the migration of mass, whereas area B is maintained until the end of the landslide. This indirectly supports the previous explanation of the fluctuation of the bottom shear stress in front of the newly generated backscarp.
Effects of viscosity for Saint-Jude landslide modelling
Since the clay viscosity was not calibrated in (Locat et al. 2017), the viscosity coefficient of the clay used in our simulations (1000 Pa·s) was determined by back analysis. As shown in Fig. 8(a), with the adopted viscosity of 1000 Pa·s the landslide leads to a run-out distance of 55 m and a retrogression distance of 145 m (80 m if calculated from the crest of the initial slope), which agree well with the reported data. To further investigate the influence of the viscosity on the Saint-Jude landslide, a parametric study was carried out with viscosities of 500 Pa·s, 200 Pa·s and 0 Pa·s. The final deposits of the Saint-Jude landslide from the simulations are illustrated in Fig. 8. It is shown that decreasing the viscosity results in an increase of both the run-out distance and the retrogression distance. When the viscosity is neglected (Fig. 8(d)), the final run-out distance from the simulation is 75 m (36.4% greater than the reported one) and the retrogression distance is 170 m (17.2% greater than the reported one). When the viscosity equals 500 Pa·s or 200 Pa·s (Fig. 8(b), (c)), the final depositions obtained from the simulations are almost the same. The run-out distance is slightly larger than the reported data, and the retrogression distance matches the reported data, although a shear band emerges at the location from 175 to 220 m, which may lead to a further retrogression of 25 m (Fig. 8(b), (c)). In general, for the Saint-Jude landslide, the failure mode and the retrogression distance induced by each retrogressive failure are independent of the viscosity. It is thus concluded that a clay viscosity between 200 and 1000 Pa·s reproduces a reasonable final deposition for the Saint-Jude landslide, which is within the range of viscosities for real landslide simulations (100-1499 Pa·s) suggested by Edgers and Karlsrud (1982) and Johnson and Rodine (1984). Nevertheless, it is notable that the used viscosity is much greater than the viscosity calibrated in lab testing for Canadian sensitive clay in (Locat and Demers 1988), which varies from 0.007 to 0.2 Pa·s. The discrepancy between the viscosity value determined in lab testing and that needed to predict a reasonable run-out distance for sensitive clay landslides is still an open question on which there is no consensus, since there is a variety of influencing factors, such as the accuracy of the calibration method for viscosity and the rheological models used. To explain this discrepancy, synergistic investigations combining physical and numerical modelling are required.
Conclusions
In this paper, a case study on the Saint-Jude landslide that occurred in 2010 was carried out numerically using a new computational framework. The PFEM was adopted to tackle the issues resulting from large material deformation, while the behaviour of sensitive clays was described by an elastoviscoplastic constitutive model that is a mixture of the Tresca model with strain softening and the Bingham model. The investigation of progressive failure in the Saint-Jude landslide was carried out with emphasis on the failure mode and mechanism, the kinematics and the final deposition. Detailed comparisons between the modelling results and the available data were conducted. The main conclusions are as follows:
(1) The PFEM using the elastoviscoplastic model with strain softening is a valid computational framework for simulating real retrogressive landslides in sensitive clay. It not only qualitatively reproduced the failure mode and the progressive failure process of the Saint-Jude landslide but also quantitatively estimated its final run-out distance, retrogression distance and failure surface. Thus, this computational framework can be used for hazard estimation of real landslides in sensitive clay.
(2) The Saint-Jude landslide is a multiple retrogressive landslide (also called an earthflow) in which, according to our simulations, a total of three retrogressive failures (RFs) were triggered after the initial failure of the slope. The initial failure and the subsequent three RFs divided the landslide area into four zones, which coincides with the observations.
(3) Our simulation results indicated that a retrogressive landslide can be a combination of multiple types of retrogressive failure. This is supported by the evidence that the second rotational retrogressive failure in the Saint-Jude landslide evolved into a spread during the migration process.
(4) Owing to the change of geometry, a spread may occur in a deposit with a thick layer of sensitive clays. This is demonstrated by the second retrogressive failure of the Saint-Jude landslide in our simulation. The stress concentration at the location where there is a sudden increase of clay thickness leads to additional shear bands, which further decompose the moving block, make the failure evolve as a spread during migration, and sequentially produce multiple grabens and horsts in the migrating clay.
(5) Fluctuations of the shear stress at the bottom of the sensitive clay occur in front of the new failure surface because of the existence of intact or partially remoulded clay in these areas, which may still sustain a large shear stress. However, the fluctuation is alleviated soon afterwards, because the unremoulded clay is further eroded in the migration process.
(6) The clay block stemming from a rotational failure may be further decomposed into two smaller parts, exhibiting as a graben and a horst, in the process of migration. In other words, a rotational retrogressive failure (not just a spread) in sensitive clay can produce grabens and horsts.
(7) Horsts and grabens are blocks of intact clay that usually undergo translation with very limited rotation during migration if they originate from a spread, but rotate considerably if they derive from a rotational retrogressive failure.
(8) In a multiple retrogressive landslide, it is usually the materials involved in a new retrogressive failure, rather than those at the leading front, that possess the largest migration speed. However, if a huge amount of gravitational potential energy is released by a new collapse, the wave may propagate quickly to the leading front of the landslide and give it a very high migration speed.
(9) Neglecting the viscosity of sensitive clays leads to an overestimation of both the final run-out distance and the retrogression distance of real retrogressive landslides. The modelling results for the Saint-Jude landslide suggest a value between 200 and 1000 Pa·s for producing reasonable run-out and retrogression distances. Although the suggested value is within the range advised by other researchers for real landslide modelling, it is much greater than the viscosity of Canadian sensitive clay determined by lab testing. Thus, synergistic investigations combining physical and numerical modelling are required to provide a clear answer to this discrepancy.
"year": 2019,
"sha1": "e17d1d09cd1bbca6acab84eb426dd8afc6cf40fa",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10346-019-01330-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "e17d1d09cd1bbca6acab84eb426dd8afc6cf40fa",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
} |
Field quality of 1.5 m long conduction cooled superconducting undulator coils with 20 mm period length
The Institute for Beam Physics and Technology (IBPT) at the Karlsruhe Institute of Technology (KIT) and the industrial partner Babcock Noell GmbH (BNG) have been collaborating since 2007 on the development of superconducting undulators for ANKA and for low emittance light sources. The first full length device, with 15 mm period length, has been successfully tested in the ANKA storage ring for one year. The next superconducting undulator has a period length of 20 mm (SCU20) and is also planned to be installed in the accelerator test facility and synchrotron light source ANKA. The 1.5 m long SCU20 coils have been characterized in a conduction cooled horizontal test facility developed at KIT IBPT. Here we present the local magnetic field and field integral measurements, as well as their analysis, including the expected photon spectrum.
Introduction
Superconducting undulators can produce, with respect to permanent magnet ones, a higher peak field on axis for the same vacuum gap and period length, allowing an increase in the brilliance and in the tunability of the photon spectrum. The collaboration between the Institute for Beam Physics and Technology (IBPT) at the Karlsruhe Institute of Technology (KIT) and the industrial partner Babcock Noell GmbH (BNG) is developing superconducting undulators for ANKA and for low emittance light sources. The first full length device, with 15 mm period length, has been successfully tested in the ANKA storage ring for one year [1]. Presently, the collaboration is focused on the development of a full scale superconducting undulator with a period length of 20 mm and 75.5 full periods (SCU20). The device is conduction cooled and wound with a round NbTi wire with a diameter of 0.7 mm; each full winding package consists of 113 single turns, and the magnetic gap is 8 mm. More details on the layout are provided in References [2,3].
The coils have been measured in a purpose-built test facility allowing precise measurements, in vacuum and at low temperatures, of the field profile and field integrals of superconducting undulator coils. The coils are tested in the KIT IBPT test facility CASPER II [3] in a horizontal arrangement and conduction cooled, in a configuration similar to the one they will have in the final cryostat. The results of the training, the stability and the thermal behavior of the coils are described in Reference [4]. The field integrals are measured using a stretched wire, while the local field profile along the coils is measured using a Hall probe mounted on a sledge which moves along the coils and is kept below 50 K.
Field integrals
With the aim of minimizing the horizontal first and second field integrals, the end field design shown in Figure 1 has been chosen. Two auxiliary coil pairs and Helmholtz coils are wound with a round NbTi wire of 0.25 mm diameter. The first groove of the main coil has 23 turns and the second one 53 turns. Similar configurations are present at all 3 other ends. To keep the first and second vertical field integrals (I_1v and I_2v) within the specified values, only auxiliary coils 1 (AUX1) and the downstream Helmholtz coil (HH DS) are needed. The AUX1 upstream and downstream are connected to the same power supply and are used to compensate the second vertical field integral, while the HH DS is used to compensate the first vertical field integral. All other correction coils are not needed. The measured values of the first and second vertical and horizontal field integrals, presented in Table 1 for five different currents in the main coil in the range from 50 to 395 A, are all within the specified values: |I_1v| < 3×10⁻⁵ T·m and |I_2v| < 4×10⁻⁴ T·m².
The horizontal field integrals are specified as |I_1h| < 3×10⁻⁶ T·m and |I_2h| < 10⁻⁵ T·m². In order to reach these values, correctors will be added outside the cryostat.
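For reference, the first and second field integrals quoted above follow the standard definitions I1 = ∫B dz and I2 = ∬B dz′dz, which govern the angular kick and the position offset of the electron beam at the undulator exit. A minimal sketch of their numerical evaluation from a measured profile:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def field_integrals(z, B):
    """First and second field integrals of a measured profile B(z).

    I1 = int B dz (T*m) sets the angular kick at the exit;
    I2 = int int B dz' dz (T*m^2) sets the position offset.
    z in metres, B in tesla; standard definitions.
    """
    I1 = cumulative_trapezoid(B, z, initial=0.0)
    I2 = cumulative_trapezoid(I1, z, initial=0.0)
    return I1[-1], I2[-1]

# An ideal antisymmetric undulator field integrates to ~0:
z = np.linspace(0.0, 1.5, 15001)               # 1.5 m long device
B = 1.187 * np.sin(2 * np.pi * z / 0.020)      # 20 mm period, 1.187 T
print(field_integrals(z, B))
```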
Integral field multipoles
The first vertical field integral has been measured at different positions of the transverse coordinate x at the five different currents in the main coils indicated in Table 1, with the AUX1 and HH DS coils powered at the currents minimizing the first and second vertical field integrals on the magnetic axis (x = y = 0 mm, where y is the coordinate along the magnetic gap). The vertical integrated field multipoles have been calculated from the measurements of I_1v shown in Figure 2. With these values of the integrated field multipoles, the dynamic aperture for the 2.5 GeV optics is not affected.
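One common way to extract integrated multipoles, assumed here for illustration, is to fit the transverse dependence I_1v(x) with a polynomial whose coefficients give the integrated dipole, quadrupole, sextupole, ... components:

```python
import numpy as np

def integrated_multipoles(x, I1v, order=3):
    """Estimate integrated multipoles from I1v(x) scans.

    Fitting I1v(x) with a polynomial, the coefficient of x^n gives
    the integrated 2(n+1)-pole component: n = 0 dipole (T*m),
    n = 1 quadrupole (T), n = 2 sextupole (T/m), and so on. This is
    a common evaluation recipe, assumed here; the paper does not
    spell out its fitting procedure.
    """
    return np.polynomial.polynomial.polyfit(x, I1v, order)

# Synthetic example: 2e-5 T*m residual dipole plus 1e-3 T quadrupole
x = np.linspace(-0.01, 0.01, 21)               # +/-10 mm scan, metres
I1v = 2e-5 + 1e-3 * x
print(integrated_multipoles(x, I1v))           # [2e-05, 1e-03, ~0, ~0]
```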
Longitudinal field profile
With an operating current of 395 A and a magnetic gap of 8 mm, an average peak field of 1.187 T is reached. The measured absolute values on axis (x = 0 mm) of the peaks of the magnetic field profile are plotted in Figure 3 (lower plot) and compared to those simulated with Radia [5] for a current of 395 A in the main coils and 5.6 A in the AUX1 coils. The simulation takes into account the deviations measured at room temperature: deviations of the half period length before winding < 20 µm, and deviations of the pole height < 50 µm and of the winding height < 100 µm measured after impregnation. Figure 3 (upper plot) also shows the half period length obtained from the measured and simulated field profiles. The largest deviation of the period length observed at room temperature occurs at the connections of the building blocks of the coils. The smaller average period length observed in cold conditions is due to the thermal contraction of the yoke. The larger deviations observed in the half period length in cold conditions with respect to those at room temperature might be due to the loosening of the connections between the blocks as well as to an increase of the winding height deviations. This could also explain the larger variation in the absolute value of the measured peak fields on axis with respect to the simulated ones.
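A straightforward recipe for deriving the per-pole quantities plotted in Fig. 3 from a measured B(z), assumed here rather than taken from the paper, is to locate the zero crossings (half period lengths) and the extrema between them (peak fields):

```python
import numpy as np

def analyze_field_profile(z, B):
    """Extract per-pole peak fields and half period lengths from B(z).

    Half periods are taken as distances between consecutive zero
    crossings of B(z) (linearly interpolated); peak fields are the
    field extrema between crossings.
    """
    idx = np.where(np.diff(np.sign(B)) != 0)[0]
    # Linear interpolation of each zero-crossing position
    z0 = z[idx] - B[idx] * (z[idx + 1] - z[idx]) / (B[idx + 1] - B[idx])
    half_periods = np.diff(z0)
    peaks = np.array([np.max(np.abs(B[(z > a) & (z < b)]))
                      for a, b in zip(z0[:-1], z0[1:])])
    return half_periods, peaks

z = np.linspace(0.0, 0.2, 4001)                # a 10-period stretch
B = 1.187 * np.sin(2 * np.pi * z / 0.020)
hp, pk = analyze_field_profile(z, B)
print(hp.mean(), pk.mean())                    # ~0.010 m, ~1.187 T
```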
The roll off has been obtained at each pole by measuring the longitudinal field profile with the same Hall probe positioned at x = 0 mm, -10 mm and +10 mm, as shown in Figure 4. The roll off obtained from the local magnetic field measurements, at different currents in the main coils and with the correction coils powered to minimize the vertical field integrals, is below 0.5% at higher currents and increases to 0.7-0.8% at lower currents. The roll off measured for the five currents in the main coils indicated in Table 1 induces a negligible dynamic kick [6] and does not constitute a problem for the operation of the SCU20.
Calculated spectrum
The advantage of using the SCU20 with respect to an ideal cryogenic permanent magnet undulator (CPMU), i.e. one without mechanical errors and with perfect end fields, with the same parameters as the one built at SOLEIL [8] and for the same vacuum gap of 7 mm, can be appreciated in Figure 5. The CPMU has a period length of 18 mm (CPMU18), a peak field of 0.82 T (the highest field reachable with PrFeB) and 2 m magnetic length. The flux calculated with B2E [7] from the field measured at the maximum current of the SCU20 is compared in Figure 5 with the flux of a CPMU18 with an ideal field, that is, without mechanical errors and with perfect end fields. Please note that the magnetic length considered is 1.5 m for the SCU20 and 2 m for the CPMU18.
To give a flavor of a comparison in terms of brilliance, the flux is calculated through a pinhole of 50 µm × 50 µm at 10 m from the source. The spectra presented have been simulated using the ANKA beam parameters: beam energy 2.478 GeV, beam current 100 mA, energy spread 0.001, horizontal emittance 41 nm rad, vertical emittance 0.3 nm rad, horizontal beta function 19 m, and vertical beta function 1.7 m.
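For orientation, the measured peak field and the ANKA beam energy fix the deflection parameter K and the on-axis harmonic energies through the textbook undulator formulas; the sketch below uses these standard expressions, not the B2E calculation itself:

```python
def undulator_harmonics(B0_T, period_mm, E_GeV, n_max=5):
    """On-axis odd-harmonic photon energies of a planar undulator.

    K = 0.0934 * period[mm] * B0[T]
    E_n[keV] = 0.9496 * n * E[GeV]^2 / (period[cm] * (1 + K^2 / 2))
    Textbook formulas, used here only to indicate the tuning range
    implied by the measured SCU20 peak field and the ANKA energy.
    """
    K = 0.0934 * period_mm * B0_T
    period_cm = period_mm / 10.0
    E_n = {n: 0.9496 * n * E_GeV**2 / (period_cm * (1.0 + K**2 / 2.0))
           for n in range(1, n_max + 1, 2)}
    return K, E_n

K, E_n = undulator_harmonics(1.187, 20.0, 2.478)
print(f"K = {K:.2f}")                          # K ~ 2.22 at 395 A
print(E_n)                                     # 1st harmonic ~ 0.84 keV
```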
Discussion
The flux calculated from the measured field of the SCU20 is also compared to the flux of an SCU20 with an ideal field. As expected, a reduction in flux (less than 28% up to 30 keV) for the odd harmonics is observed, due to the mechanical errors and to the non-ideal end field configuration. The comparison with the ideal CPMU18 with the same parameters as the one in use at SOLEIL shows the larger flux/brilliance of the SCU20 at high energies, up to a factor of five (see Figure 5a), and, at low photon energies, the energy regions accessible with the SCU20 that are not reachable with the CPMU18 (see Figure 5b).
Conclusion
The field integrals, the longitudinal magnetic field profile and the roll off of the 1.5 m long coils of the SCU20 have been measured in the test facility CASPER II.
The magnetic field profile has been used to simulate the expected photon spectrum at ANKA. The advantages of the spectrum produced by the measured field profile of the SCU20 with respect to an ideal PrFeB CPMU18 (without mechanical errors and with perfect end fields) with the same parameters as the one built at SOLEIL and the same beam stay clear have been demonstrated (see Figure 5).
"year": 2017,
"sha1": "11a94ce07ef23569944e8952d4b5bfd8863e9812",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/874/1/012015",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "cf73233f5511b64492821dfadccee678f2d5c9cb",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Comparison of endoscopic thyroidectomy by complete areola approach and conventional open surgery in the treatment of differentiated thyroid carcinoma: A retrospective study and meta-analysis
Background
The feasibility of endoscopic thyroidectomy by complete areola approach (ETCA) remains controversial. This study combined our clinical data with data obtained from a systematic literature search to examine the effectiveness and safety of ETCA compared with conventional open thyroidectomy (COT) in differentiated thyroid carcinoma (DTC).
Methods
A total of 136 patients with a diagnosis of DTC who underwent unilateral thyroidectomy with central neck dissection from August 2020 to June 2021 were enrolled and divided into the ETCA group (n = 73) and the COT group (n = 63). The operative time, intraoperative bleeding volume, number of removed lymph nodes, number of metastatic lymph nodes, postoperative drainage volume, length of postoperative hospital stay, postoperative parathyroid hormone (PTH) levels, and complications were analyzed. A systematic review and comprehensive literature search were then conducted using the PubMed, Google Scholar, Embase, Web of Science, CNKI, Wanfang, and VIP databases up to June 2022. Review Manager software version 5.3 was used for the meta-analysis.
Results
The clinical data showed significant differences between the two groups in operative time, intraoperative bleeding volume, number of removed lymph nodes, and postoperative drainage volume. There were no statistical differences in the length of postoperative hospital stay, number of metastatic lymph nodes, postoperative PTH level, or complications. In the systematic review and meta-analysis, 2,153 patients from fourteen studies (including our data) were ultimately included. The meta-analysis found that ETCA had a longer operative time, a larger postoperative drainage volume, and a lower intraoperative bleeding volume. In terms of the length of postoperative hospital stay, the number of removed lymph nodes, and surgical complications, there was no significant difference between the two groups.
Conclusion
ETCA entails less surgical bleeding and a better cosmetic appearance compared with COT, while its operative time and postoperative drainage are less favorable. In addition, ETCA is not inferior to COT in terms of postoperative hospital stay, the number of removed lymph nodes, and surgical complications. Given its overall advantages and risks, ETCA is an effective and safe alternative for patients with cosmetic concerns.
Introduction
Thyroid cancer, the most common malignancy of the endocrine system, has been increasing rapidly in recent years, ranking 9th in incidence in 2020 (1). Among the different types of thyroid cancer, differentiated thyroid carcinoma (DTC), including papillary thyroid carcinoma (PTC) and follicular thyroid carcinoma, accounts for most cases worldwide (2). Conventional open thyroidectomy (COT) is a long-proven and effective treatment for DTC with good outcomes, but it leaves a prominent scar on the anterior neck. COT involves creating a 5-cm incision in the skin fold about 2 cm above the sternal notch; the subcutaneous tissues are then separated layer by layer to fully expose the thyroid gland. Because the prognosis is favorable (3), many patients consider maintaining a high quality of life with an aesthetic appearance to be as crucial as disease management itself. The cosmetic implications of neck surgery have motivated surgeons and investigators to develop new approaches to accessing the neck using minimally invasive techniques. Since Hüscher et al. first performed endoscopic thyroidectomy (ET) in 1997, an increasing number of studies have reported different ET approaches for treating DTC (4-6). In the ETCA procedure, a 12-mm incision and a 5-mm incision are made in the right mammary areola, while a 5-mm incision is made in the left mammary areola, to introduce the puncture sheaths, a 30° endoscope, and the operating instruments. As a result of the tiny incisions and the low skin tension in the areolar region, ETCA allows the removal of bilateral lesions with excellent cosmetic outcomes (7). However, certain indicators that may be used to compare the efficacy and safety of ETCA and COT, such as operative time, number of lymph nodes excised, postoperative parathyroid hormone levels, and incidence of surgical complications, remain debatable (8,9).
Several previous meta-analyses comparing results between ET and COT have been published (10,11). However, to the best of our knowledge, no meta-analysis investigating ETCA vs. COT has been published. With the increasing popularity of ETCA, the number of original studies exploring its safety and treatment outcomes has gradually grown. In this study, along with data retrieved from previously published studies identified through a systematic review, we specifically included the data from one regional academic medical center to perform a meta-analysis comparing the effectiveness and safety of ETCA with COT in DTC patients.
Data collection
The retrospective study group comprised 136 patients who underwent unilateral lobectomy and central lymph node dissection at Chongqing General Hospital from August 2020 to June 2021. The patients were divided into two surgical method groups: the ETCA group (n = 73) and the COT group (n = 63). All patients had a pathological diagnosis of DTC with tumor sizes ≤ 2 cm. Patients with suspected invasion of the recurrent laryngeal nerve (RLN), esophagus, or trachea, suspected lateral lymph node metastasis, or a history of neck surgery were excluded from the study. All operations were performed by one experienced board-certified surgeon. This study was approved by the Ethical Committee of Chongqing General Hospital, and all included patients signed informed consent forms.
Clinical data were collected from medical records, and a database was set up to record patient age, tumor size, operative time, intraoperative bleeding volume, number of removed lymph nodes, number of metastatic lymph nodes, postoperative drainage volume, length of postoperative hospital stay, postoperative parathyroid hormone (PTH) levels, and complications (transient hoarseness and postoperative infection). Specifically, the operative time, postoperative drainage volume, length of postoperative hospital stay, number of removed lymph nodes, and number of metastatic central lymph nodes were compared between the two operative approaches to assess surgical effectiveness. The intraoperative bleeding volume, postoperative PTH levels, and incidence of surgical complications were used to evaluate surgical safety.
Statistical analysis
SPSS 25.0 software (IBM, Armonk, NY, USA) was used for statistical analysis. Measurement data are expressed as mean ± standard deviation, and the t-test was performed for intergroup comparisons. Enumeration data are expressed as rates (%), and the χ² test was performed for intergroup comparisons. P < 0.05 indicated a statistically significant difference.
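As a minimal illustration of the intergroup comparisons described above, the following Python sketch applies an independent-samples t-test to a continuous measure and a χ² test to a 2 × 2 complication table; the numeric values are invented placeholders rather than the study data, and the code only mirrors the analysis in spirit, not SPSS's exact output.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical operative times (minutes) for the two groups -- placeholders only
etca = rng.normal(114.7, 25.6, 73)
cot = rng.normal(103.0, 22.8, 63)
t, p = stats.ttest_ind(etca, cot)
print(f"t = {t:.2f}, p = {p:.3f}")

# Hypothetical 2 x 2 table: [complication, no complication] per group
table = np.array([[2, 71], [2, 61]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")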
Meta-analysis
This meta-analysis was reported in conformity with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) (12). In addition, the study protocol was prospectively registered in PROSPERO (http://www.crd.york.ac.uk/PROSPERO; registration number: CRD42022344008).
Literature search strategy
Articles published up to June 2022 were collected from the PubMed, Google Scholar, Embase, Web of Science, CNKI, Wanfang, and VIP databases. The keywords were ("Areola" OR "complete areolar" OR "endoscopic thyroidectomy") AND ("thyroid" OR "thyroid cancer" OR "thyroid carcinoma"). In addition, the reference lists of the retrieved articles were reviewed to identify other eligible studies.
Identification of eligible studies
Two authors (Y. Yuan and T. Yin) independently carried out the literature search, and disagreements were resolved by consensus. The abstracts of the retrieved studies were reviewed and excluded if deemed irrelevant. The full texts of the relevant studies were further reviewed for eligibility. If there were duplicate publications of the same study, the one with the most detailed information and complete data was included.
Studies included in this meta-analysis had to meet all of the following criteria: (1) The type of study must be either a randomized controlled trial or an observational study (including cohort and case-control studies).
Data extraction and quality evaluation
Each of the two authors (Y. Yuan and T. Yin) independently extracted the data from the literature, and if discrepancies arose, a consensus was reached by consulting a third person (C. Yan) and comprehensively comparing the data. The following information was collected: first author, country, publication year, age, tumor size, operative time, intraoperative bleeding volume, number of removed lymph nodes, postoperative drainage volume, length of postoperative hospital stay, hoarseness, hypocalcemia, hematoma, infection, study design, and Newcastle-Ottawa Scale (NOS) scores.
The methodological quality of the observational studies was appraised using the validated NOS, which comprises three broad subscales: study group selection (0 to 4 points), comparability of the groups (0 to 2 points), and ascertainment of exposures and outcomes (0 to 3 points). A score of 4-6 was considered moderate quality, and a score of 7 or more was defined as high quality. Two evaluators (C. Yan and Y. Chen) independently conducted the quality assessment, and any divergence of opinion was resolved by mutual communication.
Statistical analysis and publication bias evaluation
RevMan software (version 5.3; Cochrane Library) and STATA statistical software (version 14.0; StataCorp, College Station, TX) were used for statistical analysis. Continuous data were analyzed using the weighted mean difference (WMD) with the corresponding 95% confidence interval (CI), whereas dichotomous data were measured using the odds ratio (OR) with the corresponding 95% CI. Heterogeneity tests were performed based on the Q test and the I² statistic. For I² > 50%, the random-effects model was implemented to create forest plots. In contrast, the fixed-effects model was adopted if I² was <50%. Sensitivity analysis was performed by sequentially excluding each incorporated study one at a time and observing whether the combined results changed significantly (13). Publication bias was assessed qualitatively with a funnel plot (14). All P-values were two-tailed, and P < 0.05 was considered statistically significant.
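To make the model-selection rule above concrete, the following sketch computes Cochran's Q, I², and a pooled weighted mean difference under fixed-effect or random-effects (DerSimonian-Laird) weighting; the per-study values are invented placeholders, and the simplified textbook formulas below are an approximation, not RevMan's exact implementation.

import numpy as np

# Hypothetical per-study mean differences and standard errors (placeholders)
md = np.array([11.8, 9.5, 14.2, 10.1])
se = np.array([4.1, 3.3, 5.0, 3.8])

w = 1.0 / se**2                                   # inverse-variance weights
pooled_fixed = np.sum(w * md) / np.sum(w)         # fixed-effect pooled WMD
q = np.sum(w * (md - pooled_fixed) ** 2)          # Cochran's Q
df = len(md) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0   # I^2 statistic

if i2 > 50:                                       # switch to random effects
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w = 1.0 / (se**2 + tau2)                      # DerSimonian-Laird weights
pooled = np.sum(w * md) / np.sum(w)
print(f"I2 = {i2:.1f}%, pooled WMD = {pooled:.2f}")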
Comparison between the two groups
All patients in both groups underwent unilateral thyroidectomy and central lymph node dissection. The operative time in the ETCA group was significantly longer than that in the COT group (114.72 ± 25.62 vs. 102.96 ± 22.79, P = 0.006). The intraoperative bleeding volume in the ETCA group was significantly less than that in the COT group (18.38 ± 7.77 vs. 35.67 ± 11.42, P < 0.001). The number of removed lymph nodes in the ETCA group was greater than that in the COT group (8.84 ± 6.52 vs. 6.41 ± 5.47, P = 0.022), and the number of metastatic lymph nodes was similar between the two groups (0.59 ± 1.24 vs. 0.59 ± 1.61, P = 0.994). The postoperative drainage volume in the ETCA group was greater than that in the COT group (195.75 ± 80.10 vs. 137.83 ± 52.09, P < 0.001). There was no significant difference in the length of postoperative hospital stay between the ETCA and COT groups (3.68 ± 0.88 vs. 3.59 ± 1.10, P = 0.567). In terms of postoperative PTH levels, there was no significant difference between the ETCA and COT groups (34.59 ± 12.18 vs. 36.86 ± 11.43, P = 0.268). With regard to the incidence of postoperative complications, there were no significant differences between the two groups. For instance, only 2.7% and 3.1% of patients developed short-term hoarseness that resolved within two weeks in the ETCA group (2/73) and COT group (2/63), respectively. Postoperative infection was a rare complication that occurred in only 1.3% and 1.6% of patients in the ETCA group (1/73) and COT group (1/63), respectively (Table 1). No postoperative bleeding, hypocalcemia, unanticipated secondary surgery, or other serious complications occurred in any of the patients in the two groups.
Meta-analysis
Study selection
The search strategy yielded 721 relevant articles after removing duplicates. After screening titles and abstracts and excluding duplicate references, 232 articles were identified. Some of those studies concerned benign disease or other surgical approaches. A full-text review was then conducted to exclude those that did not meet the inclusion criteria, and 13 studies (9, 15-26) were identified. A flow chart for the selection and exclusion of studies is shown in Figure 1. Combined with our data, 14 studies comprising 2,153 patients were finally enrolled in this analysis: 955 patients in the ETCA group and 1,198 patients in the COT group. The quality assessments and general characteristics of the studies included in the meta-analysis are shown in Table 2.
Sensitivity analyses and publication bias
A sensitivity analysis was conducted by deleting individual studies one at a time and by replacing the effect models; the overall statistical significance did not change, indicating that the results were robust and reliable. The funnel plot of the studies based on hoarseness did not reveal any obvious publication bias (Figure 4).
Discussion
There are several types of endoscopic thyroidectomies that can be applied to avoid scarring on the neck (27)(28)(29). Among those approaches, ETCA has been widely used because of its advantages, including concealed incisions and a lower risk of scar hyperplasia due to less skin tension in the areola. A growing number of publications on the safety and treatment outcomes of ETCA have appeared in recent years. In this study, we performed a systematic review and meta-analysis of the studies comparing treatment outcomes for ETCA and COT published until June 2022. Additionally, we also enrolled clinical data from Chongqing General Hospital in this meta-analysis to pursue a more credible conclusion. The results of this meta-analysis showed that the operative time was significantly longer in the ETCA group than in the COT group, which is consistent with previous similar analyses (30-32). Jiang et al. reached a similar conclusion and reported that creating the flap for ET, a procedure not required for conventional thyroidectomy, requires additional time (33). Furthermore, meticulous bleeding control and accurate lymph-node dissection require a longer operation time (34). The instruments used in ETCA are challenging to manipulate in the narrow, artificially created operating space, further prolonging the operation. However, the operative time of ETCA could decrease dramatically with growing surgeon experience and advances in instrument technology. The intraoperative blood loss of the ETCA group was less than that of the COT group in this meta-analysis. The outcome of our clinical data and a retrospective analysis of 134 cases of PTC reported by Qu et al. both support this conclusion (26). The possible reasons include the following. First, the magnification effect of endoscopic surgery allows small vessels to be clearly visualized. Second, the ultrasonic scalpel has good hemostatic ability for glands and microvasculature (9).
The results of the meta-analysis revealed that the postoperative drainage volume in the ETCA group was greater than that of the COT group, which is similar to the outcome of our data. The possible reason is that a large amount of sterile water is usually used to flush the wound to locate the bleeding site at the end of the ETCA procedure. Some of this sterile water remains in the tissue space under the pressure of carbon dioxide and is gradually drained out after the operation. Complete dissection of metastatic lymph nodes is vital for the treatment of DTC because it influences the prognosis of patients and the recurrence of the tumor (35). Meanwhile, the effectiveness of central lymph node dissection via ETCA remains controversial. Some studies have concluded that the number of dissected central lymph nodes in endoscopic surgery is less than that in conventional open surgery and believed that the surgical view of the complete areola approach was limited by the obstruction of the sternum, so the central lymph nodes could not be completely removed (33, 36, 37). However, this meta-analysis revealed that the number of lymph nodes removed in the ETCA group was comparable to that in the COT group, similar to the findings of previous studies (9,34). A retrospective study of 119 ETCA and 289 COT patients reported by Sun et al. showed the same result and revealed that a standard 10-mm 30° laparoscope could provide excellent visibility so that the lymph nodes could be clearly visualized and dissected (9). Notably, the number of lymph nodes dissected in ETCA was even greater than that in COT according to our clinical data. The magnification effect of the endoscope allows the laryngeal nerve and adipose tissue to be clearly visualized, which might lead to a propensity for surgeons to remove more lymph nodes. More extensive data from high-quality multicenter randomized controlled studies are needed to confirm the feasibility of ETCA for central lymph node dissection.
The safety of the RLN during endoscopic surgery remains contentious. The study reported by Li et al. showed that ET had a higher incidence of transient RLN palsy than COT due to the thermal damage caused by the ultrasonic scalpel (38). Since the surgical field of the complete areola approach is unfamiliar to surgeons accustomed to COT, it might affect the judgment of the anatomical location of the RLN and increase the chance of injury to the RLN. Additionally, the central lymph nodes are pulled laterally from the trachea during ETCA, thereby increasing the risk of traction injury to the RLN (37). Despite this expectation, according to the results of this meta-analysis, there was no significant difference in the incidence of transient recurrent laryngeal nerve palsy between the ETCA group and the COT group. This conclusion was also shared by some previous studies (9,30,31,34,39). As surgeons gain experience, the integrity of the RLN can be maintained with the help of nerve monitoring, achieving the same results as COT.
There are several limitations in this study. First, all included studies were non-randomized controlled trials, and all were performed in China, so the clinical outcomes may be limited to patients of Chinese descent and may not apply to other ethnic groups. Second, differences in surgeon experience may influence the outcomes of the research. Third, some particular complications of ETCA, such as subcutaneous emphysema, hypercarbia, and tumor seeding, were not analyzed in this study.
Conclusion
The results of our clinical data and the meta-analysis both indicate that ETCA is inferior to COT in terms of operative time and postoperative drainage volume. However, ETCA is comparable to COT in terms of the length of postoperative hospital stay, the number of removed lymph nodes, and the incidence of surgical complications. In addition, ETCA reduces surgical bleeding and provides an excellent cosmetic appearance, which is a unique advantage over COT. These results suggest that ETCA is an effective and safe alternative to COT for patients with DTC. Larger, long-term randomized clinical trials are needed to confirm the clinical value of ETCA in the treatment of DTC.
Data availability statement
The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author/s. | 2022-12-21T16:15:24.950Z | 2022-12-20T00:00:00.000 | {
"year": 2022,
"sha1": "543a4b0bbb2acbf7319f21db86f0552e1e8d119c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "543a4b0bbb2acbf7319f21db86f0552e1e8d119c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
39378498 | pes2o/s2orc | v3-fos-license | A fully automated approach for baby cry signal segmentation and boundary detection of expiratory and inspiratory episodes
The detection of cry sounds is generally an important pre-processing step for various applications involving cry analysis, such as diagnostic systems, electronic monitoring systems, emotion detection, and robotics for baby caregivers. Given its complexity, automatic cry segmentation is a rather challenging topic. In this paper, a framework for automatic cry sound segmentation for application in a cry-based diagnostic system is proposed. The contribution of various additional time- and frequency-domain features to increasing the robustness of a Gaussian mixture model/hidden Markov model (GMM/HMM)-based cry segmentation system in noisy environments is studied. A fully automated segmentation algorithm to extract the cry sound components, namely, audible expiration and inspiration, is introduced and is grounded in two approaches: statistical analysis based on GMM or HMM classifiers and a post-processing method based on intensity, zero crossing rate, and fundamental frequency feature extraction. The main focus of this paper is to extend the systems developed in previous works to include a post-processing stage with a set of corrective and enhancing tools to improve the classification performance. This full approach allows precise determination of the start and end points of the expiratory and inspiratory components of a cry signal, EXP and INSV, respectively, in any given sound signal. Experimental results have indicated the effectiveness of the proposed solution. EXP and INSV detection rates of approximately 94.29% and 92.16%, respectively, were achieved by applying a tenfold cross-validation technique to avoid over-fitting.
I. INTRODUCTION
Cry signals have been the object of research and analysis for many years. Researchers have found sufficient evidence that cry signals can provide relevant information about the physical and psychological states of newborns (Barr, 2006; Farsaie Alaie et al., 2016; Golub, 1979; Soltis, 2004). Over the years, a considerable number of works have been conducted in the field of infant cry analysis. Orozco-García and Reyes-García (2003), Várallyay (2006), Hariharan et al. (2012), and Reyes-Galaviz et al. (2008) described efficient classification algorithms for distinguishing cries of normal infants from those of hypoacoustic infants, with accuracy rates ranging from 88% to 100%. The accuracy rates for classification between healthy infants and infants with asphyxia, as performed by Sahak et al. (2010) and Zabidi et al. (2010), were reported to be 93.16% and 94%, respectively. Moreover, anger, pain, and fear detection from cry signals was carried out by Petroni et al. (1995), yielding a recognition rate of 90.4%.
A newborn cry-based diagnostic system (NCDS) aims to achieve preliminary screening of newborn pathologies by analyzing the features of audible cry components detected in a realistic clinical environment (Farsaie Alaie et al., 2016).
According to the World Health Organization, "every year, approximately 40% of child deaths are deaths of newborn infants in their first 28 days of life, 75% of newborn deaths occur in the first week of life, and up to two-thirds of newborn deaths can be prevented if known." Therefore, any technique that can contribute to identifying the very first signs of newborn diseases could have a great influence on decreasing infant mortality. Specifically, this is the main goal of our project: to develop a fully automatic noninvasive system able to diagnose diseases based solely on cry sound analysis. The implementation of such a diagnostic system first addresses the issue of finding the useful cry components in an input waveform. The NCDS may suffer in terms of intelligibility if the input audio file contains acoustical activities other than crying. Therefore, one of the challenges in implementing such a system is to create an automatic segmentation system to correctly locate the expiratory and inspiratory phases of a cry sequence. Despite the significant amount of research conducted on pathological cry signal classification, little has been done to address the problem of automatic segmentation of audible expiratory and inspiratory phases of crying. If we could automatically segment and identify important parts of a given recorded signal, it would be easier to develop a fully automatic diagnostic system. Such a system could hopefully be used as a real-time clinical decision support tool, and in the case of early detection of a symptom, the necessary treatment could be provided easily and cheaply. This paper is organized as follows. The motivation and literature review are presented in Sec. II, and Sec. III provides the theoretical background. Section IV summarizes the general architecture and shortcomings of our prior works, Sec. V describes the corpus, and Sec. VI details the manual segmentation of the training and testing data. Section VII presents the proposed approach, including the post-processing stage. The results and discussion are presented in Sec. VIII and, finally, Sec. IX concludes the paper by summarizing the main contribution of this work and describing future work.
II. LITERATURE REVIEW AND MOTIVATION
The main components of a cry sound are the expiration and inspiration segments with vocalization: audible expiration (EXP) and audible inspiration with vocalization (INSV).
The main challenge of this work is to develop a method able to locate EXP and INSV correctly within a given audio signal.
The problem of cry segmentation/detection cannot be considered a problem of voiced/unvoiced detection because a single audible cry can typically contain both voiced and unvoiced segments.
Alternately, the problem of cry detection in a corpus recorded in a very noisy clinical environment cannot be solved simply by the traditional voice activity detection (VAD) modules that are common in previous cry analysis systems proposed in the literature (Kuo, 2010; Ruiz et al., 2010; Várallyay, 2006; Zabidi et al., 2009). VAD refers to the problem of separating speech regions from other acoustic activity regions in a given audio signal. The other acoustic activity regions can be of any type, such as noise, silence, or an alarm warning. However, the signal-to-noise ratio (SNR) is a crucial parameter, and a low SNR may result in major errors. VAD is an essential part of various audio communication systems such as automatic speech recognition, mobile phones, personal digital assistants, and real-time speech transmission. Common VAD methods are composed of two main modules: (1) feature extraction and (2) decision rules. Common features are those that depend on the energy of the signal, cepstral coefficients, zero-crossing rate (ITU, 1996), spectrum analysis (Marzinzik and Kollmeier, 2002), and entropy and wavelet transforms (Wang and Tasi, 2008; Juang et al., 2009). The decision rule is often formulated on a frame-by-frame basis using simple thresholding rules.
We applied well-known VAD algorithms used in G.729b and the Rabiner-Sambur method to detect cry segments (Benyassine et al., 1997; Rabiner and Sambur, 1975). We found that:
• Threshold settings were not easy to select in a variable and noisy environment.
• Traditional VAD could not distinguish between important cry segments (EXP and INSV) and speech segments recorded during data acquisition.
• Traditional VAD failed to distinguish expiration phases from inspiration phases of cry signals.
To solve the issue of adjusting thresholds, statistical approaches seem to be a good solution. That is why we have given due consideration in our recent and present works to statistical model-based approaches (Abou-Abbas et al., 2015b;Abou-Abbas et al., 2015c).
Compared to other audio-related fields, such as speech and music, investigation of cry segmentation has seen low consideration. We refer readers to the state of the art discussed in our recent works (Abou-Abbas et al., 2015b;Abou-Abbas et al., 2015a).
Thus far, existing cry segmentation algorithms have mainly been able to separate cries from silent pauses or respiratory phases (Várallyay et al., 2008, 2009). Noisy backgrounds and other acoustic activities were not considered in these works because the database used was collected in a laboratory environment and not in a real clinical environment. For example, in a recent work (Aucouturier et al., 2011), the authors applied an automatic segmentation approach based on a hidden Markov model (HMM) classification system to segment the expiratory and inspiratory parts of cry signals. Due to the limited number of infants and the limited acoustic activities available in the existing corpus, the authors considered three classes: expiration, inspiration, and silence. To train their models, support vector machine (SVM) and Gaussian mixture model (GMM) classifiers were used, and considering the arrangement in time between the three classes, a second stage using the Viterbi algorithm was added so that the whole architecture could be considered an HMM.
Cry segmentation or detection has also been studied in Kim et al. (2013) and Yamamoto et al. (2010, 2013). The work of Kim et al. (2013) investigated a new feature called segmental two-dimensional linear frequency cepstral coefficients (STDLFCCs), which is based on linear frequency cepstral coefficients (LFCCs). The idea behind it was to capture both the lower and the higher frequencies within long-range crying segments and provide better discrimination between crying and non-crying segments. An average equal error rate of 4.42% was reported in their article.
In Yamamoto et al. (2010, 2013), a method for detecting a baby's voice using a well-known speech recognition system called JULIUS together with fundamental frequency analysis was introduced, achieving a detection rate of 69.4%.
III. THEORETICAL BACKGROUND
Like any audio recognition/detection system, a cry segmentation system can be briefly analyzed in terms of three main modules:
• Feature selection and extraction, where suitable information is estimated in a relevant form and size from a cry signal to represent it in a different and more convenient domain;
• Classification, where representative models are created and adapted using the extracted feature vectors for each available pathology class;
• Decision-making.
A. Overview on features extraction
Cry signals can be described by their features within two common domains: (1) the time domain and (2) the frequency domain. From either of these domains, a number of significant characteristics can be extracted (Scherer, 1982). In this section, we describe the different domain features applied at different levels of our work.
Time-domain features
a. Intensity. The intensity, also called loudness, is related to the amplitude of the signal and represents the amount of energy a sound has per unit area. The intensity of a signal section of length N is defined as a logarithmic measure in decibels, I = 10·log10((1/N)·Σ_{n=1}^{N} [w(n)·s(n)]²), where w is a window function and s(n) is the amplitude of the signal. Intensity is an essential feature widely used in different applications, such as music mood detection (Lu et al., 2006), where an accuracy rate of 99% was achieved, demonstrating the good performance of intensity features.
Considering Fig. 1, one can note that the intensity increased considerably during cry segments.
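A minimal Python sketch of this frame-level intensity computation follows; the Hamming window choice and the small epsilon guard against log(0) are implementation assumptions, not details taken from the paper.

import numpy as np

def frame_intensity_db(frame, eps=1e-12):
    """Log-energy (intensity) of one windowed frame, in decibels."""
    w = np.hamming(len(frame))
    energy = np.sum((w * frame) ** 2) / len(frame)
    return 10.0 * np.log10(energy + eps)

# Example: a 30 ms frame of a 44.1 kHz signal
fs = 44100
frame = 0.1 * np.random.randn(int(0.030 * fs))
print(frame_intensity_db(frame))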
b. Zero crossing rate. This is one of the most widely used time-domain features in VAD algorithms (Bachu et al., 2010; Rabiner and Sambur, 1975). A zero crossing occurs when consecutive samples have different algebraic signs, as defined by Shete et al. (2014). Therefore, the zero crossing rate (ZCR) represents the number of times in a specific frame that the amplitude of the signal x(n) crosses the zero axis.
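The sign-change definition of ZCR translates directly into code, as in the sketch below; treating exact zeros as positive is an assumption made here to keep the example simple.

import numpy as np

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose algebraic signs differ."""
    signs = np.sign(frame)
    signs[signs == 0] = 1      # assumption: count exact zeros as positive
    return np.mean(signs[:-1] != signs[1:])

# Example: a 5 Hz sine over one second of 1000 samples crosses zero ~10 times
print(zero_crossing_rate(np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1000))))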
Frequency-domain features
a. Fundamental frequency or pitch. The time between successive vocal fold openings is called the fundamental period, T0, while the rate of vibration is called the fundamental frequency of the phonation, F0 = 1/T0. The term pitch is often used interchangeably with fundamental frequency. However, there is a subtle difference. Psycho-acousticians use the term pitch to refer to the perceived fundamental frequency of a sound, whether or not that sound is actually present in the waveform (Deller et al., 1993).
It is one of the most used features in many applications, such as vowel classification, emotion classification, and music recognition. We have noticed that there are differences between expiratory and inspiratory phases regarding their frequency range (see Fig. 2).
There are several ways to estimate the fundamental frequency, for example, cross- and auto-correlation. In this work, we employed the classic auto-correlation method using PRAAT software (Boersma and Van Heuven, 2001). Traditional statistical features, such as the maximum, minimum, and mean of the fundamental frequency, were calculated over the entire corresponding utterance.
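The paper estimates F0 with PRAAT's auto-correlation method; a bare-bones approximation of the same idea is sketched below, where the 100-1000 Hz search band is an illustrative assumption covering typical speech and infant-cry pitch ranges rather than a value quoted from the text.

import numpy as np

def f0_autocorr(frame, fs, fmin=100.0, fmax=1000.0):
    """Estimate F0 of a frame from the peak of its auto-correlation."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(fs / fmax)                            # shortest candidate period
    hi = min(int(fs / fmin), len(ac) - 1)          # longest candidate period
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag if ac[lag] > 0 else 0.0        # 0.0 means unvoiced (no F0)

fs = 44100
t = np.arange(int(0.030 * fs)) / fs
print(f0_autocorr(np.sin(2 * np.pi * 440 * t), fs))   # approximately 440 Hz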
Time-frequency features
a. Fast Fourier transform-based Mel frequency cepstrum coefficients. Mel frequency cepstrum coefficients (MFCCs) were introduced in Davis and Mermelstein (1980) as the discrete cosine transform of the log-energy output of triangular bandpass filters. MFCCs represent the human speech signal by calculating the short-term power spectrum of the acoustic signal based on the linear cosine transform of the log power spectrum on a nonlinear Mel scale of frequency. Mel scale frequencies are distributed in a linear space in the low frequencies (below 1000 Hz) and in a logarithmic space in the high frequencies (above 1000 Hz; Rabiner and Juang, 1993).
The steps from the original input signal to MFCCs are as follows:
• Split the signal into small overlapped frames of N samples;
• Decrease the discontinuity between consecutive frames by employing the Hamming window, defined as w(n) = 0.54 − 0.46·cos(2πn/(N − 1)), 0 ≤ n ≤ N − 1;
• Apply the fast Fourier transform (FFT) and compute the power spectrum of the signal;
• Pass the power spectrum through the Mel-scale triangular filterbank and take the logarithm of the filter outputs;
• Apply the discrete cosine transform to the log filterbank outputs, c_n = Σ_{k=1}^{K} log(S_k)·cos[n·(k − 0.5)·π/K], where S_k is the output power spectrum of the filters and K is chosen to be 12.
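The full chain above (framing, Hamming windowing, FFT power spectrum, Mel filterbank, log, and DCT) is implemented internally by librosa, so a compact sketch is possible. The file name below is hypothetical, the 30 ms window and 21 ms overlap reuse settings reported later in the paper, and the delta stacking anticipates the differential and acceleration coefficients discussed in the next subsection.

import numpy as np
import librosa

y, sr = librosa.load("cry.wav", sr=None, mono=True)   # hypothetical file name
win = int(0.030 * sr)                                  # 30 ms frames
hop = win - int(0.021 * sr)                            # 21 ms overlap -> 9 ms shift
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=12, n_fft=2048,
                            win_length=win, hop_length=hop, window="hamming")
delta = librosa.feature.delta(mfcc)                    # differential coefficients
delta2 = librosa.feature.delta(mfcc, order=2)          # acceleration coefficients
features = np.vstack([mfcc, delta, delta2])            # (36, n_frames) here; the
# paper's 39 parameters per frame presumably add a log-energy term and its deltas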
b. Empirical mode decomposition-based MFCC. Empirical mode decomposition (EMD) is used for analyzing nonlinear and nonstationary signals and was proposed by Huang et al. (1998). EMD breaks down a given signal into intrinsic mode functions (IMFs) and a residual function during a sifting process. The main advantage of EMD is that the basis functions are obtained directly from the signal. An IMF represents a simple oscillatory mode of the signal and is defined so as to track the position of the signal in time. Each IMF must fulfill the following two basic criteria (Huang et al., 1998):
• The number of zero crossings and the number of extrema in the entire data sequence must be equal or differ by at most one.
• The mean value of the envelope defined by the local maxima and the envelope defined by the local minima must be zero at any point.
To extract an IMF from a given signal, the following steps are necessary (Huang et al., 1998):
• Identify the extrema of the signal x(t).
• Using cubic spline interpolation, interpolate the local maxima to form the upper envelope u(t).
• Using cubic spline interpolation, interpolate the local minima to form the lower envelope l(t).
• Compute the local mean of the upper and lower envelopes, m(t) = [u(t) + l(t)]/2.
• Subtract the local mean from the original signal, h(t) = x(t) − m(t).
• Repeat the sifting approach until h(t) satisfies the basic criteria of an IMF.
• Finally, the residue r(t) = x(t) − h(t) is regarded as the new signal, and the sifting process is repeated on it to extract the second IMF, and so on.
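A structural sketch of one sifting pass following these steps is given below; production EMD implementations (e.g., the PyEMD package) add boundary handling and formal stopping criteria, so this is illustrative only, and the minimum-extrema guard is an assumption needed for the spline fit.

import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x, t):
    """One sifting pass: h(t) = x(t) - m(t), with m(t) the envelope mean."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 4 or len(minima) < 4:
        return None                              # too few extrema for envelopes
    u = CubicSpline(t[maxima], x[maxima])(t)     # upper envelope u(t)
    l = CubicSpline(t[minima], x[minima])(t)     # lower envelope l(t)
    m = (u + l) / 2.0                            # m(t) = [u(t) + l(t)] / 2
    return x - m                                 # h(t)

# Repeat sift_once on h(t) until the IMF criteria hold, then sift the
# residue r(t) = x(t) - h(t) to obtain the next IMF, and so on.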
The EMD-based MFCC features are calculated by applying the MFCC extraction technique to the IMFs instead of the original signal. Based on the results obtained in our previous works (Abou-Abbas et al., 2017; Abou-Abbas et al., 2015c) regarding the best IMF combination, we found that the parameters extracted from the sum of IMF3, IMF4, and IMF5 yielded the best cry segmentation results. Therefore, this combination, x_s(t) = IMF3(t) + IMF4(t) + IMF5(t), was employed in this work.
Differential and acceleration coefficients
The features extracted from the decomposed frame describe only the static characteristics of the corresponding segment. To add dynamic information and capture both linear and nonlinear properties, delta and delta-delta coefficients were used.
GMM
GMMs are commonly used for different applications, mostly speech recognition, speaker verification, and emotion recognition systems (Reynolds and Rose, 1995). A GMM can be represented as a weighted sum of Gaussian distributions, p(o|λ) = Σ_{j=1}^{J} w_j·G(o; μ_j, Σ_j), where p(o|λ) is the likelihood of the input observation o with dimensionality D, J is the number of mixtures, w_j are the weighting coefficients satisfying the constraint Σ_{j=1}^{J} w_j = 1, and G(o; μ_j, Σ_j) denotes the jth Gaussian with mean vector μ_j and covariance matrix Σ_j.
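As a hedged sketch of how the per-class GMMs described by the equation above could be trained and applied with scikit-learn: the file names and array shapes are hypothetical, while the four class labels and the 40-mixture setting follow values given elsewhere in the paper; diagonal covariances are an assumption for compactness.

import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical training data: one (n_frames, 39) feature array per class
train = {c: np.load(f"{c}_features.npy") for c in ["EXP", "INSV", "BKG", "NOR"]}

models = {c: GaussianMixture(n_components=40, covariance_type="diag",
                             random_state=0).fit(X)
          for c, X in train.items()}

def classify_frames(X):
    """Assign each frame to the class whose GMM gives the highest log-likelihood."""
    names = list(models)
    ll = np.column_stack([models[c].score_samples(X) for c in names])
    return [names[i] for i in np.argmax(ll, axis=1)]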
HMM
HMM-based methods have many potential applications in automatic speech recognition (ASR) systems, statistical signal processing, and acoustic modeling, including the segmentation of recorded signals (Young et al., 2002). The basic principles of any ASR system involve constructing and manipulating a series of statistical models that represent the various acoustic activities of the sounds to be recognized (Young et al., 2002). Many studies have shown that speech, music, newborn cries, and other sounds can be represented as sequences of feature vectors (temporal, spectral, or both), and HMMs provide a very important and effective framework for modeling such time-varying spectral vector sequences (Gales and Young, 2007).
An HMM generates a sequence of observations O = O1, O2, …, OT and is defined by the following parameters: the number of hidden states, the state transition probability distribution A, the observation probability distribution B, and the initial state distribution π. We denote the model parameters of the HMM as λ = {A, B, π} (Kuo, 2010; Várallyay, 2006).
To build and manipulate an HMM, three problems must be solved: the evaluation problem, the decoding problem, and the training problem. HMM theory, the aforementioned problems, and their solutions are widely explained in the literature, especially in the well-known Rabiner tutorial (Rabiner, 1989). The Viterbi algorithm was proposed as a decoding solution to find the most probable state sequence given an observation sequence (Rabiner, 1989). The Baum-Welch algorithm is an iterative procedure used to estimate the HMM parameters.
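To make the decoding step concrete, a log-domain Viterbi decoder for the discrete-observation case is sketched below; the paper uses continuous GMM emission densities, in which case log_b[:, obs[t]] would be replaced by per-state frame log-likelihoods.

import numpy as np

def viterbi(obs, log_a, log_b, log_pi):
    """Most probable state path (log domain).
    obs: symbol indices; log_a: (S, S) transition log-probs;
    log_b: (S, V) emission log-probs; log_pi: (S,) initial log-probs."""
    S, T = log_a.shape[0], len(obs)
    delta = np.empty((T, S))
    psi = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + log_b[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_a      # scores[i, j]: state i -> j
        psi[t] = np.argmax(scores, axis=0)          # best predecessor of each j
        delta[t] = scores[psi[t], np.arange(S)] + log_b[:, obs[t]]
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):                  # backtrack
        path[t] = psi[t + 1, path[t + 1]]
    return path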
IV. GENERAL ARCHITECTURE AND SHORTCOMINGS OF PRIOR WORKS
We recently developed different approaches for cry segmentation (Abou-Abbas et al., 2015b; Abou-Abbas et al., 2015c). The general structure of the designed systems can be described with a block diagram as shown in Fig. 3. We used machine-learning methods, such as HMMs and GMMs, which have been proven to work well in the acoustic domain. Different feature extraction techniques were employed and compared, reaching an accuracy rate of up to 91%. These methods were validated on our available signals through a tenfold cross-validation technique, using disjoint training and testing subsets of the total data.
However, there was a problem in the precise boundary detection of the classified segments. This misdetection affects the accuracy rate of the overall system because the classification results were compared to our manually segmented signals, which were labeled by trained colleagues. The exact cry lengths and pause lengths between cries, although commonly overlooked by cry sound classification approaches, represent important parameters containing useful related information. For example, researchers in Várallyay (2006) used cry duration as an additional feature to detect the gender of the baby, and in Aucouturier et al. (2011), the durations of expiration were compared across cry contexts, such as hunger, urination, and sleepiness, and significant differences between durations were observed. Lind et al. (1970) showed that cry durations are longer than usual in the cries of infants with Down's syndrome.
V. CORPUS OF INFANTS' CRIES
Most of the subjects recruited for this study were newborns that were just a few days old. The size of our database and the diversity present in our collected data distinguish our work from the other aforementioned research studies:
• Signals collected in hospital environments at different times and in different situations, such as after birth, first bath, medical care, neonatal intensive care unit, and private or public maternity rooms with parents or visitors;
• Signals from crying babies with different contexts/causes, such as pain, hunger, fear, and birth;
• Cry signals of babies with different pathological conditions, such as normal infants and babies suffering from respiratory, blood, neurological, or heart diseases.
Table I gives a brief description of our cry database.
The most accurate reason for crying was determined with the help of nurses and newborns' parents according to the situations that caused the infants to cry. The health condition was established by the doctor who examined the newborn and was based on different tests that had been performed in the hospital after the birth.
For recording purposes, an Olympus (Tokyo, Japan) handheld digital two-channel recorder was used, placed 10-30 cm from the newborn's mouth, with a sampling frequency of 44.1 kHz and a sample resolution of 16 bits. There was no strictly defined procedure during the acquisition of the cry sounds. Therefore, unwanted noises, cross talk, and clinical environment sounds were also recorded during the data collection process. For this reason, we consider our database a real corpus recorded in a real clinical environment. Our database in this work consists of a total of 507 waveforms of cry sounds interspersed with different unwanted acoustic activities. A summary of the database is given in Tables II and III; it is the same database as used in our previous work (Abou-Abbas et al., 2017) for comparison purposes.
To divide the dataset into training and testing corpora, the tenfold cross-validation technique was applied.
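A sketch of the split using scikit-learn follows; splitting at the level of whole recordings (so that frames from one waveform never appear in both training and test sets) is assumed good practice here rather than a detail stated in the text.

import numpy as np
from sklearn.model_selection import KFold

recordings = np.arange(507)                 # indices of the 507 cry waveforms
kf = KFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(recordings)):
    # train models on recordings[train_idx], evaluate on recordings[test_idx]
    pass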
VI. MANUAL SEGMENTATION OF THE TRAINING AND TESTING DATA
Using the WaveSurfer software, we manually annotated the recordings of our corpus (Sjölander and Beskow, 2000).
Annotations identified the start and end points of each vocalization. The boundaries of each annotation were fixed at the point where the sound could no longer be heard. A newborn cry can comprise typical cry sounds, glottal sounds, hiccups, short pause segments between cries, and faint cries. The labels were chosen according to the different types of acoustic activities available in our corpora and were defined as follows:
• EXP represents the total vocalization occurring during one expiration of a cry, i.e., the sound between two inspirations (Lewis, 2007). It is composed of voiced and/or unvoiced segments and takes three forms: a typical cry sound (see Fig. 4), a glottal sound such as a cough (see Fig. 5), or a spasmodic sound (see Fig. 6). Typical expiration sounds can easily be distinguished from the other expiration sounds by comparing their durations.
• INSV, inspiration with vocalization, also called a hiccup, is the total vocalization occurring during one inspiration; it usually follows a long EXP and is followed by a short pause segment. We noticed the presence of INSV more in sick babies' cries than in those of healthy babies (see Fig. 7).
• Silent inspiration (INS), also called a short pause segment. It commonly occurs between one INSV and the next EXP, or directly after one EXP followed by another EXP (see Figs. 5 and 8).
• Expiration with non-vocalization (EXPN), also called a faint cry. It commonly occurs after a long cry and always between two EXPs or between an EXP and an INSV (see Fig. 8).
• EXP2 and INS2 represent components of vocalization during expiration and inspiration, respectively, produced by infants during babbling, cooing, etc. They can easily be distinguished from the EXP and INSV of a cry by their short durations (see Fig. 9).
• Speech (SP) of the medical staff or the parents around the baby. In general, speech signals have a low pitch range of 100-300 Hz (see Fig. 10).
• BIP is the sound of medical machines around the baby, characterized by a constant fundamental frequency.
• NOR, or noises, represents different types of sounds that can occur either outside or during the cry.
• BKG represents the background silence between cries; it is a very low-level noise (a low white noise).
VII. PROPOSED APPROACH
In this paper, we developed a method for improving the accuracy of our previous statistical systems by adding a post-processing step that uses both temporal and frequency-domain features.
A post-processing stage was added for two reasons: (i) to minimize switching errors between the two predefined classes EXP and INSV, and (ii) to adjust the start and end points of each utterance. Our method aimed to obtain more reasonable and accurate segmentation results by automatically applying additional steps to correctly and precisely classify cry components. Experiments were conducted to evaluate the utility of the post-processing stage and the use of additional temporal and frequency-domain information as acoustic features. The obtained results show that the proposed automatic segmentation approach significantly improved the performance of the cry segmentation system.
Our system uses a cascaded architecture to create an efficient cry segmentation system with a low false positive rate (FPR) while keeping the true positive rate (TPR) as high as possible.
In any acoustic recognition problem addressed in the literature, the following three signal processing stages are applied:
(1) Segmentation: a detector is first used to segment the audio signal.
(2) Feature extraction: the input signal is converted into a series of acoustic vectors, which are then forwarded to the recognition phase.
(3) Recognition: the extracted features are used to create a model that provides information about the active segments and makes the best decision for the corresponding application.
The problem that arises from applying this traditional methodology concerns the adjustment of a static threshold, which should depend on the environmental conditions. Unlike this traditional methodology, we relied on frame-by-frame results obtained from standard classifiers to achieve a better segmentation by finding the exact boundaries of useful acoustic components. A number of features have been extracted, including MFCC, energy, intensity, ZCR, and fundamental frequency. We divided the proposed segmentation approach into two separate blocks. A block diagram of the main algorithm is depicted in Fig. 11.
In the first block, we chose HMM and GMM classifiers to model and classify each label (EXP/INSV/BKG and NOR, which represents all other acoustic activities, including noise). In the second block, a post-processing step followed the classification task. In this section, we give a general overview of the cry segmentation system and describe its different elements.
Our goal in this work was to automatically detect cries, expiration, and audible inspiration segments for a given recorded signal containing different acoustical activities such as crying, speech, silence, and different types of noises from common hospital environments.
A. Pre-processing stage
It is essential to have a pre-processing step as the first stage of any audio analysis system. The block diagram of our designed pre-processing stage is depicted in Fig. 12 and contains four modules:
(1) The input audio signal was converted to mono by averaging the two channels.
(2) A pre-emphasis high-pass filter was applied; the most common such filter is H(z) = 1 − 0.97·z^(−1).
(3) A framing module converted the continuous audio signal into overlapped frames based on the frame size and overlap percentage.
(4) A Hamming window was applied to each frame to reduce spectral leakage at the frame edges.
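The four modules map onto a few lines of NumPy, sketched below; the 0.97 pre-emphasis coefficient follows the filter just given, while the default 30 ms frame and 21 ms overlap reuse the values reported for the classification stage.

import numpy as np

def preprocess(stereo, fs, frame_ms=30, overlap_ms=21, alpha=0.97):
    """stereo: array of shape (n_samples, 2); returns windowed frames."""
    x = stereo.mean(axis=1)                         # (1) stereo -> mono
    x = np.append(x[0], x[1:] - alpha * x[:-1])     # (2) pre-emphasis 1 - 0.97 z^-1
    n = int(frame_ms * fs / 1000)                   # (3) overlapped framing
    hop = n - int(overlap_ms * fs / 1000)
    frames = [x[i:i + n] for i in range(0, len(x) - n + 1, hop)]
    return np.array(frames) * np.hamming(n)         # (4) Hamming windowing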
B. Feature extraction procedure
A wide range of features can be extracted from the cry signals, but they do not necessarily correlate with a newborn's health condition. The selected feature set used in this work consisted of two parts: 39 features were used in the first classification stage (explained in Sec. VII A) to provide initial results, and another 5 features [ZCR, intensity, and fundamental frequency (minimum, maximum, and mean)] were used in the post-processing stage to give the final results.
The time/frequency-domain features that we employed are listed in Table IV.
C. Supervised cry segmentation system-initial classification
A supervised cry segmentation system is a chain of complex processing units that aims to locate cry sounds in audio signals collected in noisy clinical environments and identify its type as expiration, inspiration, background, or other classes based on trained models for all available classes in the training phase. In the testing phase, the probability that a segment belongs to each one of the classes is calculated using the Viterbi algorithm, and a decision is taken to produce a label for the segment.
In previously conducted works (Abou-Abbas et al., 2017; Abou-Abbas et al., 2015b; Abou-Abbas et al., 2015c), studies were carried out to compare the accuracy rates of different feature vectors and classifiers. It has been shown that all of the proposed approaches mentioned in Table V allow for reliable discrimination of clean expiration from any other sounds.
The first step of the newly proposed cry segmentation scheme was to make a first classification decision based on a supervised cry segmentation system to discriminate between four different classes: EXP, INS, BKG, and other. We chose two previously designed systems to test alongside the new approach due to their robust performance. The window size used was 30 ms with an overlap of 21 ms.
(1) Supervised cry segmentation system using a GMM classifier with 40 mixtures based on FFT-MFCC features.
(2) Supervised cry segmentation system designed using a 4-state HMM with 40 Gaussians, based on MFCCs extracted from the combination of three IMFs (IMF3, IMF4, and IMF5) resulting from EMD.
Data used for training and testing the supervised cry segmentation systems are presented in Table II.
We also added delta and acceleration features to have a feature vector of 39 parameters per frame. We trained four different classes based on their available training data.
D. Post-processing stage
To further improve the classification results obtained in the previous step, the post-processing stage depicted in Fig. 13 is proposed. The main idea behind this stage is to refine the initially obtained results in order to reduce switching errors and adjust the boundaries of the expiratory and inspiratory phases. To refine the classification results and make the final decision, we propose a second step based on temporal and frequency features applied separately.
In particular, the following steps were taken in this stage: (1) First, the results obtained from the initial classification step (in label files) were further taken as input to the post-processing phase.
(2) Different features were extracted for each frame from the corresponding input wave file such as ZCR, intensity, and fundamental frequency.
(3) Two thresholds were calculated: ZC_BKG and INT_BKG, based on the Rabiner and Sambur rules in the endpoint detection algorithm (Rabiner and Sambur, 1975). The threshold computation module, explained in Fig. 14, computes I_max = max(INT_sig), I1 = 0.03·(I_max − I_mean) + I_mean, and INT_BKG = 5·ITL. (4) Each frame was labeled "0" or "1" based on the F0 results: if an F0 value exists, the frame was indexed by "1"; if F0 does not exist for the frame, it was indexed by "0." Table VI shows an example of indexing frames. (5) The localized points were predicted from the transitions between 0 and 1 among consecutive segments. These frames were indexed by "2" to mark the transitions within the localized points (see columns 2 and 3 of the corresponding table).
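A hedged sketch of steps (3)-(5) follows: where the extracted text is incomplete, I_mean is assumed to be the mean frame intensity and ITL is assumed equal to I1, so the constants below illustrate the quoted rule rather than reproduce the exact module of Fig. 14.

import numpy as np

def intensity_threshold(intensities):
    """Step (3): background-intensity threshold from the quoted rule."""
    i_max, i_mean = intensities.max(), intensities.mean()   # I_mean: assumption
    i1 = 0.03 * (i_max - i_mean) + i_mean
    itl = i1                                                # assumption: ITL = I1
    return 5.0 * itl                                        # INT_BKG = 5 * ITL

def voicing_index(f0):
    """Steps (4)-(5): index frames 1 (F0 present), 0 (absent), 2 (transition)."""
    idx = (np.nan_to_num(np.asarray(f0)) > 0).astype(int)
    out = idx.copy()
    out[1:][idx[1:] != idx[:-1]] = 2        # mark 0<->1 transitions
    return out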
VIII. RESULTS AND DISCUSSION
The proposed method was tested on a set of 507 cry signals collected from 203 babies. The performance was estimated at the frame level by comparing each classified frame to the reference frame (the result of the manual segmentation performed by experts). Because the main advantage of this work is robustness, the systems were trained and tested using cry signals recorded under the varying conditions mentioned in Sec. V. Four different statistical measures were derived for each class: true positive (TP), false positive (FP), true negative (TN), and false negative (FN). TP is the number of correctly detected sounds; FP is the number of sounds detected erroneously; TN is the number of correctly rejected sounds; FN is the number of missed sounds.
The performance of the proposed approach was evaluated using metrics such as TPR, FPR, false negative rate (FNR), and accuracy.
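Frame-level computation of these metrics for a single class can be sketched as follows; pred and ref are assumed to be per-frame label sequences of equal length.

import numpy as np

def frame_metrics(pred, ref, label):
    """TPR, FPR, FNR, and accuracy for one class at the frame level."""
    p = np.asarray(pred) == label
    r = np.asarray(ref) == label
    tp, fp = np.sum(p & r), np.sum(p & ~r)
    fn, tn = np.sum(~p & r), np.sum(~p & ~r)
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (tp + fn) if (tp + fn) else 0.0
    acc = (tp + tn) / p.size
    return tpr, fpr, fnr, acc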
The first set of experiments was conducted with two supervised segmentation systems using GMM and HMM classifiers, respectively. Tenfold cross-validation was used to evaluate the performance of the system, such that one fold was reserved for validation while the remaining nine folds constituted the training set. This procedure was performed five times, and the average accuracies depicted in Table IX were obtained.
In our experiments, the GMM- and HMM-based classifiers were trained, respectively, by FFT-MFCC features and by MFCC features extracted from the combination of three IMFs (IMF3, IMF4, and IMF5) resulting from EMD.
The second set of experiments, which was designed to evaluate the post-processing stage, was further conducted using the results obtained from the first set of experiments. The results obtained before and after the post-processing stage are shown in Table IX. Table IX shows the average TPR, FPR, and FNR values of the proposed approach and the previously designed systems. Based on the results, the overall accuracies of the FFT-GMM- and EMD-HMM-based systems improved by approximately 3.18% and 3.22%, respectively, after applying the post-processing stage (Fig. 15).
The proposed method achieved a high average TPR of approximately 95.6% for the EXP class and 92.47% for the INS class by using GMM and HMM classifiers, respectively. Figure 16 shows a comparison of the TPR results obtained for the four experiments mentioned in Table IX. The FPR was considered the most important error rate in our case because it is very important not to add misleading segments as input to a cry classification system. As we can see from Table IX and Fig. 17, the post-processing stage reduced the FPR for the EXP and INS classes by 8.2% and 11.44%, respectively.
Moreover, an average FPR of approximately 10.32% was obtained for the EXP class using the GMM-based classifier and 18.21% for the INS class with the HMM-based classifier.
From the experiments, it can be seen that the proposed method achieved an average performance of 94.29%.
Based on the results, we can see that an improvement of 3.18% in the average classification rates was achieved by adding the post-processing stage, which was quite significant.
IX. CONCLUSION
Cry segmentation is the process of segmenting a cry signal recorded in a real clinical environment into homogenous regions. When used in conjunction with an NCDS, an automatic cry segmentation system is able to separate useful cry units with vocalization from non-cry audio regions (such as speech, machine sounds, and other types of noise) and significantly improve the intelligibility of the NCDS.
Our main goal was to precisely locate the boundaries of important audible cry components in continuous recordings collected in a very noisy environment, as a step toward building a cry database that contributes to developing different cry-based applications such as the following:
• Distinction between diverse types of cries: birth cry, pain cry, normal cry, pleasure cry, etc.;
• Classification between healthy and deaf infants;
• Distinction between healthy infants and infants with asphyxia;
• Discrimination of babies with cleft palates with palate plates from babies with cleft palates but without palate plates;
• Classification of pathology from cries: asphyxia, meningitis;
• Recognition of newborns' native language from cry melodies.
Having a large enough database that is clean, well-recorded, and segmented for the training steps is a critical parameter for each of the mentioned applications.
The main goal of the proposed work was to design an automatic cry segmentation system that can be incorporated in the early stage of our NCDS system or any other cry classifier system.
In this work, expiratory and inspiratory cries with vocalization were detected from audio signals containing different acoustic activities other than cries. A post-processing stage was combined with a supervised cry segmentation system that had already been designed in our previous work. The proposed approach achieved better performance in terms of accuracy rate, TPR, and FPR. Future work will include a combination of GMM- and HMM-based classifiers, because the GMM-based classifier gave fewer false-positive alarms for the EXP class, while the HMM-based classifier gave fewer false-positive alarms for the INS class. Moreover, the final boundaries of the EXP and INS phases could be used to extract temporal features such as the durations of the expiratory and inspiratory phases, the durations of pauses between cries, and the onset of crying, which may result in a more accurate pathology classification system.
"year": 2017,
"sha1": "8a09ee69b85028ad40d0c2e2ed02ee7a1ba92d81",
"oa_license": "CCBY",
"oa_url": "https://asa.scitation.org/doi/pdf/10.1121/1.5001491",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8a09ee69b85028ad40d0c2e2ed02ee7a1ba92d81",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
256401671 | pes2o/s2orc | v3-fos-license | An exploratory assessment of the legislative framework for combating counterfeit medicines in South Africa
Background Substandard and Falsified (SF) medical products are a growing global concern. They harm the individual patient, the healthcare system and the economy. The World Health Organisation (WHO) has highlighted contributing factors globally: insufficient national medicine regulation, poor enforcement of existing legislation, weak stakeholder collaboration and the rise of novel viruses, such as COVID-19. The study aimed to assess the legislative and policy framework and institutional relationships governing pharmaceuticals and anti-counterfeiting strategies. Methods The study was explorative and consisted of two phases. The first phase, between 2016 and 2017, involved document analysis (annual reports and press releases from 2011 to 2016) from government institutions involved in medicines regulation and law enforcement, covering SF seizure reports between 2004 and 2017. The second phase, between 2016 and 2018, consisted of in-depth semi-structured interviews (seven in total) with selected stakeholders. Results First phase—the data collected and reported by various departments were sporadic and did not always correlate for the same periods, indicating a lack of a central reporting system and of stakeholder collaboration. In South Africa, counterfeiting of medicines mainly involves the smuggling of non-registered goods. The most common counterfeit items were painkillers, herbal teas and herbal ointments, while some were medical devices. Furthermore, Customs identified South Africa as a transhipment point for SF infiltration into neighbouring countries with less robust regulatory systems. Second phase—interview transcripts were analysed by thematic coding. The themes identified were the adequacy of legislation, institutional capacity, enforcement and post-market surveillance, stakeholder collaboration and information sharing, and public education and awareness. Conclusion Document analysis and interviews indicate that South Africa already has a national drug policy and legislative framework consistent with international law. However, there is no specific pharmaceutical legislation addressing the counterfeiting of medicines. Law enforcement has also been complicated by poor stakeholder engagement and information sharing. Supplementary Information The online version contains supplementary material available at 10.1186/s40545-021-00387-8.
Introduction
Counterfeit medicines are becoming a global public health problem in both developing and developed countries [1]. They have been found in street markets, in legal supply chains such as health care facilities, and on illegal or unregulated websites [2,3].
The COVID-19 pandemic has led to an increase in demand for pharmaceutical products, including vaccines [4]. This COVID-19-induced demand for health products has also been exploited by organized crime groups, specifically in developing countries in Africa [5,6]. The United Nations Office on Drugs and Crime (UNODC) reported an increase in trafficking incidents during the pandemic at African seaports, for instance at Lome (Togo), Cotonou (Benin) and Mombasa (Kenya).
According to the United Nations [7], these crime groups manufacture, traffic and sell a variety of products, from diagnostic testing kits to treatments to preventative measures such as sanitisers, vaccines and Personal Protective Equipment (PPE). From the beginning of January through mid-April of 2020, COVID-19-related scams in the USA totalled approximately US$ 13.4 million in fraud. Furthermore, a total of 1541 cyberattacks related to COVID-19 were detected in the United Arab Emirates, including 775 malware threats, 621 email spam attacks, and 145 URL attacks. In Thailand, about three hundred thermometers were seized after being trafficked through three other countries. Similarly, thermometers that did not meet EU regulations were found in Italy [7].
In the United Kingdom, the Medicines and Healthcare products Regulatory Agency (MHRA) released figures revealing that approximately 3.5 million unlicensed erectile dysfunction pills, worth more than £10 million, were seized in 2019 [8].
The counterfeiting business is highly lucrative, and because it is illegal [9], counterfeiters do not register their dealings and easily evade tax [10]. Companies that deal with pharmaceuticals have been hit the hardest and risk loss of their reputation. Consequently, they may also have to take legal action to protect their brands [11]. In addition, the loss of profits and high legal fees have led to the reduction of research and development investments and job losses in this sector [12]. A study conducted by the European Union Intellectual Property Office analysed the cost impact of counterfeit medicines on the pharmaceutical industry in the European Union, estimating it at €10.2 billion yearly, with an additional loss of about €7.1 billion from related sectors. It was speculated that these losses in profits could have been responsible for at least 37,700 job losses annually [13].
Scammers and fraudsters have increased significantly during this pandemic. Prior to the COVID-19 global crisis, the Internet offered consumers cheap, otherwise "stigmatised" and controlled medicines without prescription [14]. In addition to so-called "miracle cures", the Internet now offers cheap sanitisers, thermometers and surgical masks, deceiving consumers even more [15].
A major problem with illegal internet pharmacies and traders is that they usually conceal their real identity, while they ship medications and products with questionable quality and traceability across borders [16]. Since counterfeiters use illegal clandestine channels to introduce these into the market, it is difficult to quantify the extent of proliferation [17]. According to Mackey and Liang [18] the Internet has made counterfeit medicines a transnational and an international crime.
The South African Health Products Regulatory Authority (SAHPRA) is considered mature and stringent [19] when compared with other counterparts in the Southern African Development Community (SADC) region, such as Lesotho, which has no existing regulatory authority [20]. Many developing countries do not have a mature regulatory framework that can take preventative measures due to lack of capacity, technical expertise and financial resources to undertake surveillance [21].
A recent study assessing medicine quality in the South African supply chain showed that, although no counterfeited products were identified, only about 55.4% (173/312) of the samples met the US Pharmacopoeia standards for quality [22]. Many of them failed the visual inspection tests, and 5.4% failed the dissolution tests, demonstrating that regulatory activities perhaps tend to focus more on pre-market authorisation than on Post-Market Surveillance (PMS) and pharmacovigilance. Another study by Patel et al. [23] examining the perceptions of stakeholders on drug quality in South Africa also highlighted similar views.
On this note, this study aims to contribute to what is known about the regulation of counterfeit medicines in the South African context, and to the challenges and opportunities that exist to improve combat strategies. All respondents participated in the study voluntarily, and informed consent was given and signed by all before commencement of data collection. The quality of the data collected was ensured by following the "Principles of Trustworthiness", established by ensuring credibility, confirmability, transferability and dependability as prescribed by Anney [24] (see Additional file 7 for how the researcher ensured the quality of data).
First phase of study
The first phase of the study, document analysis, was conducted on public records of governmental institutions. These were press releases and annual reports for the periods between 2004 and 2017. The focus of the analysis was on reported incidents of counterfeit products (SFs) and seizures of such goods by border authorities. The researcher also considered reports of SFs by regulatory authorities. It is important to note that the reporting on SF seizures varied from department to department, even for the same period.
Inclusion criteria
The search engines used were Google and Google Scholar, and the websites visited were government databases and international advocacy group databases (see Additional file 6 for the list of databases). The keywords used for the search were: counterfeits, substandard medicines, South African regulations, pharmaceutical policy, regulation of medicines, public health, or their synonyms. Only reports in English were considered.
Exclusion criteria
Excluded were data on counterfeited products other than pharmaceuticals, records on counterfeit pharmaceutical products from countries other than South Africa, and annual report records from before the 2011 financial period. Reports not in English were also excluded.
Data collection of press releases and annual reports
The data from press releases and annual reports were collected using a data collection tool (Additional file 5) categorised into the following: type of seizure, type of offence, place/period of seizure, the quantity of seizure and the net worth/value of products.
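To make the tool's structure concrete, the following is a minimal sketch of how one seizure record could be represented; the language (Python) and the field names are illustrative assumptions, not the study's actual instrument.

```python
from dataclasses import dataclass
from typing import Optional

# Minimal sketch of a single seizure record as captured by the data
# collection tool; field names are illustrative, not the study's own.
@dataclass
class SeizureRecord:
    seizure_type: str               # e.g. "unregistered medicine", "expired medicine"
    offence_type: str               # law contravened, e.g. "Act 101/65, Section 22C"
    place: str                      # port of entry or location of the seizure
    period: str                     # reporting period, e.g. "2014/15"
    quantity: Optional[int] = None  # units seized, where reported
    net_worth: Optional[float] = None  # estimated value of the goods, where reported

record = SeizureRecord("unregistered medicine", "Act 101/65, Section 22C",
                       "OR Tambo International Airport", "2014/15", 1200, 350000.0)
print(record)
```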
The second phase of study
The second phase was conducted by interviewing seven key stakeholders (see Table 1). These were identified as the customs and border control, the police service, the national trade regulatory authority, Intellectual Property Rights (IPR) attorneys, the professional pharmacy council, the national medicines regulatory authority, and the national pharmaceutical procurement depot. The interviews took place between 2016 and 2018.
Sample selection and recruitment
Purposive and snowball sampling was used to select study respondents. Respondents were chosen based on their experience and expertise in the subject matter.
Persons and organisations involved in law enforcement agencies (police, customs, the NMRA, industry regulators, wholesaling and the legal fraternity) dealing with the procurement of pharmaceutical products or the combat of SFs in their line of work in South Africa were selected.
Data collection
Prior to data collection, the study participants signed informed consent forms (see Additional file 5). Seven interviews were conducted with the selected respondents. An interview guide was adapted from a World Health Organisation data collection tool [25], which was previously designed to provide a review of drug regulatory systems (see Additional file 2). Follow-up telephonic and email interviews were conducted with respondents where clarity was needed. Principles of anonymity and trustworthiness were applied during the interview process (Additional file 8).
Only handwritten interview transcripts were used because interviewees requested not to be recorded on audio devices; therefore, no audio records were used for this study.
Data analysis
Data analysis was done in three stages: line-by-line coding of the findings from the interviews and the organisation of those 'free codes' to construct 'themes' [26]. The following steps were followed during the coding process: i) the evaluation of concrete evidence, which involved reading through interview transcripts and notes and reflecting on them; ii) thematic analysis based on similarities and differences between respondents' answers to the same questions, after which data were organised into themes and then sub-themes with the aid of Microsoft Word (the DocTools Macros add-in); iii) interpretation of these themes and sub-themes as the final step.
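As a schematic illustration of stage ii) above, the sketch below groups line-by-line 'free codes' into broader themes; the codes, excerpts and theme labels are invented examples rather than the study's actual codebook, and the study itself used a Microsoft Word macro add-in rather than a script.

```python
from collections import defaultdict

# Illustrative grouping of free codes (assigned to interview excerpts)
# into candidate themes; contents are examples only.
coded_segments = [
    ("Respondent 3: 'no testing device at points of entry'", "testing capacity"),
    ("Respondent 2: 'samples sent to a government lab'", "testing capacity"),
    ("Respondent 1: 'only two inspectors for seven ports'", "staff shortage"),
]
theme_of = {"testing capacity": "institutional capacity",
            "staff shortage": "institutional capacity"}

themes = defaultdict(list)
for excerpt, code in coded_segments:
    themes[theme_of[code]].append((code, excerpt))

for theme, items in themes.items():
    print(theme)
    for code, excerpt in items:
        print("  ", code, "-", excerpt)
```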
Gaps in the legislation
The incident reports (see Table 2) revealed that the most common type of offence or contravention of the law was due to unregistered or expired medicines, false claims of "herbal supplements", sale of controlled substances without prescription and licenses (Medicines and Related Substances Act 101/65, Section 18 and 22C). These offences are all in contravention of the laws governing the regulation of medicines in South Africa [27].
Enforcement and post market surveillance
Based on the review of the annual reports, it was apparent that data collection and reporting on SFs were weak across departments, indicating that counterfeiting of medicines was not treated as being as urgent as the counterfeiting of tobacco, clothing and even electronics (South African Revenue Service annual report 2014/15). There was inadequate information on counterfeit medicine seizures for the stipulated period (2004-2017), and the data found varied considerably between government agencies. This indicates that data collection on counterfeit medicines is not systematic, information sharing is poor and law enforcement activities are not coordinated. Although efforts were made to enforce IPRs and strengthen border security, SFs were not always given top priority as a serious public health problem. Furthermore, it is not clear what role the NMRA played in coordinating the fight against SFs in South Africa. The trade and industry department (DTI) and the customs division (SARS) were the two leading agencies in the combat against SFs. It is also important to note that Adverse Drug Reactions (ADRs) were only briefly mentioned in the annual reports of the Department of Health. According to respondents, a lack of capacity has prevented the NMRA from acting on ADRs, which are the responsibility of the pharmacovigilance arm of the NMRA.
Furthermore, the Department of Trade and Industry's (DTI) arm, the Companies Intellectual Property Commission (CIPC) in a joint enforcement effort with the private sector, planned to issue Internet Service providers (ISPs) with notices for allowing the sale of unregistered, and potentially counterfeit medicines online.
Institutional or organisation capacity
As shown in the Department of Health's (DoH) annual reports, the former MCC did not have its own budget. The work was funded through proceeds from product registration fees, which were not sufficient to cover the complex nature of enforcement work. Although the NMRA was the inspectorate arm of the Department of Health, enforcement activities were not included in the annual reports; only information on product registrations was mentioned.
In general, departments reported mostly on goods seized for non-compliance with import processes rather than for contravention of pharmaceutical requirements and regulations or for causing harm as a result. The review also showed the perception that issues affecting medicine quality and supply-chain integrity in South Africa were attributable not to counterfeited medicines but to unregistered products to be partly true. A similar observation was reported in the study by Patel et al. [23], which examined the perceptions of key players involved in the medicines value chain. Most seized goods were unregistered products, which are almost always substandard. A single incident of raw materials (API) being illegally manufactured in Durbanville, Cape Town, was reported. It was not clear whether the product was falsified or substandard, but it was found at a mailing centre destined for export to a country not named in the seizure or incident report (see Table 2).
Results of interviews
A number of challenges in the combat of SFs were identified in the study through the views and perceptions of respondents. The main themes that emerged were: adequacy of legislation, institutional/organisational capacity, enforcement and market control, stakeholder collaboration and information sharing, and education and awareness. Table 3 shows an overview of the analytical framework used in the thematic analysis.
Gaps in the legislation
Respondents perceived that there was a stringent regulatory environment for pharmaceuticals; however, it was not clear or adequate on how to deal with the issue of SFs. The study revealed that the main pieces of legislation responsible for the combat of SFs are the Counterfeit Goods Act 37 of 1997, the Medicines and Related Substances Act 101 of 1965, the Customs and Excise Act 91 of 1964, the Criminal Procedure Act 51 of 1977 and the Prevention of Organised Crime Act 121 of 1998 [28][29][30], which require joint stakeholder engagement to implement (see Fig. 1). See also Table 4 for the practical implications of each law. Many of the applications of these laws overlap in the inspection function (see Fig. 3) of regulating pharmaceuticals.
There was also a perception that South Africa was not a country of origin for counterfeit medicines and that counterfeits were not a big problem on the market. Respondents from the NMRA often associated the policing of SFs with the unauthorised sale of medicines rather than with falsified medicines. As cited by a respondent from law enforcement: "We do not have a problem with counterfeit medicines, only unauthorised medicines." Some respondents cited gaps in the legislation and called for more specific legislation to address the complexities encountered in handling pharmaceutical crime.
The many complexities included the definition of counterfeit medicine, which catered for IPR infringement but not necessarily for the safety and quality of medicines.
The main gaps in the current legislation were identified as: i) the lack of harmonisation of existing laws, which created loopholes in prosecutions; ii) jurisdictional limitations between stakeholders, which made it difficult to address the technical aspects of prosecutions that required expertise from the judiciary in handling pharmaceutical crime; iii) penal sanctions that amounted to a "manageable business cost" and were not deterrent enough (fines and light prison sentences); iv) weak regulation of the sale of medicines online: the law focused on false advertising and labelling, and there was no provision in the law (the Medicines and Related Substances Act 101/1965) for requirements on online pharmacy registration or operation (see Table 5 for a review of the legislation regarding operating online pharmacies); v) the absence of specific anti-counterfeit legislation or policy to mandate stakeholder engagement and enforcement activities; vi) poor prosecution outcomes due to a lack of cross-skill expertise in handling pharmaceutical crime cases and a lack of political will to prosecute cases of this nature.
Institutional resources
All respondents indicated that limited resources contributed to the weak enforcement efforts to combat SFs. The main resources mentioned were human resources, financial backing and testing facilities.
Limited human resources
Respondents cited that there was a staff complement of eight inspectors assigned to perform inspections of Good Manufacturing Practice (GMP) and Good Wholesale Practice (GWP), while only two were reported to be assigned to cover inspections at the two highest-traffic ports of entry, OR Tambo International Airport and the Durban Harbour.
Financial backing
Financial backing is an important component of the work of enforcement agencies. Limited access to funding for pharmacovigilance and post-market surveillance work was a common factor cited by all respondents. One of the respondents indicated that the NMRA was not in good financial standing and that enforcement efforts therefore focused on pre-market authorisations.
Pharmaceutical testing facilities
Our study found that the NMRA did not have its own laboratory testing facilities but had outsourced these in the past.
"It would be helpful to have a testing device like the handheld Truscan or a mobile laboratory toolkit that can detect the contents of illegal medicines at points of entry like the ones used in other countries by customs agencies. " Respondent 3.
One respondent cited that the provision of an in-house testing facility would be helpful in re-testing all medication procured at the pharmaceutical depot.
"We would benefit greatly from in-house quality testing but currently we do not have a testing facility. We have to send samples to a government lab in the Western Cape and it takes time to get results. " Respondent 2.
By comparison, other medicines regulators, such as the Medicines Control Authority of Zimbabwe (MCAZ) [31], the National Agency for Food and Drug Administration and Control (NAFDAC) in Nigeria [32], the Food and Drug Administration (US FDA) in the United States [33] and the Central Drugs Standard Control Organisation (CDSCO) in India [34], have their own laboratories.
Notably, the Medicines and Related Substances Act 101/1965 states that there are seven designated ports of entry for the movement of pharmaceuticals [27]. A respondent cited that only two inspectors were available for this purpose, leaving the other designated ports vulnerable to the potential smuggling of SFs. This is a notable risk, especially during periods of high import volumes such as the COVID-19 pandemic.
Stakeholder collaboration and information sharing
Medicine governance in South Africa has at least six key institutional stakeholders in ensuring that safe medicines are available. These are the pharmaceutical companies (importers, multi-national companies and local producers), the medicine regulator (SAHPRA/MCC), the Department of Police (national, provincial and municipal), the South African Revenue Service (customs and border division), the Department of Health, the Department of Trade and Industry with the CIPC (intellectual property protection), and the Department of Justice (national and provincial prosecutors). Stakeholder collaboration and information sharing remain an important key to the combat of SFs for the reasons already outlined.
Table 5 Review of contravention of the law by the three online pharmacies identified during the study (see Table 6)
Scheduled medicines/authorised persons. Legislation review: Section 22A of Act 101 clearly states that "(…) no person shall sell, have in his or her possession or manufacture any medicine or Scheduled substance, except in accordance with the prescribed conditions"; see also Section 14(5). Website review: the websites were open to the public, so any person could order the scheduled medicines being offered (Schedule 4 unregistered biological injectable products were on offer), and it is difficult to verify whether GMP and GWP standards were maintained.
Licensing of pharmacies. Legislation review: Section 22C of Act 101 clearly states, on licensing, that "(…) no manufacturer, wholesaler or distributer referred to in subsection (1)(b) shall manufacture, import, export, act as a wholesaler of or distribute, as the case may be, any medicine unless he or she is the holder of a license contemplated in the said subsection", and Regulation 12 of the Act states the conditions (permits) required to import medicines. Website review: some of the companies were not registered importers (e.g., Monarch Medical Supplies is not an authorised importer), and all three websites were not registered with the SAPC.
In dealing with counterfeit medicine cases, all study respondents agreed that there was some coordination between certain agencies and not others, but that these working relationships had challenges. All respondents agreed that cooperation and open communication channels were essential for the smooth and effective functioning of the agencies and that enforcement efforts needed to be more coordinated. However, it was unclear which agency was to lead such a coordination effort.
Discussion
The study shows that the South African legal framework is compatible with international standards. However, there is no comprehensive strategic framework for dealing with counterfeit and substandard medicines. As a result, the pharmaceutical value chain and the health care system are compromised.
It appears that counterfeit medicines are not given the same priority as other public health issues, such as the counterfeiting of tobacco. The Department of Health has identified clear priority areas, which include the prevention and treatment of HIV/AIDS, tuberculosis (TB), non-communicable diseases, tobacco use and the promotion of healthy lifestyles. Notably, each priority area has a policy and implementation strategy. Given their cross-cutting impact, SFs should receive the same attention and concern. In addition to affecting patient treatment outcomes, public health safety and household income when a patient dies or suffers prolonged illness, SFs also undermine the healthcare system in various ways: they cost taxpayers more money to procure expensive medicines, and the associated tax evasion further cripples the economy.
Gaps in legislation
The consequence of not having an anti-counterfeiting strategy is that there is no guiding framework to facilitate enforcement of the law. Objectives are not always clear and stakeholder responsibilities are not outlined. This results in a lack of accountability among stakeholders and the NMRA, as well as an absence of clearly assigned leadership responsibilities for enforcement efforts. It is important to secure total buy-in from all stakeholders involved in the regulation of medicines to ensure they work together to combat SFs.
The interviews show that South Africa has a good regulatory policy and legislative framework; however, implementation remains a challenge as far as SFs and post-market surveillance are concerned [35]. Consequently, the need arises for more specific legislation to address SFs with a clear implementation strategy. There are visible gaps in the supply chain that make infiltration possible, especially in the informal markets that are allowed to sell Schedule 0 medicines such as some pain medication, for example 500 mg paracetamol (see Fig. 2). A second gap exists in inspections (see Fig. 3), relating to private medical practitioners with dispensing licences, who are currently not inspected as they should be, primarily due to limited human resources and conflicting mandates between government health agencies.
Our study found that the regulation of pharmaceuticals in South Africa is governed by the Medicines and Related Substances Act 101 of 1965 (101/65) and is implemented by the NMRA, the South African Health Products Regulatory Authority, formerly the Medicines Control Council (MCC). The enforcement of the law, however, involves other stakeholders and other pieces of legislation. The lack of harmonisation between the Medicines and Related Substances Act 101 of 1965, the Counterfeit Goods Act 37 of 1997 and other legislation such as the Prevention of Organised Crime Act 121 of 1998 and the Electronic Communications and Transactions Act 25 of 2002 opens up gaps in the regulatory and criminal justice system, giving criminals loopholes to manipulate court proceedings and prosecutions. The focus of the law in general was found to be on false advertising, labelling, trademark infringement and the operation of illegal pharmacy premises, and not necessarily on illegal online pharmacies or counterfeit and substandard medicines (see Table 6). Penal sanctions were also found to be weak, with only a manageable business cost charged and a light prison sentence. The outcome of prosecutions was also poor due to the complexities associated with court proceedings and short timelines, as well as the lack of political will and technical expertise on how to handle pharmaceutical crimes within the criminal justice system, as noted in the United Nations Office on Drugs and Crime's research brief [36].
In response to this problem, Kenya enacted the Anti-Counterfeit Act 13 of 2008 [37], which was later revised to facilitate access to generic medicines for HIV/AIDS. The lack of harmonisation between this Act and already existing legislation, the Industrial Property Act of 2001 and the HIV Prevention and Control Act of 2006, resulted in generic medicines being categorised as counterfeit, which made the revision necessary. It is considered best practice for an anti-counterfeiting policy framework to encompass the following key components: product and supply chain security; advocacy, engagement and awareness; and risk-based enforcement and threat assessment [38]. Specific policy objectives should include preventing the manufacture and distribution of SFs (including APIs), securing the legal distribution chain against SFs, prohibiting the importation of SFs, and participating in regional and global campaigns [39]. Such a policy should also entail restrictions on the online trade of pharmaceuticals and legal requirements for operating online pharmacies.
Institutional resources
The limited availability of NMRA inspectors assigned to perform inspections and post-market surveillance was cited as a great concern. The COVID-19 pandemic has placed an even greater demand on the movement of health-related products across borders. This has in turn slowed down inspections and reduced the visibility of officers, especially given the emphasis on 'free trade zones' in the region, providing a clear gap for cross-border criminal activity. Research has shown that infiltration of counterfeit medicines is highest where law enforcement is weakest [40]. Increased visibility of inspectors at ports of entry and cross-skill training of regulatory officers would have a huge impact on curbing transnational crimes of this nature.
Funding plays an important role in facilitating enforcement work, and developing countries such as South Africa can benefit from the collaborative efforts of international agencies, such as the joint trilateral relationship between the WHO, UNODC and Interpol [18]. A multipronged approach is required for effective combat; each of these parties has a unique role to play within its own mandate without infringing on the others. Debates between public health and intellectual property interests have hindered efforts in the past, leading to the collapse of initiatives such as the WHO IMPACT in 2010 [41].
Pharmaceutical testing facilities
Developed countries have more advanced laboratory infrastructure and technology. For example, in the EU, the European Directorate for the Quality of Medicines (EDQM) has developed a fingerprint database of active ingredients used in the manufacture of medicines, which can be identified by analytical methods such as chromatography and spectrophotometry [42,43]. The information generated by the analytical process can be used by regulators to provide evidence and, therefore, to pursue appropriate legal action. It is evident that partnerships between international stakeholders make in-country capacity building possible in resource-limited nations [44].
Such can be seen in the case of partnerships involving the Global Pharma Health Fund (GPHF) and the United States Pharmacopeia's Centre for Pharmaceutical Advancement and Training (CePAT) in Ghana [45]. Similarly, Nigeria has had great success with the ISO/IEC 17025 accreditation of its five public-sector medicines testing laboratories through the ongoing support of the U.S. Pharmacopeial Convention (USP) [46].
The purpose of such collaborations is to build in-country and regional capacity in pharmaceutical quality assurance and quality control; however, the onus is on individual countries to take ownership of the process. Collaborations and partnerships like these could be equally beneficial to South Africa as it tries to address its own capacity-building needs.
Stakeholder collaboration and information sharing
Over the past decade, the international community has made significant efforts to improve cooperation and collaboration among national law enforcement agencies involved in medicine-related crime. Unfortunately, the lack of coordination amongst public agencies undermines these efforts. This remains a challenge in most developing countries, as is the case with South Africa, and these gaps in enforcement have become more evident during the COVID-19 global crisis [47]. The South African NMRA should encourage a more transparent relationship with all parties involved by sharing information on registered products and counterfeit incidents, for example by providing real-time visibility of products manufactured and exported, as in the case of India (India Department of Commerce [48]). The Indian Drug Authentication and Verification Application (DAVA) programme has empowered consumers to check the barcodes of medicines bought against an existing database on the website. Furthermore, funding should be made available for health promotion campaigns so that the public can be educated.
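To make the consumer-verification idea concrete, here is a hypothetical sketch of a DAVA-style barcode lookup; the registry contents and the verify() interface are invented for illustration and are not the actual DAVA API.

```python
# Hypothetical DAVA-style check: a barcode scanned from a medicine pack is
# looked up in a registry of authenticated products; all data are invented.
REGISTRY = {
    "8901234567890": {"product": "Paracetamol 500 mg", "batch": "B1021",
                      "status": "authenticated"},
}

def verify(barcode: str) -> str:
    entry = REGISTRY.get(barcode)
    if entry is None:
        return "NOT FOUND: not in registry - treat as suspect and report"
    return f"{entry['product']} (batch {entry['batch']}): {entry['status']}"

print(verify("8901234567890"))  # known product
print(verify("0000000000000"))  # unknown barcode
```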
It is, therefore, necessary to introduce a policy that will mandate stakeholder engagement, collaboration, and information sharing. Such should also promote the fostering of industry participation through public-private partnerships. For example, private partners in other countries, such as United States and the European Union include: The Pharmaceutical Security Institute (PSI), Quality Brands Protection Committee of China (QBPC), International Federation of Pharmaceutical Manufacturers Association (IFPMA)-"Fight the fake" partnerships and the International Anti-Counterfeiting Coalition (IACC) [49].
Furthermore, a central database could be used to synchronise the regional policing of SFs, and reports of threat-based assessments should be used to guide investigations and inspections [50]. For instance, based on the annual report analysis for the financial year [51], the customs agency has already implemented a regional electronic information-sharing system with some SADC member states in partnership with the World Customs Organisation (WCO) and the Organisation for Economic Co-operation and Development (OECD). Towards enabling risk assessments at ports of entry, SADC members such as Mozambique, Swaziland and Malawi are cooperating with South Africa and the WCO. Unfortunately, there is no information regarding whether medicines are included in what is considered high risk.
Enforcement and post market surveillance
The use of forensic intelligence techniques, such as DNA fingerprinting and incident trend analysis, will greatly enhance law enforcement efforts. Investment in detection tools, such as the Global Pharma Health Fund (GPHF) portable minilabs, which use Raman spectroscopy and Near-Infrared (NIR) spectroscopy, and the Truscan handheld device [52], would minimise the long waiting periods for laboratory results; improved sharing of results in a database such as the WHO's Vigibase system would help support any legal proceedings instituted [53].
Conclusion
From both the document analysis and the interview findings, South Africa has an existing National Drug Policy and legislative framework that is compatible with international standards [54]. However, there is no existing national anti-counterfeiting strategy addressing medicines. The implications of the absence of such a policy are clear: a lack of coordination between governmental departments, a lack of stakeholder engagement and collaboration (public-private partnerships), a lack of harmonisation of existing legislation, and a lack of capacity and resource mobilisation, all of which hamper effective law enforcement.
Another point highlighted by the study was the poor outcome of prosecutions. This was attributed largely to the lack of specific pharmaceutical legislation providing guidelines on how to address counterfeit medicines and to the lack of harmonisation of existing laws. Although the existing legislation is effective in addressing IP infringement, such as the Counterfeit Goods Act (trademark and copyright infringement) and the Prevention of Organised Crime Act (asset forfeiture), in most cases prosecutors were said not to be knowledgeable in handling pharmaceutical-related crime. The sanctions were also not deterrent enough.
Again, the penal sanctions stipulated in the Medicines and Related Substances Act were found not to be specific to counterfeit medicines and not deterrent enough. The successful cases in our study resulted from applying a combination of all three pieces of legislation together. It is, therefore, worth considering harmonising the existing legislation to effectively combat SFs and implementing an anti-counterfeiting policy.
The study also illuminated that, although law enforcement agencies engaged in various combat activities around counterfeit merchandise to varying degrees, pharmaceutical crime was not the focus. This lack of coordination of enforcement activities opens gaps in the regulation chain for the infiltration of SFs. The NMRA was also seen to have taken a more passive role in leading the fight, and some institutions attributed the lack of coordination to jurisdictional limitations and competing mandates. Recognition of the problem at hand and active engagement of key stakeholders are crucial in combating pharmaceutical crime at national and regional levels.
Practical implications and future research
The findings of this study should be considered as an exploratory analysis that will generate a hypothesis for further investigation on policy interventions to combat SFs.
Developing an anti-counterfeit policy and strategy
There is a need for a specific pharmaceutical anti-counterfeit policy that can address the unique complexities involved, such as IPR issues, the safety and efficacy of medicines, and the legal implications thereof, which the current legislation does not effectively cater for. The implementation of a national anti-counterfeit policy would establish a legal mandate with objectives and responsibilities for the effective participation of all stakeholders. Amendments that include an anti-counterfeiting plan and clear objectives for establishing Post-Market Surveillance and pharmacovigilance plans would be valuable; this is considered best practice [55].
Harmonisation of existing laws
The alignment of existing legislation to enable a more integrated approach between stakeholders is an important consideration and should be mandated in the aforementioned policy. A thorough process of impact assessment and feasibility studies should be undertaken to identify the policy options that best suit the South African context. It is considered best practice for an anti-counterfeiting policy framework to encompass the following key components: supply chain security at company, national and regional levels; stakeholder engagement and awareness; and threat assessment and risk-based enforcement strategies [56,57].
Improving visibility of anti-counterfeit campaigns
The national regulator, SAHPRA, should lead the fight against SFs and engage all stakeholders, including the public. With the growing risks of internet pharmaceutical crime, educating the public on how to identify SFs and on buying medicines safely online is equally valuable, and using readily accessible media platforms such as newspapers, radio, television and social media can be effective [58].
Strengthening stakeholder collaboration
In addition, strengthening existing inter-agency relationships and regional collaborations is key. The study has shown that, owing to weak stakeholder relationships, collaborations are not always possible. As a result, duplication of shared enforcement functions can often be seen, with little impact on SFs. These efforts could be converted into synergistic ones, drawing on the strengths in resources and capacity of all players.
Accurate data reporting and risk based assessment
Finally, the study has shown that there is very little reporting on SFs and poor data collection, making it difficult to measure the true extent of the SF problem in South Africa. Proper data capturing and reporting provide important information to policy makers and make it easier to advocate for the deployment of the funding and resources necessary to achieve the desired outcome. | 2022-01-06T14:29:09.677Z | 2022-01-05T00:00:00.000 | {
"year": 2022,
"sha1": "f4b66434667f5d81533360721d8e806ba8b9ccc9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s40545-021-00387-8",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "c1db87d6466807630932f7e1dffb846092cbada5",
"s2fieldsofstudy": [
"Political Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257254283 | pes2o/s2orc | v3-fos-license | Clinical hypoxemia score for outpatient child pneumonia care lacking pulse oximetry in Africa and South Asia
Background Pulse oximeters are not routinely available in outpatient clinics in low- and middle-income countries. We derived clinical scores to identify hypoxemic child pneumonia. Methods This was a retrospective pooled analysis of two outpatient datasets of 3–35 month olds with World Health Organization (WHO)-defined pneumonia in Bangladesh and Malawi. We constructed, internally validated, and compared fit & discrimination of four models predicting SpO2 < 93% and <90%: (1) Integrated Management of Childhood Illness guidelines, (2) WHO-composite guidelines, (3) Independent variable least absolute shrinkage and selection operator (LASSO); (4) Composite variable LASSO. Results 12,712 observations were included. The independent and composite LASSO models discriminated moderately (both C-statistic 0.77) between children with a SpO2 < 93% and ≥94%; model predictive capacities remained moderate after adjusting for potential overfitting (C-statistic 0.74 and 0.75). The IMCI and WHO-composite models had poorer discrimination (C-statistic 0.56 and 0.68) and identified 20.6% and 56.8% of SpO2 < 93% cases. The highest score stratum of the independent and composite LASSO models identified 46.7% and 49.0% of SpO2 < 93% cases. Both LASSO models had similar performance for a SpO2 < 90%. Conclusions In the absence of pulse oximeters, both LASSO models better identified outpatient hypoxemic pneumonia cases than the WHO guidelines. Score external validation and implementation are needed.
Introduction
The burden of child pneumonia mortality predominantly occurs in low-income and middle-income countries (LMICs). 1 Hypoxemia -a low blood oxyhemoglobin saturation -conveys increased pneumonia mortality risk, yet most children in LMICs lack pulse oximeter access. 2,3 Pulse oximeters identify hypoxemia by non-invasively measuring the peripheral arterial oxyhemoglobin saturation (SpO2). 4 To simplify diagnosis in LMICs the World Health Organization (WHO) Integrated Management of Childhood Illness (IMCI) guidelines consider pneumonia a clinical syndrome. 5 Although IMCI recommends oximeter use when available, it also provides guidance for settings without pulse oximetry. 5 However, when implemented without pulse oximeters, IMCI missed almost 70% of outpatient pneumonia cases with a SpO2<90% in Malawi and nearly 90% in Bangladesh. 6,7 As most children first access health systems at outpatient clinics, improving outpatient hypoxemia identification may be key to reducing LMIC pneumonia mortality. 4 Prior research attempted to determine whether clinical signs accurately identify a SpO2<90% in hospitalized children. 8,9 While research showed high specificity of clinical signs for a SpO2 <90%, sensitivity was low. 8,9 Additionally, recent data showed elevated mortality among children with a SpO2 90-92% than higher SpO2 levels. 7,[10][11][12] These findings challenge the currently recommended SpO2<90% threshold for hospitalization and oxygen treatment in LMICs.
We sought to utilize two unique, contemporary outpatient pediatric pneumonia datasets from Bangladesh and Malawi to accomplish three objectives. 13,14 First, evaluate IMCI performance for identifying hypoxemia at a higher SpO2 threshold (<93%) than recommended. Second, examine whether other combinations of clinical features better identify children with a SpO2<93%, followed by development and internal validation of hypoxemia clinical scores feasible for LMICs. Third, repeat these analyses using the currently recommended SpO2<90% threshold.
Settings
We used data from Malawi and Bangladesh. Malawi is an African country with an under 5 mortality rate of 49/1,000 births. 15 This study included 18 clinics in Mchinji and Lilongwe districts with a 1.2 million population catchment area, at 1,000-1,100 meters altitude. 13 From October 2011-June 2014, non-physician clinicians and nurses at clinics were IMCI trained and documented care of 0-59 month olds with WHO-defined pneumonia. 13 Providers used Acare® pulse oximeters with adult clip probes on the big toe if <2 years old or weighing <10kg. 6 Training, data collection, and supervision methodology has been published. 13 Bangladesh has an under 5 mortality rate of 3/1,000 live births. 16
Inclusion and Exclusion Criteria
We generated an analytic sample of healthcare visits from Bangladesh and Malawi datasets. Inclusion criteria were: valid SpO2, IMCI-defined non-severe or severe pneumonia (2014 guidelines), 5 and age 3-35 months, as the Bangladesh study population with a SpO2 was limited to this age. 17 See Supplemental Material for study definitions. Implicit to this analysis we assumed SpO2 was unavailable and excluded it from pneumonia definitions.
Variables
Our primary outcome was a SpO2<93%; SpO2<90% was secondary. We explored associations between clinical variables and SpO2 ranges to evaluate <93% as the primary outcome. We selected variables a priori: sex, age, weight-for-age z-score (WAZ), …
Analysis
We evaluated missingness using a 5% threshold. We used Chi-squared and Fisher's exact tests for proportions, the Wilcoxon rank-sum test for non-parametric data, and Student's t-test for normally distributed data comparisons. We reviewed individual-level variables and their associated SpO2<93% and <90% predictive quality. For SpO2<93% and <90% model development we randomly split the dataset into derivation (70%) and validation (30%) sets balanced by outcome and country. For each model we fit a logistic regression model with hypoxemia as the binary outcome measure. We allowed selection of country as a fixed effect to account for significant differences by country.
Thereafter, we used a random intercept in each post-selection model to control for country after interrogating the suitability of this approach with the Hausman test. 19 All analyses were performed using Stata 16.1 (StataCorp, College Station, TX).
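For readers more familiar with Python than Stata, a rough sketch of the split-and-fit workflow follows; the file name and column names (spo2_lt93, country, age_group, danger_sign) are assumptions made for illustration, not the study's actual variables.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
import statsmodels.formula.api as smf

df = pd.read_csv("pneumonia_visits.csv")  # hypothetical analytic sample

# 70/30 derivation/validation split, balanced on outcome and country by
# stratifying on their combination.
strata = df["spo2_lt93"].astype(str) + "_" + df["country"]
derive, validate = train_test_split(df, test_size=0.30,
                                    stratify=strata, random_state=1)

# Maximum-likelihood logistic regression with hypoxemia as the binary
# outcome and country allowed as a fixed effect.
model = smf.logit("spo2_lt93 ~ C(country) + C(age_group) + danger_sign",
                  data=derive).fit()
print(model.summary())
```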
IMCI guidelines (IMCI model) and Composite WHO guidelines (WHO-composite model)
The IMCI model reflects 2014 IMCI referral criteria (Supplemental Material). 5 The WHO-composite model is a composite of four WHO guidelines. 5,18,20,21 We fit both models using the logistic command for implementing multivariable, maximum-likelihood logit models. For the Independent LASSO we used individual candidate variables, whereas for the Composite LASSO we used composite variables for 'danger signs' (WHO-defined general danger signs) and 'severe respiratory distress.' For both models we compared the two methods using the C-statistic based on predicted estimates, sensitivity, and specificity. If we found no statistical difference between methods, we used the selection results from the simplest model to implement an unsupervised approach (i.e. selection of the full variable rather than one category of a three-category variable) to refit both models.
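For intuition, the sketch below runs an L1-penalised (LASSO-style) logistic selection on synthetic data; the feature names, true effect sizes, and penalty strength are illustrative assumptions. For the composite model, 'danger_sign' and 'severe_resp_distress' would be pre-built composite columns rather than their individual component signs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["age_3_11m", "wheeze", "danger_sign", "severe_resp_distress"]

# Synthetic binary predictors and an assumed true logistic model.
X = rng.integers(0, 2, size=(2000, len(feature_names))).astype(float)
logit = X @ np.array([0.3, -0.4, 0.9, 1.2]) - 1.5
y = rng.random(2000) < 1.0 / (1.0 + np.exp(-logit))

# The L1 penalty shrinks weak coefficients to exactly zero, so nonzero
# coefficients constitute the selected predictor set.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
kept = [n for n, c in zip(feature_names, lasso.coef_[0]) if c != 0.0]
print("Retained predictors:", kept)
```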
Hypoxemia score development and validation
We compared discriminatory power and model fit (C-statistic and Bayesian Information Criteria (BIC)) of the four maximum-likelihood logit models using the derivation dataset. 24,25 Using an unsupervised approach (i.e., if only two age groups were selected, we retained all age categories), each of the LASSO model covariates was kept on the log scale, rounded to the nearest 0.5, and doubled to form an integer. 10,26,27 We then split the score into approximately equally sized quintiles to create hypoxemia risk categories.
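As a worked illustration of the point-assignment rule just described (coefficients kept on the log-odds scale, rounded to the nearest 0.5, then doubled to give integers), consider the sketch below; the coefficient values are invented for illustration.

```python
import numpy as np

# Rounding a log-odds coefficient to the nearest 0.5 and doubling it is the
# same as rounding twice the coefficient to the nearest integer.
coefs = {"wheeze": -0.48, "danger_sign": 0.96, "severe_resp_distress": 1.41}
points = {name: int(np.round(2 * beta)) for name, beta in coefs.items()}
print(points)  # {'wheeze': -1, 'danger_sign': 2, 'severe_resp_distress': 3}

# A child's total score sums the points for the signs present; splitting the
# score distribution into quintiles then yields the risk strata.
child = {"wheeze": 1, "danger_sign": 0, "severe_resp_distress": 1}
print(sum(points[k] * v for k, v in child.items()))  # 2
```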
Using the validation dataset, LASSO model scores were estimated by child, and score discriminatory power to identify children with and without hypoxemia was determined.
Scores were not developed using IMCI and WHO-composite models. The C-statistic, sensitivity, specificity, positive and negative predictive value (PPV and NPV), and positive and negative likelihood ratios (LR+ and LR-) were compared across all models.
We adjusted C-statistics for optimism by bootstrapping (200 repetitions) to account for any overfitting. 28 A C-statistic of 0.71-0.80 was considered moderate and >0.80 excellent discriminatory power. 29 We applied the same analysis methodology for SpO2<90%.
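A minimal sketch of the bootstrap optimism correction for the C-statistic (Harrell-style, with 200 repetitions mirroring the paper's choice) might look as follows; the data are synthetic and this illustrates the general method rather than the study's Stata implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 4))
y = rng.random(800) < 1.0 / (1.0 + np.exp(-(X @ np.array([0.5, 0.8, -0.3, 0.2]))))

# Apparent C-statistic: model fit and evaluated on the same data.
apparent = roc_auc_score(y, LogisticRegression().fit(X, y).predict_proba(X)[:, 1])

optimism = []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))              # bootstrap resample
    m = LogisticRegression().fit(X[idx], y[idx])
    boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    test = roc_auc_score(y, m.predict_proba(X)[:, 1])  # on the original data
    optimism.append(boot - test)

print("optimism-adjusted C-statistic:", apparent - np.mean(optimism))
```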
Study Population
We assessed the relationship between referral criteria and hypoxemia at SpO2<90%, 90%-92%, and <93% for the IMCI and WHO-composite models (Table 2). WAZ<-3, in both models, was associated with a SpO2 90-92% and <93% but not <90%. In the WHO-composite model severe respiratory distress was associated with an increased adjusted odds of an abnormal SpO2 regardless of SpO2 range. While in the IMCI model danger signs were associated with hypoxemia, including respiratory distress in the WHO-composite model attenuated its effect at each SpO2 threshold. Respiratory distress was associated with each SpO2 threshold in the WHO-composite model.
Multivariable predictive model comparison and score development
Table 3 presents adjusted odds ratios (aORs) for SpO2<93% using model-specific predictors from the derivation dataset. In the independent LASSO model, individual predictor scores ranged from -1 to 3. The composite LASSO model scores ranged from -1 to 4. Overall predictive performance of the independent (C-statistic=0.774) and composite LASSO models (C-statistic=0.773) was moderate (Table 3) and did not differ in discriminatory power (p=0.480), with total score ranges in the Supplemental Material. Model fit improved from the IMCI (BIC 5,536.5) and WHO-composite models (BIC 5,155.9) to the independent (BIC 4,650.6) and composite LASSO models (BIC 4,712.2). Overall, the independent (C-statistic=0.774) and composite LASSO models (C-statistic=0.773) better discriminated between children with and without a SpO2<93% than the IMCI (C-statistic=0.661) and WHO-composite models (C-statistic=0.734). SpO2<90% score development and cross-model comparison are in the Supplemental Material.
Clinical score performance -validation dataset
In the validation dataset, independent (C-statistic=0.745) and composite LASSO model (C-statistic=0.752) scores moderately discriminated between SpO2<93% and ≥94% cases (Figure 1). When examining the C-statistics adjusted for optimism, there was minimal C-statistic change for both scores (Table 4), and independent and composite LASSO model discriminatory power did not differ (p=0.480). For the independent and composite LASSO models, about 4% of children with a score in the first stratum had a SpO2<93%, compared to >35% in the last stratum (Table 4).
Unlike other studies evaluating hypoxemia predictors, we focused on ambulatory rather than hospital settings. This distinction is important as our findings should therefore be generalizable to outpatient settings without oximeters and with lower hypoxemia prevalence than hospitals. 30,31 Notably, individual variable sensitivity and specificity for hypoxemia are largely similar to hospital-based studies. 8 However, our WHO-composite model and two LASSO models have good to excellent discriminatory value distinguishing between hypoxemic and non-hypoxemic pneumonia cases, contrasting with other work in which smaller samples limited analyses. 10 An exception is a large, multicenter, hospital-based Nigerian study that found a combination of respiratory distress, inability to feed, cyanosis, lethargy and severe tachypnea had lower discrimination (C-statistic=0.655) for SpO2<90% amongst respiratory and non-respiratory cases than in our data. 9 The lower C-statistic may reflect the authors' inclusion of older children <15 years.
It is also important that our results are interpreted within the broader child pneumonia context of known mortality risk factors. In the IMCI and WHO-composite models WAZ <-3 was not associated with a SpO2<90%, yet it is a known mortality risk factor. 10,32 Conversely, wheezing was retained in both LASSO models but, when identified alone without accompanying respiratory distress, it is not associated with radiographic pneumonia or in-hospital mortality. 10,33 Isolated wheeze usually reflects milder, self-limited viral illness (e.g., bronchiolitis) and is susceptible to non-differential misclassification when confused with transmitted upper respiratory sounds.
While our pooling of data from studies in South Asia and Africa aimed to improve generalizability, there are differences between the settings and studies that may limit it. Malawi data had a higher frequency of hypoxemia, danger signs, and respiratory distress than Bangladesh data, whereas WAZ <-3 was more frequent in Bangladesh than Malawi. These differences may in part reflect higher HIV and malaria prevalence and modestly higher altitude in Malawi, or known challenges in identifying malnutrition in Malawi clinics. 34 These diseases increase susceptibility to hypoxemia or, for malaria, increase the frequency of signs overlapping with IMCI pneumonia. 35 Although the designs differed, all personnel were similarly trained per IMCI. The under 5 mortality rate in Malawi is higher than in Bangladesh, 15,16 which aligns with our results suggesting Malawi cases were more severe. Nevertheless, we attempted to account for unmeasured confounders from epidemiological, health system, and methodological differences by fitting models with a country variable.
In sum, these findings may improve hypoxemic pneumonia identification in clinics either without pulse oximetry at all or lacking pulse oximeters suitable for pediatric use. Given a SpO2 <93% is highly associated with mortality, 2,10,12 earlier hypoxemic case identification with successful referral may reduce fatality. While these models could advance care for hypoxemic children, they are an inadequate substitute for pulse oximeters. Pulse oximetry scale-up in ambulatory settings must be prioritized, as should pulse oximeter device development targeted for young children in LMICs. Where pulse oximetry is unavailable for children, our models suggest children with chest indrawing and/or with other signs of respiratory distress could be referred. To successfully implement these models, ambulatory health workers of varying pediatric experience will need further training to recognize signs of respiratory distress in children.
Understanding how these changes may affect referral quality, uptake, and hospitalizations is critical. Next steps include external validation and research evaluating implementation feasibility of the hypoxemia scores.
Contributors
EDM had full access to data in the study and takes full responsibility for the integrity of the data and the accuracy of the data analysis. EDM conceived the study idea and HS, SH, and EDM designed the study; HS analyzed the data; SH, HS, and EDM drafted and revised the manuscript. All authors made substantial contributions to the interpretation of results, critical revision of the manuscript, and approved the final version of the manuscript.
Declaration of Interests
The authors have no competing interests to declare.
Acknowledgments
We offer our thanks to all of the caregivers and children participating in this research in Bangladesh and Malawi. In Bangladesh we would like to thank the Projahnmo Study Group field and data management staff, and the Ministry of Health and Family Welfare, Government of Bangladesh. In Malawi we would like to thank Mr. Charles Makwenda …
5. World Health Organization, 2014. Integrated Management of Childhood Illness: Chart Booklet.
| 2023-03-01T20:06:24.414Z | 2023-03-01T00:00:00.000 | {
"year": 2023,
"sha1": "d966643cc272a0e6d109941accd596c5f603381d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d966643cc272a0e6d109941accd596c5f603381d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
232046347 | pes2o/s2orc | v3-fos-license | Tunable topological states hosted by unconventional superconductors with adatoms
Chains of magnetic atoms, placed on the surface of s-wave superconductors, have been established as a laboratory for the study of Majorana bound states. In such systems, the breaking of time reversal due to magnetic moments gives rise to the formation of in-gap states, which hybridize to form one-dimensional topological superconductors. However, in unconventional superconductors even non-magnetic impurities induce in-gap states since scattering of Cooper pairs changes their momentum but not their phase. Here, we propose a path for creating topological superconductivity, which is based on an unconventional superconductor with a chain of non-magnetic adatoms on its surface. The topological phase can be reached by tuning the magnitude and direction of a Zeeman field, such that Majorana zero modes at its boundary can be generated, moved and fused. To demonstrate the feasibility of this platform, we develop a general mapping of films with adatom chains to one-dimensional lattice Hamiltonians. This allows us to study unconventional superconductors such as Sr$_2$RuO$_4$ exhibiting multiple bands and an anisotropic order parameter.
I. INTRODUCTION
Combining topology and superconductivity has been heralded as a new paradigm for the realization of exotic new particles, Majorana zero modes (MZMs), whose non-Abelian braiding statistics would enable fault-tolerant quantum computation [1,2]. Moreover, the existence of MZMs is topologically protected, making them inert to disorder effects. To date, two main approaches, both based on the Kitaev chain [3], have been pursued in the quest for topological superconductors. In the first approach, s-wave superconductivity is proximity-induced in nanowires with strong spin-orbit coupling [4][5][6][7], while in the second approach the hybridization of impurity (Shiba) bound states gives rise to a topologically nontrivial superconducting phase [8][9][10][11][12][13]. Experimentally, the latter has been realized by placing a chain of magnetic atoms on the surface of an s-wave superconductor [14][15][16][17][18]. In these platforms, evidence for MZMs at the end points of the system has been found in transport measurements on nanowires [6,7] and in scanning tunneling spectroscopy on Shiba chains [15][16][17][18][19]. While a number of configurations, including chains of adatoms acting as potential or magnetic scatterers and nanowires on top of superconductors, have been investigated theoretically [20][21][22][23][24], and proposals for moving, fusing and braiding the MZMs have been brought forward [25][26][27][28], implementing them experimentally remains an open challenge.
FIG. 1. Setup: Non-magnetic adatoms (purple balls) are placed at a distance a0 to form a chain on the surface of a helical triplet superconductor with order parameter ∆. The Cooper pairs exhibit equal spin (red arrows), and their orbital angular momentum (black arrow) points opposite to the spin direction. An external Zeeman field h can be used to tune the system into the topological phase supporting Majorana zero modes at the endpoints of the chain, as sketched by the yellow plot of the magnitude |ψ|² of the wavefunction.
In this work, we propose a path to realize one-dimensional topological superconductivity by placing non-magnetic atoms on the surface of an unconventional triplet superconductor, see Fig. 1. The key advantage of our proposal is that MZMs can be moved and fused by controlling a magnetic Zeeman field. Since candidate systems for the realization of triplet superconductivity usually exhibit multiple bands, we go beyond a single-band description and use a model for Sr2RuO4 to demonstrate that our method can easily be applied to multiband superconductors. At the same time, we note that Sr2RuO4 has been the subject of intense theoretical and experimental investigations regarding the nature of the superconducting pairing [29][30][31][32][33]. Implementing our proposal in candidate systems for triplet superconductivity such as UPt3 [34,35], UTe2 [36,37] and LaNiGa2 [38] could establish a new MZM platform and improve the understanding of the pairing symmetries in these systems.
A. Bulk superconductor
Our starting point is a single-band Hamiltonian $H = H_{\rm BdG} + H_Z$, where $H_{\rm BdG}$ describes a bulk triplet superconductor on a two-dimensional lattice and $H_Z$ is the Zeeman term in an external field. In momentum space it has the matrix structure
$H(\mathbf p) = \begin{pmatrix} h(\mathbf p) & \Delta(\mathbf p)\\ \Delta^\dagger(\mathbf p) & -h^T(-\mathbf p) \end{pmatrix},$
where $h(\mathbf p) = \xi(\mathbf p)\,\sigma_0 + \mathbf h\cdot\boldsymbol\sigma$, $\xi(\mathbf p)$ is the energy-momentum dispersion, $\mathbf h$ the Zeeman field, and $\sigma_0$ and $\boldsymbol\sigma$ are the unit matrix and the Pauli matrices acting in spin space. In the following, we measure all energies in units of the Fermi energy EF. In the off-diagonal block, $\Delta(\mathbf p) = i[\mathbf d(\mathbf p)\cdot\boldsymbol\sigma]\sigma_y$ is the pairing term, whose minimum magnitude we denote by ∆min; for details see Appendix A. Here, $\mathbf d(\mathbf p)$ describes the vector order parameter of the triplet superconductor. We concentrate on the case of a helical p-wave order parameter
$\mathbf d_h(\mathbf p) = -i\Delta_t\,(\mathbf e_x \sin p_y + \mathbf e_y \sin p_x),$
and in the Supplemental Material we discuss the generalizations to a chiral p-wave order parameter and to multiband models, using Sr$_2$RuO$_4$ as an example.
B. Chain of nonmagnetic impurities
The chain of atoms placed along the x-direction at rn = n a0 ex (with integer n) is described by the impurity Hamiltonian with the matrix Û = Vimp τz σ0 mediating non-magnetic impurity scattering of strength Vimp. Although the scattering of Bogoliubov quasiparticles preserves spin, Shiba in-gap states still form in this system: scattering changes the momentum of the quasiparticles but not their phase, so it cannot compensate the p-wave momentum dependence of the order parameter. In the case of a chain, the Shiba states localized in the vicinity of the impurity atoms hybridize and give rise to impurity bands within the bulk gap ∆min of the superconductor. These bands can be accurately described by an effective Hamiltonian Heff(kx) [Eq. (5)], which depends on the momentum kx in a supercell Brillouin zone with lattice constant a0. We derive the matrices on the right-hand side of Eq. (5) by linearizing the bulk Green function with respect to energy, and obtain GI(kx), which describes the propagation of Bogoliubov quasiparticles between the impurities, by Fourier transforming the bulk Green function at the impurity sites. The matrix G̃⁻¹ enters as a prefactor and contains the renormalization of the bandwidth (see Appendix B).
III. EFFECTIVE HAMILTONIAN
The mapping onto the effective Hamiltonian, Eq. (5), can be understood as integrating out the momenta py perpendicular to the chain. In the absence of a Zeeman field the effective Hamiltonian therefore has the structure
$H^0_{\rm eff}(k_x) = \xi_{\rm eff}(k_x)\,\tau_z\sigma_0 + \Delta_{\rm eff}(k_x)\,\tau_x\sigma_0,$
which is diagonal in spin space and describes twofold degenerate impurity bands inside the bulk superconducting gap.
Here, ξeff(kx) and ∆eff(kx) are the effective dispersion and pairing for the impurity chain, containing further-neighbor coupling terms between the impurity states.
The Hamiltonian H0eff(kx) is time-reversal symmetric, so that the system can support a topological phase with an even number of MZMs at each end of the chain [39]. Unpaired MZMs can be realized by additionally breaking the symmetry down to particle-hole symmetry alone by means of a Zeeman field, such that the system, when tuned into the topological phase, exhibits unpaired MZMs at the ends of the impurity chain. A Zeeman field in the z-direction can be described by HZeff,z = heff,z σz τz with a renormalized magnitude heff,z, which can be calculated by including chemical-potential shifts ξ(p) ± hz in the bulk dispersion of the spin-up and spin-down electrons, respectively. We find that heff,z ≪ hz because the energy of the Shiba in-gap state depends only weakly on the chemical potential (see Supplemental Material). Thus, the effective Hamiltonian is diagonal in spin space, with each block H̃eff± = (ξeff(kx) ± heff,z)τz + ∆eff(kx)τx exhibiting the same topological properties as the Kitaev chain. The effective field ±heff,z plays the role of the chemical potential, which can drive a topological phase transition. However, since the effective field heff,z is parametrically small, the topological phase can only be reached by fine-tuning the impurity strength Vimp such that the impurity bands almost touch zero already without a Zeeman field.
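To make the Kitaev-chain analogy concrete, the following Python sketch evaluates the class-D invariant of each spin block from the sign of the diagonal term at the two time-reversal-invariant momenta, where the pairing vanishes. The cosine and sine forms of ξeff and ∆eff and all numerical values are illustrative assumptions, not the actual expansion coefficients of Eq. (5); mu_eff is deliberately placed close to the band edge so that a small effective field suffices to drive the transition.

import numpy as np

def block_invariant(h_eff, t_eff=1.0, mu_eff=-1.9, d_eff=0.3):
    """Class-D invariant of one Kitaev-like block
    H = (xi_eff(k) + h_eff) tau_z + Delta_eff(k) tau_x,
    with the toy forms xi_eff(k) = -2 t_eff cos(k) - mu_eff and
    Delta_eff(k) = d_eff sin(k).  Delta_eff vanishes at the
    time-reversal-invariant momenta k = 0 and k = pi, so the
    invariant reduces to the sign of the diagonal term there
    (d_eff only sets the gap scale, not the invariant)."""
    xi = lambda k: -2.0 * t_eff * np.cos(k) - mu_eff + h_eff
    return int(np.sign(xi(0.0) * xi(np.pi)))   # -1: topological

for h in [0.0, 0.05, 0.15, 0.25]:
    q_up, q_dn = block_invariant(+h), block_invariant(-h)
    print(f"h_eff = {h:.2f}   Q+ = {q_up:+d}   Q- = {q_dn:+d}   "
          f"total = {q_up * q_dn:+d}")

At h_eff = 0 both blocks carry the same invariant, so Majorana end modes come in pairs; once the effective field pushes one block through its transition (here near h_eff ≈ 0.1), the two blocks differ and a single unpaired MZM remains at each end.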
For the Zeeman field pointing in the y-direction, the additional term in the effective Hamiltonian is HZeff,y = heff,y σy τ0. In this case, the two BdG bands are trivially shifted in energy with respect to each other, leaving their topological character unchanged; i.e., a field in the y-direction cannot tune the system into the topological phase.
Finally, for a field in the x-direction, the Zeeman term reads HZeff,x = heff,x τz σx, with only a weak renormalization of the effective Zeeman field from the bulk value, heff,x ∼ hx. The effective Hamiltonian can now be rotated around the y-axis in spin space (σx → σz), such that in the new basis the effective field heff,x again plays the role of a chemical potential. The difference from the case of a Zeeman field in the z-direction is that the weakly renormalized heff,x can drive a topological phase transition much more efficiently than the strongly reduced heff,z discussed above.
IV. TOPOLOGICAL PHASE DIAGRAM
To quantitatively demonstrate the tunability of topological superconductivity, we compute the topological phase diagram (see Figs. 2 and 3) as characterized by the topological invariant
$Q = \prod_{k_x = 0,\,\pi/a_0} \mathrm{sgn}\,\mathrm{Pf}\!\left[H(k_x)\tau_x\right].$ (7)
It is given as the product of Pfaffians at the time-reversal-invariant momenta of the corresponding one-dimensional Brillouin zone; details on its derivation can be found in Appendix C. We supplement the fully numerical supercell calculation by computing the topological invariant Qeff from the effective Hamiltonian Heff(kx). The excellent agreement between Qeff and Q (see Supplemental Material) indicates that Heff(kx) indeed faithfully describes the low-energy physics of the impurity chain. In the nontrivial case Q = −1 we define the topological gap ∆topo as the minimum of the positive eigenenergies of H(kx) in the Brillouin zone, see Appendix D. To detect the non-Abelian properties of the MZMs, the coupling between MZMs must be weak. Thus, the distance between neighboring MZMs needs to be much larger than ζ ≈ ħvF,eff/∆topo, where vF,eff is the Fermi velocity of the impurity band. An estimate of vF,eff yields ζ/a0 ≈ ∆min/∆topo. In order to avoid thermal excitations, the temperature needs to be smaller than the topological gap, kBT < ∆topo.
In Fig. 2 we present the topological phase diagram for the single-band model and for a multiband description of Sr2RuO4, revealing that in both cases the topological phase can be reached by applying a Zeeman field hx along the impurity chain for a large range of the impurity potential Vimp. Experimentally, suitable adatoms can be identified by examining the appearance of in-gap states in the tunneling spectra. We summarize that our proposal might be feasible in a helical p-wave superconductor where impurity adatoms can be controlled experimentally. The multiband helical p-wave order parameter considered in Fig. 2 is one of the possible candidate pairing symmetries for Sr2RuO4 [32]. Controlled placement of adatoms might be facilitated by step edges, and in the Supplemental Material [40] we show that the topological phase can be reached by application of a Zeeman field also in this case. In Fig. 3 we illustrate that the direction of the Zeeman field can be used to tune a system into and out of the topological phase. Namely, tuning the azimuthal angle φ has a strong effect on the topological gap ∆topo. A similar analysis of the tunability of a system with a chiral p-wave order parameter (see Supplemental Material [40]) reveals that rotating the Zeeman field within the x-y plane has no effect at all. Hence, this difference in behavior could be used to experimentally diagnose and discriminate a helical p-wave from a chiral p-wave order parameter, an important question for example in the Sr2RuO4 system [29][30][31][32].
FIG. 3 (caption excerpt): In both cases, the topological gap ∆topo is maximal for the field along the impurity chain (φ = 0, θ = π/2), and the topologically trivial phase can be reached by tuning either to φ = π/2 or towards θ = 0, π, i.e. to a direction transverse to the chain. The isolated point at which the system remains gapless (white cross) corresponds to an in-plane field perpendicular to the chain.
V. DISCUSSION
For the case of magnetic adatoms, the dependence of the adatom magnetic order on an external Zeeman field has been suggested as a means to create, braid, and fuse MZMs [27]. Here, we instead exploit the direct dependence of the topological phase diagram on the Zeeman field (see Fig. 3). So far, we have arbitrarily chosen the impurity chain to be oriented along the x-axis, and as a result a Zeeman field in the x-direction was most suitable to induce a topological phase. More generally, however, the relevant parameter is the relative angle between the Zeeman field and the impurity chain; for a curved chain the relevant angle is that between the field and the local tangential direction as defined in Fig. 3(a). Thus, MZMs on curved impurity chains are located at all interfaces between trivial and nontrivial regions, i.e. at positions where the local tangent and the external field enclose the critical angle, see Fig. 4. Hence, MZMs can be moved along the impurity chain either by rotating the direction of the Zeeman field or by changing the magnitude of the field, which modifies the critical angle.
Increasing the Zeeman field along a wiggly impurity chain creates two pairs of Majoranas, which can formally be described by operators γi, i = 1, 2, 3, 4, satisfying γi = γi† and the anticommutation relations {γi, γj} = 2δij (for the labeling of the MZMs see Fig. 4(c)). Grouping the MZMs in pairs, we can define the left and right number operators nl = (1 + iγ1γ2)/2 and nr = (1 + iγ3γ4)/2 with eigenvalues 0 and 1, and a Hilbert space spanned by the basis states |nl nr⟩. Creating the MZMs from the vacuum, the initial state is Ψ = |00⟩ in this basis. Tuning deeper into the topological phase, the inner MZMs 2 and 3 fuse, such that the final state is a statistical mixture of the outcomes 0 and 1 for the operator no = (1 + iγ1γ4)/2 (see Supplemental Material [40]). In the fusion process the projective measurement can be performed by detecting the charge acquired by MZMs 2 and 3 after they have hybridized [28]. Movement and projective measurement are the key ingredients for the manipulation of MZMs, and they can be realized by controlling the external magnetic field as discussed above. However, we also need to preserve the quantum information stored in the MZMs. In the Supplemental Material [40] we discuss the corresponding requirements and propose to manipulate the local magnetization by spintronic means to obtain signatures of the non-Abelian statistics of MZMs.
In summary, we have shown that topological superconductivity can be realized by placing nonmagnetic adatoms on the surface of an unconventional superconductor, and that the topological invariant can be controlled by the magnitude and direction of a Zeeman field. Our considerations are based on a lattice Hamiltonian, which can describe materials exhibiting a complex order-parameter structure and multiple bands. We have identified the field direction that most efficiently tunes the system into the topological phase, and we have proposed a scheme to move and fuse MZMs. An experimental realization of this proposal could become a scalable platform for topological quantum information processing based on the non-Abelian statistics of MZMs.
ACKNOWLEDGEMENTS
The research was partially supported by the Foundation for Polish Science through the IRA Programme cofinanced by EU within SG OP. We acknowledge support from Leipzig University for Open Access Publishing. The code to numerically calculate the spectra and topological invariants discussed in this paper as well as the data that has been used to generate the plots within this paper is available from the corresponding author upon reasonable request. B.R. acknowledges support by DFG grant RO 2247/11-1.
Appendix A: Tight binding model
For the single-band model, we use the normal-state dispersion ξ(p) = −2t(cos px + cos py) − µ on a square lattice, where t is the nearest-neighbor hopping and µ ≈ −1.44t is the chemical potential, fixed such that the filling is one quarter; the Fermi energy is then EF ≈ 2.56t.
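As a quick consistency check on the quoted chemical potential, the quarter-filling condition can be solved numerically by bisection on the zero-temperature occupation over a discretized Brillouin zone. This is an independent sketch, not code from the paper; grid size and bracket are arbitrary choices.

import numpy as np

def filling(mu, t=1.0, n=1024):
    """Zero-temperature filling per spin for xi(p) = -2t(cos px + cos py) - mu:
    the fraction of Brillouin-zone states below the Fermi level."""
    k = 2.0 * np.pi * np.arange(n) / n
    kx, ky = np.meshgrid(k, k, indexing="ij")
    return np.mean(-2.0 * t * (np.cos(kx) + np.cos(ky)) - mu < 0.0)

lo, hi = -4.0, 4.0                  # bracket spanning the band
for _ in range(50):                 # bisection to quarter filling
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if filling(mid) < 0.25 else (lo, mid)
print(f"mu at quarter filling: {0.5 * (lo + hi):.3f} t")   # close to -1.44 t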
The superconducting order parameter can be written in real space with the coefficients, for the single-band system with a triplet order parameter, ∆ij,σσ' = ∆0 (di,j · σ iσy)σσ', where σ = (σx, σy, σz) are the Pauli matrices and di,j is a bond vector. In the main text, we consider the physical consequences of the helical p-wave order parameter d = −i∆t(sin py ex + sin px ey), which in real space leads to nearest-neighbor pairing terms.
Appendix B: Effective Hamiltonian
Derivations of effective Hamiltonians for continuum models have been worked out in detail, for example for helical Shiba chains in Ref. 10 and for spinless superconductors [22]. Here, we generalize this approach to multiband lattice models and derive the effective Hamiltonian for the impurity bands as cited in Eq. (5) of the main text. The starting point is the eigenvalue equation of the Bogoliubov-de Gennes Hamiltonian (including the impurity chain), (HBdG + HZ + Himp)Ψ = EΨ, with eigenstate Ψ and eigenenergy E. Next, we introduce the Green function operator of the bulk system to obtain a nonlinear eigenproblem, Eq. (B2). Evaluating Eq. (B2) only at the impurity sites rm = a0 m ex, and using that the impurity Hamiltonian acts only at these sites, yields a closed eigenvalue problem on the chain.
Because of the periodicity with respect to translations by the impurity-chain lattice vector a0ex, it is useful to transform this equation to momentum space (with respect to the supercell), $\Psi(k_x) = \sum_m \Psi(\mathbf r_m)\,e^{-i k_x a_0 m}$, such that the eigenvalue equation can be rewritten as Eq. (B3). The real-space Green function can be obtained via its Fourier representation, Eq. (B4), where the integral runs over the bulk Brillouin zone with momentum-space area ΩBZ. By linearizing the bulk Green function at E = 0 we obtain Eq. (B5), which holds for fully gapped systems. Furthermore, we keep the linear correction ∝ E only in the onsite term of the Green function, Eq. (B6). We now insert Eqs. (B4), (B5) and (B6) into Eq. (B3) and introduce the Fourier transforms with respect to the supercell, Eqs. (B7) and (B8). Rearranging the terms and multiplying with the inverse matrices brings the eigenvalue equation into a standard form from which the effective Hamiltonian can be read off. This Hamiltonian becomes exact at E = 0, where the topological phase transition occurs, and can therefore be used to determine the phase diagram exactly. The approach is general and can be applied to any lattice Hamiltonian. In this work we have applied it to single-band p-wave superconductors and to a multiband model for Sr2RuO4 [41], but similar theoretical investigations can be performed for other candidate multiband triplet superconductors [34][35][36][37][38].
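The numerical core of this construction, the Brillouin-zone integral of Eq. (B4) evaluated at E = 0, can be sketched as follows. For brevity the sketch uses a spinless chiral p-wave toy model (a 2×2 stand-in for the full spinful Hamiltonian) with illustrative parameters; the same routine also exhibits the exponential decay of G(0, rn) with distance that is invoked in Supplementary Note S5.

import numpy as np

def G0(r, mu=-1.44, t=1.0, d_t=0.2, n=256):
    """Zero-energy bulk Green function
        G(E=0, r) = (1/N) sum_p (-H(p))^(-1) exp(i p.r)
    for the spinless chiral p-wave toy model
        H(p) = xi(p) tau_z + d_t (sin px tau_x + sin py tau_y)."""
    k = 2.0 * np.pi * np.arange(n) / n
    kx, ky = np.meshgrid(k, k, indexing="ij")
    xi = -2.0 * t * (np.cos(kx) + np.cos(ky)) - mu
    H = np.empty((n, n, 2, 2), dtype=complex)
    H[..., 0, 0], H[..., 1, 1] = xi, -xi
    H[..., 0, 1] = d_t * (np.sin(kx) - 1j * np.sin(ky))
    H[..., 1, 0] = d_t * (np.sin(kx) + 1j * np.sin(ky))
    phase = np.exp(1j * (kx * r[0] + ky * r[1]))
    return np.einsum("xy,xyab->ab", phase, np.linalg.inv(-H)) / n**2

for m in range(1, 9):                      # distances along the chain direction
    print(m, np.linalg.norm(G0((m, 0))))   # norm decays exponentially with m

Fourier transforming these matrices along the chain, GI(kx) = Σn G(0, n a0 ex) e^(−i kx a0 n), then supplies the ingredients of the effective Hamiltonian.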
Appendix C: Topological invariant
A finite magnetic field breaks time-reversal symmetry, such that the remaining symmetry of the Hamiltonian, Eq. (1), is particle-hole symmetry, described by a particle-hole operator P anticommuting with the Hamiltonian, {H, P} = 0. The superconducting pairing has the property ∆T = −∆ and the normal-state block is Hermitian, h† = h. Therefore, P = τxK is the desired anticommuting operator, where τx is the Pauli matrix in particle-hole space and K denotes complex conjugation. The parity operator P̂ = (−1)^N̂, with N̂ the particle-number operator, commutes with the Hamiltonian, [H, P̂] = 0, so there is a common system of eigenstates. Since the parity operator has eigenvalues ±1, the ground state of the system has either odd or even parity. The parity can be calculated from Eq. (7), where we have formally factored out a prefactor of (−1)^n, because Pf(Hiτx) = Pf(Hτx) for matrices of size 4n with integer n. In the fully numerical approach, we set up the Hamiltonian for the superconductor subject to the Zeeman field and including the impurity potential, and use a supercell method to obtain H(kx) for an (infinite) impurity chain along the x-direction. For the calculation of the invariant using Eq. (7), the supercell Hamiltonian needs to be constructed at the two time-reversal-invariant momenta, kx = 0, π/a0, corresponding to periodic and antiperiodic boundary conditions. Finally, the Pfaffian is calculated using an efficient numerical algorithm [42].
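For a real antisymmetric matrix, the sign of the Pfaffian (all that Eq. (7) requires) can also be obtained without a dedicated Pfaffian library from the real Schur form A = Z T Z^T, in which T is block diagonal with 2×2 antisymmetric blocks and Pf(A) = det(Z) ∏i T[2i, 2i+1]. The sketch below validates itself against Pf(A)² = det(A) on a random test matrix; applying it to H(kx)τx at the two momenta would require first casting that matrix into real antisymmetric form, or using a complex-Pfaffian routine such as that of Ref. [42].

import numpy as np
from scipy.linalg import schur

def pfaffian_real(A, tol=1e-10):
    """Pfaffian of a real antisymmetric matrix via the real Schur form.
    For skew-symmetric A, T = Z^T A Z is block diagonal with 2x2
    antisymmetric blocks, so Pf(A) = det(Z) * prod_i T[2i, 2i+1]."""
    A = np.asarray(A, dtype=float)
    assert A.shape[0] % 2 == 0 and np.allclose(A, -A.T, atol=tol)
    T, Z = schur(A, output="real")          # A = Z T Z^T, Z orthogonal
    i = np.arange(0, A.shape[0], 2)
    return np.linalg.det(Z) * np.prod(T[i, i + 1])

rng = np.random.default_rng(0)
B = rng.standard_normal((8, 8))
A = B - B.T                                 # generic antisymmetric test matrix
pf = pfaffian_real(A)
assert np.isclose(pf**2, np.linalg.det(A))  # Pf(A)^2 = det(A)
print("Pf =", pf, " sign =", int(np.sign(pf)))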
The effective Hamiltonian, Eq. (5), inherits the symmetries of the bulk Hamiltonian in Eq. (2); i.e., it satisfies the particle-hole symmetry τx H*eff(−kx) τx = −Heff(kx), which can be read off from Eq. (5) by using that the other matrices in the expression obey the same symmetry, e.g. τx Û* τx = −Û for the impurity matrix, as for the bulk Hamiltonian. Therefore, the effective Hamiltonian can be used to calculate the topological invariant via Eq. (7), while the numerical effort is greatly reduced because of the small size of the corresponding matrices.
Appendix D: Topological gap
For the calculation of the topological gap, i.e. the minimal positive eigenvalue of the supercell Hamiltonian as a function of kx, we calculate the eigenvalues on a grid of a few kx points between 0 and π/a0, select the kx with the smallest positive eigenvalue, and then use an iterative bisection-bracketing procedure to find the smallest positive eigenvalue ∆topo. This procedure is implemented for both the supercell and the effective-Hamiltonian approach in order to investigate the reliability of the approximations made in deriving Eq. (5), see Supplemental Material. Finding very good agreement, we show in the main text only results obtained from the effective Hamiltonian, since the calculation of its eigenvalues is orders of magnitude faster once the expansion coefficients, Eqs. (B7) and (B8), for Heff(kx) have been calculated.
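A minimal version of this bracketing procedure, with SciPy's bounded scalar minimizer standing in for the bisection step; gap(kx) is a hypothetical callable returning the smallest positive eigenvalue at kx (here mocked by a toy function):

import numpy as np
from scipy.optimize import minimize_scalar

def topological_gap(gap, n_coarse=16, a0=1.0):
    """Minimize the smallest positive eigenvalue gap(kx) over kx in [0, pi/a0]:
    coarse grid scan to bracket the minimum, then local refinement."""
    ks = np.linspace(0.0, np.pi / a0, n_coarse)
    i = int(np.argmin([gap(k) for k in ks]))
    lo = ks[max(i - 1, 0)]
    hi = ks[min(i + 1, n_coarse - 1)]
    res = minimize_scalar(gap, bounds=(lo, hi), method="bounded")
    return res.fun

# Toy gap function standing in for the BdG spectrum; minimum is 0.05 at k = pi/2.
toy = lambda k: np.abs(0.3 * np.cos(2.0 * k) + 0.35)
print(topological_gap(toy))

Supplementary Figures S1 to S18
Supplementary Note S1. TIGHT-BINDING MODELS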
Single-band model
The normal-state dispersion of the single-band model on a square lattice is ξ(p) = −2t(cos px + cos py) − µ, where t is the nearest-neighbor hopping, which we use as the energy unit (t = 1), and µ ≈ −1.44t (EF = 4t + µ ≈ 2.56t) is the chemical potential, fixed such that the filling is one quarter, yielding the Fermi surface shown in Fig. S1(a). The superconducting order parameter takes the same form as in the main text. We choose ∆t = 0.2 throughout the discussion of the single-band model unless stated otherwise. In the single-band model with filling n = 0.25, the magnitude of the order parameter on the Fermi surface, |∆(p)|, is almost constant, as is the (inverse) Fermi velocity, Fig. S1(a). The density of states is fully suppressed within the energy interval |ω| ≲ 0.2t and exhibits coherence peaks at the gap maxima, see Fig. S1(b,c).
Multiband model for Sr2RuO4
The model for Sr2RuO4 is based on a tight-binding parametrization proposed earlier [1], with hoppings on a square lattice giving rise to a Hamiltonian H(p) = H0(p) + HSO, where H0(p) is the spin-independent part; the resulting band structure and Fermi surfaces are shown in Fig. S2. Note that the Fermi energy is EF ≈ 1.34t, where the nearest-neighbor hopping t is expected to be of the order of 100 meV [2].
For the calculation in real space with a system size of (Nx, Ny) lattice points in the x- and y-directions, the hopping elements and the contributions from the spin-orbit coupling are set up as sparse matrices, which are then used to calculate the eigenvalues or the topological invariant, see below.
The order parameter for Sr2RuO4 is taken to be of helical p-wave type, with the same parametrization of the higher harmonics of the pairing components as in Ref. 1, which we restate here for convenience. The order parameters in orbital space are given by the ansatz
$\mathbf d^a(\mathbf p) = \sum_j \left[\Delta^a_{x,j}\,g_{x,j}(\mathbf p)\,\mathbf e_y + \Delta^a_{y,j}\,g_{y,j}(\mathbf p)\,\mathbf e_x\right],$ (S6)
where a = xz, yz, xy is the orbital index, gy,j(px, py) = gx,j(py, px), and ∆zy_y,j = ∆zx_x,j; ∆zx_y,j = ∆zy_x,j = 0; ∆xy_x,j = ∆xy_y,j ∀ j, with the following parameters: (∆zx_x,1, ∆zx_x,2, ∆zx_x,3) = (0, 0.2, 1.0) and (∆xy_x,1, ∆xy_x,2, ∆xy_x,3) = (0.18, 0.15, −0.3). The magnitude of the order parameter is chosen to be small relative to the overall bandwidth of the (unrenormalized) electronic structure, ∆t = 0.1, such that the normal-state density of states is essentially flat within this energy scale, see Fig. S2(b,c). The DOS in the superconducting state nevertheless reflects the strongly anisotropic order parameter, with a small energy gap and coherence peaks, as shown in Fig. S2(c).
Supplementary Note S2. EFFECTIVE HAMILTONIAN FOR THE SINGLE-BAND MODEL WITHOUT ZEEMAN FIELD
In the next sections, we study the effective Hamiltonian for impurity chains along the x-direction, with the impurity positions parametrized as rn = (na, 0) (n ∈ Z), in chiral and helical p-wave superconductors. We start by studying the Hamiltonian in the absence of a Zeeman field, h = 0. In this case, we obtain for both chiral and helical p-wave superconductors the expressions (S14) and (S15).
Single impurity
In the case of a single impurity, the effective Hamiltonian reduces to Eq. (S17). By noticing that ∆(−p) = −∆(p), we see that the momentum integral of ∆(p)/(ξ²(p) + |∆(p)|²) vanishes, and therefore we obtain Eq. (S18). Since only the square of the order parameter enters, the single-impurity Hamiltonian is the same for chiral and helical p-wave superconductors, Eq. (S19).
Impurity chain in a chiral p-wave superconductor
In the case of a chiral p-wave superconductor without field, we can write the BdG Hamiltonian and the bulk Green function at zero energy in closed form, and we obtain the effective Hamiltonian with the effective dispersion and pairing given by Eqs. (S24) and (S25). By reordering the Nambu basis from ψ†k = (c†k↑, c†k↓, c−k↑, c−k↓) to ψ̃†k = (c†k↑, c−k↓, c†k↓, c−k↑), we can turn the Hamiltonian into block-diagonal form. Here τ̃i and σ̃i are Pauli matrices in the new basis, which still correspond to particle-hole and spin degrees of freedom, and with their help we can write H̃eff(kx) in block-diagonal form. Although the Hamiltonian is block-diagonal, the blocks are not independent degrees of freedom, because particle-hole symmetry connects them. Let us remind the reader that particle-hole symmetry is τxK in the original basis; in the new basis it reads τ̃xσ̃xK. The fact that the particle-hole symmetry is τ̃xσ̃x (times complex conjugation) means that for every positive-energy solution in the first block there is a negative-energy solution in the second block.
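This reordering is easy to verify numerically: build the 4×4 chiral p-wave BdG matrix at some momentum (here at zero Zeeman field, with illustrative numbers; the chiral order parameter with d along z pairs opposite spins, i.e. the pairing block is Delta_k sigma_x), permute the Nambu basis as above, and check that the off-diagonal 2×2 blocks vanish.

import numpy as np

# Basis (c_{k up}, c_{k dn}, c^+_{-k up}, c^+_{-k dn}); illustrative values.
xi = 0.7
dk = 0.2 * (np.sin(0.3) + 1j * np.sin(0.5))
H = np.zeros((4, 4), dtype=complex)
H[0, 0] = H[1, 1] = xi            # particle block
H[2, 2] = H[3, 3] = -xi           # hole block
H[0, 3] = H[1, 2] = dk            # pairing Delta_k sigma_x
H[3, 0] = H[2, 1] = np.conj(dk)

perm = [0, 3, 1, 2]               # new order: (c_up, c^+_dn, c_dn, c^+_up)
Ht = H[np.ix_(perm, perm)]
assert np.allclose(Ht[:2, 2:], 0) and np.allclose(Ht[2:, :2], 0)
print(np.round(Ht, 3))            # two decoupled 2x2 Kitaev-like blocks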
Impurity chain in a helical p-wave superconductor
In the case of a helical p-wave superconductor without Zeeman field, we can again write the BdG Hamiltonian and the bulk Green function in closed form, and we obtain the effective Hamiltonian as stated in the main text. Because of the common factor σ0, this Hamiltonian is already formally block-diagonal without a basis transformation. The important difference from the case of the chiral order parameter is that here particle-hole symmetry still operates within each block, since it retains the form σ0τx (times complex conjugation).
Supplementary Note S3. EFFECTIVE HAMILTONIAN IN THE PRESENCE OF THE ZEEMAN FIELD
In this section we include the Zeeman term HZ = h · σ in the Hamiltonian and study how the magnitude and direction of the Zeeman field h = (hx, hy, hz) influence the topological properties of the system. We assume that |h| ≪ ∆t, so that the Zeeman field does not influence the superconducting order parameter [3]. The effect of the Zeeman field can be analyzed numerically using the effective Hamiltonian (S10). Here we aim to give analytically transparent expressions for the effect of the Zeeman field.
The first simplification of the effective Hamiltonian Heff(kx) is obtained by noticing that τzσ0G̃⁻¹ appears as a common factor in Heff(kx); therefore its exact structure could matter for the topology only if the Zeeman field caused a gap closing in the bulk Hamiltonian HBdG(p). We consider only weak Zeeman fields, which do not cause gap closings in the bulk. Therefore, without modifying the topology of the effective Hamiltonian Heff(kx) for the impurity chain, we can evaluate G̃ in the absence of the Zeeman field.
Although we have not managed to evaluate GI(kx) analytically for a general direction of the Zeeman field, we have obtained approximate expressions for the effective Hamiltonian. We express these results using the matrix obtained for one block of the Hamiltonian in the previous section. The dependence of the effective dispersion ξeff(t, µ, ∆t, Vimp, a, kx) and pairing ∆eff(t, µ, ∆t, Vimp, a, kx) on the parameters t, µ, ∆t, Vimp, a and kx is determined by the equations described in the previous section. Notice that although the Hamiltonian is not necessarily exactly block-diagonal in the presence of the Zeeman field, we find that it can always be expressed approximately in block-diagonal form in a suitable basis. In the following, we directly state the results in the basis where the Hamiltonian is approximately block-diagonal.
A. Chiral p-wave superconductor
Zeeman field along z-direction
In the case of a chiral p-wave superconductor with the Zeeman field in the z-direction, h = (0, 0, hz), the effective model for the impurity chain is given by Eq. (S33). Here, the Zeeman field trivially shifts one block upwards in energy and the other block downwards. Thus, it cannot cause a transition to a topologically nontrivial state. However, the Zeeman field does cause a transition from a gapped into a gapless phase.
Zeeman field along x- or y-direction
In the case of a chiral p-wave superconductor with the Zeeman field in the x- or y-direction, h = (hx, 0, 0) or h = (0, hy, 0), the effective model for the impurity chain is given by Eq. (S34). The Zeeman term enters the effective Hamiltonian indirectly, via the renormalization of the chemical potential µ → µ ± hx(y) in the two blocks. This can cause topological phase transitions, but typically the impurity bound-state energy, and thus the impurity bands, depend only weakly on the chemical potential via the change of the (normal-state) density of states at the Fermi level (see Fig. S7). Thus, a relatively strong Zeeman field is needed to cause a transition, and the topological gap stays small.
B. Helical p-wave superconductor
Zeeman field along z-direction
In the case of a helical p-wave superconductor with the Zeeman field in the z-direction, h = (0, 0, hz), the effective model for the impurity chain is given by Eq. (S35). The topological phase diagram is therefore exactly the same as in the case of a chiral p-wave superconductor with the Zeeman field in the x- or y-direction.
Zeeman field along x-direction
In the case of a helical p-wave superconductor with the Zeeman field in the x-direction, h = (hx, 0, 0), the effective model for the impurity chain is given by Eq. (S36). Here we have made approximations during the derivation of the effective Hamiltonian, so the expression serves only as a good approximation for the calculation of the topological phase diagram. The Zeeman field in this case is very effective in causing topological phase transitions, as it acts almost directly on the impurity states. The magnitude of the effective Zeeman field heff,z is renormalized from the bare value of the Zeeman field, heff,z ≈ hx/2.
Zeeman field along y-direction
In the case of a helical p-wave superconductor with the Zeeman field in the y-direction, h = (0, hy, 0), the effective model for the impurity chain is given by Eq. (S37). Here we have also made some approximations. Within these approximations, the effect of the Zeeman field is similar to that in a chiral p-wave superconductor with the Zeeman field along the z-direction: it just shifts the blocks in opposite directions in energy and cannot induce a topological transition. However, it causes a transition from a gapped into a gapless phase. The magnitude of the effective Zeeman field heff,y is again renormalized from the bare value hy.
Supplementary Note S4. TOPOLOGICAL PHASE TRANSITION WITHOUT ZEEMAN FIELD
In the case of helical p-wave superconductivity without a Zeeman field, the system satisfies time-reversal symmetry. Thus, it belongs to class DIII of the Altland-Zirnbauer classification scheme, allowing for the possibility of a topological phase transition as a function of the impurity strength. In the topologically nontrivial phase there exist two degenerate Majorana zero modes at each end of the impurity chain. In the case of chiral p-wave superconductivity without a Zeeman field, the system supports a spin-rotation symmetry that allows us to block-diagonalize the Hamiltonian. Thus, also in this case the system can support a topologically nontrivial phase with two degenerate Majorana zero modes at each end of the chain. (To be more precise, the effective Hamiltonian in both cases also supports a chiral symmetry, allowing an infinite number of topologically distinct phases to appear, but the Majorana end modes always appear in pairs in the absence of the Zeeman field.) Because the Majorana end modes appear in pairs, they are not as useful for topological quantum computing. However, we can check the numerical implementation by calculating the corresponding DIII invariant as described in Ref. 4, which is based on the product of Pfaffians at the time-reversal-invariant momenta. We therefore tune the impurity band through the chemical potential by varying the impurity potential Vimp from below V*imp to above that value. Indeed, the topological invariant as calculated numerically from the Pfaffian changes sign when the energy of the bound state e0 hits zero. When looking at the energy bands as a function of kx, one can also observe that an impurity band is pushed through zero in this case. Due to time-reversal symmetry, the eigenvalues still come in pairs; therefore the class-D invariant (as described in the main text) stays at +1, as expected, see Fig. S3.
Supplementary Note S5. NUMERICAL RESULTS FOR THE ENERGY OF THE IMPURITY BOUND STATE
The effective Hamiltonian, Eq. (S10), describes the topological phase transitions (energy-gap closings) exactly, provided that the Brillouin-zone integrals are evaluated with sufficient accuracy and that all longer-range hoppings and pairing amplitudes, given for example in Eqs. (S24) and (S25), are included. These longer-range terms are expected to decay exponentially with distance, because the bulk Hamiltonian is fully gapped. To verify this, we have examined the norm of G(0, rn) as a function of the distance d = |rn| for the various models. The exponential decay is demonstrated in Fig. S4 for all models considered, indicating that the errors are exponentially small if the expansion is truncated at finite rn.
We have also numerically studied the tight-binding models using finite-size supercells with the impurity in the center. This approach leads to finite-size effects in the energy, of the order of the bandwidth divided by the number of quantum states. To estimate the required system sizes, we check the finite-size effects by varying the system size and calculating the impurity potential V*imp at which the bound state robustly crosses zero energy due to a change of a topological invariant, as described in detail in Ref. 5 (see Fig. S5). Plotting this quantity as a function of the chemical potential µ, one can easily identify the effects of the finite level spacing as small oscillations. These can be seen in Fig. S5(a) for a system size of 15x15 elementary cells (single-band model with isotropic order parameter), while for 25x25 elementary cells these effects are no longer present. According to Eq. (S19), the bound-state energy is the same for chiral and helical p-wave order parameters, because only the absolute magnitude of the gap enters the calculation; this is also verified numerically in Fig. S5(b-c). Note further that the curves V*imp(µ) are very flat (except close to the van Hove singularity at µ = 0). As explained above, this property makes it very difficult to control the topological phase transition by a Zeeman field in the z-direction in the case of a helical p-wave order parameter, or by an in-plane field in the case of a chiral p-wave order parameter.
To compare our analytical result [Eq. (S19)] with the numerical implementation, we calculate the bound-state energy at a fixed filling of n = 0.25 as a function of the impurity potential Vimp and show the result in Fig. S6. The zero-energy crossing is captured exactly, while there are small deviations at nonzero energies arising from the expansion of the Green function in powers of the energy. Finite-size effects of the numerical implementation are clearly seen when plotting the impurity bound-state energy as a function of the chemical potential (see Fig. S7). Note again that the bound-state energy depends only weakly on the chemical potential.
Supplementary Note S6. NUMERICAL RESULTS FOR IMPURITY BANDS IN THE PRESENCE OF ZEEMAN FIELD
In Fig. S8 we show the impurity bands for the chiral p-wave superconductor in the presence of the Zeeman field. If the Zeeman field is applied in-plane, it only weakly breaks the degeneracy of the impurity bands [see Fig. S8(a),(b)], as expected from Eq. (S34). Therefore, a strong field is required to induce a topological phase transition. If the Zeeman field is applied along the z-direction, the degeneracy of the impurity bands is strongly broken, so that one band is shifted up and the other down in energy [Fig. S8(c)], such that the bands cross, yielding a gapless system, as expected from Eq. (S33).
In Fig. S9 we show the impurity bands for the helical p-wave superconductor in the presence of the Zeeman field. If the Zeeman field is applied in the x-direction, the degeneracy of the impurity bands is strongly broken, but apart from the topological phase-transition point the system remains gapped [Fig. S9(a)], as expected from Eq. (S36). If the Zeeman field is applied along the y-direction, the bands are just shifted in energy and the system becomes gapless [Fig. S9(b)], as expected from Eq. (S37). Finally, a field in the z-direction affects the bands only very weakly [Fig. S9(c)], so that a strong field is required to induce a topological phase transition, as expected from Eq. (S35). We have studied the topological phase diagram both using the full tight-binding Hamiltonian and using the effective Hamiltonian (Figs. S10, S11, S12 and S13). Figs. S10 and S11 show Q for the chiral p-wave superconductor as a function of |h| and Vimp, as obtained from the effective Hamiltonian and the full tight-binding Hamiltonian, respectively. The results are in excellent agreement with each other. The in-plane fields can cause a phase transition to a topologically nontrivial phase with Q = −1 [Figs. S10(a),(b) and S11(a),(b)], but this requires large fields, and the magnitude of the topological gap remains relatively small [Fig. S14(d),(e)]. A Zeeman field applied in the z-direction can make Q = −1 [Figs. S10(c) and S11(c)], but in this case the system is gapless [Fig. S14(f)].
Figs. S12 and S13 show similar phase diagrams for the helical p-wave superconductor. The Zeeman field in the x-direction is very effective in causing a transition to a topologically nontrivial phase with Q = −1 [Figs. S12(a) and S13(a)], leading to a large topological energy gap [Fig. S14(a)]. A Zeeman field applied in the y-direction can make Q = −1 [Figs. S12(b) and S13(b)], but in this case the system is gapless [Fig. S14(b)]. Finally, the Zeeman field applied in the z-direction can cause a transition to a topologically nontrivial phase with Q = −1 [Figs. S12(c) and S13(c)], but this requires large fields, and the magnitude of the topological gap remains relatively small [Fig. S14(c)].
The dependence of the topological phase diagram on the direction of the Zeeman field [Fig. S15] is strikingly different for helical and chiral p-wave superconductors. Thus, it can be used as a diagnostic tool to determine the order-parameter symmetry of triplet superconductors.
In Fig. S16 we show that the topologically nontrivial phase can also be reached by placing the impurities at a step edge on the surface of the system. To illustrate the tunability of the phase boundaries, we consider a Y-junction geometry of two curved chains with different lattice constants (see Fig. S17), where Majorana zero modes appear at the interfaces of topologically trivial (black) and nontrivial (red) regions, which can be shifted by controlling the magnetic-field direction. The local phase diagrams at the symmetrically placed example points P1 and P2 are shown in Fig. S17(c) and (d), respectively, to demonstrate that for a field with azimuthal angle φ = 0 and polar angle θ = 0.7π, point P1 is in the topological phase while P2, on the chain with the different lattice constant, is in the trivial phase. Note that the phase diagrams (e,f) are in principle shifted by π/2, and additionally the topological phase is larger in (e) due to the different choice of the lattice constant. By rotating the Zeeman field such that it follows the trajectory drawn as the red path in panels (e) and (f), one can in principle achieve an interesting movement of the phase boundaries. First the phase boundary labeled 2 moves all the way down and boundary 3 joins at the junction (panel (b)). Rotating the field further, one can tune the system such that a segment on the left is in the trivial state and boundary 3 moves to the left. Now one can move boundary 2 to the right arm of the Y-junction before boundary 3 moves back to the junction. In summary, we have executed a full circle in parameter space, i.e. the magnetic field at the end has the same value as at the beginning. We note that the trajectories in the phase diagrams in Fig. S17(e,f) pass very close to the phase boundaries (dashed white line), i.e. the topological gap turns out to be very small, so that the MZMs are less localized and tend to overlap. The experimental realization of this protocol to exchange MZMs is therefore more than challenging. More sophisticated trajectories for the external field h might help somewhat; we have not attempted to optimize this procedure, but instead propose to use local magnets to overcome this difficulty.
The occupation operators of these fermions are given by Eq. (S40), and the many-particle ground states can be defined accordingly (c|00⟩ = 0, d|00⟩ = 0), Eq. (S41). In order to describe the state after the fusion, we introduce another set of fermionic operators. The degenerate many-particle ground states can be written using these fermion operators as (e|00⟩e,f = 0, f|00⟩e,f = 0): |00⟩e,f, e†|00⟩e,f = |10⟩e,f, f†|00⟩e,f = |01⟩e,f, e†f†|00⟩e,f = |11⟩e,f. (S44)
FIG. S17 (caption fragment): The differences in the two phase diagrams are due to the different tangential directions of the chains at these points and the different lattice constants. The magnitude of the topological gap (not shown) remains small along the trajectory; thus the MZMs will almost certainly overlap. In the calculation of the topological phase diagrams we have assumed lattice constants (e) a0 ≈ 7ξ and (f) a0 ≈ 6.2ξ. Other parameters are identical to the ones in the main text, ∆0/EF ≈ 0.08.
In order to find the basis transformation between the states (S41) and (S44), we observe that the unitary transformation between the two bases does not change the total parity. Therefore, using Eqs. (S45), we find the basis transformation between the states (S41) and (S44). From these expressions it is clear that when the MZMs 2 and 3 are fused, corresponding to a projective measurement of f†f, the measurement outcomes 0 and 1 have equal probabilities. After the measurement, the system has equal probabilities of being in the two different eigenstates of e†e. As discussed in the main text, the MZMs can be moved by varying the direction and the magnitude of the magnetic field. Moreover, the parity of the Majoranas can be measured as soon as the MZMs 2 and 3 hybridize and acquire a charge [6]. These are the key ingredients for performing fusion and braiding experiments. The curved-chain geometry can also be generalized so that braiding and more complicated manipulations of MZMs can be performed along similar lines as proposed in Ref. 7. However, to preserve the quantum information stored in the MZMs, the following conditions have to be satisfied [8]: (i) The time scale of operations t0 has to be much shorter than the tunneling time t_tunneling ∝ e^(L/ζM) and the thermal excitation time t_thermal ∝ e^(Egap/kBT), where L is the distance between spatially separated Majoranas (the ones which are not fused intentionally), ζM is the localization length of the Majoranas, Egap is the minimum excitation energy of the quasiparticles (other than the MZMs) during the braiding cycle, kB is the Boltzmann constant and T is the temperature. (ii) The time scale t0 has to be much shorter than the quasiparticle poisoning time t_poisoning (which is typically determined by non-equilibrium quasiparticles). (iii) The time scale t0 should be long compared to ħ/Egap, to avoid dynamical excitations of the quasiparticles.
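The equal-probability statement above can be checked directly in a four-Majorana toy representation (Jordan-Wigner on two qubits; an independent numerical sketch, not part of the paper): prepare |00⟩ as the state annihilated by both c and d, then evaluate the occupation of the fused mode f.

import numpy as np
from scipy.linalg import null_space

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
I = np.eye(2, dtype=complex)

# Jordan-Wigner Majoranas gamma_1..gamma_4 on two qubits
g1, g2, g3, g4 = np.kron(X, I), np.kron(Y, I), np.kron(Z, X), np.kron(Z, Y)

c = 0.5 * (g1 + 1j * g2)             # outer pairing: c from (gamma1, gamma2)
d = 0.5 * (g3 + 1j * g4)             #                d from (gamma3, gamma4)
f = 0.5 * (g2 + 1j * g3)             # inner pairing after fusing gamma2, gamma3

psi = null_space(np.vstack([c, d]))  # |00>: annihilated by both c and d
psi = psi[:, 0]
p1 = np.real(psi.conj() @ (f.conj().T @ f) @ psi)
print("P(n_f = 1) =", p1)            # -> 0.5: outcomes 0 and 1 equally likely
assert np.isclose(p1, 0.5)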
The curved-chain geometry leads to slowly varying parameters along the chain, so that in addition to the MZMs there exist low-energy Andreev bound states at the domain wall between nontrivial and trivial regions. This means that Egap is much smaller than the topological gap ∆topo that can be achieved in a linear chain. Thus, also the localization length ζM in the curved chain is much longer than the corresponding length scale ζ of the linear chain discussed in the main text. Therefore, t_tunneling and t_thermal are much shorter in the curved-chain geometry than in the linear chain, so that it is challenging to satisfy the requirements ħ/Egap ≪ t0 ≪ t_tunneling, t_thermal. We also point out that it is difficult to vary the external magnetic field fast, so that it is also difficult to satisfy the requirement t0 ≪ t_poisoning. In the next section we discuss how it is possible to overcome these problems by using small magnets whose magnetization directions are controlled rapidly using spintronic techniques [9][10][11][12][13][14]. In order to probe the non-Abelian statistics of the MZMs, we propose a tri-junction geometry (see Fig. S18), where Majorana zero modes appear at the interfaces of topologically nontrivial (red) and trivial (black) regions. The topology of each segment i = 1, 2, 3 of the tri-junction (see Fig. S18) is controlled by placing two magnets with magnetizations MiA and MiB on the two sides of the chain. The magnets should be placed within the superconducting coherence length of the impurity sites (in the case of Sr2RuO4 this is approximately ∼70 nm), and they should be close enough to the surface of the superconductor to cause a magnetic exchange field via the magnetic proximity effect. If the magnetizations MiA and MiB point in the same direction, segment i realizes approximately an impurity chain in a homogeneous Zeeman field. On the other hand, if MiA and MiB point in opposite directions, their effects on the impurity bound states cancel, so that to a good approximation we obtain the Hamiltonian in the absence of a Zeeman field. Thus, by designing the magnets so that the magnetizations MiA and MiB have an easy-axis anisotropy along the direction of segment i, the results obtained in the previous sections demonstrate that we can choose the magnitudes of the exchange fields such that parallel (antiparallel) magnetizations MiA and MiB lead to a topologically nontrivial (trivial) phase with a large energy gap Egap and a short MZM localization length ζM. Thus, the MZMs can be robustly manipulated by switching the magnetization directions rapidly using spintronic techniques [9][10][11][12][13][14].
To perform an exchange of MZMs we utilize the anyon teleportation scheme [8,15,16], in which the exchange of MZMs is obtained via a sequence of projective measurements, as shown in Fig. S18. According to the universal non-Abelian braiding statistics of the MZMs, the exchange of MZMs γ1 and γ2 is described by applying the unitary operator U12 = (1 − γ1γ2)/√2 to the state of the system [8]. The teleportation scheme is based on the decomposition [8,15,16]
$\Pi_{03}\,\Pi_{02}\,\Pi_{01}\,\Pi_{03} = \tfrac{1}{8}\,(1-\gamma_1\gamma_2)(1+i\gamma_0\gamma_3) = \tfrac{1}{2\sqrt{2}}\,U_{12}\,\Pi_{03},$ (S48)
where each operator
$\Pi_{kl} = \tfrac{1}{2}\,(1 + i\gamma_k\gamma_l)$ (S49)
describes a projective measurement of the parity Pkl = iγkγl of the MZMs γk and γl with measurement outcome +1. Therefore, based on Eqs. (S48) and (S49), the exchange of γ1 and γ2 can be implemented by a sequence of four parity measurements. Each of the parity operators iγ0γi can be measured by rotating the magnetizations MiA and MiB from the parallel to the antiparallel configuration, resulting in a hybridization of γ0 and γi, so that their parity can be read out using a charge sensor.
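The decomposition (S48) can be verified numerically in a four-Majorana toy representation (an independent sketch; the Jordan-Wigner matrices below realize γ0, ..., γ3, and the identity is basis-independent, so the 4×4 check suffices):

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
I = np.eye(2, dtype=complex)

g = [np.kron(X, I), np.kron(Y, I), np.kron(Z, X), np.kron(Z, Y)]  # gamma_0..3

def P(k, l):
    """Projector onto the parity sector i gamma_k gamma_l = +1."""
    return 0.5 * (np.eye(4) + 1j * g[k] @ g[l])

lhs = P(0, 3) @ P(0, 2) @ P(0, 1) @ P(0, 3)
U12 = (np.eye(4) - g[1] @ g[2]) / np.sqrt(2)   # exchange of gamma_1, gamma_2
rhs = U12 @ P(0, 3) / (2.0 * np.sqrt(2.0))
assert np.allclose(lhs, rhs)
print("measurement sequence = braid times projector: verified")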
The sequence reads as follows (see Fig. S18): (i) Perform a measurement of the parity iγ0γ3 and continue if the outcome of the measurement is +1. (ii) Perform a measurement of the parity iγ0γ1 and continue if the outcome of the measurement is +1. (iii) Perform a measurement of the parity iγ0γ2 and continue if the outcome of the measurement is +1. (iv) Perform a measurement of the parity iγ0γ3. The exchange of Majoranas γ1 and γ2 is successfully executed if the outcome of the measurement is +1. In Fig. S18 we illustrate how all the steps of this process can be realized with the help of the small magnets. This exchange operation has a probabilistic element, because in each projective measurement the outcome can also be −1. Therefore, one typically needs to repeat the process several times to successfully execute the exchange operation. Finally, we point out that the circuit shown in Fig. S18 can also be utilized to perform the fusion experiment discussed in the main text without the obstacles mentioned in the previous section. Moreover, the ideas presented in this section can be generalized to construct a fully scalable network of impurity chains and small magnets, in which the MZMs are manipulated by spintronic means. | 2021-02-26T02:15:30.141Z | 2021-02-24T00:00:00.000 | {
"year": 2021,
"sha1": "a90197b2c18bcac57c3c2ff8f64e6255ed3cb2b1",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevResearch.3.033049",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "a90197b2c18bcac57c3c2ff8f64e6255ed3cb2b1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
210869896 | pes2o/s2orc | v3-fos-license | A rare case of abdominal adenoid basal cell carcinoma in a patient with a history of radiation therapy
Basal cell carcinoma (BCC) is the most common skin cancer and its incidence is steadily increasing. Prior radiation therapy is one of the most important risk factors for BCC. Although the mechanism remains undefined, long-term studies have shown that people exposed to radiation have an increased risk of BCC. Despite the fact that BCC occurs most frequently in sun-exposed areas of the body, patients with a history of radiation therapy have an increased risk of BCC in areas previously exposed to radiation. Here, we report a case of adenoid BCC on the abdomen in a 67-year-old woman after radiation therapy post-hysterectomy.
INTRODUCTION
Basal cell carcinoma (BCC) is the most common skin cancer, with over 2,000,000 cases diagnosed annually [1]. BCCs have three well-recognized growth patterns: nodular, superficial, and morpheaform [1,2]. Histopathological variants of nodular BCC include the solid (primordial), keratotic (pilar), cystic, and adenoid types. Adenoid BCC (ABCC) is a rare histopathological variant, with a reported incidence of seven out of 103 BCC cases [1][2][3]. ABCC can appear as pigmented or nonpigmented nodules or ulcers, without a predilection for any particular site [4,5]. Because the major risk factor for BCC is exposure to ultraviolet A and B radiation, BCC occurs most commonly in sun-exposed areas such as the face and neck (80%-90%), with the remaining 10%-15% of cases occurring in other areas, such as the abdomen, armpit, and chest [4,6]. However, in patients with a history of radiation treatment, the prevalence of BCC is higher than in patients without such a history [7][8][9]. Moreover, several reports have characterized radiogenic BCC as generally more aggressive, more difficult to excise completely, and more likely to recur than non-radiogenic BCC [9,10]. Therefore, it is important to conduct a close physical examination, especially of areas previously exposed to radiation, in patients with a history of radiation treatment [7][8][9][11]. Here, we report a case of ABCC with unique histological and morphological features in a patient with a history of radiation therapy.
CASE
A 67-year-old Asian woman presented with a lesion in the abdominal area, which had first appeared 5 years earlier as a small protruding pigmented mass. The patient stated that the lesion initially appeared as a pigmented mass less than 1 cm in diameter near the navel and grew slowly over 5 years. At presentation, the lesion measured 10.5 × 7.1 × 3.4 cm, with a firm papillary shape and a brownish-black crust; additionally, the lesion was painless and pedunculated (Fig. 1). The patient had a history of leiomyosarcoma, for which she had undergone a hysterectomy in 1999, with metastasis to the lung, for which she underwent a wedge resection procedure in 2001. Radiation therapy was performed post-hysterectomy in 1999, with regular follow-up. Since the wedge resection procedure in 2001, the patient had not been diagnosed with any tumor other than the abdominal mass. The patient had no personal or family history of skin cancer or other skin diseases. Laboratory studies, including a complete blood cell count, blood chemistry profile, tests for sexually transmitted infections, urinalysis, chest X-ray, and electrocardiogram, were negative or within normal limits. ABCC was diagnosed by skin biopsy, and magnetic resonance imaging confirmed that the ABCC was confined to the skin and the superficial subcutaneous layer of the lower abdominal wall (Fig. 2). The resection margins were determined according to international guidelines [12,13]. Intraoperatively, an elliptical wide excision with safety margins of 2.0 cm was performed under general anesthesia (Fig. 3). The wound was closed using an advancement flap technique consisting of the elevation of cephalic and caudal skin flaps. Complete excision with a negative margin was confirmed by frozen-section examination in the operating room, and the final histopathological findings also confirmed this result. As positron emission tomography/computed tomography showed hypermetabolic lymph nodes in both inguinal areas, bilateral inguinal lymph node biopsies were performed. The histopathological examination showed proliferation of basaloid cells extending into the dermis with cyst-like spaces (Fig. 4A).
Fig. 1. A localized, firm, and tender brownish-black papillary mass in the abdomen measuring 10.5 × 7.1 × 3.4 cm.
Fig. 3. Wide excision specimen of the adenoid basal cell carcinoma at surgery.
Peripheral palisading was prominent in many of the cellular aggregates. In the center of the structures, many basaloid cells were observed, with dark-staining nuclei and little cytoplasm. The entrapment of mucinous stroma between the anastomosing strands of cells and within cell islands produced the appearance of gland-like or tubular structures (Fig. 4B). Both the clinical and the histopathological features of the carcinoma were consistent with adenoid BCC. There was no lymphovascular invasion within the peripheral or deep resection margins, and the depth of involvement was 1.5 cm. On immunohistochemical analysis, the atypical cells stained positive for Bcl-2 and c-kit and negative for HMB45, S-100, and epithelial membrane antigen. The surgical wound healed well without complications (Fig. 5). Upon consultation with an oncologist, treatment was concluded with the surgical resection, and regular follow-up was initiated.
DISCUSSION
BCC represents approximately 70% of all malignant diseases of the skin, 75% of nonmelanoma skin cancers, and 65% of epithelial tumors [1][2][3]. A recent Korean study reported that from 1999-2002 to 2011-2014, the incidence rate of BCC increased notably, from 1.18 to 2.35 per 100,000 people in men and from 0.98 to 2.25 per 100,000 people in women [6]. BCCs occur most commonly in sun-exposed areas of the skin, such as the face and neck (80%-90%); the remaining 10%-15% of cases occur in areas not exposed to sunlight, such as the abdomen, armpit, and chest [4,6]. The histological diagnosis and classification of BCC are essential for evaluating the risk of recurrence and comparing treatment outcomes [1][2][3]. Histologically, BCC can be classified as nodulocystic (35.4%), mixed (30.1%), infiltrative (9.3%), superficial (6.7%), micronodular (6.2%), adenoid (5.9%), metatypical (4.0%), morpheaform (2.1%), or fibroepithelioma (0.3%) [3][4][5][14]. Of these, ABCC is a rare histopathological subtype that exhibits tubular differentiation and has not been reported to occur preferentially at specific sites [4,5,14]. Histopathologically, this rare variant of BCC consists of small uniform cells with peripheral nuclear palisading, arranged in nests and cords with intertwining strands [1,5]. The nuclei may be arranged radially around islands of connective tissue, resulting in a tumor with a lace-like pattern. A colloidal eosinophilic substance or amorphous granular material is sometimes observed within the lumen of the tubular structures. However, studies have not yet determined the precise composition of this material or the exact details of the secretory activity of the lumen-lining cells [1,5]. Although exposure to ultraviolet light is considered the single most important risk factor for BCC, radiation therapy, exposure to arsenic or coal tar derivatives, scars, burn sites, chronic inflammation, ulcers, and immune deficiencies are also associated with an increased risk of BCC [1,2]. Radiation therapy for leiomyosarcoma may have been the pertinent risk factor for BCC in the patient described in the current study. A cohort study conducted in 2011 using data from the U.S. Surveillance, Epidemiology, and End Results cancer registries reported that approximately 8% of all second solid tumors in adult cancer survivors may be related to radiation therapy for the original cancer [11]. Evidence of radiation-induced skin cancers has been reported in uranium miners, radium dial painters, radiologists, and users of early World War II-era high-voltage cathode-ray tube oscilloscopes, as well as in patients treated with X-rays for childhood acne, tinea capitis, or thymic enlargement [7,8]. Possible mechanisms of the development of radiation-associated BCC include radiation-related DNA damage, which induces an intricate range of cellular responses involving the cell-signaling pathways implicated in the progression of radiation-induced tumors over time, as well as other factors that may make patients more susceptible to radiation-induced BCCs [9].
Heckmann et al. [15] found that several facial areas, such as the medial quadrant of the orbit, showed high frequencies of BCC despite low ultraviolet light exposure; these areas were found to be characterized by a concave shape, reduced skin tension, and the presence of marked skin folds. Heckmann et al. further suggested that the disturbed cell-matrix interactions at these sites may contribute to the development of BCCs. This aligns with the morphological features of our patient, as her abdomen displayed folds (Fig. 1).
As shown in our case, BCC may occur in sun-protected areas of the body, which can be explained by prior radiotherapy and the morphological features of the area. Although BCCs tend to be clinically stable, the lesions may occasionally grow aggressively and infiltrate the surrounding structures, as in our patient [1][2][3]. Risk factors for recurrence include the location and size of the tumor, the histological tumor type, and the treatment strategies used. In patients with a history of radiotherapy, long-standing lesions that are located in the center of the face or on the ear, exceed 2 cm in size, or are histologically infiltrative, micronodular, or morpheaform have been found to carry a higher risk of recurrence [1,2,4]. Infiltrative and micronodular variants such as ABCC are more aggressive and are associated with a higher recurrence risk [1,4,5].
This report highlights the importance of performing a complete cutaneous examination, including sun-protected sites, to identify lesions in previously irradiated areas of the skin, especially in patients with a history of radiation therapy. Regardless of pigmentation, any small nodular lesion can be a candidate for BCC. Early detection and appropriate treatment can reduce the morbidity of this condition.
Conflict of interest
No potential conflict of interest relevant to this article was reported.
Ethical approval
The study was performed in accordance with the principles of the Declaration of Helsinki. Written informed consent was obtained.
Patient consent
The patient provided written informed consent for the publication and the use of her images. | 2020-01-23T09:05:32.372Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "41cbbaaae9366d8b3c973097d7732e447f15919f",
"oa_license": "CCBYNC",
"oa_url": "http://www.e-aps.org/upload/pdf/aps-2019-01081.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7896533861d43a1232852be6e040884dc4986483",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221320093 | pes2o/s2orc | v3-fos-license | Safety and Efficacy of Stem Cell Therapy in Patients With Ischemic Stroke
Stem cell therapy is emerging as a promising treatment strategy for patients with stroke. While there are established modes of treatment for stroke such as thrombolysis and endovascular intervention, many stroke patients are left with major residual deficits or die. The use of stem cells to treat stroke has been found to be beneficial in animal models, but robust evidence for the same in humans is still lacking. We reviewed 13 clinical trials of stem cell therapy in stroke patients conducted between 2014 and 2020, identified by searching the PubMed database and the clinical trial registry (www.clinicaltrials.gov). We aimed to assess the safety and efficacy of stem cell treatment in the stroke patients who participated in the trials. Quality assessment of the clinical trials revealed sub-optimal scores. We found mixed results regarding the efficacy of stem cells in the treatment of ischemic stroke, although we could not perform a quantitative analysis of the effect outcomes. Assessment of safety revealed promising results, as there were only minor side effects related to cell therapy. Although stem cell therapy seems to be a promising strategy for treating stroke patients in the future, we concluded that the field needs more evidence regarding the safety and efficacy of stem cells in stroke patients before they can be used in the clinic.
Introduction And Background
Stroke is the second leading cause of death globally (~5.5 million deaths) and the second most common cause of disability-adjusted life years lost worldwide (~116.4 million) [1,2]. Stroke prevalence in adults is 2.9% in the United States, with about 795,000 people experiencing a new or recurrent stroke each year [3]. Although currently recommended therapies such as pharmacological thrombolysis, endovascular intervention, and rehabilitative strategies have proven to be beneficial, many stroke patients continue to have residual deficits despite treatment [4]. Regenerative therapy in the form of stem cells (neural stem cells, hematopoietic stem cells, and mesenchymal cells) is emerging as a promising treatment strategy to prevent stroke-related tissue damage, promote repair of damaged tissues, and enhance functional recovery [5,6].
Stem cells (SCs) act through mechanisms that involve integration into the host brain to replace cells within the damaged tissue; neuroprotection, involving downregulation of the inflammatory and immune response and inhibition of apoptosis in the transplanted host; and enhancement of the endogenous repair process via vascular regeneration, induction of host brain plasticity, and recruitment of endogenous progenitors [6]. As stroke involves the loss of multiple cell types, including blood vessels, astrocytes, neurons, and oligodendrocytes, the neuroprotective and restorative properties of stem cell-based therapy point to a promising future in the treatment of stroke [7]. Because myocardial infarction (MI) and stroke share a similar pathophysiology and treatment strategy, including thrombolysis, the experience with cell-based therapy in MI can also serve as a guide for its use in stroke management [7].
There have been multiple preclinical and clinical studies aiming to establish the safety and efficacy of SCs in stroke patients. Although many preclinical studies have reported promising outcomes of SC therapy, complete success has not been established through human clinical trials [5]. While some of the trials in humans have reported neutral outcomes and minor adverse effects related to the treatment, most of them have indicated that SCs are safe and improve the functional outcome. However, multiple factors may correlate with the safety and efficacy of the SCs, including host factors, the type and source of SCs, the dose and route of delivery, the time from stroke, and the measures of safety and outcome [5].
Sufficient evidence to establish the safety and efficacy of SC therapy in ischemic stroke is still lacking. It is urgent to explore the outcome of the therapy and its correlates to assess the possibility of bringing it to the clinic in the near future. Although several clinical trials of SC therapy in ischemic stroke conducted in the past have been reviewed, we could not find reviews of the studies conducted in recent years. The aim of this study is to highlight the findings of recent clinical trials published in the last six years to further establish the safety and efficacy of SC therapies in patients with ischemic stroke and to inform future research.
Search Strategy
We manually searched for clinical trials published between 2014 and 2020 using the PubMed database with the search strategy: "Stem cells or Stem cell therapy" and "Stroke or Middle cerebral artery or MCA or anterior cerebral artery or ACA or ischemic stroke" and "human" and "clinical trial". We also searched http://clinicaltrials.gov to obtain additional information about the studies.
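This search can be reproduced programmatically. The snippet below is a minimal sketch using Biopython's Entrez module, assuming Biopython is installed; the e-mail address and the retmax value are illustrative placeholders, not part of the original protocol.

```python
# Minimal sketch of the PubMed search described above, using Biopython's
# Entrez interface. The e-mail address and retmax are placeholders.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI requires a contact address

query = (
    '("Stem cells" OR "Stem cell therapy") AND '
    '("Stroke" OR "Middle cerebral artery" OR "MCA" OR '
    '"anterior cerebral artery" OR "ACA" OR "ischemic stroke") AND '
    'human AND "clinical trial"'
)

# Restrict publication dates to January 2014 - January 2020.
handle = Entrez.esearch(
    db="pubmed", term=query, datetype="pdat",
    mindate="2014/01/01", maxdate="2020/01/31", retmax=500,
)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records found")
print(record["IdList"][:10])  # first ten PubMed IDs
```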
Eligibility Criteria
We included human clinical trials (controlled and noncontrolled, randomized and nonrandomized, single-center and multicenter, open-label and blinded, phase I and II) published in English between January 2014 and January 2020 that examined the safety and/or efficacy of SCs administered to patients with acute or chronic ischemic stroke via any route (intravenous, intra-arterial, intrathecal, or direct transplantation). Trials using any type of stem cells (mesenchymal, bone marrow-derived, umbilical cord-derived, neural, or hematopoietic), with or without manipulation, were included. Patients with hemorrhagic stroke and preclinical studies were excluded.
Data Extraction
We extracted data including date of publication, study design, location, number of participants in each trial, baseline inclusion criteria (at least one), the demographic profile of participants (gender, mean age), stem cell type, dose, route and time from onset of stroke to delivery, stroke type, and outcome (adverse events and efficacy). When outcome measures were reported at more than one time point, the latest measurement was included. In the case of controlled trials, the baseline characteristics of controls were similar to those of the treatment group, so we reported the baseline data only for the treatment group. Outcome measures for the treatment group were reported in comparison with those of controls. If any additional treatment was provided along with stem cell therapy, only the stem cell therapy was considered. Two intervention groups (excluding the control group) in the same study were reported separately. We extracted quantitative data from all available sources in each paper, including text and figures. Whenever this was not possible, for instance in the case of small graphs, we reported only the qualitative data. We developed a data abstraction spreadsheet using Microsoft Excel version 2016 (Microsoft Corp., Redmond, WA, USA) to organize the data. Two authors independently performed the quality assessment of the selected clinical trials using the Physiotherapy Evidence Database (PEDro) scale, and any difference in assessment was brought into concordance by discussion with the third researcher.
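The independent scoring and reconciliation step can be organized with a short script such as the sketch below; the trial labels and scores are hypothetical placeholders, not the actual PEDro ratings.

```python
# Illustrative reconciliation of two raters' independent PEDro scores
# (0-10 per trial). Trial labels and scores are hypothetical placeholders.
rater_a = {"Trial-01": 6, "Trial-02": 4, "Trial-03": 8, "Trial-04": 5}
rater_b = {"Trial-01": 6, "Trial-02": 5, "Trial-03": 8, "Trial-04": 5}

# Trials on which the two raters disagree are flagged for discussion
# with the third researcher.
disagreements = [t for t in rater_a if rater_a[t] != rater_b[t]]
print("To resolve by discussion:", disagreements)
```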
Results
We identified 237 articles from the database search. Screening by reading the abstracts resulted in the inclusion of 71 studies; articles were excluded because of different fields of study, different settings, nonrelevance, or different study designs. We found the full text for only 61 articles. Thirteen articles [8][9][10][11][12][13][14][15][16][17][18][19][20] that met the inclusion criteria were finally selected for review. The selection process is detailed in Figure 1. Seven [9,11,12,14,16,17,20] of the thirteen studies had a control arm, one of which was a historical control. Quality assessment by the PEDro scale ( Table 1) revealed scores ranging from four to 10 (median: 5.5 [interquartile range: 3]). No study was excluded after quality assessment.
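For reference, the reported summary statistics can be computed as follows; the score list below is a hypothetical stand-in for the thirteen actual PEDro ratings, used only to illustrate the calculation.

```python
# Median and interquartile range of the PEDro scores. The thirteen
# values below are hypothetical placeholders, not the actual ratings.
import numpy as np

scores = np.array([4, 4, 5, 5, 5, 5, 6, 6, 7, 7, 8, 9, 10])

q1, q3 = np.percentile(scores, [25, 75])
print(f"median = {np.median(scores)}, IQR = {q3 - q1}")
```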
Study Characteristics
In the case of controlled clinical trials, we included the characteristics of only the intervention group (unless otherwise specified), as the baseline characteristics of controls were found to be comparable to those of the intervention group. The baseline characteristics of the study participants are summarized in Table 2. Details of the type, dose, route, and time of delivery of stem cells, along with the nature of the stroke, are summarized in Table 3.
Study Outcome
The outcomes (adverse events and efficacy) of the treatment reported in the included studies are summarized in Table 4. For controlled studies, the outcome measures for the treatment group are detailed in comparison with those of controls. Serious adverse events reported were transient ischemic attack, seizure, asymptomatic subdural hematoma/hygroma, urinary tract infection (UTI), sepsis, pneumonia, hyperglycemia, neutrophilia, shingles, ischemic stroke, cellulitis, muscle cramps, fracture of the neck of the femur, and peripheral vascular disease. None of these events were attributed to the cell therapy, but some were reported as probably procedure-related.
Discussion
Cell-based therapy offers a promising future in the field of ischemic stroke management, given the limited benefits seen with existing therapies such as thrombolysis and endovascular intervention [4]. Several factors may come into play when analyzing the safety and efficacy of cell-based therapies, such as the type of stem cells, their modifications, the dose and route of delivery, the patient's baseline characteristics, and the time of intervention. In this study, we reviewed the results of clinical trials of stem cell treatment conducted in the last six years in patients with ischemic stroke, with regard to the safety and efficacy of the therapy.
In our review, we included thirteen clinical trials [8][9][10][11][12][13][14][15][16][17][18][19][20] conducted from 2014 to 2020, which used stem cells to treat ischemic stroke patients. Seven studies [9,11,12,14,16,17,20] of the 13 were randomized controlled trials (RCTs) and six [8,10,13,15,18,19] were single-arm studies. Only three of the seven RCTs, Savitz et al. [9], Sprigg et al. [14], and Prasad et al. [17], showed no significant change in the outcome measures. These studies had a relatively higher methodological quality and used allocation concealment and blinded outcome assessment. As allocation concealment helps to minimize selection bias, the remaining studies, which showed positive results without allocation concealment, might have exaggerated the effects of treatment [21]. In Prasad et al. [17] and Savitz et al. [9], the intervention was not given in the acute stage, and the baseline National Institutes of Health Stroke Scale (NIHSS) scores were higher (indicating a poorer prognosis), which might explain the lack of benefit. Prasad et al. also suggested that future trials should focus on treating stroke within one week of onset [17]. In Sprigg et al., relatively non-serious and chronic cases of stroke were enrolled, which might have made noticeable changes with the therapy harder to detect [14]. Type II errors due to small sample sizes are also possible.
It is noteworthy that the other four RCTs, Bhatia et al. [11], Hess et al. [12], Taguchi et al. [16], and Chen et al. [20], which also had higher methodological quality on the basis of study design, outcome measurement, and analyses, revealed positive results. In Hess et al. [12], Taguchi et al. [16], and Bhatia et al. [11], relatively early intervention (at 24 hours, 7-10 days, and 8-19 days, respectively) using intravascular delivery (intravenous or intra-arterial) in patients with higher baseline NIHSS scores (13.5, 16.5, and 10.6, respectively) yielded positive results. This is in accordance with previous findings that intravascular delivery of stem cells may be ideal within 24 hours to a month of the onset of stroke [22,23,24]. For cases of stroke months to years after onset, direct implantation of cells can be the choice, as it permits better engraftment after the initial inflammatory response is over [25]. Accordingly, we found that in Chen et al., treatment with a subcutaneous (s.c.) injection of Granulocyte Colony Stimulating Factor (GCSF) and intracerebral implantation of CD34(+) immunosorted Peripheral Blood Stem Cells (PBSCs), as late as six months to five years after stroke onset in patients with baseline NIHSS scores of 8-20, demonstrated positive results [20]. In Taguchi et al. especially, an important consideration is the dose-response relationship between the SCs and the efficacy outcomes in the patients, which provides strong evidence of a causal relationship between the therapy and the response [16]. In contrast, no correlation between the dose of administration and the modified Rankin Scale (mRS) outcome was seen in Savitz et al. [9]. The importance of analyzing the dose-response relationship between SCs and efficacy outcomes in future studies, to determine the measure of benefit of the treatment, cannot be overemphasized.
Although all of the single-arm studies [8,10,13,15,18,19] showed positive results, there is a chance that the natural progression of the disease played some role, if not the entire role, in the improvement of the patients. These studies possibly gave higher estimates of neurological improvement, and publication bias is also possible. There is a chance of type I error, as most of them lacked design elements such as randomization, blinding, allocation concealment, intention-to-treat (ITT) analysis, and statistical comparison. However, these studies can be useful for identifying the adverse events associated with cell therapy.
Regarding the route of administration, although the intra-arterial (i.a.) route is preferred to the intravenous (i.v.) route because there is no first-pass effect with the i.a. route, six of the seven studies in our review that used the i.v. route reported positive results, the exception being Prasad et al. [17,26]. This is similar to studies in rodents, which yielded similar or greater benefits with the i.v. route than with the i.a. or intracerebral route [27,28]. Six trials that used other routes (i.a., s.c., or direct implantation) showed mixed outcomes (four positive and two neutral), but more procedural adverse effects (such as hematomas and infections) were reported in most of them. For instance, in Steinberg et al., which used a surgical transplantation method, post-surgery headache, a subdural fluid collection, and a life-threatening seizure were reported as probably or definitely related to the procedure [15]. Also, in Savitz et al., the i.a. route was associated with no benefits; rather, some procedural adverse effects were seen [9].
Among the adverse events reported in the included clinical trials, only minor or less severe adverse events were due to the stem cell treatment as such. The serious adverse events reported, such as transient ischemic attack, seizure, asymptomatic subdural hematoma/hygroma, urinary tract infection (UTI), sepsis, pneumonia, hyperglycemia, neutrophilia, shingles, ischemic stroke, cellulitis, muscle cramps, fracture of the neck of the femur, and peripheral vascular disease, were not attributed to the cell therapy, although some were reported as probably procedure-related. Only two studies followed up the participants for up to two years after treatment; the rest followed up for one year or less. To rule out potential adverse events, especially tumorigenesis, long-term follow-up is needed [29]. However, as we included trials using any type of stem cells in our review, adverse events specific to stem cell type could not be effectively interpreted. A direct comparison of the sources of stem cells would have helped to establish the ideal source and delivery technique and to make the outcome measures more comparable. Overall, as no treatment-related mortality, tumorigenesis, or major adverse effects were reported, these findings encourage proceeding with further stages of trials on a larger scale.
There is no established mechanism of action of the stem cells, and a specific intention of treatment has not yet been recognized [30,31]. Accurate hypotheses governing factors such as the type of stem cells and the techniques of extraction, modification, and delivery are needed in the future. Long-term follow-up to watch for potential adverse events and to monitor the effectiveness of the therapy, including the survival benefit using survival analysis, seems important to define the prognostic indices after the therapy. Outcome measures also need to be more specific to the deficit in a patient; for instance, a patient with aphasia should be assessed for aphasia rather than with a general neurological assessment, in line with the "modality-specific outcome measures" proposed by Cramer et al. [32]. Clinical trials combining stem cell treatment with traditionally established treatments such as thrombolysis, endovascular intervention, and physiotherapy should be conducted on a large scale to explore the potential benefits. None of the reviewed studies included patient-focused outcomes, although they used objective scales to measure impairment and functional status (i.e., NIHSS, mRS, or BI).
Our study has some limitations. We excluded studies published in languages other than English, which might have limited our comprehensiveness. Although we did an extensive search of the relevant clinical trials, there is a chance that we might have missed some. We could not perform a statistical analysis of the measured outcomes; hence, quantitative comparisons could not be made between the studies. Because of the difficulty of accurately extracting data from small graphs and figures, we reported only the qualitative data for some studies, and we did not contact the authors. Overall, the sample sizes of the included studies were relatively small. While Hess et al. conducted the largest study, with 125 subjects (60 in the control arm and 65 in the treatment arm) in a blinded fashion, Banerjee et al. conducted theirs with as few as five participants in a phase I, nonrandomized, open-label trial [12,19].
Conclusions
The clinical trials we included in our review showed mixed results in terms of the efficacy of SCs in the treatment of ischemic stroke, while assessment of the safety profile yielded promising results, as there were only minor side effects related to the cell therapy. There are discrepancies and heterogeneity in the results to date in this emerging field, which reflect the obstacles that must be overcome to bring SCs to the bedside. Before stem cell therapy reaches the clinic, there need to be larger, adequately powered, and well-designed clinical trials with more comparable results to assess and establish the safety and efficacy of SCs in stroke patients.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2020-08-25T22:31:02.112Z | 2020-08-01T00:00:00.000 | {
"year": 2020,
"sha1": "14817843108b74cbc3ba48b6972b59e78a9dcdfa",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/38405-safety-and-efficacy-of-stem-cell-therapy-in-patients-with-ischemic-stroke.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "14817843108b74cbc3ba48b6972b59e78a9dcdfa",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246707937 | pes2o/s2orc | v3-fos-license | Strain Rate and Stress Amplitude Effects on the Mechanical Behavior of Carbon Paste Used in the Hall–Héroult Process and Subjected to Cyclic Loadings
Carbon products such as anodes and ramming paste must have well-defined physical, mechanical, chemical, and electrical properties to perform their functions effectively in the aluminum electrolysis cell. The physical and mechanical properties of these products are determined during the shaping procedure, in which compaction stresses are applied to the green carbon paste. Optimization of the shaping process is crucial to improving the properties of the carbon products and, consequently, to increasing the energy efficiency and decreasing the greenhouse gas emissions of the Hall–Héroult process. The objective of this study is to experimentally investigate the effects of the strain rate, the maximum stress amplitude, and the unloading level on the behavior of a green carbon paste subjected to cyclic loading. To this end, experiments consisting of (1) cyclic compaction tests at different maximum stress amplitudes and strain rates, and (2) cyclic compaction tests with different unloading levels were carried out. The study obtained the following findings about the behavior of carbon paste subjected to cyclic loads. The strain rate in the studied range had no effect on either the evolution of the permanent strain as a function of the cycle number or the shape of the stress–strain hysteresis during cyclic loading. Moreover, samples of the same density that had been subjected to different maximum stress amplitudes in their loading history did not have the same shape of stress–strain curve. On the other hand, despite having different densities, samples subjected to the same number of cycles produced the same stress–strain curve during loading, even though they had been subjected to different maximum stress amplitudes in their loading histories. Finally, the level of unloading during each cycle of a cyclic test proved significant: when the sample was unloaded to a lower level of stress during each cycle, the permanent strain as a function of the cycle number was higher.
Introduction
The Hall-Héroult process is used worldwide to produce primary aluminum. This process transforms the raw material alumina (Al2O3) into liquid aluminum (Al) in an electrolysis cell operating at high temperatures (920-980 °C) [1]. A direct current travels downward through the electrolysis cell; it passes through the carbon anode and the cryolite bath where the alumina is dissolved, and finally through the cathode blocks. In previous studies, the time-dependent behavior of the green carbon paste was investigated. Relaxation tests carried out on a room-temperature carbon paste at different levels of imposed strain revealed a highly time-dependent behavior of the carbon paste under relaxation loads and a negligible viscoelastic strain during the recovery phase in comparison with the instantly reversible and permanent strains [22]. In [6], a modified creep test was performed on the green anode paste at high temperature; a significant increase in deformation during the creep phase was indicative of time-dependent behavior of the green anode paste. In [22], vibro-compaction tests performed with a maximum amplitude of 1 MPa and at frequencies ranging from 0.1 Hz to 7 Hz indicated that increasing the frequency within the studied range leads to an increase in the number of cycles and a decrease in the total time needed to reach the target density. The frequency, however, does not affect the final rigidity of the vibro-compacted sample.
The potential effects of the vibro-compaction parameters (e.g., frequency, duration of maximum load) on the final quality of the compacted green anode paste were investigated in [23][24][25]. It was found that increasing the vibration frequency and the maximum load applied to the green anode paste results in an increase in the apparent density of the carbon block [23][24][25]. However, increasing the vibro-compaction time beyond an optimal duration results in a decrease in the apparent density and a deterioration of the mechanical properties of the carbon block [24,25].
The present work aims to provide a deeper understanding of carbon paste behavior when the paste is subjected to cyclic loadings. The study focuses on aspects not addressed by similar studies, namely: (1) the effect of the strain rate during cyclic tests on the evolution of the permanent strain, (2) the effect of the stress amplitude during cyclic tests on the evolution of the permanent strain, (3) the effect of the strain rate on the shape of the hysteresis generated during cyclic loading, (4) the effect of the stress amplitude on the hysteresis shape, and (5) the effect of the unloading level at each cycle during cyclic loading on the densification behavior. To this end, an experimental campaign was carried out, consisting of (1) cyclic tests at different strain rates and different maximum stress amplitudes, and (2) cyclic tests at different unloading levels.
Moreover, the test results can be used in future studies to develop an appropriate constitutive law for the carbon paste subjected to cyclic loading. The investigation of the strain rate effect will be useful to determine whether the effect of time should be taken into account in the intended behavior law. In addition, the dependence of the mechanical properties of the carbon paste on the density, the cycle number, and the maximum stress amplitude will be determined. The latter result can be used to suggest empirical formulas for the mechanical properties of the carbon paste.
Material
The carbon paste tested in this study was a room-temperature ramming paste provided by the industrial partner Alcoa in a ready-to-use formula. According to the provider's instructions, it is preserved in hermetic containers to prevent the loss of volatiles. The carbon paste is made of calcined anthracite, coal tar pitch, and a softener. The softener decreases the viscosity of the coal tar pitch allowing the formation of the paste at room temperature. For confidentiality reasons, the detailed recipe and the commercial trade name of the carbon paste cannot be disclosed.
Experimental Set-Up
A Dartec compression apparatus is used to perform the experimental tests (Figure S1), and the axial stress is applied by a computer-controlled servo-hydraulic machine (DARTEC LTD, Stourbridge, UK). The press was modified in the past: the original controller was replaced by an MTS FlexTest 40 (MTS Headquarters, Eden Prairie, MN, USA), and the original load cell by a 250 kN MTS load cell (model 661.22H-01, MTS Headquarters, Eden Prairie, MN, USA). The carbon paste is placed in a thin-walled stainless steel mold, which has a height of 140 mm, a diameter of 253 mm, and a thickness of 0.3 mm. The Young's modulus and Poisson's ratio of the mold were determined by a simple traction test and are equal, respectively, to E = 175 GPa and υ = 0.265. The mold is equipped with 4 axial and 4 radial strain gauges; they are placed in pairs at a height of 40 mm from the base of the mold and are equally spaced around the mold's circumference. A calibrated LVDT (range: +40 mm, resolution: 0.001 mm) is placed vertically between the base of the mold and the top of the loading plate to continuously measure the height of the tested samples. Each test (T1 to T8) was repeated twice; the results of the repetitions were similar, so for each test only the results of one trial are reported.
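As an aside, a hoop (circumferential) strain reading from the mold gauges can be converted into an estimate of the lateral pressure exerted by the paste. The sketch below assumes the standard thin-walled-cylinder relation σθ = p·r/t and neglects the axial stress term in Hooke's law; the strain value used is a hypothetical example, not a recorded measurement.

```python
# Estimate of the lateral pressure exerted by the paste on the mold wall
# from a hoop strain gauge reading, assuming a thin-walled cylinder
# (sigma_theta = p*r/t) and neglecting the axial stress term in Hooke's
# law. The strain value is a hypothetical example, not a measurement.
E = 175e9        # mold Young's modulus, Pa
t = 0.3e-3       # wall thickness, m
r = 253e-3 / 2   # mold radius, m

hoop_strain = 50e-6  # example gauge reading: 50 microstrain

p = E * t * hoop_strain / r  # lateral pressure, Pa
print(f"estimated lateral pressure: {p / 1e6:.3f} MPa")
```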
Methodology
To prepare a sample, 6 kg of the carbon paste was placed in a loose state in the mold. The initial height of the samples was approximately 135 mm. Two series of experiments were carried out to investigate the effects of the strain rate, the stress amplitude, and the unloading level on the mechanical behavior of the carbon paste subjected to cyclic loads. What follows is a detailed discussion of these experiments and the obtained results.
Series I: The Effects of Strain Rate and Stress Amplitude
This first series of tests consisted of 8 cyclic tests carried out at different strain rates and different maximum stress amplitudes. The objective was to investigate the effects of the strain rate and of the maximum stress amplitude on (1) the evolution of the permanent deformation as a function of the cycle number and (2) the shape of the hysteresis curve generated by the cyclic loading. The load path of these tests is shown in Figure 1. Each cycle consists of three steps, summarized in the sketch after this paragraph. The sample was first compacted at a strain rate ε̇ until the vertical stress reached the maximum stress amplitude specified for the test (σ max). Then, the sample was unloaded by moving the loading plate upward at the same strain rate as during loading, for a total relative displacement of 20 mm, to ensure a complete unloading of the carbon paste. As the last step, the loading plate was kept fixed for 5 s. The cycling was repeated until the sample reached a density of approximately 1.6 g/cm 3. The tests were performed for several values of the strain rate ε̇ and the maximum stress amplitude σ max, which are shown in Table 1. Three values of strain rate were selected at the outset: ε̇ = 0.7 × 10 −2 s −1, ε̇ = 3.7 × 10 −2 s −1, and ε̇ = 7.4 × 10 −2 s −1. The maximum value of ε̇ was 7.4 × 10 −2 s −1, which corresponds to a displacement velocity of 10 mm/s. Tests with higher strain rates that approach the values used in industry could not be carried out because of the capacity limits of the hydraulic press. It was planned to carry out further tests with additional values of the strain rate within the hydraulic press capacity (< 7.4 × 10 −2 s −1). However, as will be shown in the results section, the strain rate in the studied range had no effect on the carbon paste behavior; therefore, the study of the strain rate effect was limited to the values shown in Table 1. The values of σ max were chosen to match the estimated load level applied to the carbon paste in industry during cyclic compaction.
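The three-step cycle described above can be rendered as a commanded-displacement profile. The sketch below is an illustration of the test logic only, not the actual press controller program; in particular, the loading stroke needed to reach σ max is force-controlled on the rig, so the fixed value used here is a hypothetical placeholder.

```python
# Schematic sketch of one Series I cycle as a commanded-displacement
# profile (downward travel positive). On the press, the loading stroke
# is stopped when the force reading reaches sigma_max; the fixed
# load_stroke used here is only a hypothetical placeholder for that
# force-controlled event.
import numpy as np

def cycle_profile(strain_rate, h0=135.0, load_stroke=2.0,
                  unload_stroke=20.0, hold_s=5.0, dt=0.01):
    """Return (time_s, plate_displacement_mm) for one load/unload/hold cycle."""
    v = strain_rate * h0              # plate speed, mm/s
    t_load = load_stroke / v          # step 1: load at constant strain rate
    t_unload = unload_stroke / v      # step 2: move plate up 20 mm at same rate
    t = np.arange(0.0, t_load + t_unload + hold_s, dt)
    x = np.where(
        t < t_load, v * t,                                   # loading
        np.where(t < t_load + t_unload,
                 load_stroke - v * (t - t_load),             # unloading
                 load_stroke - unload_stroke))               # step 3: 5 s hold
    return t, x

t, x = cycle_profile(strain_rate=0.7e-2)  # slowest rate from Table 1
print(f"one cycle lasts about {t[-1]:.1f} s at this rate")
```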
Series II: Effects of Unloading Level during Cyclic Tests
The second series of tests consisted of 5 cyclic compaction tests in which the stress amplitude varied between a maximal and a minimal value denoted, respectively, as σ max and σ min (Figure 2). They were conducted at a constant strain rate of ε̇ = 0.7 × 10 −2 s −1. These tests were carried out to investigate the effect(s) of the unloading level on the cyclic behavior of the carbon paste, specifically regarding the evolution of the permanent deformation as a function of the cycle number. The tests were carried out for several values of σ max and σ min, which are shown in Table 2. During tests T-III and T-V, the samples were completely unloaded 3 times to investigate the effect of these complete unloadings on carbon paste densification (Figure 2).
Figure 2. The load path of the second series of tests.
Table 2. The values of σ max and σ min of the second series of tests.
Measurements and Investigated Properties
The axial stress, the sample height, and the mold's local axial and circumferential strains at a height of 40 mm from the mold base were monitored and recorded during all the tests. The measurements were recorded at a time interval that depended on the strain rate applied during the test (recalling that each test had a constant strain rate), with a minimum value of 0.01 s for tests with strain rate ε̇ = 0.007 s −1 and a maximum value of 0.05 s for tests with strain rate ε̇ = 0.07 s −1. The total axial strain of the sample was calculated using the following equation [6]: ε = (h0 − h)/h0, where h is the sample height and h0 is the sample initial height. The permanent strain at the beginning of cycle 'j', represented by ε p (j), is assumed to be equal to the total strain at this point, when the stress is equal to zero. The sample density at the beginning of cycle 'j' is denoted by ρ(j) and is calculated according to the following equation [6]: ρ(j) = m/(π r0² h(j)), where m and r0 represent the sample mass and the mold's initial radius, respectively, and h(j) is the sample height at the beginning of cycle 'j'. The variation of the sample radius during the test is negligible, so the initial radius r0 is used in the calculation of the density (see Appendix A). The shape of the hysteresis is an important aspect to watch, since its area indicates the energy dissipation during the loading-unloading cycles and its inclination correlates with the sample's rigidity. For tests T1-T8, the area of the hysteresis w(j) as a function of the cycle number was calculated using the trapezoidal rule. The focus of this study was to highlight the behavior of the carbon paste during cyclic compaction from the height of 82 mm (ε = 0.4, ρ(j) = 1.45 g/cm 3) to that of 74.5 mm (ε = 0.45, ρ(j) = 1.6 g/cm 3). As a result, all the cycles that resulted in a sample height of more than 82 mm (ρ(j) < 1.45 g/cm 3) were not considered. When the carbon paste's density ρ > 1.45 g/cm 3, the solid skeleton of the carbon paste is already consolidated [21]. Below this threshold, the density of the samples is low (ρ ≤ 1.45 g/cm 3) and the increase in permanent deformation during a given cycle is large with respect to the maximum deformation of the cycle; thus, the reversible and the irreversible deformations could not be differentiated.
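The quantities defined above can be computed directly from the logged data. The sketch below implements the strain and density formulas and an explicit trapezoidal (shoelace) evaluation of the hysteresis area; the stress-strain loop is synthetic, standing in for one recorded cycle.

```python
# Computation of the quantities defined above from logged data. The
# stress-strain loop here is synthetic, standing in for one recorded
# cycle; on the rig, strain comes from the LVDT and stress from the
# load cell.
import numpy as np

m = 6.0          # sample mass, kg
h0 = 0.135       # initial sample height, m
r0 = 0.253 / 2   # mold radius, m (assumed constant, see Appendix A)

def total_strain(h):
    """Total axial strain: (h0 - h) / h0."""
    return (h0 - h) / h0

def density(h):
    """Apparent density in g/cm^3: m / (pi * r0^2 * h)."""
    return m / (np.pi * r0**2 * h) / 1000.0

def hysteresis_area(strain, stress):
    """Area w(j) enclosed by one closed stress-strain loop
    (trapezoidal/shoelace rule); equals the dissipated energy density."""
    x, y = np.asarray(strain), np.asarray(stress)
    return 0.5 * abs(np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y))

# Synthetic closed loop for one cycle (loading branch, then unloading).
u = np.linspace(0.0, 1.0, 50)
loop_eps = 0.01 * np.concatenate([u, u[::-1]])
loop_sig = 1.5e6 * np.concatenate([u**1.5, (u[::-1])**2.5])  # Pa

print(f"strain at h = 82 mm:  {total_strain(0.082):.3f}")      # ~0.40
print(f"density at h = 82 mm: {density(0.082):.2f} g/cm^3")    # ~1.45
print(f"loop area w: {hysteresis_area(loop_eps, loop_sig):.0f} J/m^3")
```

Note that the printed strain and density for h = 82 mm reproduce the ε = 0.4 and ρ = 1.45 g/cm³ threshold values quoted above, which confirms the reconstructed formulas against the data given in the text.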
Series I
Stress-Strain Behavior of the Carbon Paste Subjected to Cyclic Loading
The stress-strain curve of test T6 (Figure 3a) displays the general behavior of the carbon paste subjected to cyclic loading. The test was carried out under a maximum stress amplitude of σ max = 1.5 MPa and at a strain rate of ε̇ = 0.007 s −1. During the first cycle, the total strain of ε = 0.375 (ρ(2) = 1.41 g/cm 3) (Figure 3a) accounts for 84.84% of the total strain accumulated at the end of the test (ε = 0.451). This is due to the fact that the carbon paste is placed in a loose state in the mold, with an initial density as low as ρ0 = 0.88 g/cm 3. During the subsequent cycles (i.e., cycles 2-145), the strain increased from ε = 0.375 to ε = 0.451. Figure 3b shows the hysteresis loop for cycles 15, 50, and 140 of test T6. It is noticeable that the hysteresis area, and therefore the total energy dissipation, decreases as the cycle number increases. The angle between the strain axis and the line joining the cycle's start and peak points increases with the cycle number, which means that the difference between the maximum and the start strains of the cycle declines. These observations suggest that the sample becomes stiffer as the number of cycles increases.
Strain Rate and Stress Amplitude Effect on the Permanent Strain Evolution
To compare the densification behavior of the carbon paste subjected to cyclic loads from tests with different strain rates but the same maximum stress amplitude, the permanent strain at the beginning of each cycle, ε p (j), was calculated and plotted as a function of the cycle number. The permanent strain can be determined only at the beginning of each cycle, since during loading the recoverable and permanent strains cannot be differentiated. Figure 4 shows the variation of ε p (j) as a function of the cycle number for tests T1 to T8. For all the tests, the permanent strain increased rapidly at the beginning of the test, when the density of the sample was lowest. As the number of cycles increased, the permanent strain rate decreased: the samples became denser, and additional permanent strain demanded more cycles.
Figure 4 shows that tests carried out under the same maximum stress amplitude (σ max) but with different strain rates had very similar curves of ε p (j) as a function of the cycle number. Therefore, the strain rate in the studied range had no effect on the evolution of the permanent strain as a function of the cycle number. The permanent strain is not a time-dependent variable; rather, it depends on the cycle number and the maximum stress amplitude applied to the carbon paste.
To illustrate the effects of the maximum stress amplitude on the densification behavior of the carbon paste, the evolutions of ε p (j) as a function of the cycle number for tests T1 (σ max = 0.5 MPa), T3 (σ max = 1 MPa), and T6 (σ max = 1.5 MPa) are presented in Figure 5. These tests were done at the same strain rate of ε̇ = 0.7 × 10 −2 s −1. The increase in σ max led to a decrease in the total number of cycles needed to reach the target density of ρ(j) ≈ 1.6 g/cm 3, which corresponds to a permanent strain of ε p (j) ≈ 0.45. It is noticeable that there is a substantial difference between the curve representing T1 and the curves representing T3 and T6. It seems that the stress amplitude of T1 (σ max = 0.5 MPa) is below the threshold of stress needed to induce permanent deformation efficiently. The carbon paste is a mixture of a solid skeleton (aggregates) and a binding matrix (liquefied coal tar pitch and fine aggregates). The permanent deformation of this material during the loading-unloading process is controlled by complex mechanisms, including aggregate rearrangement and the degradation of coarse aggregates into finer particles [21]. Samples compacted at higher stress amplitudes are expected to experience a higher rate of aggregate degradation. Similarly, higher stress amplitudes lead to an increase in the internal stresses during loading; this internal stress relaxes during unloading and causes better densification during the next loading. However, the experiments conducted in this study did not allow the quantification of the degradation or rearrangement of aggregates, nor could they determine the cycles during which these mechanisms occur.
Strain Rate Effect on the Hysteresis Shape
The hysteresis shape reveals information about some mechanical properties of the material. Its area represents the energy dissipated during loading-unloading by the material's internal friction or viscosity. Likewise, its inclination is related to the sample stiffness: as the material becomes stiffer, its inclination with respect to the strain axis becomes less significant. To compare the effects of the strain rate on the hysteresis shape, the relative strain ∆ε was calculated. For any cycle 'j', the relative strain is defined as ∆ε = ε − ε p (j), which represents the increment of strain during cycle 'j', with the strain origin set to zero at the beginning of each cycle. The aim of this presentation is to compare the hysteresis slope and width of cycles across different tests with different values of ε p (j). Figure 6 shows the stress-relative strain curves for cycles 25, 450, and 1200 of tests T1 and T2, both with σ max = 0.5 MPa. One can notice that the strain rate value does not affect the hysteresis shape. The same observation can be made in Figures 7 and 8 for tests T3-T8. Figure 9 shows that tests carried out with the same maximum stress amplitude but different strain rates also have similar curves of w(j) vs. cycle number. In this way, the strain rate does not affect the hysteresis shape, which means it has no effect on the energy dissipation or on the evolution of the rigidity during cyclic tests with a constant maximum amplitude. In [22], rigidity tests were carried out on compacted carbon paste samples. The samples were vibro-compacted under an axial stress amplitude of 1 MPa and at frequencies of 0.1 Hz, 2 Hz, and 4 Hz to reach a final density of 1.6 g/cm 3. The results of the rigidity tests showed that all the samples had approximately the same rigidity, with the frequency in the studied range having no effect on the final rigidity of the compacted carbon paste. The observations from Figures 6 and 9 thus corroborate the results presented in [22] regarding the effect of the strain rate (or of the frequency) on the rigidity of a sample compacted under cyclic loading with a constant maximum stress.
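The relative strain and the loading-branch slope used to compare hysteresis inclinations can be extracted per cycle as sketched below; the strain and stress arrays are synthetic placeholders for one logged loading branch.

```python
# Relative strain and loading-branch slope for one cycle, following
# delta_eps = eps - eps_p(j). The strain/stress arrays are synthetic
# stand-ins for one logged loading branch.
import numpy as np

eps = np.linspace(0.402, 0.410, 30)   # strain samples, loading branch of cycle j
sigma = 1.5e6 * (eps - eps[0]) / (eps[-1] - eps[0])  # stress, Pa (synthetic)

eps_p_j = eps[0]              # permanent strain at the start of cycle j
delta_eps = eps - eps_p_j     # relative strain: origin reset each cycle

# Slope of the loading branch (a stiffness proxy), by least squares.
slope, _ = np.polyfit(delta_eps, sigma, 1)
print(f"loading-branch slope: {slope / 1e6:.0f} MPa")
```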
Effect of the Stress Amplitude on the Hysteresis Shape
To assess the effect of the stress amplitude on the mechanical behavior of the carbon paste, the stress-relative strain curves of cycles from tests T1 (σ max = 0.5 MPa), T3 (σ max = 1 MPa), and T6 (σ max = 1.5 MPa), all having approximately the same values of ε p (j), are plotted in Figure 10. For the same value of ε p (j), the stress-relative strain curve has a steeper slope during loading when the cycle belongs to a test with a lower maximum stress amplitude σ max. This means that a sample compacted to reach a target density will be stiffer when the maximum stress amplitude is lower. This observation could be related to the phenomenon of aggregate crushing: two samples that have the same densification level but different stress-relative strain relations may have different levels of aggregate crushing caused by the application of different stress levels. It is likely that higher stress levels cause greater crushing of the coarse aggregates, which can lower the sample stiffness.

Figure 11 shows the stress-relative strain curves for cycles 25, 50, 100, and 145 of tests T1 (σ max = 0.5 MPa), T3 (σ max = 1 MPa), and T6 (σ max = 1.5 MPa). For the same cycle number of tests T1, T3, and T6, the stress-relative strain curves during loading are superimposed. As Figure 5 shows, for any cycle number 'j', the value of ε p (j) increases with a higher σ max. Therefore, a sample that was compacted at a low stress amplitude, and which had a lower compaction degree (lower ε p (j)) at a given cycle number 'j', had the same stiffness during loading as a sample compacted with a higher stress amplitude and having a higher compaction degree (higher ε p (j)) at the same cycle 'j'. It can be concluded, therefore, that the elastic properties of the material during the loading phase depend on the cycle number and not on the sample's apparent density.
Effect of the Unloading Level on the Permanent Deformation
To investigate the effect(s) of the unloading level during cycling on the carbon paste's behavior, cyclic tests with complete and partial unloadings were conducted according to the load path shown in Figure 2. The values of the maximal and minimal stress applied in these tests are given in Table 2. Since in some of these tests the samples were not completely unloaded (σ min ≠ 0), ε p (j) could not be measured. To compare the densification behavior of the carbon paste across several tests with the same value of σ max, the sample height at the stress peaks, represented by h p (j), was plotted as a function of the cycle number in Figure 12 for tests T-I, T-II, and T-III and in Figure 13 for tests T-IV and T-V. At the same cycle number, the value of h p (j) is lower when the minimal stress amplitude σ min is higher. Recall that three complete unloadings were performed during tests T-III and T-V; the purpose of these unloadings was to demonstrate the effect of a complete unloading during cyclic compaction tests on the densification evolution. These two figures also indicate an enhancement of densification, reflected by a slope change in the curve of h p (j) vs. cycle number for tests T-III and T-V when the samples are fully unloaded. It can be inferred from these data that a complete unloading at each cycle leads to a better densification of the carbon paste during the next cycle. Complete unloading at the end of the cycle decreases the contact forces between the aggregates and gives them the possibility to rearrange into a more densified configuration during the next loading. These results corroborate those obtained for asphalt mixtures in [26].
Conclusions
In this paper, a specific room-temperature carbon paste used in the aluminum industry was experimentally tested to investigate the potential effects of the strain rate, the maximum stress amplitude, and the unloading level on its behavior during cyclic loading. The results obtained provide a framework for the development of an appropriate numerical model to reproduce the behavior of carbon paste under cyclic loading. The experimental results and their implications for modelling the compaction process of carbon paste are outlined below.
1. The evolution of the permanent deformation as a function of the cycle number during cyclic tests with a constant maximum stress amplitude is nonlinear and is not influenced by the strain rate. Therefore, the law representing the irreversible behavior of carbon paste subjected to cyclic loading must be independent of time. In addition, the law itself or its parameters must consider the non-linearity of the permanent deformation as a function of the number of cycles.
2.
During cyclic tests, the stress-relative strain hysteresis curves changed shape as the number of cycles increased: they became narrower and gained a steeper slope. This reflects the evolution of the elastic properties of the paste as the number of cycles increases. On the other hand, the hysteresis shape of the stress-relative strain curves is not influenced by the strain rate. The part of the behavior law describing the reversible behavior of the carbon paste subjected to cyclic loading must be independent of time and should include variable elastic parameters that evolve during compaction.
3.
Carbon paste samples with the same density that were subjected to different maximum stress amplitudes during compaction had different stiffness values, as highlighted by different slopes of the stress-relative strain hysteresis curves. This result implies that the elastic properties of a compacted carbon paste do not depend exclusively on its density. Therefore, the elastic parameters of any law intended to represent the cyclic behavior of carbon paste should not be expressed exclusively as a function of the density.
4.
Carbon paste samples that had been subjected to the same number of cycles but different maximum stress amplitudes in their loading histories had different densities but the same elastic properties. This is reflected by the same stress-relative strain curve during loading (Figure 11). Therefore, the evolution of the elastic properties of carbon paste during cyclic compaction can be defined as a function of the cycle number.
5.
Complete unloading at the end of each cycle during cyclic compaction of the carbon paste enhanced the accumulation of the permanent deformation. This behavioral aspect must be taken into consideration in any work aiming at modeling the shaping process of carbon paste.
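As a concrete illustration of the first conclusion, the short sketch below fits a time-independent, nonlinear accumulation law to synthetic cyclic-test data. The saturating-exponential form and all numerical values are assumptions chosen for demonstration, not measurements from the tests reported above.

```python
# Illustrative sketch only: the functional form and the synthetic data are
# assumptions, not the paper's measurements.
import numpy as np
from scipy.optimize import curve_fit

def permanent_strain(j, eps_inf, beta):
    """Time-independent, nonlinear law eps_p(j) = eps_inf * (1 - exp(-beta*j)).

    Depends only on the cycle number j, matching the two modelling
    constraints stated in conclusion 1.
    """
    return eps_inf * (1.0 - np.exp(-beta * j))

cycles = np.arange(1, 51)
rng = np.random.default_rng(0)
data = permanent_strain(cycles, 0.18, 0.12) + rng.normal(0.0, 0.002, cycles.size)

(eps_inf, beta), _ = curve_fit(permanent_strain, cycles, data, p0=(0.2, 0.1))
print(f"fitted eps_inf = {eps_inf:.3f}, beta = {beta:.3f}")
```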
In a nutshell, when describing elastic processes during cyclic loading, any behavior law proposed for carbon paste subjected to cyclic compaction must include time-independent elastic parameters that evolve as a function of the cycle number. If irreversible processes during cyclic loading are to be described, the formulated behavior law should be nonlinear and time-independent. As a final recommendation, it is suggested that cyclic tests be carried out on the carbon paste at strain rates that approach those used in industry. | 2022-02-11T16:30:34.643Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "9f0d760d14c914ce2e5dc14528820b98d94efc3a",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "11eb09b393033d3e8f7860028e3930007b1efaac",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
29630168 | pes2o/s2orc | v3-fos-license | Cardiovascular effects of paroxetine, a newly developed antidepressant, in anesthetized dogs in comparison with those of imipramine, amitriptyline and clomipramine.
The cardiovascular effects of various antidepressant drugs including paroxetine, imipramine, amitriptyline and clomipramine, administered intravenously, have been assessed. Paroxetine, imipramine, amitriptyline or clomipramine potentiated the response to norepinephrine (0.1 microgram/kg, i.v.) on systemic blood pressure, while paroxetine, imipramine and amitriptyline weakened the response to tyramine (30 micrograms/kg, i.v.). A marked decrease in systemic blood pressure was observed after large doses of each drug (3 and 10 mg/kg of paroxetine; 1-10 mg/kg of imipramine, amitriptyline or clomipramine); and half of the animals died following administration of 10 mg/kg of imipramine, amitriptyline or clomipramine. Paroxetine did not show a marked effect on heart rate at a dose of up to 3 mg/kg, although 0.1-3 mg/kg of imipramine, amitriptyline or clomipramine dose-dependently caused tachycardia. ECG disturbances were observed in animals administered 10 mg/kg of imipramine, amitriptyline or clomipramine; but in contrast, 10 mg/kg of paroxetine caused only slight changes in the ECG. Prolongation of atrio-ventricular conduction time was observed with all the drugs. It was concluded that the effects of paroxetine on the canine heart are milder than those of the other tricyclic antidepressants used, although its pharmacological features are essentially similar to those of the other drugs.
Tricyclic antidepressants have been used for therapy of depressive illnesses for a fairly long time with good results, but as a drawback, a number of cases with adverse cardiac effects have been reported (1-5). The most serious of these cardiotoxicities are severe arrhythmias, which can sometimes be fatal. In experimental animals, the drugs produced ECG abnormalities including tachycardia and changes in myocardial contractile force (6)(7)(8)(9)(10). These effects of tricyclic antidepressants have been attributed to their inhibition of norepinephrine re-uptake at the sympathetic nerve ending, anticholinergic action and also to their "quinidine-like" action (11,12). Paroxetine is a newly developed non-tricyclic drug having antidepressant activity which is now under clinical study (13)(14)(15). Pharmacological studies performed hitherto have shown that it is a highly selective serotonin re-uptake inhibitor, having weak norepinephrine re-uptake inhibitory properties, and also has a weak anticholinergic action in vitro (16)(17)(18)(19)(20)(21).
It is interesting to examine the cardiovascular effects of this compound in comparison with representative antidepressants; and in the present study, we have compared paroxetine with three commonly used antidepressants, imipramine, amitriptyline and clomipramine, in anesthetized dogs.
Materials and Methods 1. Studies in closed-chest anesthetized dogs with pentobarbital: Mongrel dogs of either sex, weighing 7.8-26.0 kg, were anesthetized with sodium pentobarbital (30 mg/kg, i.v.). A polyethylene cannula was inserted into the right femoral artery to measure systemic blood pressure with a pressure transducer (Statham P23Db, Oxnard, CA, U.S.A.). A rubber tube was inserted into the right femoral vein for administration of drug solutions. A non-cannulating type probe of an electromagnetic flowmeter (Narco RT-500, Houston, TX, U.S.A.) was attached around the left femoral artery to measure femoral blood flow. A polyethylene catheter was inserted in the external jugular vein and passed into the caval vein to measure central venous pressure (Statham P23BB). Respiration rate was counted from the waves of the central venous pressure. ECG electrodes were sutured on the limbs to record the standard limb ECG by means of biophysical amplifiers (San-ei 1205, Tokyo, Japan). A cardiotachometer (Data Graph T-149, Tokyo, Japan) was triggered by R waves from lead II of the ECG. These parameters were recorded on a directly writing oscillograph (San-ei 8S53).
After completion of the surgical preparation, 0.1 µg/kg of norepinephrine, 1 µg/kg of acetylcholine and 30 µg/kg of tyramine were administered intravenously in sequence to examine the cardiovascular response of the animal; and then increasing doses (0.1, 0.3, 1, 3 and 10 mg/kg) of paroxetine, imipramine, amitriptyline or clomipramine were administered intravenously at 30-60 min intervals, and their effects were observed.
After administration of 0.1, 0.3 and 1 mg/kg of one of the antidepressants, the previously mentioned doses of norepinephrine, acetylcholine and tyramine were administered to estimate the possible interaction of the antidepressant with these agents. In another series of experiments, atropine (0.1 mg/kg, i.v.) was used to assess the participation of the parasympathetic nerve in the response to paroxetine (0.1, 0.3 and 1 mg/kg) or imipramine (3 and 10 mg/kg). 2. Studies in open-chest anesthetized dogs: Bipolar electrodes were sutured onto the right atrial appendage and apex of the heart, and the electrograms were processed through an atrio-ventricular interval counter (Data Graph HT-11) via biophysical amplifiers (San-ei 1205). Atrio-ventricular conduction time was measured as the interval between these two electrograms.
A rubber tube was inserted from the atrial appendage into the right atrium to measure right atrial pressure (Statham P23BB). Systemic blood pressure at the femoral artery and heart rate were measured as described above and these parameters were recorded on a directly writing recorder (San-ei 8S53).
Drug administration was performed via the femoral vein. In another group of dogs, midcervical vagotomy and postganglionic nerve section of the stellate ganglia were performed bilaterally in order to analyze the participation of the autonomic nerve control in the response to the drug.
After the surgical preparation was completed, 0.1, 0.3, 1, 3 and 10 mg/kg of paroxetine, imipramine, amitriptyline or clomipramine were administered intravenously in sequence at 30-60 min intervals, and their effects on atrio-ventricular conduction time were observed.
The drugs used were: paroxetine hydrochloride (Beecham, Surrey, U.K.), imipramine hydrochloride, amitriptyline hydrochloride and clomipramine hydrochloride. Representative recordings are shown in Fig. 1, and the peak responses of the blood pressure and the heart rate to each dose, obtained 1-5 min after the injection, are summarized in Fig. 2 and Fig. 3. Imipramine at a dose of 1 mg/kg caused a marked tachycardia, while a marked bradycardia was induced at 3 mg/kg. Amitriptyline showed effects similar to imipramine on the systemic blood pressure and heart rate, although central venous pressure was decreased by 0.1-3 mg/kg of amitriptyline, and 3 out of 5 animals died at 10 mg/kg.
Clomipramine, at doses of 0.3 and 1 mg/kg, caused a slight increase in systemic blood pressure, followed by a sustained decrease. Doses of 3 and 10 mg/kg of clomipramine caused marked bradycardia, and 2 out of 5 animals died with 10 mg/kg of the drug.
Respiration rate: Paroxetine, at doses of 3 and 10 mg/kg, increased the respiration rate (by 32% at 3 mg/kg and 58% at 10 mg/kg, respectively), an increase smaller than that produced by the other drugs.
ECG: Results are summarized in Fig. 4. Paroxetine did not cause any marked changes in the ECG up to 10 mg/kg. Prolongation of PQ and QTc intervals and a spread of QRS width were less than 10% and lasted only 1-2 min after the injection. Three mg/kg of paroxetine prolonged the PQ interval by 13 msec, which was not statistically significant; however, at 10 mg/kg, QRS width was spread by 10 msec (25%) at the maximum, PQ and QTc intervals were prolonged by 13 and 18%, respectively, lasting for 5-10 min, and QRS amplitude was reduced for 2 min. In one experiment, a very high dose of 30 mg/kg of paroxetine was administered, which caused a marked QRS spread with deformation and finally cardiac arrest in the dog after 6 min.
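For reference, the "QTc" values above denote rate-corrected QT intervals. The sketch below applies Bazett's correction, QTc = QT/√RR, which is the conventional choice; the original report does not state which correction formula was used, and the interval values in the example are invented.

```python
# Bazett's rate correction QTc = QT / sqrt(RR), intervals in seconds.
# All numerical values below are hypothetical, for illustration only.
import math

def qtc_bazett(qt_s: float, rr_s: float) -> float:
    """Rate-corrected QT interval via Bazett's formula."""
    return qt_s / math.sqrt(rr_s)

def percent_change(baseline: float, value: float) -> float:
    return 100.0 * (value - baseline) / baseline

qtc_pre = qtc_bazett(qt_s=0.20, rr_s=0.45)   # hypothetical pre-drug beat
qtc_post = qtc_bazett(qt_s=0.23, rr_s=0.42)  # hypothetical post-drug beat
print(f"QTc change: {percent_change(qtc_pre, qtc_post):+.1f}%")
```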
Imipramine at a dose of 1 mg/kg caused a spread of QRS width by 10% and QTc prolongation by 20%. These changes were more marked at 3 mg/kg, and in addition, the PQ interval was prolonged by 11 msec (16%). With 10 mg/kg of imipramine, the configuration of the QRS complex was variously changed in 4 of 5 animals; e.g., the QRS spread widely and lost its amplitude with multiple deflections forming a zig-zag wave, and displacement of the ST segment occurred. Moreover, in 2 out of 5 animals receiving 10 mg/kg of imipramine, the P wave was lost and cardiac rhythm was taken over by a ventricular rhythm; in consequence, ventricular fibrillation developed.
Amitriptyline at doses of 0.3 and 1 mg/kg caused a slight, transient widening of QRS width and PQ prolongation by 10 msec (13%); but at a dose of 3 mg/kg, it caused more marked prolongations of the PQ and QTc intervals and QRS width, by 42, 18 and 49%, respectively. At a dose of 10 mg/kg, the above-mentioned changes were even more marked, and 3 out of 5 animals finally suffered cardiac arrest or ventricular fibrillation.
Clomipramine reduced the amplitude of the QRS complex only slightly at a dose of 3 mg/kg. At 10 mg/kg, QRS deformation and PQ prolongation occurred, and 2 out of 5 animals suffered cardiac arrest. A sequential administration of paroxetine increased the basal systemic blood pressure cumulatively and raised the mean systemic blood pressure from an initial value of 149.4±3.9 mmHg to 156.0±2.6 mmHg during the period before administration of the 3 mg/kg dose. Imipramine also potentiated the response to norepinephrine, the increases being 54.8, 105.0 and 91.9% at doses of 0.1, 0.3 and 1 mg/kg, respectively, while amitriptyline did not show a clear-cut interaction. One mg/kg of clomipramine potentiated the response to norepinephrine.
The response of the systemic blood pressure to 30 µg/kg of tyramine was depressed by administration of 1 mg/kg of paroxetine; however, at lower doses (0.1 or 0.3 mg/kg), the response to tyramine was potentiated in 4 out of 5 animals.
Imipramine potentiated the response to tyramine at doses of 0.1 and 0.3 mg/kg in 3 out of 5 animals, but depressed the response to tyramine at a dose of 1 mg/kg in 4 out of 5 animals. Amitriptyline at a dose of 1 mg/kg depressed the response to tyramine significantly, while 0.3-1 mg/kg of clomipramine potentiated the effect.
Paroxetine, 0.1-1 mg/kg, minimized the decrease in systemic blood pressure evoked by acetylcholine (1 µg/kg) in 3 out of 5 animals, although the inhibition was not statistically significant when the changes in the 5 experiments were compared. Imipramine, amitriptyline and clomipramine, at doses of 0.1-1 mg/kg, did not show a clear-cut effect on the response to acetylcholine, although 3 out of 5 animals showed a slight potentiation of the response with clomipramine. 3. Effects of the antidepressants on atrio-ventricular conduction time Results are summarized in Fig. 6. Atrio-ventricular conduction time was prolonged by administration of paroxetine at doses of 1-10 mg/kg. Mean values of prolongation were 6.4 msec (5.7%), 9.8 msec (8.6%) and 27.8 msec (26.5%) at 1, 3 and 10 mg/kg, respectively. These changes became smaller in vagotomized and stellectomized dogs. Maximum prolongation occurred within 1 min of administration, and recovery to the pre-administration value took 6-20 min in both the nerve-intact and the vagotomized and stellectomized dogs.
Imipramine at a dose of 1 mg/kg shortened atrio-ventricular conduction time slightly, but at 3 and 10 mg/kg, it caused prolongation. The prolongation was similar to the change seen with paroxetine in the nerve-intact open-chest dogs, while a more marked prolongation was produced in the vagotomized and stellectomized animals. Amitriptyline caused a slight prolongation of atrio-ventricular conduction time at 1 mg/kg, while doses of 3 and 10 mg/kg resulted in a marked prolongation.
Clomipramine at doses of 1 and 3 mg/kg shortened atrio-ventricular conduction time, but at 10 mg/kg, a slight prolongation occurred.
Discussion
The pharmacological profiles and cardiotoxicities of tricyclic antidepressants have been extensively studied. Their well-known pharmacological actions are norepinephrine re-uptake inhibition at the adrenergic nerve ending, anticholinergic activity and a "quinidine-like" effect on cardiac tissue.
Inhibition of norepinephrine re-uptake increases heart rate and myocardial contractile force, both of which have been observed with therapeutic doses of tricyclic antidepressants (1, 2). The anticholinergic action may also result in an increase in heart rate besides the more frequently encountered consequence of dry mouth (3, 22). However, a clear anticholinergic effect was not demonstrated in the present study, not only for paroxetine but also for the other three drugs, by examination of the depressor effect of exogenous acetylcholine.
The reason for this discrepancy is not clear, but the dose of acetylcholine selected (1 µg/kg) might have exceeded the maximum effective dose.
Recently, the mechanism of the cardiotoxicity of tricyclic antidepressants, mainly due to conduction disturbances, has been considered by several authors to be a result of the combination of anticholinergic action and direct myocardial depressant action or "quinidine-like" action (11,12,(23)(24)(25). The anticholinergic effect may induce sinus tachycardia and shortening of atrio-ventricular conduction, while the myocardial depressant effect may induce intracardiac conduction disturbance resulting in a prolongation of PQ interval and QRS width. In the present study, the antidepressant drugs including paroxetine did not shorten the PQ interval as expected from their anticholinergic and sympathomimetic activities, but both the PQ interval and QRS width were prolonged at relatively higher doses; these changes are considered to be due to the disturbance of conduction mainly distal to the AV node (i.e., the His bundle and Purkinje systems) by a direct myocardial depressant action of the drug.
Paroxetine did not cause marked prolongation of the PQ interval up to 10 mg/kg, and at this dose, it caused only a simple prolongation (a slight degree of AV block), while similar doses of tricyclic antidepressants caused severe AV block and ventricular arrhythmia.
On the other hand, in open-chest dogs, 10 mg/kg of paroxetine clearly prolonged the atrio-ventricular conduction time. In the methods adopted in this experiment, atrio-ventricular conduction time was defined as the interval between the right auricular and ventricular apical electrograms; therefore, it involved intra-atrial and intra-ventricular conduction time besides the AV nodal conduction time.
Potentiation of the response in systemic blood pressure to norepinephrine by paroxetine was observed in the present study, while the drug diminished the response to tyramine. This diminution of the response to tyramine may reflect a depletion of the stored norepinephrine at nerve endings, induced by re-uptake inhibition by paroxetine. The slight increase in systemic blood pressure at small or middle doses (0.1-1 mg/kg) of paroxetine and the slight increase of heart rate in atropinized animals may also be due to norepinephrine re-uptake inhibition by the drug. Tanigaki et al. reported that paroxetine demonstrated a weaker inhibitory activity on norepinephrine uptake, with a potency about 1/1.6-1/4.5 that of tricyclic antidepressants, in rat hypothalamic synaptosomes (15). Nevertheless, paroxetine did inhibit norepinephrine uptake, although not so potently, in the above-mentioned report, and also in the present experiments, paroxetine enhanced the blood pressure response to norepinephrine, which may have been based on the inhibitory effect of the drug on norepinephrine re-uptake. | 2018-04-03T01:29:13.308Z | 1987-11-01T00:00:00.000 | {
"year": 1987,
"sha1": "24aedd415c35f68a46856aa9c6500f8e1c9fe6ae",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/jphs1951/45/3/45_3_335/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "4572113a03a2ef91df15f4cb5c6275bc75c88f1c",
"s2fieldsofstudy": [
"Psychology",
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
1817917 | pes2o/s2orc | v3-fos-license | Ethanol Enhances High-Salinity Stress Tolerance by Detoxifying Reactive Oxygen Species in Arabidopsis thaliana and Rice
High-salinity stress considerably affects plant growth and crop yield. Thus, developing techniques to enhance high-salinity stress tolerance in plants is important. In this study, we revealed that ethanol enhances high-salinity stress tolerance in Arabidopsis thaliana and rice. To elucidate the molecular mechanism underlying the ethanol-induced tolerance, we performed microarray analyses using A. thaliana seedlings. Our data indicated that the expression levels of 1,323 and 1,293 genes were upregulated by ethanol in the presence and absence of NaCl, respectively. The expression of reactive oxygen species (ROS) signaling-related genes associated with high-salinity tolerance was upregulated by ethanol under salt stress condition. Some of these genes encode ROS scavengers and transcription factors (e.g., AtZAT10 and AtZAT12). A RT-qPCR analysis confirmed that the expression levels of AtZAT10 and AtZAT12, as well as AtAPX1 and AtAPX2, which encode cytosolic ascorbate peroxidases (APX), were higher in ethanol-treated plants than in untreated control plants when exposed to high-salinity stress. Additionally, A. thaliana cytosolic APX activity was increased by ethanol in response to salinity stress. Moreover, histochemical analyses with 3,3′-diaminobenzidine (DAB) and nitro blue tetrazolium (NBT) revealed that ROS accumulation was inhibited by ethanol under salt stress condition in A. thaliana and rice, and the DAB staining data were further confirmed by measurements of the hydrogen peroxide (H2O2) content. These results suggest that ethanol enhances high-salinity stress tolerance by detoxifying ROS. Our findings may have implications for improving the salt-stress tolerance of agriculturally important field-grown crops.
INTRODUCTION
High-salinity stress is detrimental to plant growth and productivity, and causes considerable yield losses to economically important crops, thereby threatening sustainable agriculture (Shrivastava and Kumar, 2015). Thus, it is essential that methods to enhance high-salinity stress tolerance are developed. A recent study summarized that certain chemical compounds can be used to enhance plant stress tolerance (Savvides et al., 2016). Other studies confirmed that the application of exogenous chemical compounds enhance high-salinity stress tolerance in many plant species. These chemicals included phytohormones such as salicylic acid, methyl jasmonate, and strigolactone (Yoon et al., 2009;Ha et al., 2014;Khan et al., 2015). Epigenetic inhibitors, such as Ky-2 and suberoylanilide hydroxamic acid (Sako et al., 2016;Patanun et al., 2017), and other chemical compounds, including sodium nitroprusside, melatonin, and polyamines (Savvides et al., 2016), can also improve tolerance to salt stress condition.
One of the molecular effects of chemical compounds that enhance abiotic stress tolerance in plants involves the activation of antioxidant processes. Reactive oxygen species (ROS) are toxic to proteins, lipids, carbohydrates, and DNA, and ultimately lead to membrane damage and cell death (Gill and Tuteja, 2010). ROS generated by NADPH oxidase accumulate under stress conditions, leading to the production of singlet oxygen (1O2) and the superoxide anion radical (O2•−), which are converted to H2O2. The H2O2 is then converted to a hydroxyl radical (HO•) via the metal-dependent Haber-Weiss reaction or the Fenton reaction. Excess HO• can react with lipids and results in the degradation of the cell membrane, which is an important barrier that protects plant cells (Asada, 2006). ROS homeostasis is regulated by the antagonism between ROS producers and scavengers. Several reports have described the network of ROS signaling genes in Arabidopsis thaliana (Gadjev et al., 2006; Miller et al., 2010). Thus, the induction of genes encoding key enzymes that regulate ROS accumulation, such as superoxide dismutases (SODs), ascorbate peroxidases (APXs), catalases (CATs), and other peroxidases, is necessary to remove excess O2•− and H2O2 and ensure plant survival (Miller et al., 2010). Additionally, in A. thaliana, the expression of antioxidant defense genes is regulated by related transcription factors, including AtZAT10 and AtZAT12 (Rizhsky et al., 2004; Mittler et al., 2006; Miller et al., 2008).
Organic solvents, such as acetone, dimethyl sulfoxide (DMSO), N,N-dimethylformamide (DMF), ethanol, and methanol, are commonly used to dissolve compounds during experiments (Savvides et al., 2016). However, their effects on plant stress responses and tolerance have not been elucidated. Ethanol is a volatile, flammable, and colorless liquid with a slight odor. Ethanol fermentation is one of the fundamental processes occurring during plant stress responses, and is necessary for responses to low-oxygen stress conditions (Tadege et al., 1999). Endogenous ethanol is produced under anaerobic conditions as part of a fermentation pathway (Kimmerer and Kozlowski, 1982;Kimmerer and MacDonald, 1987). Although rice plants treated with exogenous ethanol have been reported to exhibit tolerance to chilling stress (Kato-Noguchi, 2008), it is unclear whether an ethanol treatment can enhance high-salinity stress tolerance in plants.
We herein provide new insights into the biological functions of ethanol influencing plant responses and tolerance to high-salinity stress. We revealed that the application of exogenous ethanol enhances high-salinity stress tolerance by regulating ROS-related genes and enhancing ROS detoxification.
Plant Materials and Growth Conditions
A. thaliana (ecotype Columbia-0) seeds were sterilized and sown in half-strength Murashige and Skoog (MS) liquid medium supplemented with 1% sucrose and 0.1% agar. The plants were grown under previously described conditions (Sako et al., 2016). Four-day-old plants were treated with ethanol (Wako, Japan), acetone (Wako, Japan), methanol (Wako, Japan), N,N-dimethylformamide (DMF) (Wako, Japan), dimethyl sulfoxide (DMSO) (Wako, Japan), or sterilized deionized water for 24 h, with or without a subsequent treatment with 100 mM NaCl (Wako, Japan). The NaCl solution was added into the medium containing the solvents (Figure S1A). The survival rate of 20 plants was calculated 4 days after the NaCl treatment. The experiment was conducted using three biological replicates.
For rice experiments, Oryza sativa L. cv. Nipponbare seeds were germinated in water at 30 °C for 2 days, and then transferred to plastic pots containing granular soil (Bonsoru No. 2; Sumitomo Chemical, Tokyo). The plants were grown in a vat filled with water at 30 °C for 2 weeks under a 14-h light:10-h dark photoperiod. For the salinity stress test, soil moisture was removed by leaving the pots on Kimtowel (Nippon Paper Crecia) for 20 min. The pots were then incubated in a 0, 0.3, or 0.6% (corresponding to 0, 51 or 103 mM, respectively) ethanol solution for 4 days. After the ethanol treatment, the pots were left on Kimtowel for 20 min to remove soil moisture. The pots were subsequently transferred to a 200 mM NaCl solution and incubated for 5 days. The data were analyzed from 12 plants for each treatment. The experiment was conducted using two independent biological replicates.
Measurement of Chlorophyll Content
Four-day-old A. thaliana plants were treated with 0.3% (51 mM) ethanol for 24 h and then exposed to 100 mM NaCl for 72 h. We then measured the chlorophyll content of 30-50 mg seedlings for each treatment as previously described (Kim et al., 2003). The experiment was conducted with three biological replicates. Statistical significance was determined by ANOVA, followed by post-hoc Tukey's tests. Means that differed significantly (P < 0.05) are indicated by different letters.
RNA Extraction
Total RNA was extracted from 5-day-old A. thaliana seedlings that were treated with 0.3% (51 mM) ethanol for 24 h, with or without a subsequent treatment with 100 mM NaCl for 2 h. Sterilized deionized water was used as a negative control. The RNA was extracted using the Plant RNA reagent (Thermo Fisher Scientific) as previously described (Nguyen et al., 2015). The quality of the extracted total RNA was evaluated using a Bioanalyzer system (Agilent). The RNA was extracted from 30 plants. The experiment was conducted using three biological replicates.
Microarray Analysis
A microarray analysis was completed as previously described (Nguyen et al., 2015). The microarray data were analyzed using a one-way ANOVA and were deposited in the GEO database (GEO ID: GSE95202). Each treatment was analyzed using four biological replicates. A total of 30 plants were used for each treatment. Genes with an expression log2 ratio ≥ 0.7 [t-test analysis, Benjamini-Hochberg correction (FDR) ≤ 0.05] were identified as upregulated genes.
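The selection rule just described (log2 ratio ≥ 0.7 with Benjamini-Hochberg FDR ≤ 0.05) can be reproduced with standard tools. The sketch below is illustrative only: the arrays are random placeholders, not the deposited GSE95202 data.

```python
# Filter genes by fold-change and Benjamini-Hochberg-adjusted p-value (FDR).
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
log2_ratio = rng.normal(0.0, 1.0, 1000)   # per-gene ethanol vs. control ratio
p_values = rng.uniform(0.0, 1.0, 1000)    # per-gene raw t-test p-values

reject, fdr, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
upregulated = (log2_ratio >= 0.7) & (fdr <= 0.05)
print(f"{int(upregulated.sum())} genes pass both thresholds")
```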
Ascorbate Peroxidase Assay
Five-day-old A. thaliana plants treated with 0.3% (51 mM) ethanol for 24 h, with or without a subsequent treatment of 100 mM NaCl for 12 h were used for an APX assay. The experiment was conducted using three biological replicates. Proteins were extracted from 30 plants and the APX assay was conducted as previously described (Bradford, 1976;Mostofa et al., 2015). The protein content was determined using bovine serum albumin as a standard.
Staining to Detect the Superoxide Anion and Hydrogen Peroxide
Five-day-old A. thaliana plants treated with 0.3% (51 mM) ethanol for 24 h, with or without a subsequent treatment with 100 mM NaCl for 12 h, were stained using a modified version of a published method (Kumar et al., 2014;Mostofa et al., 2015). To detect O2•−, plants were stained for 30 min with 0.05% NBT (w/v) in 50 mM potassium phosphate, pH 7.0. To detect H2O2, plants were stained for 5 h with 0.1% DAB in 10 mM potassium phosphate, pH 7.0. Samples were stained under light at room temperature, after which they were cleared with an ethanol:acetic acid (96:4) solution until photographed by a digital microscope (VHX-5000, Keyence). The experiment was conducted using three biological replicates. A total of 30 plants were used for each treatment.
For the rice experiments, 14-day-old O. sativa L. cv. Nipponbare plants treated with or without 0.3% (51 mM) ethanol for 24 h were exposed to 100 mM NaCl for 24 h. The second leaf was stained with NBT or DAB as previously described (Mostofa et al., 2015) and then photographed using the M165 FC fluorescent stereo microscope (Leica).
Measurement of Hydrogen Peroxide
Four-day-old A. thaliana plants were treated with 0.3% (51 mM) ethanol for 24 h and then exposed to 100 mM NaCl for 72 h. The H2O2 content was then measured as described previously (Ivanchenko et al., 2013). The experiment was conducted with three biological replicates.
For the rice experiments, 14-day-old O. sativa L. cv. Nipponbare plants were treated with or without 0.3% (51 mM) ethanol for 24 h and then exposed to 100 mM NaCl for 24 h. The second leaf was sampled for H2O2 content measurement as described previously (Ivanchenko et al., 2013). The experiment was conducted with three biological replicates.
Ethanol Enhances High-Salinity Stress Tolerance in Arabidopsis thaliana
We examined the effects of five organic solvents on A. thaliana high-salinity stress tolerance (Figures 1A,B, Figure S1A). Wild-type plants grown in liquid culture medium were treated with an organic solvent or water for 24 h, with or without a subsequent treatment with 100 mM NaCl for 4 days (Figure S1A). We observed that the ethanol treatment enhanced A. thaliana high-salinity stress tolerance (Figures 1A,B), although the plants appeared slightly yellow under the non-stress condition. In contrast, plants treated with acetone, methanol, DMF, or DMSO did not exhibit any significant morphological differences compared with the control plants under the non-stress condition. Additionally, these plants were unable to survive under the high-salinity stress condition (Figures 1A,B). The effects of various concentrations of organic solvents were also tested, as shown in Figure S1. The data showed that the organic solvents other than ethanol could not rescue plants from the high-salinity stress condition. Consistent with these results, we observed that the chlorophyll content was higher in ethanol-treated plants than in the untreated plants under the high-salinity stress condition (Figure 1C).
Because the ethanol-treated plants became slightly yellow, we also evaluated the survival rate of salinity-stressed plants after a 7-day recovery period, during which the plants were transferred to MS liquid medium ( Figure S2). A higher recovery rate was observed for ethanol-treated plants (93%) than for untreated seedlings (50%; Figure S2). Our data confirmed that ethanol enhances A. thaliana tolerance to high-salinity stress.
Microarray-Based Identification of Candidate Genes Associated with Ethanol-Mediated High-Salinity Tolerance
We analyzed the ethanol-induced gene expression levels associated with A. thaliana high-salinity stress tolerance using a microarray. Four-day-old plants treated with 0.3% (51 mM) ethanol or water for 24 h, with or without a subsequent treatment with 100 mM NaCl for 2 h, were examined (Figure 2A). We observed that 1,293 genes were more highly expressed in ethanol-treated plants than in the untreated control plants in the absence of high-salinity stress (Figure 2A, Table S1). Among these genes, 240 exhibited upregulated expression following a 2 h NaCl treatment in the absence of ethanol (Figure 2A, Table S2). We observed that 1,323 genes were more highly expressed in plants treated with NaCl in the presence of ethanol than in plants treated with NaCl in the absence of ethanol (Figure 2A, Table S3). Of these genes, 169 overlapped with salinity stress-upregulated genes (Figure 2A, Table S4), while 888 genes overlapped with ethanol-upregulated genes in the absence of NaCl (Figure 2A, Table S5). Furthermore, we detected 134 NaCl-inducible genes that were more highly expressed in ethanol-treated plants than in untreated controls, with or without NaCl treatment (Figure 2A, Table S6).

FIGURE 1 | Ethanol enhances high-salinity stress tolerance in Arabidopsis thaliana. (A) Phenotype of A. thaliana seedlings treated with 0.3% (51 mM) organic solvent, with or without a subsequent treatment with 100 mM NaCl for 4 days. Water was used as a negative control. Bars = 1 cm. (B) Survival rate under high-salinity condition in the presence or absence of various organic solvents. The survival rate of 20 plants was calculated 4 days after the NaCl treatment. The experiment was conducted using three biological replicates. Error bars represent the mean ± standard deviation (SD). Significance was determined according to Student's t-test. ***P < 0.00001. (C) Chlorophyll content in 0.3% (51 mM) ethanol-treated and untreated plants under high-salinity condition. Error bars represent the mean ± SD; three independent biological repeats were performed. Statistical significance was determined by ANOVA, followed by post-hoc Tukey's tests. Means that differed significantly (P < 0.05) are indicated by different letters.
For a more detailed analysis, we focused on the 994 overlapping genes highlighted in red in the Venn diagram presented in Figure 2A. Of these genes, 35 were related to ROS signaling (Table 1), including genes encoding ROS scavengers [e.g., APX, CAT, glutathione peroxidase (GPX), peroxiredoxin (PrxR), glutathione S-transferase (GST), and alternative oxidase (AOX)] and ROS-scavenging signaling molecules (e.g., ferritin and blue copper proteins, which inhibit the production of HO•, glutathione reductase, dehydroascorbate reductase, and glutaredoxin). In contrast, the expression of SOD genes encoding O2•− scavengers was unaffected by ethanol under high-salinity condition.
The expression of several transcription factor family genes involved in ROS signaling was also induced by ethanol in salt-stressed plants. These genes encoded C 2 H 2 zinc finger proteins (AtZAT6_AT5G04340, AtZAT10_AT1G27730, and AtZAT12_AT5G59820), WRKY proteins (AtWRKY6_ AT1G62300, AtWRKY25_AT2G30250, and AtWRKY33_ AT2G38470), a DREB protein (AtDREB19_AT2G38340), a heat shock factor protein (AtHsfA4A_AT4G18880), and NAC proteins (ANAC019_AT1G52890, ANAC102_ AT5G63790, and ANAC032_AT1G77450) (Tables S2, S5, S6). The AtZAT10 and AtZAT12 transcription factor genes were further analyzed by qRT-PCR. The expression levels of these two genes increased following ethanol and NaCl treatments ( Figure 2B). These observations suggest that the salt tolerance conferred by ethanol might be due to the increased production of ROS-related proteins and ROS signaling-related transcription factors, such as ZAT10 and ZAT12. (B) Relative AtZAT10 and AtZAT12 expression levels during a salinity stress treatment for 0 and 2 h in the presence or absence of 0.3% (51 mM) ethanol. The expression level of the unstressed plants treated with water was set as 1, and the ACT2 gene was used as an internal standard. Each treatment was analyzed using 30 plants. Three biological repeats were performed. Error bars represent the mean ± SD. Significance was determined according to Student's t-test. *P < 0.05; **P < 0.01; ***P < 0.001.
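The normalization described in the caption above (ACT2 as internal standard, with the unstressed water-treated sample set to 1) matches the standard 2^(−ΔΔCt) scheme. The sketch below assumes that scheme; the paper does not spell out its qRT-PCR arithmetic, and the Ct values used here are invented.

```python
# Livak 2^(-ddCt) fold change: normalise the target gene to the ACT2
# reference, then to the calibrator sample (unstressed, water-treated,
# defined as 1). All Ct values are hypothetical, for illustration only.
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of the target gene relative to the calibrator sample."""
    delta_ct = ct_target - ct_ref                # sample, normalised to ACT2
    delta_ct_cal = ct_target_cal - ct_ref_cal    # calibrator, normalised
    return 2.0 ** -(delta_ct - delta_ct_cal)

# e.g., AtZAT10 in an ethanol + NaCl sample vs. the water-treated calibrator:
print(relative_expression(ct_target=24.1, ct_ref=19.0,
                          ct_target_cal=27.5, ct_ref_cal=19.2))  # ~9.2-fold
```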
Ethanol Enhances the Detoxification of ROS under High-Salinity Stress Condition
To characterize the molecular functions of ethanol treatments, the expression of the ZAT10/12-related ROS scavenger genes was analyzed by qRT-PCR. We confirmed that AtAPX1 and AtAPX2 expression levels increased in response to ethanol and NaCl treatments (Figure 3A). To clarify whether ethanol regulates cytosolic APX activity, an APX enzyme assay was performed. At 12 h after the NaCl treatment, total APX activity was higher in ethanol-treated plants than in untreated controls under the high-salinity stress condition (Figure 3B). In contrast, no significant differences were observed between ethanol-treated and untreated control plants under the non-stress condition. Our data indicated that ethanol induces the transcription of APX2 and APX1, and enhances APX activity.
The staining data were further confirmed by measuring the H2O2 content in shoots. The results were consistent with the DAB staining data. Under the control condition, there were no significant differences between ethanol-treated and non-treated plants (Figure 4B). However, after a 12 h NaCl stress treatment, the ethanol-treated plants maintained H2O2 levels as stable as those under the control condition, while the ethanol-non-treated plants showed a higher concentration of H2O2 compared with the plants under the control condition and the ethanol-treated plants under the high-salinity stress condition (Figure 4B). These data indicate that ethanol enhances salinity stress tolerance by ROS detoxification in A. thaliana.
Ethanol Treatment Enhances High-Salinity Tolerance by Decreasing the Accumulation of ROS in Rice
To confirm whether ethanol enhances the tolerance of monocots to salt stress condition, 14-day-old rice seedlings were treated with several ethanol concentrations. We observed that the leaves of untreated plants turned slightly yellow 5 days after the NaCl treatment (Figure 5A). However, the leaves of salinity-stressed plants treated with 0.3 and 0.6% (51 and 103 mM, respectively) ethanol remained green, suggesting that ethanol enhances salinity stress tolerance in rice as well as in A. thaliana (Figures 1, 5A).

The expression level of the unstressed plants treated with water was set as 1, and the ACT2 gene was used as an internal standard. Each treatment was analyzed using 30 plants. Three biological repeats were performed. Error bars represent the mean ± SD. Significance was determined according to Student's t-test. *P < 0.05; **P < 0.001. (B) The APX activity during a 12 h salinity stress treatment in the presence or absence of 0.3% (51 mM) ethanol. Each treatment was analyzed using 30 plants. Three biological repeats were performed. Error bars represent the mean ± SD. Significance was determined according to Student's t-test. **P < 0.001.
Ethanol treatments enhanced ROS detoxification and improved the salt stress tolerance of A. thaliana plants. We used DAB and NBT staining to verify that the salinity stress tolerance in rice is due to the detoxification of ROS. Rice leaves were more extensively stained by DAB under salinity stress condition than under control condition (Figure 5B). However, the intensity of the DAB staining decreased in ethanol-treated plants under salt stress condition, suggesting that ethanol inhibited ROS accumulation (Figure 5B). Additionally, there was no clear difference in the NBT staining of ethanol-treated and control plants under high-salinity condition (Figure 5B). The H2O2 content of rice leaves was measured, and the data showed that the highest concentration was detected in plants treated with NaCl for 24 h. In contrast, the plants treated with both NaCl and ethanol showed a lower concentration of H2O2, similar to that under the control condition (Figure 5C). These results confirmed our DAB staining data, implying that ethanol increases salinity tolerance in rice by inhibiting ROS accumulation, similar to its effects in A. thaliana plants.
DISCUSSION
Our findings indicated that ethanol enhances high-salinity stress tolerance in A. thaliana and rice by detoxifying ROS (Figures 1, 5). Microarray and qRT-PCR analyses of A. thaliana revealed that ethanol upregulates the expression of AtZAT10, AtZAT12, AtAPX1, and AtAPX2 genes encoding transcription factors and the ROS scavenger under high-salinity condition (Figures 2B, 3A). Ethanol also increases APX activity in salt-stressed plants (Figure 3B), resulting in decreased H2O2 levels (Figures 4B, 5C).
In addition to APX, various H2O2-scavenging enzymes (e.g., CAT, GPX, GST, and PrxR) are reportedly involved in decreasing the excess H2O2 generated under stress conditions. Our microarray data indicated that the expression of several genes encoding H2O2 scavengers was upregulated by ethanol under high-salinity stress conditions (Figure 2A, Table 1), including CAT2, GPX6 and 7, GSTs, and PrxR D. These enzymes convert H2O2 to water to decrease the excess H2O2 content (Dixon, 2010). The upregulation of these ROS-scavenger-encoding genes may accelerate the decrease in toxic ROS content. However, during the conversion of H2O2 to water, glutathione and ascorbate are used as enzyme co-substrates, and are recycled by GR, GLR, and DHAR. The upregulated expression of GR, GLR, and DHAR may ensure there is a sufficient supply of ascorbate and glutathione for these enzymatic reactions. Glutathione S-transferase detoxifies lipid hydroperoxides and prevents ROS-induced cell membrane damage (Dixon, 2010). Our microarray data confirmed that the expression of 20 AtGST genes is upregulated by ethanol (Table 1). Of these genes, AtGSTU4 and AtGSTU19 help mediate high-salinity stress tolerance (Sharma et al., 2014;Xu et al., 2015). The overexpression of OsGSTU4 and AtGSTU19 in transgenic A. thaliana improves salinity stress tolerance by inhibiting the accumulation of ROS and increasing GST activity (Sharma et al., 2014;Xu et al., 2015). A previous study concluded that GST activity in pumpkin plants is highly induced by 50 mM ethanol, which is equivalent to 0.3% ethanol (Fujita and Hossain, 2003). This finding supports our observation that ethanol upregulates the expression of GST genes (Table 1) to potentially increase the abundance of GST. These results suggest that the application of exogenous ethanol regulates the expression of genes encoding H2O2-scavenging enzymes and their related signaling proteins in salt-stressed A. thaliana plants.
The maintenance of the steady state between H2O2 and O2•− levels, which is crucial for many molecular mechanisms in plant cells, is regulated by an appropriate balance between the associated scavenging activities (Miller et al., 2010). Our microarray data revealed that AtAOX1A expression is upregulated by ethanol under salinity stress condition (Table 1). In contrast, ethanol does not upregulate the expression of AtSOD genes, which encode enzymes responsible for catalyzing the dismutation of O2•− to H2O2 and oxygen. These observations are consistent with our NBT staining results, which indicated that the differences in O2•− content between ethanol-treated and untreated plants are minimal under salt stress condition (Figure 4A), and our DAB staining results, which revealed clear differences in H2O2 abundance (Figures 4A, 5B).
Thus, ethanol influences the elimination of H2O2 rather than O2•−. The expression of several ROS-scavenging signaling genes is also upregulated by ethanol under salt stress condition (Table 1). The upregulation of the genes encoding ferritin and blue copper proteins, which prevent the formation of the highly toxic HO• (Miller et al., 2010), might also contribute to ethanol-mediated ROS detoxification mechanisms.
In this study, the effects of five organic solvents were tested (ethanol, acetone, methanol, DMF, and DMSO). Among them, only ethanol enhanced high-salinity stress tolerance (Figures 1A,B). This phenotype might be caused by a specific function of ethanol that leads to enhancement of high-salinity tolerance. Previous studies reported that spraying methanol and ethanol onto tomato leaves enhanced plant growth under normal conditions and that root applications of 5% ethanol and methanol caused severe plant damage (Rowe et al., 1994). It is expected that the combination of application method and concentration of organic solvents has various effects on plant growth. Further experiments are necessary to analyze the detailed effects of organic solvents on plant growth.
The high-salinity stress tolerance test of Arabidopsis plants using the 24-well-plate system might cause hypoxia that leads to ethanol fermentation (Kato-Noguchi and Kugimiya, 2001). This raises the question of whether the high-salinity stress tolerance caused by ethanol is related to hypoxia or not. When we treated rice seedlings grown on soil (not soaked rice seedlings) with ethanol for the salt stress test, ethanol enhanced salt stress tolerance (Figure 5A). These data showed that the increased high-salinity tolerance conferred by ethanol is independent of a hypoxia effect.
We observed that the application of exogenous ethanol enhances rice tolerance to salt stress condition via the detoxification of H2O2 (Figure 5). In summary, the enhancement of high-salinity tolerance due to ethanol treatments might be conserved in dicot and monocot plants. Ethanol is a simple and inexpensive compound. Thus, it may be very useful for protecting important crops from high-salinity stress. Integrative omics-based studies may reveal additional factors affecting ethanol-mediated high-salinity stress tolerance.
AUTHOR CONTRIBUTIONS
HN, KS, AM, and MS designed the study. HN, KS, AM, MM, CH, YS, MT, and YH conducted the experiments. HN, KS, and AM analyzed the data. HN, KS, AM, LT, YH, and MS reviewed the data and wrote the manuscript. | 2017-07-14T18:52:46.046Z | 2017-07-03T00:00:00.000 | {
"year": 2017,
"sha1": "1aef10b8be2deb6383b421b51b25ad1f802d5e8b",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2017.01001/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1aef10b8be2deb6383b421b51b25ad1f802d5e8b",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
236628571 | pes2o/s2orc | v3-fos-license | The generating matrices of the bivariate Balancing and Lucas-Balancing polynomials
The objective of this paper is to express the bivariate Balancing and Lucas-Balancing polynomials in terms of determinants of tridiagonal matrices. In addition, we obtain the inverses of the tridiagonal matrices. We generalize these results to construct families of tridiagonal matrices whose determinants generate arbitrary linear subsequences, with positive and negative indices, of the bivariate Balancing and Lucas-Balancing polynomials.
Introduction
The study of number sequences has been a source of attraction to mathematicians since ancient times. Since then, many of them have focused their interest on the study of the fascinating triangular numbers. Behera and Panda in 1999 introduced the notion of Balancing numbers (B_n) as solutions of the Diophantine equation 1 + 2 + ... + (n-1) = (n+1) + (n+2) + ... + (n+r), for some positive integer r, which is called the balancer or cobalancing number (Behera and Panda, 1999; Panda, 2009). In recent years, many number theorists from all over the world have taken an interest in this beautiful number system and have studied generalizations of these numbers. Interested readers may follow (Frontczak, 2019; Ozkoc, 2015; Ozkoc and Tekcan, 2017; Patel et al., 2018; Ray, 2017; Ray, 2018; Yilmaz, 2020). The determinant of a tridiagonal matrix satisfies a three-term recurrence (Cahill and Narayan, 2004; Usmani, 1994a; 1994b): for a matrix with diagonal entries a_1, ..., a_n, superdiagonal entries b_1, ..., b_{n-1} and subdiagonal entries c_1, ..., c_{n-1}, the leading principal minors θ_i satisfy θ_i = a_i θ_{i-1} − b_{i-1} c_{i-1} θ_{i-2}, with the initial conditions θ_0 = 1 and θ_1 = a_1. Usmani expressed the inverse of a non-singular tridiagonal matrix in terms of θ_i and the analogous backward quantities φ_i, which satisfy φ_i = a_i φ_{i+1} − b_i c_i φ_{i+2} with φ_{n+1} = 1 and φ_n = a_n. Theorem 1.1 is one special case of this result. There are many relations between Fibonacci, Lucas, Balancing, Lucas-Balancing numbers and tridiagonal matrices (Chen, 2020; Falcon, 2013; Feng, 2011; Goy, 2018; Nalli and Civciv, 2009; Ozkoc, 2015; Ray, 2012; Ray and Panda, 2015; Taskara et al., 2011; Trojovsky, 2016; Yilmaz and Kirklar, 2015). As brief antecedents, the Fibonacci numbers were constructed as tridiagonal matrix determinants by Strang (Strang, 1997; 1998), and further authors generalized those results to establish families of tridiagonal matrices whose determinants create arbitrary linear subsequences of the Fibonacci and Lucas numbers (Nalli and Civciv, 2009). Additionally, in (Ray and Panda, 2015), the authors gave linear subsequences of the Balancing and Lucas-Balancing numbers by using the determinants of tridiagonal matrices.
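To make this connection concrete in the integer special case, the sketch below evaluates the θ-recurrence numerically. The specific entries (diagonal 6 with unit off-diagonals, and the (1,1) entry replaced by 3 for the second family) reproduce the Balancing and Lucas-Balancing numbers, following the standard constructions cited above; the code itself is an illustrative sketch, and the bivariate polynomial case presumably follows the same pattern with symbolic entries.

```python
# Continuant recurrence theta_i = a_i*theta_{i-1} - b_{i-1}*c_{i-1}*theta_{i-2}
# for the determinant of a tridiagonal matrix (theta_0 = 1, theta_1 = a_1).
def tridiag_det(a, b, c):
    """Determinant of the tridiagonal matrix with diagonal a, super b, sub c."""
    theta_prev, theta = 1, a[0]
    for i in range(1, len(a)):
        theta_prev, theta = theta, a[i] * theta - b[i - 1] * c[i - 1] * theta_prev
    return theta

# Diagonal (6, ..., 6) with unit off-diagonals: the m x m determinants give
# the Balancing numbers B_2, B_3, ... = 6, 35, 204, 1189, 6930, ...
print([tridiag_det([6] * m, [1] * (m - 1), [1] * (m - 1)) for m in range(1, 6)])
# Replacing the (1,1) entry by 3 gives the Lucas-Balancing numbers
# 3, 17, 99, 577, 3363, ...
print([tridiag_det([3] + [6] * (m - 1), [1] * (m - 1), [1] * (m - 1))
       for m in range(1, 6)])
```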
Under these conditions, the main aim of this work is to express the bivariate Balancing and Lucas-Balancing polynomials, with positive and negative indices, in terms of determinants of tridiagonal matrices. Then, we formulate the inverses of some of these matrices.
Main results
In this section, by a different approach, we present the bivariate Balancing and Lucas-Balancing polynomials in terms of the determinants of tridiagonal matrices. To do so, we first give the following proposition.
In the following proposition, we reveal the relationship between the bivariate Balancing and Lucas-Balancing numbers.
Proof. (i) From the equations above and by using the stated equality, the result follows. The proof of part (ii) is omitted, as it is analogous to that of part (i).
In the following theorem, we extend the results to construct families of tridiagonal matrices whose determinants generate any arbitrary linear subsequence of the bivariate Balancing and Lucas-Balancing polynomials.
Proof. We prove (i), since the proof of (ii) can be shown similarly. We use the principle of finite induction on n to prove Equation (7). For the inverse matrix of (E_n), the result follows from a suitable choice of the entries. The following theorem gives the bivariate Balancing and Lucas-Balancing polynomials with negative indices in terms of the determinants of special tridiagonal matrices.
Proof. The proof is similar to that of Theorem 2.1.
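The inverse constructions in this section rest on Usmani's closed form for the inverse of a non-singular tridiagonal matrix, stated in the introduction. The sketch below implements that general formula and verifies it numerically against direct inversion on a random example; it is a generic check, not an implementation of the specific matrices (E_n) appearing in the theorems.

```python
# Usmani's closed form: (T^{-1})_{ij} = (-1)^(i+j) b_i...b_{j-1} theta_{i-1}
# phi_{j+1} / theta_n for i <= j, with the analogous c-product for i > j,
# where theta and phi are the forward/backward continuants.
import numpy as np

def usmani_inverse(a, b, c):
    n = len(a)
    theta = np.empty(n + 1)
    theta[0], theta[1] = 1.0, a[0]
    for i in range(2, n + 1):
        theta[i] = a[i - 1] * theta[i - 1] - b[i - 2] * c[i - 2] * theta[i - 2]
    phi = np.empty(n + 2)
    phi[n + 1], phi[n] = 1.0, a[n - 1]
    for i in range(n - 1, 0, -1):
        phi[i] = a[i - 1] * phi[i + 1] - b[i - 1] * c[i - 1] * phi[i + 2]
    inv = np.empty((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i <= j:
                inv[i - 1, j - 1] = ((-1) ** (i + j) * np.prod(b[i - 1:j - 1])
                                     * theta[i - 1] * phi[j + 1] / theta[n])
            else:
                inv[i - 1, j - 1] = ((-1) ** (i + j) * np.prod(c[j - 1:i - 1])
                                     * theta[j - 1] * phi[i + 1] / theta[n])
    return inv

rng = np.random.default_rng(2)
n = 6
a = rng.uniform(2.0, 4.0, n)              # diagonally dominant => non-singular
b, c = rng.uniform(0.0, 1.0, n - 1), rng.uniform(0.0, 1.0, n - 1)
T = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
print(np.allclose(usmani_inverse(a, b, c), np.linalg.inv(T)))  # expect True
```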
Conclusion
In this paper, to obtain the bivariate Balancing and Lucas-Balancing polynomials, we defined tridiagonal matrices and presented the determinants and inverses of these matrices. The results in Section 2 of this paper offer a convenient means to cross-check and to obtain some new properties of these sequences, which is the key goal of this paper. In this way, we extend some recent results in the literature; in particular, the Balancing and Lucas-Balancing numbers are recovered as a special case of the bivariate polynomials under a suitable specialization of (x, y).
In future studies on tridiagonal matrices for number sequences, we hope that the following topics will bring new insight. It would be interesting to investigate tridiagonal matrices for generalized Balancing, bivariate cobalancing and Lucas-Balancing polynomials. | 2021-08-02T00:06:17.240Z | 2021-05-03T00:00:00.000 | {
"year": 2021,
"sha1": "9827138308e08c642e7c3de87e1e8ea2e9a9e847",
"oa_license": null,
"oa_url": "https://dergipark.org.tr/en/download/article-file/1449591",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "b32ef0cdbe76fee0671b661858f0a39164a0c574",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
211988803 | pes2o/s2orc | v3-fos-license | Collective vibrations of a hydrodynamic active lattice
Recent experiments show that quasi-one-dimensional lattices of self-propelled droplets exhibit collective instabilities in the form of out-of-phase oscillations and solitary-like waves. This hydrodynamic lattice is driven by the external forcing of a vertically vibrating fluid bath, which invokes a field of subcritical Faraday waves on the bath surface, mediating the spatio-temporal droplet coupling. By modelling the droplet lattice as a memory-endowed system with spatially nonlocal coupling, we herein rationalise the form and onset of instability in this new class of dynamical oscillator. We identify the memory-driven instability of the lattice as a function of the number of droplets, and determine equispaced lattice configurations precluded by geometrical constraints. Each memory-driven instability is then classified as either a super- or sub-critical Hopf bifurcation via a systematic weakly nonlinear analysis, rationalising experimental observations. We further discover a previously unreported symmetry-breaking instability, manifest as an oscillatory-rotary motion of the lattice. Numerical simulations support our findings and prompt further investigations of this nonlinear dynamical system.
Introduction
Classifying emergent properties of many-body systems, both passive and active, is a central theme of contemporary soft matter physics [1,2]. In recent years, myriad systems have emerged whose complex dynamics at the macroscale originate from the properties of their constituent particles, such as their shape, polarity, and activity or locomotion. Examples include the complex flows arising from active stresses exerted by bacteria suspended in fluid [3,4]; vortex generation and nonlinear wave propagation in colloidal fluids [5,6,7,8]; and phonon-like excitations in crystals of particles and droplets confined to microfluidic channels [9,10,11,12,13]. In non-fluidic systems, theoretical models of active nonlinear lattices are shown to exhibit instabilities in the form of out-of-phase oscillations and solitary waves [14,15,16,17,18,19], prompting experimental analogues in the form of active electronic circuits [20,21,22,23].
We here focus on rationalising the instability of a new class of active oscillator [24], whose active units are self-propelled millimetric droplets bouncing on the surface of a vertically vibrating bath of viscous fluid [25,26]. In the absence of drops, the fluid interface remains flat below the Faraday threshold, the critical vibrational acceleration at which Faraday waves spontaneously appear on the bath surface. When placed on the surface, a millimetric droplet may bounce periodically in resonance with its self-generated, subcritical, subharmonic Faraday wave field (with a characteristic wavelength λ_F), exciting waves at each impact. Above a critical vibrational acceleration, the droplet destabilises to small, lateral perturbations. In this regime, dissipative effects due to drag are overcome by propulsive forces enacted by the slope of the droplet's guiding wave field, its so-called pilot wave (Figures 1(a) and (b)). Herein lies the active component of the droplet motion: self-propulsion is achieved by continual exchange of energy of the droplet with its environment, in this case the vibrating bath. Moreover, as the bath's vibrational acceleration is increased progressively, the decay time of the pilot wave lengthens and the droplet is thus influenced by more of its past, increasing the so-called memory time of the droplet's motion [27]. Memory and self-propulsion are thus intimately connected, the former regulating the strength of the propulsive wave force required to overcome dissipation.
As demonstrated in [24], when confined to a submerged annular channel, droplets may couple to form an effectively one-dimensional lattice (Figure 1(c)). Each droplet in the lattice bounces in periodic synchrony with period T F , generating a quasi-monochromatic wave field whose superposition over all the droplets mediates the spatio-temporal coupling of the lattice. For sufficiently large forcing, experiments show that the droplet lattice exhibits collective vibrations in the form of small-amplitude out-of-phase oscillations or solitary-like waves, depending on the proximity of neighbouring droplets. Moreover, the transition to each of the aforementioned states may be characterised by the onset of a super-and sub-critical Hopf bifurcation, respectively.
Oscillations of the lattice emerge from the competition between droplet self-propulsion (arising through the interaction of each droplet with the slope of its own wave field) and wave-mediated coupling between droplets. This latter phenomenon represents a distinguishing feature of this new class of coupled oscillator: the waves produced at each droplet impact with the bath give rise to an effective self-generated, dynamic coupling potential between droplets (vis-à-vis lattices subject to an externally applied potential). The existence of this dynamic potential has far-reaching consequences; for example, (i) there exist lattice equilibria in which the droplets sit at the local maxima of the potential, and (ii) the dynamic potential encodes the memory of the system, storing information regarding each droplet's past trajectory. While oscillators containing spatial and temporal nonlocality have received significant theoretical attention [28,29,30,31,32], experimental, mechanical analogues are relatively scarce, limited to networks of coupled pendula [33,34].
Through a systematic analysis of the simplest possible model of the hydrodynamic lattice, introduced in §2, we herein seek to delineate the conditions under which qualitatively different bifurcations can arise, elucidating the key mechanisms underpinning the emergent, collective properties of this new class of dynamical oscillator. In §3, we identify the memory-driven instability of the lattice, as well as equispaced configurations rendered uniformly unstable at all memory by the form of the lattice wave field. We rationalise the latter in the low-memory regime through an energy-like quantity that the system attempts to minimise. A weakly nonlinear analysis (§4) then prescribes to each memory-driven instability a super- or sub-critical Hopf bifurcation, rationalising experimental observations [24]. Further, we show that symmetry breaking leads to a previously unreported self-induced, oscillatory-rotary motion of the lattice. Our analysis concludes in §5 with an exploration of how the transition to each of the aforementioned instabilities is controlled by the droplet separation distance. We note that, while our study is focused on the regime where droplet motion is confined to an annular channel, the results of §§2-5 are also appropriate for describing azimuthal oscillations of droplet rings in free space [35], allowing us to rationalise behaviour in this related system. Finally, the results of our paper are summarised in §6, along with proposals for future work.
Active lattice model
We begin by constructing a model of the droplet lattice, valid below and near to the onset of instability, by first introducing some simplifying assumptions. In the physical regimes of interest, the time scale of the droplets' vertical motion, T_F, is typically much less than the time scale of horizontal motion. We thus approximate the wave field generated at impact by each droplet as the time-periodic wave field generated by a single, stationary, bouncing droplet at a horizontal position x = x_p. The wave field is normalised to account for the exponential decay time, T_M, of the Faraday waves [36]. We next exploit the periodicity (and synchrony) of the droplets' vertical motion to average the droplets' horizontal motion over one bouncing period, T_F, giving rise to the stroboscopic wave field kernel H [37,38]. We note that the presence of the annular subsurface topography in experiments means that H is not translationally invariant; the wave kernel depends on where the droplet resides in the channel, x_p, and so H = H(x; x_p). However, the geometry of the system is rotationally invariant; thus, in polar coordinates (whose origin coincides with the centre of the annulus) we have H = H(r; r_p; θ − θ_p) (see Figure 2(a)). The linear superposition of the waves generated by N droplets yields the global wave field, the dynamic potential, which mediates the spatio-temporal coupling of the lattice.
The droplets are propelled by the wave field generated over all prior impacts and their motion is countered by a linear drag [37]. For N droplets at positions x_1(t), ..., x_N(t) and global wave field h(x, t), amalgamating the foregoing assumptions yields the many-droplet stroboscopic model [38,39], generalised to account for submerged topography:
$$m\ddot{\mathbf{x}}_n + D\dot{\mathbf{x}}_n = -mg\,\nabla h(\mathbf{x}_n, t), \qquad \frac{\partial h}{\partial t} + \frac{h}{T_M} = \frac{1}{T_F}\sum_{m=1}^{N} H(\mathbf{x}; \mathbf{x}_m(t)), \tag{1}$$
where m is the droplet mass, D is a drag coefficient, g is acceleration due to gravity, and dots denote differentiation with respect to time t.
In experiments, it is observed that the deviation from circumferential droplet motion is small [24]. Using this fact, we recast the two-dimensional horizontal droplet motion in terms of the arc length x along a circle of constant radius R (which, in experiments, is controlled by the radius of the submerged channel). The projection of the stroboscopic bouncer wave field, H, onto this circle is then H(x) = H(R e_r(x); R e_r(0)), where the radial unit vector e_r(x) is defined as e_r(x) = (cos(x/R), sin(x/R)). Further, the rotational invariance of the system renders H = H(x − x_p), where the wave kernel H(x) is periodic with period L = 2πR. The model (1) then reduces to its one-dimensional analogue (2), with the kernel H(x − x_m) in place of H(x; x_m). Our model is closed by selecting a particular form of the wave kernel H(x) = H(x + L). In principle, H may be generated using the fluid mechanical model of Durey et al. [40], in which the fluid evolution over the submerged annular channel is explicitly calculated (see Figure 2(a)). However, there are many subtle aspects of the wave field that obfuscate the key mechanisms underlying the dynamics of the hydrodynamic lattice. These include variations in the wave-field amplitude arising due to the particular bouncing phase of the drops [36], and memory-dependent changes to the exponential decay length of the pilot wave [41,42,43,44]. Our theory is instead developed for a general, flexible, periodic wave kernel, H(x). For exploration and demonstration purposes, we simply define a wave kernel that exhibits the fundamental aspects of the fluid system: namely, a quasi-monochromatic wave field endowed with exponential spatial decay. Moreover, as arises from the physics of each bouncing droplet [37,38], the stroboscopic wave kernel, H, exhibits a peak at the droplet position, which ultimately leads to stable lattice configurations wherein each droplet sits at a local maximum of the dynamic potential, h. An application of the results of this study to the specific fluid system explored in experiments is the subject of future work.
To inform our choice of candidate wave kernel, H(x), we observe numerically that topographically induced deviations in H(x; x_p) from an axisymmetric wave kernel about x_p are weak (see Figure 2(a)). For simplicity, we thus consider H(x; x_p) = F_0(|x − x_p|) for our simulations, where a candidate axisymmetric wave form F_0(r) is
$$F_0(r) = A_0\,J_0(k_F r)\,\mathrm{e}^{-r/l_d}. \tag{3}$$
Here, J_0 is the zeroth-order Bessel function, A_0 is the wave amplitude, k_F = 2π/λ_F is the Faraday wavenumber, and l_d is a tunable spatial decay length, which in turn determines the strength of the nonlocal droplet coupling. The wave kernel, H(x), is then defined as the radial cut of H(x; x_p) along a circle of constant radius R, specifically
$$H(x) = F_0\!\left(2R\sin\frac{x}{2R}\right), \tag{4}$$
as demonstrated by the black circle in Figure 2(b). A further advantage of (4) is that it allows us to easily explore the implications of varying the lattice radius, R, continuously (§5). To verify this choice, we tested this generic wave kernel against the numerically computed wave kernel [40], which demonstrated sufficient qualitative similarity to capture the dynamical motion of the active lattice. Finally, we emphasise that the analysis presented herein is independent of the particular choice of H, provided that H is periodic and sufficiently smooth, with oscillations arising on a length scale similar to the Faraday wavelength.
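For concreteness, the kernel construction can be scripted in a few lines of Python. This is a minimal sketch working directly in the dimensionless units introduced below (lengths scaled by λ_F, so k_F = 2π); the Bessel-times-exponential form of F_0 is our assumed stand-in for the candidate wave form, chosen only to match the quasi-monochromatic, exponentially decaying, peaked profile described above.

```python
import numpy as np
from scipy.special import j0

# Dimensionless parameters quoted in the text (lengths scaled by the
# Faraday wavelength lambda_F, so the Faraday wavenumber is 2*pi)
l, A, r0 = 1.6, 0.1, 5.4
k_F = 2.0 * np.pi

def F0(r):
    """Assumed candidate axisymmetric wave form: quasi-monochromatic
    Bessel profile with exponential spatial decay."""
    return A * j0(k_F * r) * np.exp(-r / l)

def H(x):
    """Periodic wave kernel: radial cut of F0 along a circle of radius r0.
    The chord distance between arc-length positions 0 and x is
    2*r0*sin(x/(2*r0)), making H periodic with period 2*pi*r0."""
    return F0(np.abs(2.0 * r0 * np.sin(x / (2.0 * r0))))

x = np.linspace(0.0, 2.0 * np.pi * r0, 2048)
print(H(x).max(), H(0.0))  # the kernel peaks at the droplet position x = 0
```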
The dimensionless parameters are l = l_d/λ_F, r_0 = R/λ_F, ν = t_0/T_M > 0, and A = A_0/H_0, while the dimensionless circumference of the lattice is Λ = L/λ_F = 2πr_0. Unless stated otherwise, for the numerical results presented herein, we fix l = 1.6, A = 0.1, and r_0 = 5.4, based on typical experimental values.
To summarise, waves are generated about the current position of each droplet and superpose to form the global wave field, h(x, t). The spatio-temporal evolution of h(x, t) is regulated by (5b) and depends on the trajectories of each droplet, imprinting a path-memory on the system. Each droplet is then driven by the local gradient of h(x, t) through (5a). The effects of memory in our dimensionless system (5) are encoded through the dimensionless dissipation rate ν ∼ 1/T_M. While ν is convenient algebraically, we will interpret our results using a more natural measure of memory, M = 1/ν, where larger M means that past dynamics play a more prominent role.
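To make this feedback loop concrete, the sketch below time-steps one plausible reading of the dimensionless system (5), namely ẍ_n + ẋ_n = −∂h/∂x(x_n, t) together with ∂h/∂t + νh = Σ_m H(x − x_m); this precise form of (5), the semi-implicit Euler scheme, and the grid sizes are our assumptions (the production simulations referenced later use a dedicated Fourier spectral code). The kernel H is reused from the earlier snippet.

```python
import numpy as np

nu = 0.5                        # dimensionless dissipation rate (nu = 1/M)
Lam = 2 * np.pi * r0            # lattice circumference (r0 = 5.4 above)
N, Ng = 20, 512                 # number of droplets; spatial grid points
dt = 1e-2
xg = np.linspace(0.0, Lam, Ng, endpoint=False)
kg = 2 * np.pi * np.fft.rfftfreq(Ng, d=Lam / Ng)  # angular wavenumbers

x = Lam / N * np.arange(N) + 1e-3 * np.random.randn(N)  # perturbed lattice
v = np.zeros(N)
h = H(xg[:, None] - x[None, :]).sum(axis=1) / nu        # steady wave field

for _ in range(5000):
    # droplet forcing: spectral derivative of h, interpolated to droplets
    dh = np.fft.irfft(1j * kg * np.fft.rfft(h), n=Ng)
    grad = np.interp(np.mod(x, Lam), xg, dh, period=Lam)
    v += dt * (-v - grad)       # assumed (5a): x'' + x' = -h_x(x_n, t)
    x += dt * v
    h += dt * (-nu * h + H(xg[:, None] - x[None, :]).sum(axis=1))  # (5b)
```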
Memory-driven and geometric instability
To determine the critical value of the memory M = M_c at which the wave force promotes self-propulsion of the droplets, we analyse the linear stability of (5) to small perturbations about a static, equispaced lattice configuration, coinciding with experiments [24]. Initially, the droplets are positioned at x_n = nδ, where δ = Λ/N. Equation (5b) then yields the corresponding free-surface elevation
$$h_0(x) = \frac{1}{\nu}\sum_{m=1}^{N} H(x - m\delta). \tag{7}$$
By symmetry, the free-surface gradient beneath each droplet vanishes in this steady configuration. Furthermore, we note from (7) that one effect of increasing the memory, M = 1/ν, is to increase the wave amplitude of the steady lattice. Upon consideration of a small perturbation to each droplet position, x_n, and the concomitant perturbation to the wave field, h, we set x_n(t) = nδ + η\hat{x}_n(t) and h(x, t) = h_0(x) + η\hat{h}(x, t), where 0 < η ≪ 1. By substituting this form into (5) and linearising, we obtain the linearised system (8) for the droplet perturbations (8a) and the wave perturbation (8b), where we have dropped the carets on the perturbed variables, and primes denote differentiation with respect to x. The difficulty in analysing (8) rests in the temporal nonlocality of the free-surface perturbation, h(x, t). To circumnavigate this issue and project the dynamics entirely onto the droplet trajectories, x_n(t), we use the form of (8b) to define the auxiliary variables \bar{x}_n(t) satisfying
$$\dot{\bar{x}}_n + \nu\bar{x}_n = x_n \quad \text{for } n = 1, \ldots, N, \tag{9}$$
a procedure that we also apply throughout the weakly nonlinear analysis presented in §4. It then follows from (8b) that the evolution of the perturbed free-surface elevation, h(x, t), may now be expressed in terms of the auxiliary variables, \bar{x}_n(t). Specifically, we obtain the particular solution
$$h(x, t) = -\sum_{m=1}^{N} H'(x - m\delta)\,\bar{x}_m(t). \tag{10}$$
The homogeneous solution to (8b) decays exponentially in time, and thus plays no role in either the linear or weakly nonlinear stability (§4) of the system. Having expressed h in terms of the auxiliary variables \bar{x}_n, we recast (8) as a reduced dynamical system for the variables x_n and \bar{x}_n.
Substituting (10) into (8a), and using (7), we find that L_n(x) = 0, where x = (x_1, ..., x_N). The linear operator L_n is defined as
$$\mathcal{L}_n(\mathbf{x}) = \ddot{x}_n + \dot{x}_n + \frac{1}{\nu}\sum_{m=1}^{N} H''((n-m)\delta)\,x_n - \sum_{m=1}^{N} H''((n-m)\delta)\,\bar{x}_m, \tag{11}$$
which, together with equations (9), constitutes a linear system of 2N ordinary differential equations describing the evolution of x_n from the steady state. Equations (9) and (11) may be solved using standard eigenmode methods, the periodicity of the steady lattice prompting the ansatz
$$x_n(t) = A\,\mathrm{e}^{\mathrm{i}kn\alpha + \lambda_k t} + \mathrm{c.c.}, \qquad \bar{x}_n(t) = \frac{A}{\lambda_k + \nu}\,\mathrm{e}^{\mathrm{i}kn\alpha + \lambda_k t} + \mathrm{c.c.}, \tag{12}$$
where the form of \bar{x}_n follows from (9). Here we have defined the angular spacing parameter α = 2π/N, imaginary unit i, and complex amplitude A, where c.c. denotes complex conjugation of the preceding term. By symmetry considerations, we restrict our attention to the wavenumbers k = 0, ..., N̄, where N̄ = ⌊N/2⌋. By substituting (12) into (11), we obtain the dispersion relation
$$D_k(\lambda_k; \nu) \equiv \lambda_k^2 + \lambda_k + \frac{c_0}{\nu} - \frac{c_k}{\lambda_k + \nu} = 0. \tag{13}$$
The real constants c_k are defined as
$$c_k = \sum_{m=1}^{N} H''(m\delta)\cos(km\alpha),$$
which may be interpreted as the discrete cosine transform coefficients of the even and periodic function H''(x), arising from the discrete convolution in equation (11). Upon rearranging D_k(λ_k; ν) = 0 and writing ν = 1/M, the eigenvalues, λ_k, describing the asymptotic linear stability of (8), satisfy the cubic polynomial
$$M\lambda_k^3 + (M + 1)\lambda_k^2 + (1 + c_0 M^2)\lambda_k + (c_0 - c_k)M = 0, \tag{14}$$
whose roots we typically compute numerically. The uniform lattice is asymptotically unstable if, for some wavenumber k, there exists an eigenvalue λ_k satisfying Re(λ_k) > 0. A fundamental property of this lattice is its rotational invariance, characterised by the eigenvalue λ_0 = 0, which may allow for a slow drift of the lattice in the weakly nonlinear regime (see §4).
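The linear stability computation is straightforward to script. The sketch below assumes the reading of c_k and of the cubic (14) given above, reuses the kernel H from the earlier snippet, and estimates H'' by central differences with a hypothetical step eps.

```python
import numpy as np

def c_coeffs(Hpp, N, delta):
    """c_k = sum_{m=1}^{N} H''(m*delta) * cos(k*m*alpha), k = 0..N//2."""
    alpha = 2 * np.pi / N
    m = np.arange(1, N + 1)
    return np.array([np.sum(Hpp(m * delta) * np.cos(k * m * alpha))
                     for k in range(N // 2 + 1)])

def dominant_eigenvalues(c, M):
    """Root of the cubic (14) with largest real part, for each k."""
    c0, lams = c[0], []
    for ck in c:
        roots = np.roots([M, M + 1, 1 + c0 * M**2, (c0 - ck) * M])
        lams.append(roots[np.argmax(roots.real)])
    return np.array(lams)

# H'' by central differences of the kernel H defined earlier
eps = 1e-4
Hpp = lambda y: (H(y + eps) - 2 * H(y) + H(y - eps)) / eps**2
c = c_coeffs(Hpp, N=20, delta=2 * np.pi * 5.4 / 20)
lam = dominant_eigenvalues(c, M=2.0)
# k = 0 carries a neutral (rotation) mode at exactly zero, hence the tolerance
print("unstable" if lam.real.max() > 1e-8 else "stable")
```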
It transpires that the lattice may destabilise via two distinct mechanisms: (i) a memory-independent instability, where the lattice wave field destabilises the uniform configuration for all memory M > 0 (so M_c = 0); and (ii) a memory-driven instability when the wave force exceeds the drag force, prompting propulsion of the droplets for M > M_c > 0 (to be determined). Physically, case (i) arises from geometrical frustration of the lattice wave field: for particular lattice configurations, the lattice wave field precludes the formation of a stable, equispaced lattice, forcing the droplets to occupy a nearby equilibrium configuration. Henceforth, we will thus refer to case (i) as a geometric instability. The memory-driven instability (ii) may be further decomposed into two subcategories: instability via a real eigenvalue, giving rise to the steady rotation of the lattice, analogous to the dynamics of a single walking droplet [38]; and oscillatory instabilities arising for droplets in close proximity, akin to those observed in experiments on this lattice system [24]. We discuss each of the foregoing cases and their implications in the following three sections.
Geometric instability
For short memory M ≪ 1 (equivalently, ν ≫ 1), the polynomial (14) admits three real roots, with leading-order behaviour
$$\lambda \sim -\frac{1}{M}, \qquad \lambda \sim -1, \qquad \lambda_k \sim (c_k - c_0)M. \tag{15}$$
Thus, if there exists a wavenumber k* such that c_{k*} > c_0, then λ_{k*} > 0 and the lattice is unconditionally unstable for M ≪ 1. In fact, the lattice is unconditionally unstable for all M > 0 when c_k > c_0, the negativity of the constant term in (14) always giving rise to a positive real root. While the asymptotic results (15) provide a mathematical rationale for this memory-independent instability, they provide little in the way of physical intuition, which we presently supply.
We first introduce the rescaled variables τ = Mt and 𝓗 = h/M and substitute into equations (5). We then consider the low-memory limit M ≪ 1, leading us to neglect the highest-order time derivative in each equation. The result is overdamped, gradient-driven motion for each droplet, described by
$$\frac{\mathrm{d}x_n}{\mathrm{d}\tau} = -\frac{\partial \mathcal{H}}{\partial x}(x_n, \tau), \tag{16}$$
where the wave field is 𝓗(x, τ) = Σ_{m=1}^{N} H(x − x_m(τ)). If all but one of the droplets were static, then the remaining droplet would evolve in response to a fixed potential prescribed by the wave field. However, due to the spatially nonlocal coupling of the lattice, this motion in turn alters the wave force acting on all other droplets, prompting propulsion. As such, it is necessary to consider the global dynamics of the system. We characterise the dynamics by the mean height of the wave field beneath each droplet,
$$\langle\mathcal{H}\rangle(\tau) = \frac{1}{N}\sum_{n=1}^{N}\mathcal{H}(x_n(\tau), \tau). \tag{17a}$$
Using (16) and that H(x) is an even, periodic function, it is readily verified that
$$\frac{\mathrm{d}\langle\mathcal{H}\rangle}{\mathrm{d}\tau} = -\frac{2}{N}\sum_{n=1}^{N}\left(\frac{\mathrm{d}x_n}{\mathrm{d}\tau}\right)^{2} \leq 0, \tag{17b}$$
indicating that the mean wave height, ⟨𝓗⟩, decreases along droplet trajectories (except at lattice equilibria). This phenomenon was also observed numerically in simulations of two-dimensional lattices [35]. Indeed, the foregoing argument readily generalises to this latter case, and thus may be applied to this wider class of system [45,46,47,48]. We now provide an alternative interpretation of the geometric instability in terms of the energy-like quantity ⟨𝓗⟩. When perturbing about the equispaced lattice, with x_n(τ) = nδ + η x̃_n(τ) (with 0 < η ≪ 1), we compute the change in ⟨𝓗⟩ from the steady state ⟨𝓗⟩_0 = Σ_{n=1}^{N} H(δn). By substituting this ansatz into (17a) and Taylor expanding in powers of η, we obtain
$$\langle\mathcal{H}\rangle = \langle\mathcal{H}\rangle_0 + \frac{\eta^2}{N}\,\tilde{\mathbf{x}}^{\mathsf{T}}(c_0 I - A)\,\tilde{\mathbf{x}} + O(\eta^3),$$
where x̃ = (x̃_1, ..., x̃_N)ᵀ, I is the identity matrix, and A is the symmetric, circulant matrix with entries A_nm = H''(δ(n − m)). The stability of the equispaced lattice thus depends entirely on the definiteness of the matrix (c_0 I − A), where negative eigenvalues indicate instability, since ⟨𝓗⟩ can decrease in the direction of the corresponding eigenvector. Due to the form of A, its eigenvalues are c_k (for k = 0, ..., N − 1), with corresponding eigenvectors v_k whose n-th element is e^{iknα} (consistent with the normal-modes ansatz in (12)). Hence, the energy may decrease if there exists k* such that c_0 − c_{k*} < 0 (recall (15)), rendering the steady lattice a saddle. When multiple wavenumbers satisfy this condition, the lattice rearranges in the direction of steepest descent of this energy-like quantity, ⟨𝓗⟩, prescribed by the wavenumber k* that minimises c_0 − c_k. When N is even and k* = N/2, symmetry dictates that the droplets approach a lattice whose separation distances alternate in size. For other values of k*, more exotic, asymmetric lattice structures emerge, such as the 31-droplet lattice presented in Figure 3. In fact, such an instability can even affect two well-separated droplets (specifically, δ ≫ l), despite the waves felt by the other droplet being exponentially small, emphasising the extent of the spatially nonlocal coupling.
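A short simulation of the gradient flow (16) illustrates the Lyapunov-like role of ⟨𝓗⟩. The sketch below reuses the kernel H from above, with a hypothetical finite-difference step eps; it relaxes a perturbed 31-droplet lattice and monitors the mean wave height, which should be non-increasing up to discretisation error.

```python
import numpy as np

N, r0 = 31, 5.4
Lam = 2 * np.pi * r0
eps, dtau = 1e-4, 5e-2                 # finite-difference step; time step
rng = np.random.default_rng(0)
x = Lam / N * np.arange(N) + 1e-2 * rng.standard_normal(N)

def grad_at(xq, xs):
    """d/dx of sum_m H(x - x_m), evaluated at x = xq."""
    d = xq - xs
    return np.sum(H(d + eps) - H(d - eps)) / (2 * eps)

def mean_height(xs):
    """<H>: mean lattice wave height beneath the droplets (includes the
    constant self-interaction offset H(0), which does not affect trends)."""
    return np.mean([np.sum(H(xn - xs)) for xn in xs])

heights = []
for _ in range(2000):
    x = x - dtau * np.array([grad_at(xn, x) for xn in x])
    heights.append(mean_height(x))

# <H> decreases along trajectories, consistent with (17b)
print(heights[0], heights[-1])
```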
Onset of collective walking
If the lattice configuration is not geometrically unstable, the system may destabilise to collective walking when the memory, M , increases above a critical threshold (to be determined).
One such instability arises when a real eigenvalue of (14) passes through the origin, going from negative to positive. Such an instability typically arises when k = 0 and λ_0 becomes a double eigenvalue at the instability threshold, although it is also possible in the exceptional case when there exists k = k* such that c_{k*} = c_0 (since λ_{k*} = 0 is then also a trivial root of (14)). By factorising (14) when c_k = c_0, the non-trivial eigenvalues satisfy the quadratic polynomial
$$M\lambda_0^2 + (M + 1)\lambda_0 + c_0 M^2 + 1 = 0. \tag{18}$$
Since M > 0, the stability of the system depends on the sign of the constant term in (18), where c_0 determines the local curvature beneath each droplet in the steady lattice. When c_0 < 0, corresponding to each droplet sitting on a peak of the free surface, both roots of (18) are real. The system destabilises at the critical threshold M = 1/√(−c_0), at which one eigenvalue transitions from a stable node to a saddle, characteristic of a pitchfork bifurcation. Physically, this is the threshold at which the wave force dominates the drag force, promoting unidirectional self-propulsion; it is thus the many-droplet analogue of the walking threshold described in [38], for which a supercritical pitchfork bifurcation arises [27].
In contrast, when c_0 > 0, each droplet sits in a trough of the free surface, a highly stable configuration. As such, Re(λ_0) < 0 for all M, and the system may return to its rest state via either overdamped or underdamped oscillations. Such a regime may arise when the troughs about the central peak of the wave kernel, H, are sufficiently deep (akin to the numerical example presented in Figure 2(a)) and the droplets lie in close proximity.
Oscillatory instability
Of particular relevance to the experiments described in [24] is the case where the lattice instead undergoes an oscillatory instability, wherein the real part of a complex-conjugate pair of eigenvalues transitions from negative to positive as memory is increased (indicative, but not conclusive, of a Hopf bifurcation; see §4). We note that a lattice may have consecutive pitchfork and oscillatory instabilities as M is varied, in which case the form and threshold of the instability are determined by the smallest such critical M.
In Figure 4, we demonstrate the possible forms of an oscillatory instability through consideration of the real and imaginary parts of the dominant eigenvalue, λ_k, for three consecutive droplet configurations: N = 19, N = 20, and N = 21. As the memory, M, is increased, a single wavenumber k_c ∈ [0, N̄] touches Re(λ_{k_c}) = 0 for some critical value of M = M_c (determined below), while Im(λ_{k_c}) ≠ 0, corresponding to an oscillatory instability. As M is increased further, beyond M_c, yet more wavenumbers destabilise as their corresponding eigenvalues cross the imaginary axis. Of particular interest is the case N = 20 (Figure 4(b)), where the critical wavenumber is k_c = N/2 in the current parameter regime. According to the linear stability analysis, each droplet position is described by x_n = nδ + (−1)^n[A exp(iω_{k_c}t) + c.c.] at M = M_c in this case, corresponding to out-of-phase oscillations of adjacent droplets, akin to experimental observations [24, Fig. 2]. We also observe a notable departure in k_c from k = N̄ for N = 19 (Figure 4(a)). At the onset of instability for each wavenumber k, the critical eigenvalue is λ_k = iω_k, where ω_k > 0 without loss of generality. Upon substituting into (14) and taking real and imaginary parts, we obtain two equations for ω_k, namely
$$\omega_k^2 = \frac{(c_0 - c_k)M_k}{M_k + 1} \qquad \text{and} \qquad \omega_k^2 = c_0 M_k + \frac{1}{M_k}. \tag{19}$$
By eliminating ω_k, we find that M_k satisfies p(M_k) = 0, where p(M) = c_0M³ + c_kM² + M + 1. We identify M_k as the smallest real root of p for each wavenumber k and define the critical lattice instability threshold as M_c = min_k M_k, the minimum memory over all wavenumbers. The corresponding critical angular frequency ω_c = ω_{k_c} may then be determined from (19), namely ω_c = √(c_0M_c + 1/M_c).
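Numerically, M_c follows directly from the polynomial p(M). The sketch below reuses the coefficients c_k computed in the earlier snippet; the root-filtering tolerance is our choice.

```python
import numpy as np

def critical_memory(c):
    """M_c = min_k M_k, with M_k the smallest positive real root of
    p(M) = c0*M^3 + ck*M^2 + M + 1; returns (M_c, k_c, omega_c)."""
    c0 = c[0]
    Mc, kc = np.inf, None
    for k, ck in enumerate(c):
        roots = np.roots([c0, ck, 1.0, 1.0])
        real = roots[np.abs(roots.imag) < 1e-10].real
        real = real[real > 0]
        if real.size and real.min() < Mc:
            Mc, kc = real.min(), k
    if not np.isfinite(Mc):
        return np.inf, None, np.nan    # no oscillatory instability found
    wc = np.sqrt(c0 * Mc + 1.0 / Mc)   # critical angular frequency, from (19)
    return Mc, kc, wc

Mc, kc, wc = critical_memory(c)        # c from the earlier snippet
print(Mc, kc, wc)
```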
Summary
The form of each instability outlined in §§3.1-3.3 is summarised in Figure 5, where we present M_c for each droplet configuration consisting of N droplets. Lattice configurations are either geometrically unstable (M_c = 0), or instability is triggered by memory effects (M_c > 0) through a pitchfork bifurcation or an oscillatory instability. (Foreshadowing the results of §4, the oscillatory instabilities may be categorised as either supercritical or subcritical Hopf bifurcations.) We observe no apparent pattern in the distribution of memory-driven or geometric instabilities, principally due to the fact that N is varied discretely. As portrayed in §5, a more illuminating view is afforded by changing the lattice radius, r_0, continuously and fixing N.
Weakly nonlinear oscillations and self-induced drift
The results of §3 suggest that, for geometrically stable lattice configurations (excluding the case N = 1), the lattice destabilises via a Hopf bifurcation. (Technically, these are not Hopf bifurcations in the strict sense, as, due to the translational invariance of the lattice, the fixed point is non-isolated.) In general, the Hopf bifurcation is one of two types: supercritical, in which stable, small-amplitude oscillations arise beyond the instability threshold, propelled by the slope of the droplet wave field (see Figure 6); or subcritical, where the system jumps to a distant attractor (such as a solitary-like wave [24]). We proceed to perform a weakly nonlinear analysis in the vicinity of the bifurcation point, ν = ν_c = 1/M_c, allowing us to distinguish between these two contrasting dynamics of this spatially- and temporally-nonlocal system. When ν is only slightly less than ν_c, we set ν_c − ν = ε², where 0 < ε ≪ 1, corresponding to M slightly above the critical threshold, M_c. We then pose the asymptotic multiple-scales expansions
$$x_n(t) = n\delta + D(T) + \varepsilon\,x_n^{(1)}(t, T) + \varepsilon^2 x_n^{(2)}(t, T) + \varepsilon^3 x_n^{(3)}(t, T) + O(\varepsilon^4), \tag{20}$$
with an analogous expansion for the wave field h(x, t), where T = ε²t is the slow time-scale. The O(1) drift, D(T), may give rise to a net rotation of the lattice. The origin of this drift may be traced back to the k = 0 mode of the dispersion relation (14), corresponding to translational invariance. (We note that the Stokes drift in surface gravity waves arises in a similar manner [49].) The path-memory stored within the free surface (entering through equation (5b)) means that the requisite calculations involved in this procedure are non-standard and (unfortunately) lengthy, with full details outlined in Appendix A. Stated succinctly, the outcome of this analysis is that each droplet evolves according to
$$x_n(t) = n\delta + D(T) + \varepsilon\left[A(T)\,\mathrm{e}^{\mathrm{i}(k_c n\alpha + \omega_c t)} + \mathrm{c.c.}\right] + O(\varepsilon^2),$$
where the slowly varying complex amplitude A is governed by the Stuart-Landau equation
$$\frac{\mathrm{d}A}{\mathrm{d}T} = \gamma_1 A - \gamma_2 A|A|^2. \tag{21}$$
The accompanying equation governing the evolution of the drift, D, is
$$\frac{\mathrm{d}D}{\mathrm{d}T} = \gamma_3 |A|^2. \tag{22}$$
The coefficients γ_1, γ_2 ∈ ℂ and γ_3 ∈ ℝ are defined in terms of the system parameters in Appendix A.
When the critical wavenumber of instability is k_c = N/2, corresponding to out-of-phase oscillations of the droplets (recall §3.3), we find that γ_3 = 0 and hence D = constant. Otherwise, the lattice is endowed with a non-zero drift velocity superimposed on top of individual droplet oscillations, a phenomenon also observed in free-space rings [35]. Further, we observe that when A is constant, D is linear in T and the lattice rotates with constant speed.
Furnished with equation (21), we now specify to which variety of Hopf bifurcation each lattice configuration destabilises. By recasting A in (21) in polar form A = ρ exp(iφ) (where ρ(T) ≥ 0 and φ(T) are both real) and equating real and imaginary parts, we find
$$\frac{\mathrm{d}\rho}{\mathrm{d}T} = r_1\rho - r_2\rho^3, \tag{23}$$
$$\frac{\mathrm{d}\phi}{\mathrm{d}T} = s_1 - s_2\rho^2, \tag{24}$$
where r_i = Re(γ_i) and s_i = Im(γ_i). Consistent with the linear stability analysis performed in §3, r_1 satisfies r_1 > 0, rendering the fixed point ρ = 0 (corresponding to a stationary lattice) unstable. If r_2 > 0, there is a second, stable, fixed point ρ* = √(r_1/r_2), corresponding to a supercritical Hopf bifurcation. From (24), the corresponding phase is φ(T) = φ*T, where φ* = s_1 − s_2r_1/r_2 (we set the constant of integration, corresponding to an arbitrary phase shift, to zero by temporal invariance). Conversely, if r_2 < 0, then no additional fixed points emerge beyond the instability threshold, characteristic of a subcritical bifurcation. We return our attention to Figure 5, in which we mark supercritical (r_2 > 0) and subcritical (r_2 < 0) Hopf bifurcations for varying N. Although the precise details of this figure depend on the form of the wave kernel, H, we see that it portrays one of the fundamental features of the experiments [24], namely the transition from a super- to sub-critical Hopf bifurcation as the number of droplets is doubled from N = 20 to N = 40. Physically, stable, small-amplitude oscillations arise for M > M_c when the Hopf bifurcation is supercritical, yet a distant attractor is approached for a subcritical Hopf bifurcation (which, in experiments, takes the form of a solitary wave [24]). The subtle implications of instead varying the ring radius, r_0, while keeping the number of droplets, N, fixed, are discussed in §5. The numerical and analytical predictions are virtually superimposed when ε is sufficiently small; as ε increases, however, the oscillations undergo a second bifurcation, beyond which the dynamics become chaotic.
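The super/subcritical dichotomy is easily visualised by integrating the radial equation (23) directly; the coefficient values in the sketch below are illustrative only, not those computed in Appendix A.

```python
import numpy as np

def amplitude_history(r1, r2, rho0=1e-3, dT=1e-3, steps=40000, cap=10.0):
    """Forward-Euler integration of rho' = r1*rho - r2*rho^3."""
    rho, out = rho0, []
    for _ in range(steps):
        rho += dT * (r1 * rho - r2 * rho**3)
        out.append(rho)
        if rho > cap:                   # truncated expansion no longer valid
            break
    return np.array(out)

sup = amplitude_history(r1=1.0, r2=1.0)   # supercritical: amplitude saturates
print(sup[-1], np.sqrt(1.0 / 1.0))        # ~ rho* = sqrt(r1/r2)

sub = amplitude_history(r1=1.0, r2=-1.0)  # subcritical: amplitude escapes the
print(sub[-1])                            # cap, i.e. heads to a distant attractor
```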
Finally, we justify the efficacy of (21) and (22) in describing the dynamics of the system (5) beyond a supercritical bifurcation point. There is a stable, periodic regime in which each droplet evolves according to
$$x_n(t) = n\delta + d(t) + a\cos(\Omega t + \Psi_n) + O(\varepsilon^2),$$
where the drift d(t) = v(t − t_0) + O(ε) proceeds with velocity v = γ_3(r_1/r_2)(ν_c − ν). The amplitude of the oscillations is a = 2ρ*√(ν_c − ν) and Ω = φ*(ν_c − ν) + ω_c is the angular frequency. Each oscillation has an associated phase shift Ψ_n = k_c nα.
In Figure 7, we compare direct numerical simulations of (5) with our predictions for a, Ω, and v in the supercritical case N = 23. Our numerical solutions were computed using a Fourier spectral method, with code and documentation provided in the Supplementary Material. For 0 < ν_c − ν ≪ 1, our analytical and numerical predictions are virtually superimposed. If ν_c − ν becomes too large, however, a significant discrepancy arises, which we attribute to the onset of a second bifurcation wherein the oscillations themselves destabilise. It is then necessary to take into account spatial, as well as temporal, variations of the amplitude A(T), which will be explored elsewhere [50].
Control of stability
When the lattice radius, r_0, is fixed and the number of droplets, N, is varied discretely, Figure 5 shows no discernible pattern in the distribution of geometric and memory-driven instabilities. However, the discrete variation of N accounts only for a finite subset of the infinite family of possible droplet separations. To elucidate the underlying structure of the geometric and memory-driven instabilities, we instead consider how the stability of a lattice of N = 20 droplets changes as r_0 is varied continuously. Herein we characterise the dynamics by the dimensionless separation distance, δ = Λ/N = 2πr_0/N. As we shall see, regions of geometric and memory-driven instability in fact arise quasi-periodically, with a period close to the Faraday wavelength. (We recall that all lengths are normalised by λ_F.) We note that the wave kernel, H(x), depends on r_0 through equation (6).
Geometric instability
We first study the origins of geometric instability, which we recall from §3.1 occurs if there exists an integer k* such that c_{k*} > c_0. To aid the following discussion, we plot the coefficients c_k for k ∈ [0, N̄ = 10] in Figure 8(c) as a function of δ. The oscillatory form of the coefficients c_k with increasing δ (or r_0) arises due to the quasi-monochromatic form of H(x), the oscillation length scale being close to the Faraday wavelength (unity in dimensionless units). Moreover, c_0 and c_N̄ tend to oscillate out of phase and bound the oscillations of c_1, ..., c_{N̄−1}. The result is that regions of geometric instability typically begin and end at those values of δ where c_N̄ crosses c_0 from below and above, respectively, thus selecting k* = N̄. Moreover, each successive region of geometric instability is separated by a Faraday wavelength (δ ≈ 1). This effect is preserved even when the droplets are well separated and interact weakly, specifically when δ greatly exceeds the wave kernel's decay length (δ ≫ l). In this limit, the coefficients c_k decay exponentially (to the asymptotic value H''(0), that of a single isolated droplet) over a length scale close to l, yet remain oscillatory.
Memory-driven instability
In intervals of δ where the system undergoes a memory-driven Hopf bifurcation, Figure 8(a) demonstrates that the instability threshold, M_c, varies smoothly, except at points where k_c switches between different integer values (see Figure 8(b)). It appears the Hopf bifurcation is supercritical only when k_c is N̄ or N̄ − 1, and subcritical otherwise (recall Figure 4). When the droplets lie less than a Faraday wavelength apart (δ < 1), the supercriticality region is very narrow, explaining the prevalence of subcritical bifurcations when the droplets are tightly packed. We note that for δ ≈ 0.7 in the current parameter regime (Figure 8(d)), the steady lattice wave field exhibits imperceptible oscillations, which present a marked contrast to when δ > 1 (Figures 8(e) and (f)), for which supercritical Hopf and geometric instabilities are prevalent.
Summary
The regions of geometric instability thus provide quantised separation distances, prescribed by the Faraday wavelength, at which oscillations arise beyond the instability threshold. Moreover, we infer that subcritical bifurcations (and hence solitary-like waves) are more likely to arise when the droplets are either packed closely together (δ < 1) or when δ is near the boundaries of geometric instability. However, the solitary wave dynamics are not captured by our highly simplified model, likely due to the omission of several potentially significant fluid mechanical effects that arise in this regime (see §2). Nevertheless, the foregoing observations are entirely consistent with experiments [24], where small-amplitude, out-of-phase oscillations and solitary-like waves are observed when the droplet spacings are around 1.8λ_F and 0.9λ_F, respectively. Finally, the foregoing results point to the sensitivity of the experimental system [24]: we infer from Figure 8 that a small change in ring radius, r_0, or the addition or subtraction of a droplet, may adversely affect the stability of the lattice, for example by pushing the lattice into a region of geometric instability.
Conclusion
In this paper, we considered the onset and stability of collective vibrations in an active hydrodynamic lattice. This new class of dynamical oscillator is characterised by two features atypical of traditional oscillator systems: spatially nonlocal coupling, mediated in this case by the droplet wave field; and memory-driven self-propulsion of the lattice components. A consequence of the spatially nonlocal coupling is the possibility of geometric instability, where the lattice wave field acts to propel the droplets for all values of the memory parameter, M. Otherwise, linear stability analysis predicts that the system typically destabilises via an oscillatory instability beyond a critical memory.
The fate of the system close to the point of instability was then studied via a systematic weakly nonlinear analysis. At the onset of a memory-driven instability, the lattice system undergoes a Hopf bifurcation, which can be either supercritical, prompting stable, small-amplitude oscillations beyond the instability threshold, or subcritical, where the system jumps to a distant attractor, manifest in experiments as the excitation of a solitary-like wave [24]. In the supercritical regime, an oscillatory-rotary motion may arise when the critical wavenumber k_c ≠ N/2, a phenomenon observed in confined and free-space rings [24,35]. Moreover, when the lattice radius, r_0, was fixed, our weakly nonlinear analysis corroborated one of the fundamental features of the experiments [24], namely the transition from a super- to sub-critical Hopf bifurcation as the number of droplets was doubled from 20 to 40.
The dependence of the stability of the lattice on the system parameters was probed yet further when we allowed the ring radius to vary continuously while keeping the number of droplets fixed. Here we found quantised separation distances, similar in size to the Faraday wavelength, separating regions of geometric and memory-driven instability. Generalising our result on the transition from a super- to sub-critical Hopf bifurcation, we infer that subcritical bifurcations are more likely to arise when droplets are more tightly bound (less than a Faraday wavelength apart), while supercritical and geometric instabilities prevail when the droplet spacing is increased. These observations are consistent with the experimental results obtained for the droplet spacings reported in [24].
The potential avenues for investigation prompted by the results of this paper are numerous, and so we name only a few of the most salient here. Perhaps the most demanding is a description of the fully nonlinear dynamics of the solitary wave regime, specifically extending our model to consider variations in the droplets' vertical bouncing phase [36], or two-dimensional effects imposed by the geometry of the submerged annular channel [40]. Indeed, the fact that the wave force vanishes for tightly packed, equispaced lattices (Figure 8(d)) is a portent that the simplified model (5) is insufficient to capture the solitary wave dynamics. Inspired by mathematical models of traffic flow [51], another avenue is to coarse-grain the discrete model (5) to derive a system of partial differential equations for the macroscopic droplet density and velocity. Such models provide a natural framework in which to study the propagation of nonlinear waves.
Our system also begs further attention when viewed as a dynamical system. In the future [50], we will show that taking into account spatial variations in the droplet oscillation amplitude, A, generalises equation (21) to a complex Ginzburg-Landau equation, allowing us to rationalise the onset of the second bifurcation alluded to in Figure 7. The techniques employed in this study may also, in principle, be extended to two-dimensional lattices [45,46,47,48,35]. Finally, the spatio-temporal nonlocality present in our system also renders it a tantalising experimental [52,53,54,55] and theoretical [29,56,30,31,32] candidate for investigating so-called chimera states in coupled oscillators.
Appendix A Derivation of the Stuart-Landau and drift equations
In this section, we provide details of the multiple-scales expansion leading to the amplitude and drift equations (21) and (22). The basic recipe is thus: first, we substitute the asymptotic expansions (20) into the stroboscopic model (5) and gather successive powers of ε. At each order, we suppress resonant terms (those that are either constant in t or proportional to e^{iφ_n(t)}, where φ_n = k_c nα + ω_c t), which are thereby solutions of the linear problem (11). This step is facilitated by introducing auxiliary variables to solve for the free surface h, akin to §3. This procedure gives rise to the Stuart-Landau equation (21) at O(ε³) and the drift equation (22) at O(ε²). At leading order we obtain a system similar to (7):
$$h^{(0)}(x, T) = \frac{1}{\nu_c}\sum_{m=1}^{N} H(x - \bar{x}_m(T)),$$
where \bar{x}_n(T) = nδ + D(T). By symmetry of h^(0), all odd derivatives vanish beneath each droplet at equilibrium, a fact we will make repeated use of in simplifying forthcoming terms in our expansion.
At O(ε), we obtain an analogous problem to (8) for x_n^(1) and h^(1). Hence, after defining the auxiliary variables X_n satisfying
$$\dot{X}_n + \nu_c X_n = x_n^{(1)},$$
we obtain the particular solution
$$h^{(1)}(x, t) = -\sum_{m=1}^{N} H'(x - \bar{x}_m)\,X_m,$$
and L_n x^(1) = 0, where the definition of the linear operator, L_n, is the same as in (11) (with ν = ν_c) for n = 1, ..., N. (Recall that the homogeneous part of h^(1) decays exponentially in time (§3).) We now seek a solution to L_n x^(1) = 0 of the form
$$x_n^{(1)} = A(T)\,\mathrm{e}^{\mathrm{i}\varphi_n(t)} + \mathrm{c.c.} + B(T),$$
where A is a complex amplitude and B is a correction to the drift, D. This solution may be verified by recalling that the dispersion relation, D_k, satisfies D_{k_c}(iω_c; ν_c) = 0 and D_0(0; ν) = 0.
At O(ε²), we have an analogous system (28)-(29) for x_n^(2) and h^(2). To solve for h^(2), we introduce two further auxiliary variables, Y_n and Z_n, akin to the procedure adopted in (9). By the form of the inhomogeneity in (29), we pose that Y_n and Z_n satisfy
$$\frac{\partial Y_n}{\partial t} + \nu_c Y_n = \frac{1}{2}\,x_n^{(1)2}, \qquad \frac{\partial Z_n}{\partial t} + \nu_c Z_n = x_n^{(2)}. \tag{30}$$
A particular solution of (29) is then found in terms of Y_n and Z_n (equation (31)); using the form of x_n^(1), the first equation in (30) may be solved explicitly for Y_n (equation (32)). Substituting (31) and (32) into (28), and using (25), then yields
$$\mathcal{L}_n\mathbf{x}^{(2)} = -\alpha_0\frac{\mathrm{d}D}{\mathrm{d}T} + \left(a_1 A^2\,\mathrm{e}^{2\mathrm{i}\varphi_n} + \mathrm{c.c.}\right) + a_2|A|^2, \tag{33}$$
where the coefficients α_0, a_1, and a_2 are included in the summary at the end of this section. For a bounded solution of (33), we require that the constant secular terms on the right-hand side vanish, yielding an equation governing the drift D:
$$\alpha_0\frac{\mathrm{d}D}{\mathrm{d}T} = a_2|A|^2. \tag{34}$$
As terms proportional to e^{±2iφ_n(t)} in (33) are non-secular, the general solution of (33) is then
$$x_n^{(2)} = \left(C(T)\,\mathrm{e}^{\mathrm{i}\varphi_n} + \mathrm{c.c.}\right) + \left(\frac{a_1 A^2}{D_{2k_c}(2\mathrm{i}\omega_c; \nu_c)}\,\mathrm{e}^{2\mathrm{i}\varphi_n} + \mathrm{c.c.}\right) + E(T),$$
where C and E are corrections to the complex amplitude, A, and drift, D, respectively. Thus, from the second equation in (30), we find
$$Z_n = \frac{\tilde{a}_1 A^2}{\nu_c + 2\mathrm{i}\omega_c}\,\mathrm{e}^{2\mathrm{i}\varphi_n} + \mathrm{c.c.} + \frac{C}{\nu_c + \mathrm{i}\omega_c}\,\mathrm{e}^{\mathrm{i}\varphi_n} + \mathrm{c.c.} + \frac{1}{\nu_c}E, \qquad \text{where } \tilde{a}_1 = \frac{a_1}{D_{2k_c}(2\mathrm{i}\omega_c; \nu_c)}.$$
At O(ε³), we have a system for x_n^(3) and h^(3). In an identical procedure to that carried out at O(ε²), introducing three further auxiliary variables allows us to find a particular solution of (36) governing h^(3). Then, after substituting h^(3) into (35), eliminating secular terms (either constant in t or proportional to e^{iφ_n(t)}) yields two equations governing the complex amplitude, A, and the real drift correction, B, namely the Stuart-Landau equation (21) and its counterpart for B, in addition to equation (34) governing the drift, D. The notation A* denotes the complex conjugate of A. We cannot determine the higher-order corrections B, C, and E without proceeding to O(ε⁴) and higher. Nevertheless, a satisfactory approximation is obtained by considering A and D alone, as made clear by Figure 7. | 2020-03-05T02:41:36.272Z | 2020-03-04T00:00:00.000 | {
"year": 2020,
"sha1": "92874b56aaac65a9b2e7f8be00eb9a7f94329f7b",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7426053",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "92874b56aaac65a9b2e7f8be00eb9a7f94329f7b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics",
"Mathematics"
]
} |
246051416 | pes2o/s2orc | v3-fos-license | The Association between Serum Uric Acid Levels and 10-Year Cardiovascular Disease Risk in Non-Alcoholic Fatty Liver Disease Patients
Non-alcoholic fatty liver disease (NAFLD) and serum uric acid (SUA) levels are risk factors for developing cardiovascular disease (CVD). Additionally, previous studies have suggested that high SUA levels increase the risk of having NAFLD. However, no study has investigated the relationship between SUA and CVD risk in NAFLD. This study analyzed the relationship between SUA and CVD risk in NAFLD. Data were drawn from the 2016-2018 Korea National Health and Nutrition Examination Survey, which represents the Korean population. A total of 11,160 NAFLD patients were included. Participants with a hepatic steatosis index ≥ 30 were considered to have NAFLD. Ten-year CVD risk was estimated using an integer-based Framingham risk score. An estimated 10-year CVD risk ≥ 20% was considered high risk. Multiple logistic regression was conducted to calculate the odds ratios (ORs) associating SUA level with CVD risk. The odds of high CVD risk increased 1.31-fold (95% CI 1.26-1.37) per 1 mg/dL increase in SUA. After adjustment, SUA was still associated with an increased risk (OR 1.44; 95% CI 1.38-1.51) of CVD. Compared with the lowest SUA quartile group, the highest quartile group showed a significantly higher risk of CVD both before (OR 2.76; 95% CI 2.34-3.25) and after (OR 4.01; 95% CI 3.37-4.78) adjustment. SUA is independently associated with CVD risk in NAFLD.
Introduction
Non-alcoholic fatty liver disease (NAFLD) is defined as the accumulation of adipose cells in the liver, which promotes liver damage ranging from hepatic steatosis alone to cirrhosis, without alcohol consumption [1]. NAFLD is one of the most common chronic liver diseases globally, with an estimated prevalence of 10-20% in eastern countries and 20-30% in western countries [2]. NAFLD therefore imposes a substantial global economic burden, expected to be about 1 trillion dollars in the US and 334 billion euros in Europe [3]. Many factors are known to affect the development of NAFLD. Some genetic predispositions, including PNPLA3 and TM6SF2, are known to be associated with NAFLD [4,5]. Obesity [6], type 2 diabetes [7], and metabolic syndrome [8] are commonly known risk factors for NAFLD. A variety of risk factors have been proposed for the development of NAFLD, and some recent studies have suggested that serum uric acid is among them [9][10][11].
The disease burden caused by NAFLD arises not only from the disease itself but also from indirect and elusive costs related to NAFLD. Chronic kidney disease, liver cancer, and cardiovascular disease could be associated with NAFLD. Among the many diseases associated with NAFLD, cardiovascular diseases are responsible for most mortalities in NAFLD patients [12]. Cardiovascular disease remains one of the leading causes of mortality among Organization for Economic Co-operation and Development (OECD) member countries [13]. Many risk factors are known to be associated with the development of circulatory diseases: high blood pressure, dyslipidemia, diabetic status, metabolic syndrome, smoking, and many other factors have been suggested as risk factors for cardiovascular disease [14]. Among these, serum uric acid has been suggested as an important risk factor for vascular diseases. However, the question of whether serum uric acid itself is an independent risk factor remains controversial [15].
Uric acid is the final metabolite of purine metabolism. Both the prevalence and incidence of hyperuricemia are increasing in developed countries [16]. High serum uric acid levels are associated with various health conditions. Koudstaal et al. revealed that a high serum uric acid level increased the risk of myocardial infarction and stroke in the Rotterdam study [17]. Additionally, Obermayr et al. suggested elevated serum uric acid level as a risk factor for kidney disease [18]. There is substantial evidence suggesting that uric acid increases cardiovascular risk. However, it is not certain whether uric acid per se is a risk factor for cardiovascular disease, or whether it increases the risk of having mediating factors for developing cardiovascular disease. The Framingham heart study group contended that the cardiovascular outcomes associated with serum uric acid are attributable to other risk factors [19]. However, several studies, including that of Santos et al., showed that uric acid is an independent risk factor for cardiovascular outcomes [20,21]. Additionally, Targher et al. suggested that uric acid can independently predict cardiovascular outcomes in type 2 diabetes [22].
Non-alcoholic fatty liver disease is one of the most common liver diseases globally, and the major burden of this disease stems from its cardiovascular outcomes. Given that uric acid can be lowered using a uricosuric agent, and that it is a risk factor for both NAFLD and cardiovascular disease, the question of whether uric acid increases the risk of developing cardiovascular disease is particularly important in NAFLD patients. However, no previous study has examined the possible association between uric acid and cardiovascular disease in NAFLD patients. Therefore, we aimed to reveal the relationship between uric acid and cardiovascular disease in NAFLD patients in this study.
Study Participants
This study is based on data from the 2016-2018 Korea National Health and Nutrition Examination Survey (KNHANES). KNHANES is a cross-sectional study conducted by the Korea Centers for Disease Control and Prevention (KCDC). The survey population comprises representatives of the non-institutionalized citizens of Korea, sampled with a multi-stage clustered probability design [23]. From 2016 to 2018, a total of 24,269 individuals were study participants of the KNHANES. Among those individuals, 5290 participants with positive hepatitis B surface antigen, positive hepatitis C antibody, or missing data for assessing infectious status were excluded. Additionally, 2207 participants classified as high-risk drinkers under KNHANES guidelines (males: average alcohol consumption of more than 7 units, more than 2 times per week; females: more than 5 units, more than 2 times per week) were excluded for excessive alcohol consumption [24]. After these exclusions, the hepatic steatosis index (HSI) was used to evaluate liver status. HSI was calculated according to the following equation: hepatic steatosis index (HSI) = 8 × AST/ALT ratio + BMI (+2 if DM; +2 if female).
Since an HSI level under 30 can rule out the possibility of NAFLD [25], the 14,667 individuals whose HSI level was 30 or over were considered to have NAFLD (a scripted version of this classification is sketched below). The final 11,160 participants were eligible for the analyses after additional exclusion of those with missing data for assessing the Framingham risk score, serum uric acid, and other covariates.
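As an illustration, the HSI computation and the NAFLD screening threshold can be expressed as follows; this is a minimal sketch, and the function names and example values are hypothetical.

```python
def hepatic_steatosis_index(ast, alt, bmi, diabetic, female):
    """HSI = 8 * (AST/ALT) + BMI, +2 if diabetes mellitus, +2 if female,
    as given in the text (AST/ALT in IU/L, BMI in kg/m^2)."""
    hsi = 8.0 * (ast / alt) + bmi
    if diabetic:
        hsi += 2.0
    if female:
        hsi += 2.0
    return hsi

def likely_nafld(hsi):
    """HSI >= 30 was taken to indicate NAFLD; HSI < 30 rules it out [25]."""
    return hsi >= 30.0

# Hypothetical example values
print(likely_nafld(hepatic_steatosis_index(25, 30, 27.5, False, True)))
```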
Framingham Risk Score and the Cardiovascular Disease Risk Groups
Study participants' 10-year cardiovascular disease (CVD) risk was estimated using the Framingham risk score (FRS) according to D'Agostino et al. [26]. Age, serum high-density lipoprotein cholesterol (HDL-C), total serum cholesterol (TC), and systolic blood pressure (SBP), according to the participants' treatment status, smoking, and diabetic status, were used to calculate the FRS. According to this integer-based score [26], study subjects whose 10-year cardiovascular disease risk was over 20% (FRS ≥ 15 points for males; FRS ≥ 18 points for females) were classified as the high CVD risk group, and the others were classified as the low CVD risk group; a sketch of this grouping is given below. Serum HDL-C level was measured using the homogeneous enzymatic colorimetric method and serum TC level using the enzymatic method with a Hitachi Automatic Analyzer 7600-210 (Hitachi, Japan). Participants were asked "Are you currently treated for hypertension?" to assess the treatment status of hypertension. SBP was measured three times, and the mean of the second and third values was used. Those who had smoked 100 cigarettes or more in their lifetime and were currently smoking were current smokers; those who had smoked 100 cigarettes or more in their lifetime but were not currently smoking were past smokers; the others were grouped as never smokers. Current smokers were classified as smokers in calculating the FRS, while past smokers and never smokers were classified as non-smokers. Fasting plasma glucose (FPG) level was assessed to evaluate diabetic status via the hexokinase UV method using a Hitachi Automatic Analyzer 7600-210 (Hitachi, Japan). Participants whose FPG level was 126 mg/dL or higher, or who were diagnosed with diabetes mellitus by a doctor, or who were taking medications for diabetes, were identified as having diabetes mellitus.
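The dichotomisation itself reduces to the sex-specific point thresholds quoted above. A minimal sketch follows; computing the FRS points themselves requires the full D'Agostino et al. scoring tables, which are not reproduced here.

```python
def cvd_risk_group(frs_points, female):
    """High CVD risk corresponds to an estimated 10-year risk >= 20%,
    i.e. FRS >= 15 points for males and >= 18 points for females."""
    return "high" if frs_points >= (18 if female else 15) else "low"

print(cvd_risk_group(16, female=False))  # 'high'
print(cvd_risk_group(16, female=True))   # 'low'
```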
Confounding Variables
During the study, sociodemographic and lifestyle-related health factors were obtained with standardized health interview questionnaires and standardized health examinations [23].
Age was categorized into 30-39, 40-49, 50-59, 60-69, and ≥70 years. Participants were categorized into four groups by educational status. Household income quartiles were also applied to measure the socioeconomic status of participants.
The physically active group included those who engaged in at least 150 min of moderate-intensity physical activity, 75 min of high-intensity physical activity, or an equivalent combination of physical activity per week [27].
Body mass index (BMI) was used to evaluate the obesity status of participants. Participants with BMI < 25 kg/m² were considered normal, while those with BMI ≥ 25 kg/m² were considered obese [28]. Hypertension was defined as (1) SBP ≥ 140 mmHg, (2) diastolic blood pressure (DBP) ≥ 90 mmHg, or (3) taking antihypertensive medications. Subjects were asked "Were you diagnosed with dyslipidemia by a doctor?" to assess dyslipidemia status. Additionally, chronic kidney disease status was appraised with the following question: "Were you diagnosed with chronic kidney disease by a doctor?" Participants were also assessed for metabolic syndrome (MetS) according to the International Diabetes Federation (IDF). Under the IDF criteria, a person with central obesity (defined as a waist circumference of ≥90 cm for males, ≥80 cm for females) plus any 2 of the following 4 additional factors was diagnosed with MetS: increased triglyceride level (≥150 mg/dL), reduced HDL-C (<40 mg/dL for males, <50 mg/dL for females), raised blood pressure (SBP ≥ 130 mmHg or DBP ≥ 85 mmHg, or taking antihypertensive medication), and raised FPG (≥100 mg/dL or previously diagnosed DM) [29]. A sketch of this classification is given below.
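The IDF rule as summarised above translates directly into code; this is a minimal sketch, and the argument names are our own.

```python
def idf_metabolic_syndrome(waist_cm, female, tg, hdl, sbp, dbp,
                           on_bp_meds, fpg, known_dm):
    """IDF definition as summarised in the text: central obesity plus
    any 2 of 4 additional factors."""
    central_obesity = waist_cm >= (80.0 if female else 90.0)
    if not central_obesity:
        return False
    factors = [
        tg >= 150.0,                                # raised triglycerides
        hdl < (50.0 if female else 40.0),           # reduced HDL-C
        sbp >= 130.0 or dbp >= 85.0 or on_bp_meds,  # raised blood pressure
        fpg >= 100.0 or known_dm,                   # raised fasting glucose
    ]
    return sum(factors) >= 2
```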
Statistical Analysis
All categorical variables are presented as frequencies and weighted percentages. Row percentages are given when comparing cardiovascular disease risk groups. Continuous variables are presented as means ± SD or medians with interquartile range (Q1-Q3). The Rao-Scott χ² test was applied to analyze differences between categorical values, since KNHANES uses a complex sampling design. t-tests and analyses of variance (ANOVA) were conducted to estimate differences in numerical values among the groups of interest. Multiple logistic regression was conducted to estimate odds ratios (ORs) with 95% confidence intervals (CIs) and to evaluate the association between serum uric acid level and having a high CVD risk. Serum uric acid level was evaluated both as a continuous variable and in quartiles for logistic regression. The association between serum uric acid level and high CVD risk was assessed before and after adjusting for confounding and mediating variables. Model 1 was adjusted for possible confounders, including educational status, household income, physical activity, obesity, and chronic kidney disease. Additional adjustment for possible mediators was also made: variables used to calculate the Framingham risk score, including age, gender, smoking status, hypertension, diabetic status, and dyslipidemia, were adjusted for in model 2. SAS 9.4 (SAS Institute Inc., Cary, NC, USA) was used for all statistical analyses. A two-sided p-value of 0.05 was set to determine statistical significance.
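For readers working outside SAS, a rough Python analogue of the weighted logistic regression is sketched below. It is an approximation under stated assumptions: statsmodels' freq_weights reproduces weighted point estimates but not the variance corrections for KNHANES's stratified, clustered design, and the column names in the hypothetical DataFrame df are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

def weighted_or(df):
    """Weighted logistic regression of high CVD risk on SUA (per 1 mg/dL);
    returns the odds ratio. Column names are hypothetical."""
    X = sm.add_constant(df[["sua_mg_dl"]])
    fit = sm.GLM(df["high_cvd_risk"], X,
                 family=sm.families.Binomial(),
                 freq_weights=df["survey_weight"]).fit()
    return np.exp(fit.params["sua_mg_dl"])
```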
Ethics
This study was carried out following the latest version (2013) of the Code of Ethics of the World Medical Association (Declaration of Helsinki) for research involving humans, and the Institutional Review Board of Eulji University approved the study protocol (EUIRB2020-061).
Baseline Characteristics among Serum Uric Acid Quartiles
Baseline characteristics of participants according to serum uric acid quartiles are described in Table 1. The total number of participants was 11,160. Participants in the fourth quartile were younger than those in the other quartiles. The fourth quartile contained higher proportions of men, highly educated individuals, physically active individuals, obese individuals, current smokers, those with higher household income, and hypertension patients. However, participants in the fourth quartile were less likely to have been diagnosed with dyslipidemia or to have diabetes mellitus. The proportion with a cardiovascular disease risk over 20% increased across serum uric acid quartiles (Quartile 1, 9.9%; Quartile 2, 12.8%; Quartile 3, 17.4%; Quartile 4, 23.3%). Table 2 shows the characteristics of the participants according to 10-year cardiovascular disease risk. The low CVD risk group was defined as an estimated cardiovascular disease risk of less than 20% (<20%); the high CVD risk group was defined as an estimated risk of 20% or more (≥20%) [30]. The high CVD risk group contained more individuals who were older, male, less educated, of lower household income, less physically active, obese, and smokers. Additionally, the high CVD risk group included more people with diagnosed chronic kidney disease and diabetes mellitus. The median serum uric acid level was higher in the high CVD risk group (5.3 mg/dL) than in the low CVD risk group (4.7 mg/dL). The high CVD risk group also had more people in the highest quartile than the low CVD risk group.
The Association between Serum Uric Acid Level and Having a High CVD Risk
The associations between serum uric acid level (per 1 mg/dL) and having a high CVD risk, assessed by logistic regression, are shown in Table 3. Model 1 shows the association after adjusting for educational status, household income, physical activity, obesity, and chronic kidney disease. Although the Framingham risk score estimation formula already incorporates age, gender, smoking status, dyslipidemic status, and hypertensive and diabetic status, we ran additional adjustments: adjustment for the factors related to the Framingham risk score (possible mediating factors: age, gender, smoking status, dyslipidemic status, and hypertensive and diabetic status) was conducted, and the results are shown in model 2. In unadjusted analysis, serum uric acid level (per 1 mg/dL) was significantly associated with increased odds of having a high CVD risk (OR 1.31, 95% CI 1.26-1.37, p < 0.001). In model 1, the odds of having a high CVD risk were 1.44 (95% CI 1.38-1.51, p < 0.001) after adjustment for confounders. After the additional FRS-component adjustment, model 2 showed odds of high CVD risk of 1.10 (95% CI 1.02-1.19; p = 0.018). Table 4 shows the association between serum uric acid quartile groups and having a high CVD risk. Compared with the 1st serum uric acid quartile group (Q1), the ORs of having a high CVD risk for the 2nd, 3rd, and 4th quartile groups (Q2, Q3, Q4) were statistically significant. After adjusting for confounders in model 1, the OR was 1.43 (95% CI 1.18-1.72, p < 0.001) for Q2, 2.26 (95% CI 1.90-2.69, p < 0.001) for Q3, and 4.01 (95% CI 3.37-4.78, p < 0.001) for Q4, compared with Q1. The strength of the association (ORs) between serum uric acid quartile and having a high CVD risk increased with uric acid quartile in model 1 and the unadjusted model. Associations were slightly weakened after the additional adjustment for the factors related to the Framingham risk score. The ORs of having a high CVD risk compared with the highest serum uric acid quartile (Q4) are shown in Supplementary Table S1. The association was statistically significant for Q1, Q2, and Q3 compared with Q4 in model 1 and the unadjusted model, with the ORs decreasing as the uric acid quartile decreased. After the additional adjustment for the factors related to the Framingham risk score, significance was lost for all quartiles except Q1; however, Q2 and Q3 still showed lower ORs for having a high CVD risk than Q4.
Discussion
We examined the association between high CVD risk and serum uric acid levels in non-alcoholic fatty liver disease patients in this study, using population-representative data of Korea. The odds of having a CVD risk ≥ 20% increased by 1.31 times for each 1 mg/dL of serum uric acid level. This result was maintained after adjustment for the possible confounding factors: after adjusting for confounders, the odds increased 1.44 times for each 1 mg/dL of serum uric acid level. Although the Framingham risk score estimation formula already considers age, gender, smoking status, dyslipidemia, hypertension, and diabetic status, we ran additional adjustments. The odds of CVD risk ≥ 20% increased 1.10 times for each 1 mg/dL of serum uric acid level after excluding these mediating effects. Additionally, compared with the first serum uric acid quartile group, the odds of having a high CVD risk increased 1.43 times for the 2nd quartile group, 2.26 times for the 3rd, and 4.01 times for the 4th quartile group, after adjustment for possible confounding factors.
This is the first study to show that serum uric acid is an independent risk factor for cardiovascular disease in non-alcoholic fatty liver disease patients. In the general population, the relationship between uric acid and cardiovascular disease remains undetermined [15]. In 1999, the Framingham heart study group contended that serum uric acid is not an independent risk factor for the development of cardiovascular disease but is rather associated with other risk factors [19]. However, several recent studies suggested that serum uric acid might be a risk factor for cardiovascular disease. Gang et al. proposed that baseline serum uric acid level could be an independent predictor of cardiovascular disease mortality, especially in men [31]. A prospective study with a 21-year follow-up suggested that serum uric acid is an independent predictor of major cardiovascular death in older women [32]. There was also an attempt to prove a causal relationship between serum uric acid and cardiovascular events using a mendelian randomization study design [33]. Some subgroup studies also suggest that serum uric acid is an independent risk factor for cardiovascular disease: evidence supports serum uric acid as a marker for cardiovascular disease risk in a healthy population [34], and a subgroup analysis of rheumatoid arthritis patients suggested that serum uric acid might be an independent risk factor for cardiovascular disease in this group [35].
There are some plausible explanations for this association in NAFLD patients. Uric acid is the final product of purine metabolism; therefore, uric acid might be a marker of purine metabolic activity. The CARES trial revealed that allopurinol had a more protective cardiovascular effect than febuxostat [36]. Given that allopurinol is a purine analog, that febuxostat is a non-purine xanthine oxidase inhibitor, and that allopurinol has a broader effect on purine metabolism [37,38], active purine metabolism per se, or other by-products of purine metabolic activity, might have affected the cardiovascular risk in NAFLD patients. Additionally, it is known that xanthine oxidoreductase shows its most vigorous activity in the human liver [39]. Therefore, oxidative stress generated by uric acid [40] might affect the progression of NAFLD [41]. Given that NAFLD can exert a causal effect on cardiovascular disease via insulin resistance or systemic inflammation [42], and that NAFLD and cardiovascular disease share many underlying mediators and risk factors [43][44][45], the progression of NAFLD might have increased the cardiovascular disease risk in the high serum uric acid group.
Liver biopsy is the gold standard for diagnosing NAFLD [46], but it is an invasive procedure whose complications can harm patients and whose quality depends on the operator's experience [47]. Therefore, non-invasive ultrasound is widely used and recommended for diagnosing NAFLD [48,49]. However, diagnosis with ultrasound also largely depends on the practitioner's experience, and not all clinics are equipped with ultrasound. In addition, ultrasound can underestimate the prevalence of NAFLD when fat occupies less than 20% of the patient's liver [50]. We adopted the hepatic steatosis index (HSI) to diagnose NAFLD in this study. HSI uses simple parameters, such as gender, diabetic status, BMI, and liver enzymes, to screen for NAFLD with an area under the receiver-operating characteristic curve of 0.812 [25]. Therefore, it can be easily and convincingly used in primary clinics by general practitioners. We adopted the Framingham risk score for predicting 10-year cardiovascular outcomes for the participants. Among the many measures that can reflect cardiovascular risk, the Framingham risk score, an integer-based risk assessment tool, can be easily used in primary settings [26]. In 2013, the American Heart Association published the "2013 ACC/AHA Guideline on the Assessment of Cardiovascular Risk" to predict cardiovascular outcomes more accurately [51]. However, this guideline did not provide a stratified score for Asian populations, and some previous studies suggested that the pooled cohort risk scores from the 2013 ACC/AHA guideline might overestimate cardiovascular risk in Asian populations [52,53], whereas the Framingham general cardiovascular risk score accurately estimates cardiovascular outcomes among Asian populations [54,55]. We adopted the Framingham risk score for the current study for the reasons above. Additionally, several previous studies have applied this scoring system to predict 10-year cardiovascular disease risk. Using the Framingham risk score, Al-Khalidi et al. revealed the relationship between vitamin D and cardiometabolic disease risk [56]. Faerch et al. described a possible association between cardiovascular risk and hyperglycemia in relation to insulin resistance using the Framingham risk score [57]. Additionally, some studies associated sarcopenia with cardiovascular risk using this score [58][59][60]. Our study is in line with those previous studies seeking out cardiovascular disease risk factors.
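For reference, the commonly cited form of the hepatic steatosis index is reproduced below; the exact coefficients and cut-offs should be verified against the original derivation study [25]:

$$\mathrm{HSI} = 8 \times \frac{\mathrm{ALT}}{\mathrm{AST}} + \mathrm{BMI} \;\; (+2\ \text{if diabetes mellitus};\; +2\ \text{if female}).$$

Values below 30 are commonly used to rule out hepatic steatosis and values above 36 to detect it.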
This present study has some strengths. First, the parameters and scores used in this study design, such as the hepatic steatosis index, serum uric acid, and the Framingham risk score, can be widely adopted in primary care without much difficulty. Second, this is the first study describing the association between serum uric acid and cardiovascular risk in relation to the progression of non-alcoholic fatty liver disease using nationally representative data. Third, the study results were consistent before and after adjusting for possible confounders and mediating factors, including metabolic syndrome and each factor of the Framingham risk score. Therefore, we could describe the effect of serum uric acid on cardiovascular risk both as the integrated effect of mediators and as an independent effect.
Meanwhile, some limitations exist in this study. It is based on an observational, cross-sectional survey; therefore, we cannot claim direct causality from these results. We used the Framingham risk score for the evaluation of cardiovascular risk; this index might be imprecise compared with other cardiac evaluation measures, such as the coronary calcium score or cardiac sonography. Likewise, the hepatic steatosis index cannot diagnose non-alcoholic fatty liver disease as precisely as ultrasound or liver biopsy; therefore, this study might overestimate the non-alcoholic fatty liver disease population. Moreover, although the study population in this report is representative of the Korean population, the results might not be generalizable to other ethnicities. Further prospective studies covering other ethnicities are needed to confirm these study results.
Conclusions
This study suggests that serum uric acid level is significantly associated with 10-year cardiovascular risk in non-alcoholic fatty liver disease patients. Moreover, higher serum uric acid quartiles were also associated with a high 10-year cardiovascular disease risk. The association was maintained after adjustment for possible confounders and mediators. Further multi-ethnic prospective studies are needed to validate these study results.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijerph19031042/s1, Table S1: The association between serum uric acid quartile and having a high CVD risk.
Institutional Review Board Statement: This study has been carried out following the latest version (2013) of the Code of Ethics of the World Medical Association (Declaration of Helsinki) for research involving humans, and the Institutional Review Board of Eulji University approved the study protocol (EUIRB2020-061).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the Korea National Health and Nutrition Examination Survey.
Data Availability Statement: The Korean National Health and Nutrition Examination Survey is a publicly available dataset. The data can be found here: https://knhanes.kdca.go.kr/knhanes/sub03/sub03_02_05.do (accessed on 13 December 2021); an accession number is not required.
Conflicts of Interest: The authors declare no conflict of interest. | 2022-01-20T16:11:52.152Z | 2022-01-18T00:00:00.000 | {
"year": 2022,
"sha1": "54d9a56cd70a81bf52aef0749f6878913f42ae11",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/19/3/1042/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "460fff7501b2fd27140c8ff1f3163558ed4b6046",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13067117 | pes2o/s2orc | v3-fos-license | Pancreatitis Secondary to Bile Duct Cyst in a 36-year-old Pregnant Woman: Case Report
Pancreatitis in pregnancy has a prevalence of 1.5/1500-4500 cases, constituting one of the most common causes of acute abdomen, with a biliary origin in 70% of cases, triglycerides in 20%, and other causes in the remaining 10%, including choledochal cyst (CC) as a rare cause with three previous reports in the literature, which may have a fatal outcome with fetal loss in some cases. We report the case of a 25-year-old patient at 30.4 weeks of gestation (WOG) who arrived at the emergency room with right upper quadrant and epigastric pain of 8 hours' duration, associated with nausea and vomiting. No pathological background was reported. Physical examination revealed jaundice, a gravid abdomen consistent with 30.4 WOG, fetal movements present, a positive Murphy sign, and epigastric pain on deep palpation. Laboratory tests reported total bilirubin (TB) 3.9 mg/dl, direct bilirubin (DB) 3.69 mg/dl, alkaline phosphatase (AP) 2038 IU/L, amylase 280 IU/L, and lipase 1938 IU/L. Pancreatitis was confirmed, and abdominal ultrasound (US) was requested to determine a biliary origin. US reported a gallbladder of 9×4 cm with thin walls and without filling defects, a dilated intrahepatic bile duct, and a common bile duct cyst. Magnetic resonance cholangiopancreatography (MRCP) concluded a Todani type I choledochal cyst of 17×9 cm, with displacement of the duodenum, colon, and pancreas. Because the pregnancy was ongoing, appropriate medical management with fluids and analgesics was started until remission of the pancreatitis 72 hours later. After delivery at 34 WOG, cholecystectomy was performed successfully with a hepaticojejunal Roux-en-Y anastomosis. Histopathologic analysis reported nonspecific inflammation without dysplasia or metaplasia. At four months of follow-up the patient is asymptomatic. Pancreatitis in pregnancy is a common cause of acute abdomen, rarely associated with a choledochal cyst as the cause. Surgical resolution once the pregnancy is over must be undertaken as soon as possible because of the high risk of degeneration to adenocarcinoma and of recurrent pancreatitis.
Introduction
Acute pancreatitis is a relatively common disease in pregnant women (1/1500-4500 pregnancies), secondary to gallstones in 70% of cases and to hypertriglyceridemia in 20%.
Other less common causes, such as hyperparathyroidism, autoimmunity, or toxins, can trigger acute pancreatitis in pregnant women, with fetal loss in up to 4.7% of cases [1]. Choledochal cysts (CC) are rare congenital intra- or extrahepatic biliary tract dilatations. The incidence in North America is 1:150,000 live births. They may be responsible for complications such as biliary stasis, stone formation, cholangitis, pancreatitis, and malignant degeneration [2]. The association of pancreatitis secondary to a choledochal cyst in a pregnant woman has been reported in only three other cases [3][4][5].
Case Presentation
We report the case of a 25-year-old patient at 30.4 weeks of gestation (WOG) of her first pregnancy, without pathological background. She arrived at the emergency room with right upper quadrant and epigastric pain of 8 hours' duration, associated with nausea, vomiting, and respiratory distress. In the last 24 hours she had noticed jaundice. Physical examination revealed 95 heartbeats per minute, 23 breaths per minute, arterial tension of 130/80 mmHg, jaundice, a globular abdomen at the expense of the pregnant uterus, absent peristalsis, a positive Murphy sign, and epigastric pain on deep palpation. Fetal wellbeing was confirmed by obstetric ultrasonography (US). Laboratory tests reported total bilirubin 3.9 mg/dl, direct bilirubin 3.69 mg/dl, alkaline phosphatase 2038 IU/L, amylase 280 IU/L, and lipase 1938 IU/L. Pancreatitis was confirmed, and abdominal US was requested to determine the probable biliary origin, reporting a gallbladder of 9×4 cm with thin walls and without filling defects, and a dilated intrahepatic bile duct and common bile duct with a cystic dilatation of 17×9 cm. Magnetic resonance cholangiopancreatography (MRCP) was performed and reported a Todani type I choledochal cyst with displacement of adjacent structures, predominantly the pancreas (Figures 1 and 2). The ongoing pregnancy made her unsuitable for cyst excision and biliodigestive bypass, given the high risk of complications associated with pregnancy in the third trimester. For this reason, medical management with fasting, fluids, and paracetamol was established. The patient had an adequate response to medical treatment, with remission of the pancreatitis within 72 hours. She was discharged pending resolution of the pregnancy. After delivery at 34 WOG, without negative effects or need for cesarean section, an abdominal computed tomography (CT) scan was performed for anatomical delineation and surgical planning. Two months later, cholecystectomy and total cyst excision were performed with a hepaticojejunal Roux-en-Y anastomosis (Figures 3-5), without complications and with an uneventful postoperative course. Histopathology reported a Todani type I cyst and nonspecific inflammation. At four months of follow-up the patient is asymptomatic.
Discussion
Clinical evaluation of pregnant patients with acute abdominal pain is confounded by physiologic and anatomic changes related to pregnancy, representing a diagnostic and therapeutic challenge. Mild leukocytosis, anemia, and elevated alkaline phosphatase are considered normal during pregnancy [6]. Acute pancreatitis complicates approximately 1 of every 3300 pregnancies, most commonly occurring during the third trimester and postpartum; pregnancy itself induces symptoms secondary to hormonal effects on biliary function, compression by the gravid uterus, and increased intra-abdominal pressure. During pregnancy, CC are atypical and represent an additional challenge, including fetal risk from cyst-related complications [7]. The incidence of choledochal cysts is 1 in 150,000 live births, of which 20% are detected in adulthood, affecting women at a ratio of approximately 4:1 [8]. A CC is an expansion of the intrahepatic or extrahepatic biliary ducts, classified by Todani into five types, and can lead to complications such as biliary stasis, stone formation, cholangitis, pancreatitis, and malignant degeneration [1]. There are many theories about its origin, but the most accepted is an abnormal biliopancreatic duct connection, with retrograde flow, inflammation, and pancreatic enzyme injury favoring cystic degeneration [9]. Adult patients usually present with pancreatic or biliary symptoms rather than cholangitis, as seen in children [10]. US is used to detect cholelithiasis and bile duct dilatation but cannot optimally evaluate the distal common bile duct and pancreas. When biliary obstruction is clinically suspected, MRCP can display the biliary anatomy and detect small stones in the common bile duct with high sensitivity and specificity. Rare causes of biliary obstruction, such as Mirizzi syndrome, CC, or intrahepatic biliary stones, can also be successfully diagnosed using MRCP [11]. Once the diagnosis is established, patients should be referred to specialized centers for treatment, bearing in mind maternal and fetal well-being, as well as the likelihood of cyst-related complications. The reported incidence of malignancy in choledochal cysts is age related, reaching 14.3% above the age of 20 [12]. For this reason, the treatment consists of complete excision and Roux-en-Y hepaticojejunostomy [13,14]. In pregnancy, however, a more conservative approach may have to be adopted, treating the pancreatitis and deferring surgery until the second trimester or after delivery, when the risk is lower. This case report was prepared following the CARE Checklist (2013).
Conclusion
Pancreatitis is a serious condition that occurs frequently in pregnant women and may represent a diagnostic and therapeutic challenge because of physiological changes as well as the limitations in therapeutic resources imposed by the ongoing pregnancy. For this reason, a multidisciplinary approach should be established to ensure the best outcome for the fetus and mother. The presentation of a choledochal cyst in adults and in pregnancy is rare, so the medical team needs to know the best approach to provide optimal treatment, taking into account the stage of pregnancy and disease progression. Surgical treatment needs to be performed as soon as possible, without compromising maternal and fetal well-being, because of the risk of complications associated with choledochal cysts, such as recurrent pancreatitis and malignant degeneration. Once the pancreatitis has resolved, surgical resection must be performed, either in the second trimester or after resolution of the pregnancy, depending on the patient's status. | 2019-03-16T13:07:39.708Z | 2015-07-27T00:00:00.000 | {
"year": 2015,
"sha1": "769be43800d70bd8fdcb67f0541424619787a71b",
"oa_license": "CCBY",
"oa_url": "https://www.omicsonline.org/open-access/pancreatitis-secondary-to-bile-duct-cyst-in-a-36yearold-pregnant-woman-case-report-2165-7548-1000272.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5b3ec49fa8df2a7e9d34d236c1ab4f925bdcdfd8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119497110 | pes2o/s2orc | v3-fos-license | Polarization resolved Cu $L_3$-edge resonant inelastic x-ray scattering of orbital and spin excitations in NdBa$_{2}$Cu$_{3}$O$_{7-\delta}$
High resolution resonant inelastic x-ray scattering (RIXS) has proven particularly effective in the determination of crystal field and spin excitations in cuprates. Its strength lies in the large Cu $L_{3}$ resonance and in the fact that the scattering cross section follows quite closely the single-ion model predictions, both in the insulating parent compounds and in the superconducting doped materials. However, the spectra become increasingly broader with (hole) doping, hence resolving and assigning spectral features has proven challenging even with the highest energy resolution experimentally achievable. Here we have overcome this limitation by measuring the complete polarization dependence of the RIXS spectra as function of momentum transfer and doping in thin films of NdBa$_{2}$Cu$_{3}$O$_{7-\delta}$. Besides confirming the previous assignment of $dd$ and spin excitations (magnon, bimagnon) in the antiferromagnetic insulating parent compound, we unequivocally single out the actual spin-flip contribution at all dopings. We also demonstrate that the softening of $dd$ excitations is mainly attributed to the shift of the $xy$ peak to lower energy loss. These results provide a definitive assessment of the RIXS spectra of cuprates and demonstrate that RIXS measurements with full polarization control are practically feasible and highly informative.
I. INTRODUCTION
Since the discovery of copper-based high temperature superconductors (HTS) 1 , huge experimental and theoretical efforts have been deployed in the quest for an understanding of the microscopic mechanism of unconventional superconductivity. Despite a number of crucial findings, a conclusive and generally accepted explanation of high temperature superconductivity (SC) in cuprates is still lacking 2 . HTS are layered materials where superconducting CuO 2 planes are stacked with so-called charge reservoirs of variable structure and composition. Undoped parent compounds are Mott insulators, with spin-1/2 Cu 2+ ions forming an antiferromagnetic (AF) square lattice within the planes. These planes are, in turn, weakly coupled, so that 3D AF order sets in only at temperatures much lower than the mean-field temperature set by the in-plane exchange interactions. Superconductivity arises when the number of mobile carriers (holes or electrons) is altered via chemical substitution or by varying the oxygen content in the charge reservoirs. A rich and complex phase diagram in the doping-temperature-magnetic field (p − T − B) phase space results from the peculiar combination of low dimensionality and strong electronic correlations in cuprates 2 . The interplay between antiferromagnetism and SC is intriguing: whereas magnetic fields are known to suppress SC, AF fluctuations are considered to possibly act as the glue for the Cooper pairs 3 . Consequently, the full characterization of spin excitations is of paramount importance for a more conclusive explanation of HTS.
Although optical spectroscopies and inelastic neutron scattering provided most of the experimental basis in this field for many years, more recently resonant inelastic x-ray scattering (RIXS) has brought very significant advances 4 . In particular, thanks to remarkable technical improvements over the last two decades, high resolution RIXS revealed that short-range AF correlations persist in cuprates up to very high doping levels, across and above the superconductivity dome, despite the loss of long-range AF order [5][6][7][8] . RIXS is an orbital- and site-selective energy loss spectroscopy technique where the incident photons are tuned to a core-level absorption resonance to amplify the signal and exploit the large spin-orbit interaction of core levels. When performed at the Cu L 3 -edge (2p 3/2 → 3d transition) it enables momentum-dependent studies of low- and medium-energy excitations of superconducting cuprates [5][6][7][9][10][11][12] . RIXS spectra contain a variety of excitations spanning a wide range of energies, from phonons below 100 meV [13][14][15] , to magnetic excitations up to 500 meV in cuprates 5,6 , to orbital (dd, crystal field) 9,11,16-22 and charge transfer 23-26 excitations in the eV range. Moreover, the elastic and quasi-elastic intensities collected in RIXS spectra carry information on charge density waves or charge orders and associated excitations 10,[27][28][29] .
In this framework the ERIXS spectrometer at the beamline ID32 30 of the ESRF - The European Synchrotron offers a special combination of experimental capabilities: very high resolution (down to 30 meV of total instrumental bandwidth at the Cu L 3 -edge), diffractometer-quality sample manipulation and scattering geometry flexibility, full control of the polarization of the incoming x-rays (linear horizontal, vertical and circular) and polarization analysis of the scattered photons 31 . The latter capability adds extra selectivity that can be decisive in the assignment of spectral features, in particular when intrinsic broadening or quasi-degeneracy of peaks makes high energy resolution partly ineffective. The polarization analysis has been applied to systems other than the cuprates, as in the case of the assignment of the energy and symmetry of ff excitations in CeRh 2 Si 2 32 .
Regarding cuprates, polarimetric measurements have confirmed the charge nature of the ordering discovered in the overdoped region of (Bi,Pb) 2 (Sr,La) 2 CuO 6+δ 28 and have clarified the origin of interesting RIXS signals measured in electron doped cuprates: on the one hand, they confirmed the charge nature of the zone-center fast-dispersing excitations 33,34 in La 2−x Ce x CuO 4 35 , while, on the other, they assigned to spin excitations the enhanced dynamic response at the charge order wave vector of Nd 2−x Ce x CuO 4 36 . Polarization analysis of the scattered radiation can give valuable insight into the doping evolution of various excitations: both dd and spin excitations get broader upon doping, and this analysis can help disentangle contributions overlapping in energy due to their intrinsic width. This has recently been done for paramagnons in Refs. 37 and 38. Despite their relatively high energy, dd excitations hold an interest for the comprehension of high T c superconductivity, as they are related to the admixture of 3d 3z 2 −r 2 orbital character in the x 2 −y 2 ground state and they can gauge the degree of two-dimensionality of the electronic structure, which has empirically been related to T c [39][40][41][42] .
In this manuscript we report a systematic polarization-resolved high-resolution RIXS study of low-energy spin and lattice vibrational excitations and high-energy orbital excitations in high-T c superconducting cuprates. We also compare the experimental findings to calculations of the theoretical RIXS cross sections within a single-ion picture, by including in the calculations the polarization dependence of all possible RIXS final states, which greatly helps the assignment of spectral features in the polarization-resolved experimental spectra. In particular, we are able to describe the different spectral contributions in terms of Stokes parameters 43,44 . The present manuscript is organized as follows: in Section II we discuss the experimental details, with particular emphasis on the polarimeter; in Section III we discuss the interpretation of the experimental data within the framework of the single-ion model and in terms of Stokes parameters; finally, in Section IV we show the experimental results and discuss separately the low-energy collective and high-energy intra-ionic excitations.
A. Samples
The NdBa 2 Cu 3 O 7−δ (NBCO) thin films used in this work belong to the "123" family of high T c superconducting cuprates and have the same crystal structure as YBa 2 Cu 3 O 7−δ (YBCO). Cuprates belonging to the "123" family crystallize in a centrosymmetric orthorhombic unit cell, where CuO 2 bilayers are separated by insulating blocks composed of BaO layers and CuO chains, and the neighboring CuO 2 sheets within a bilayer are separated by a Nd ion. Because the lattice parameters a and b are nearly identical in epitaxial films grown on tetragonal substrates (3.9 Å) and c = 11.7 Å, we can neglect the orthorhombicity and adopt a tetragonal description. Similarly to YBCO, by altering the oxygen content in the CuO chains and the excess of Nd at the Ba sites (Nd 1+x Ba 2−x Cu 3 O 7 ) it is possible to change the carrier density (holes) in the CuO 2 planes and obtain superconductivity with T c up to 95 K at optimal doping. In this work we have measured undoped (AF), underdoped (UD, T c = 63 K and hole concentration p = 0.11) and optimally doped (OP, T c = 90 K and p = 0.17) NBCO epitaxial films deposited by high oxygen pressure diode sputtering on a (001) SrTiO 3 single crystal substrate with an almost perfect in-plane matching of the lattice parameters. More details on sample growth and characterization have already been given in Refs. 45 and 46.

B. RIXS measurements

RIXS spectra have been acquired with an overall energy resolution of ∼ 80 meV, as determined by measuring the non-resonant response of silver paint placed on a corner of the sample surface. The incident photon energy was tuned to the Cu L 3 -edge (∼ 931 eV) and the polarization of the radiation could be set either parallel (π, horizontal) or perpendicular (σ, vertical) to the scattering plane. The scattering geometry is sketched in Fig. 1(a); the scattering angle (2θ) was fixed at 149.5 • in order to maximize the momentum transfer. Note that the momentum transfer is expressed in terms of its projection onto the CuO 2 plane (q ∥ ), the relevant quantity in quasi-two-dimensional (2D) cuprates. The scattering occurs in the sample ac plane, such that q ∥ could be changed by simply rotating the sample around its b-axis; if δ is the angle between the sample c-axis and the total momentum transfer q, then q ∥ = q sin δ. Throughout our paper, we will present RIXS intensities (corrected for self-absorption as explained in Section II E) in units of eV −1 srad −1 , i.e. by normalizing the spectra to the collection solid angle and incident photon flux, and by taking into account the sampling frequency (energy, in our case) and the spectrometer efficiency. The collection solid angle of ERIXS (5·10 −5 srad) is given by the product of the angular acceptance of the grating (∼ 2.5 mrad) and of the collimating mirror (20 mrad). The photon flux was estimated by measuring and calibrating the drain current generated by the beam in the last optical element before the sample; it amounts to approximately 10 12 photons/s in a bandwidth of 45 meV. The sampling frequency of the RIXS spectra is 10 meV, as determined by the pixel pitch of the detector and the dispersion rate of the grating. Finally, the spectrometer efficiency is limited by the reflectivity of the grating and amounts to approximately 0.1 for the 1400 mm −1 grating 30 used in the present work.
We believe that presenting RIXS intensities in terms of "scattering probability" will facilitate the comparison of data taken with different experimental setups (different synchrotron sources, beamlines, spectrometers, etc.).
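As a worked example of the geometry described above, the following sketch (assuming hc ≈ 12398.4 eV·Å and the tetragonal in-plane lattice parameter a = 3.9 Å quoted earlier) converts the photon energy and angles into the in-plane momentum transfer:

```python
import numpy as np

def q_parallel_rlu(E_eV=931.0, two_theta_deg=149.5, delta_deg=30.0, a_ang=3.9):
    """In-plane momentum transfer (in rlu) for the geometry of Fig. 1(a);
    delta_deg is the angle between the sample c-axis and the total momentum
    transfer q, so that q_par = |q| sin(delta)."""
    k = 2 * np.pi / (12398.4 / E_eV)                    # wavevector in 1/Angstrom
    q = 2 * k * np.sin(np.radians(two_theta_deg) / 2)   # |q| = 2k sin(theta)
    return q * np.sin(np.radians(delta_deg)) * a_ang / (2 * np.pi)

print(q_parallel_rlu(delta_deg=90.0))  # maximum reachable q_par, ~0.57 rlu
```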
C. Polarimeter
The advantages of performing polarization resolved measurements have been demonstrated with the prototype of a polarization-selective optical element (hereafter referred to as polarimeter) previously installed on the AXES spectrometer at the ID08 beamline of the ESRF 31,37 . Similarly, the new polarimeter installed on the ERIXS spectrometer at ID32 is based on a W/B 4 C multilayer mirror. Unlike AXES, however, ERIXS delivers horizontally collimated radiation, i.e. the incidence angle on the multilayer is the same for all photons. We therefore adopted a graded multilayer, whose period changes linearly along one direction in its surface (consistently with the photon energy dispersion of the spectrometer at the position of the multilayer), keeping the reflectivity constant over several eV. The nominal working angle of the multilayer mirror is ∼20 • , corresponding to reflectivities r π = 0.085 and r σ = 0.140 for the π and σ components of the polarization relative to the multilayer scattering plane. The average efficiency of the device is, therefore, r 0 = (r σ + r π )/2 ≈ 11.2%. Note that the full suppression of one polarization component (r π = 0) could be obtained at the Brewster angle (45 • ); however, the reflectivity of the unsuppressed polarization component would also be very much reduced, to r σ = 0.012, too low for practical usage of the polarimeter. Still, a consistent decomposition of RIXS spectra in terms of outgoing photon polarization components can be done, as explained below. In the present case, polarization-resolved RIXS spectra with good statistics could be acquired in 180 minutes of accumulation time, while polarization-unresolved RIXS spectra were counted for 30 minutes.
In panel (c) of Fig. 1 we show the linear components of the incident and scattered photon polarization, giving rise to four distinct scattering geometries: ππ′, πσ′, σσ′ and σπ′, where the unprimed (primed) symbols refer to the incident (scattered) photon polarization. In the following, we will refer to σπ′ and πσ′ as the cross-polarization channels and to σσ′ and ππ′ as the non-cross-polarization channels.
D. Polarization analysis and Poincaré-Stokes parameters
The polarization state of the electromagnetic radiation scattered by the sample in the RIXS process is described by the (complex) components E σ′ = |E σ′ |e ıδ σ′ and E π′ = |E π′ |e ıδ π′ of the electric field along the coordinate axes σ′ and π′, perpendicular to the propagation direction k′. Unfortunately, these are not directly accessible, because the phase information is lost when measuring intensities. Instead, it is often convenient to adopt a description of the polarization based on the so-called Stokes parameters (see Appendix A), which are measurable quantities 43,44 .
As already mentioned earlier, the polarization analysis is achieved by exploiting the difference in the reflectivity of a multilayer mirror for the σ and π components of the radiation. However, the multilayer does not completely suppress either of the two photon polarizations, but rather attenuates one more than the other ($r_\pi \approx r_\sigma/2$). A polarization-resolved RIXS spectrum is, therefore, obtained by combining two independent measurements: the intensity of the polarization-unresolved or "direct" beam ($I$) and the intensity of the beam after being reflected by the multilayer mirror ($I_M$). We introduce the Stokes vector for the direct beam

$$\mathbf{S} = \begin{pmatrix} S_0 & S_1 & S_2 & S_3 \end{pmatrix}^T, \tag{1}$$

where $S_0 = |E_{\sigma'}|^2 + |E_{\pi'}|^2 = I$ is the total intensity of the scattered radiation as measured on the detector. In addition, one can define the Poincaré-Stokes parameters as $P_i = S_i/S_0$, with $i = 1, 2, 3$. In case of fully polarized radiation, as in our case, $S_1^2 + S_2^2 + S_3^2 = S_0^2$ ($P_1^2 + P_2^2 + P_3^2 = 1$) and the Poincaré-Stokes parameters define a point on the Poincaré sphere of unit radius (see Fig. 10 in Appendix A). The Stokes parameters of the beam reflected by the multilayer can be calculated from $\mathbf{S}_M = \mathcal{M}\mathbf{S}$, where $\mathcal{M}$ is the Müller matrix for reflection by an optical element (see Appendix A). We obtain

$$S_{M,0} = r_0 \left( S_0 + A\, S_1 \right), \tag{2}$$

where $S_{M,0} = I_M$ is the total intensity of the beam after being reflected by the multilayer mirror as measured on the detector. It is important to point out that, in general, all polarization states contribute to $I_M$ via their projections on the σ′ and π′ coordinate axes. In the following, we assume that the scattered radiation is fully linearly polarized, implying that $|S_1| = S_0$ ($|P_1| = 1$) and $S_2 = S_3 = 0$ ($P_2 = P_3 = 0$). In Section III A, we will show that this assumption is well justified in all but a very few cases for Cu 2+ RIXS cross-sections in the adopted scattering geometry. Under this assumption,

$$I_{\sigma',\pi'} = \frac{I}{2}\left(1 \pm P_1\right), \tag{3}$$

with

$$P_1 = \frac{1}{A}\left(\frac{I_M}{r_0 I} - 1\right), \tag{4}$$

and

$$A = \frac{r_\sigma - r_\pi}{r_\sigma + r_\pi} \tag{5}$$

is the multilayer polarization sensitivity ($A \approx 0.25$), equivalent to the Sherman function of Mott detectors for spin-resolved photoemission experiments 47 .
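In practice, Eqs. (3)-(5) amount to the following decomposition of two measured spectra; this minimal sketch assumes fully linearly polarized scattered radiation (P 2 = P 3 = 0) and uses the nominal values of r 0 and A quoted above:

```python
import numpy as np

def decompose(I, I_M, r0=0.112, A=0.25):
    """Split the direct-beam spectrum I into sigma'/pi' components using the
    spectrum I_M measured after reflection by the multilayer (Eqs. (3)-(5))."""
    P1 = (I_M / (r0 * np.asarray(I, dtype=float)) - 1.0) / A
    return 0.5 * I * (1.0 + P1), 0.5 * I * (1.0 - P1)   # I_sigma', I_pi'
```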
In Fig. 2 we show the polarization-resolved RIXS spectra of AF NBCO at q ∥ = 0.2 rlu for π incident photon polarization (error bars are calculated as explained in Appendix B). The RIXS spectra are dominated by intense features in the energy loss window between -2.5 and -1.0 eV, which are usually ascribed to crystal field excitations. In the low-energy region, other excitations are also clearly visible and highlighted in the inset. One can distinguish a (quasi-)elastic line at zero energy loss and magnetic excitations below 0.5 eV. A quick inspection of the polarization-resolved RIXS spectra immediately shows that different excitations have their own polarization dependences. In particular, we note that magnetic excitations occur mainly in the πσ′ polarization channel, while the (quasi-)elastic line is observed in the non-cross-polarization channel. Crystal-field excitations also have a clear polarization dependence, which will be exploited in the following when discussing the case of doped cuprates.
E. Self-absorption corrections
The decomposition of the RIXS spectra in terms of outgoing photon polarization components offers the possibility to correct self-absorption effects in a more reliable way than in unpolarized RIXS. Indeed, self-absorption alters the shape of the spectrum because photons are more strongly re-absorbed at small energy losses than at large ones due to the resonance. Since the absorption coefficient of a material depends on both the photon energy and polarization, so does the self-absorption. The knowledge of the scattered photon polarization is, therefore, an important ingredient for accurate self-absorption corrections.
We here take full advantage of the polarization resolution and apply self-absorption corrections to all experimental RIXS spectra shown in this paper. Following the procedure discussed in detail in the Supplemental Material of Ref. 37, we define a correction factor $C_{\epsilon,\epsilon'}(\omega_1, \omega_2)$, which depends on the energy $\omega_1$ ($\omega_2$) and polarization $\epsilon$ ($\epsilon'$) of the incident (scattered) photons, such that the corrected RIXS intensity $I_{corr}(\omega_2)$ is related to the measured intensity $I_{meas}(\omega_2)$ by

$$I_{corr}(\omega_2) = C_{\epsilon,\epsilon'}(\omega_1, \omega_2)\, I_{meas}(\omega_2). \tag{6}$$

In particular, the correction factor is given by

$$C_{\epsilon,\epsilon'}(\omega_1, \omega_2) = 1 + u\, \frac{\alpha_{\epsilon'}(\omega_2) + \alpha_0}{\alpha_{\epsilon}(\omega_1) + \alpha_0}, \tag{7}$$

where $u = \cos(\theta_{in})/\cos(\chi)$ is a geometrical factor depending on the photon angles of incidence ($\theta_{in}$) and scattering ($\chi$) as measured from the normal to the sample surface (see Fig. 1(a)), and where $\alpha_0$ and $\alpha_{\epsilon}(\omega)$ are the non-resonant and resonant parts of the absorption coefficient, respectively, which can be experimentally determined. Given the large orbital anisotropy of the cuprates, the absorption coefficient $\alpha_{\epsilon}(\omega)$ varies enormously depending on the orientation of $\epsilon$ with respect to the sample crystallographic directions. At the L 3 -edge resonance, the absorption is minimized (maximized) for $\epsilon \parallel c$ ($\epsilon \perp c$) 48 . For this reason, the knowledge of the polarization of the scattered photons allows one to determine the self-absorption correction coefficient in a precise way. We show in Fig. 3 the correction factors at q ∥ = 0.2 rlu and at q ∥ = 0.4 rlu for both σ′ and π′ polarizations of the light. Clearly, the self-absorption correction affects mostly the low-energy spectral range, leaving the dd lineshape unchanged. For all the cases, the difference between the two possible polarization states of the scattered light does not exceed ≈ 10 − 20%.
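A minimal numerical sketch of this correction, under the thick-sample assumption used above, is:

```python
import numpy as np

def correction_factor(alpha_in, alpha_out, alpha0, theta_in_deg, chi_deg):
    """Self-absorption correction C such that I_corr = C * I_meas.
    alpha_in (alpha_out): resonant absorption coefficient at the incident
    (scattered) photon energy and polarization; alpha0: non-resonant part."""
    u = np.cos(np.radians(theta_in_deg)) / np.cos(np.radians(chi_deg))
    return 1.0 + u * (alpha_out + alpha0) / (alpha_in + alpha0)
```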
A. Single-ion model for the calculation of the RIXS cross-sections
To help the interpretation of polarization-resolved RIXS spectra, we adopt a single-ion model that allows us to calculate L 3 -edge RIXS cross-sections. The case of cuprates is particularly straightforward, because a Cu 2+ (3d 9 ) ion in D 4h symmetry has initial, intermediate and final states with one hole filling the x 2 −y 2 orbital, one of the four spin-orbit coupled 2p 3/2 orbitals and one of the five crystal-field-split 3d orbitals, respectively. Therefore, the Kramers-Heisenberg formula for the calculation of the RIXS amplitudes A of the various final states can be easily implemented.
The use of the Cu 2+ single-ion model has already proved quite powerful. It predicted the possibility to measure single spin-flip excitations due to the strong spin-orbit coupling of the 2p 3/2 core hole 49,50 and found immediate experimental confirmation 6,51 . The single-ion model was also useful to assign crystal-field excitations in RIXS data of cuprates based on their dependence on incident photon polarization and scattering geometry; a detailed discussion can be found, for example, in Ref. 9. Here, we use the explicit dependence of the RIXS cross sections on the scattered photon polarization to calculate the Stokes vector and the corresponding Poincaré-Stokes parameters P (P i = S i /S 0 , i = 1, 2, 3). The latter are reported in Tables I and II for the scattering geometry depicted in Fig. 1(a) and (c), for σ and π incident photon polarization, respectively. The ground state is assumed to be x 2 −y 2 ↓, such that x 2 −y 2 ↓ indicates elastic scattering (the arrow direction indicates the spin state). Excited states may involve a spin flip (e.g., x 2 −y 2 ↑), an orbital change (e.g., 3z 2 −r 2 ↓) or both (e.g., 3z 2 −r 2 ↑).
It is interesting to note that, when the scattering occurs in the sample ac (xz) plane with full σ or π incident photon polarization, the scattered radiation is also fully σ or π polarized (P 1 = 1 or -1, respectively, and P 2 = P 3 = 0) for all excited states but two: yz, ↓ (yz, ↑) and 3z 2 −r 2 , ↓ (3z 2 −r 2 , ↑) for σ (π) incident photon polarization. Here, S 3 ≠ 0 (P 3 ≠ 0) and the polarization analysis described in Section II D might be ambiguous. For example, if we assume that the scattered radiation is fully circularly polarized, then |S 3 | = S 0 (|P 3 | = 1) and S 1 = S 2 = 0 (P 1 = P 2 = 0): the intensity recorded on the detector after reflection from the multilayer mirror will be S M,0 = r 0 S 0 . Eqs. (3) and (4) will then provide I σ′ = I π′ = S 0 /2, i.e. the scattered intensity is equally distributed between the two polarization channels.

Table II. Poincaré-Stokes parameters of scattered radiation calculated within the Cu 2+ single-ion model with π incident photon polarization for the various final states. θ o is the angle between the scattered photons and the normal to the sample surface.
In the upper panel of Fig. 4 we show the calculated RIXS cross-sections for various final states with polarization resolution, as a function of q ∥ along the [1 0 0] direction at a fixed scattering angle of 149.5 • . These are summed over spin-flip and non-spin-flip states, which eventually leads us to consider only three excited states, namely the xy, the xz/yz and the 3z 2 −r 2 states. In the bottom panel, we simulate the RIXS spectra of AF NBCO at q ∥ = 0.2 rlu by taking the energy position and the Lorentzian lifetime broadening of the three excited states from the literature: in particular, the transition to the xy state is found at -1.52 eV, the xz/yz states at -1.75 eV and the 3z 2 −r 2 state at -1.98 eV 9 . In order to facilitate the comparison with the experiment (Fig. 2), the simulated RIXS spectra are convoluted with a Gaussian function with a full width at half maximum of 80 meV that takes into account the finite experimental energy resolution. The agreement between measured and calculated RIXS spectra is remarkable, including the cases with polarization resolution of the scattered photons. The main experimental features are well reproduced, in particular the fact that with π incident photon polarization the transitions to the xz/yz and the 3z 2 −r 2 excited states mainly occur in the ππ′ polarization channel, while the transition to the xy excited state mainly occurs in the πσ′ polarization channel.
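The simulated spectra can be generated along the following lines; this is a schematic re-implementation (peak positions and widths must be supplied, e.g. the xy, xz/yz and 3z 2 −r 2 values quoted above), not the code used for Fig. 4:

```python
import numpy as np

def simulated_spectrum(energy, peaks, fwhm_gauss=0.080):
    """Sum of Lorentzians convolved with a Gaussian of 80 meV FWHM that
    mimics the instrumental resolution; 'energy' is a uniform grid in eV and
    'peaks' a list of (position_eV, amplitude, lorentzian_fwhm_eV) tuples."""
    step = energy[1] - energy[0]
    spec = np.zeros_like(energy)
    for e0, amp, gamma in peaks:
        spec += amp * (gamma / 2) ** 2 / ((energy - e0) ** 2 + (gamma / 2) ** 2)
    sigma = fwhm_gauss / 2.355                  # FWHM -> standard deviation
    x = np.arange(-5 * sigma, 5 * sigma, step)
    kern = np.exp(-0.5 * (x / sigma) ** 2)
    return np.convolve(spec, kern / kern.sum(), mode="same")
```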
The agreement between measurements and calculations shows that the single-ion model captures the symmetry of the ground and excited states of the system and provides a good description of the RIXS process. Therefore, we will use it extensively in the following to discuss in detail the photon polarization and doping dependence of spectral features in NBCO.

Fig. 5 shows some selected polarization-resolved RIXS spectra of AF and OP NBCO collected at q ∥ = 0.2 and 0.4 rlu, with both σ and π incident photon polarization. The intensity of crystal-field excitations shows both polarization and momentum transfer (scattering geometry) dependence, whereas their position does not. At lower energy losses, in the mid-infrared region, the spectra exhibit a resolution-limited (quasi-)elastic line and dispersive inelastic features related to magnetic excitations (magnon, bimagnon, multi-magnons). In the following, we will detail our analysis of each of these features.

Fig. 6 focuses on the low-energy region of the RIXS spectra. In the case of AF NBCO (top panels), we notice that the (quasi-)elastic peak mainly belongs to the non-cross-polarization channels. Nevertheless, it retains some sizable contribution in the cross-polarization ones. This observation apparently contradicts the predictions of the single-ion model, according to which (top panel of Fig. 7) spin-conserving (∆S = 0) elastic scattering does not occur in the cross-polarization channels. This apparent inconsistency can simply be explained by the presence of low-energy excitations. In the case of cuprates, the two main phonon modes that are accessible by RIXS are the so-called buckling (≈ 35 meV) and breathing (≈ 70 meV) modes 15 ; the former is related to vibrations of the planar oxygen atoms in the direction perpendicular to the CuO 2 planes, the latter to the Cu-O bond-stretching vibrations. While the experimental energy resolution of the present experiment is not sufficient for an accurate study of phonons, polarization analysis of the scattered photons provides clear evidence of their presence and possibly adds information about their symmetry. For example, we note that their occurrence in the cross-polarization channel is consistent with Raman measurements 52 . A detailed investigation of phonons, including the polarization analysis of their RIXS cross-section and their dependence upon doping, is beyond the scope of this paper. However, here we would like to emphasize that the polarization analysis of the scattered photons, besides energy resolution, could provide crucial information about the symmetry of the phonon modes and eventually greatly contribute to uncovering the nature of the electron-phonon coupling in undoped and superconducting cuprates.
B. Magnetic Excitations
Starting from the case of AF NBCO (top panels of Fig. 6), we notice that magnons (the sharp peaks found at ≈ −250 meV at 0.2 rlu and at ≈ −300 meV at 0.4 rlu) occur predominantly in the cross-polarization channels. However, some sizeable contribution in the non-cross-polarization channels is visible. Again, this observation apparently contradicts the prediction of the single-ion model that (top panel of Fig. 7) spin-flip excitations (∆S = 1) occur exclusively in the cross-polarization channels 49 . We envisage two possible explanations for this discrepancy: i) the bimagnon continuum (see below) leaks spectral weight into the energy range of the spin-flip excitations; ii) the ground state of NBCO is not purely x 2 −y 2 , but is mixed with the 3z 2 −r 2 state, a scenario that has already been considered before 41 . In order to check the consistency of the latter hypothesis, we calculated in the bottom panel of Fig. 7 the corresponding RIXS cross-sections and verified that spin-flip transitions are allowed in the ππ′, but not in the σσ′, polarization channel. Although both effects could be simultaneously at play, the bimagnon contribution seems more likely to dominate.
In the RIXS spectra measured with σ polarized incident photons, a shoulder of the spin-flip excitations is usually ascribed to the bimagnon continuum, i.e. the excitation of two interacting magnons. Unlike O K-edge RIXS, Cu L 3 -edge RIXS probes a dispersive branch of the bimagnon continuum 53 , which is therefore more visible at large q ∥ . At q ∥ = 0.4 rlu the bimagnon is found at an energy loss of approximately 450 meV, that is, slightly above the single magnon peak. Contrary to single magnons, bimagnons mostly occur in the non-cross-polarization σσ′ channel.
The bottom panels of Fig. 6 show polarization-resolved RIXS spectra of optimally doped NBCO. Compared to the undoped sample, RIXS features in doped NBCO are broader and sit on an electron-hole pair excitation continuum, which mainly belongs to the non-cross-polarization channel. We notice that magnons are heavily damped (which is why they are usually called paramagnons), but persist when mobile holes are added to the system, as already pointed out earlier 6,7,14,37,[54][55][56] . In addition, here we show that they preserve the polarization dependence of spin-flip excitations, as is better evidenced in the case of π incident photon polarization.
C. Crystal-field excitations
Figure 5. Polarization resolved RIXS spectra of AF (top panels) and OP (bottom panels) NBCO at q ∥ = 0.2 rlu and q ∥ = 0.4 rlu taken with σ (panels a,c,e,g) and π (panels b,d,f,h) polarization of the incident light. In the decomposed spectra symbols represent raw data while continuous lines are smoothed data over 7 points.

In the previous section we discussed the polarization dependence of crystal-field or dd excitations in AF NBCO. The absence of energy-momentum dispersion and the remarkable agreement with simulated RIXS spectra underline the localized (non-collective) nature of crystal-field excitations in NBCO. It is therefore interesting to study their evolution when charge carriers are introduced into the system. In Fig. 8 (top panels) we show the doping dependence (AF, UD and OP) of dd excitations with no polarization analysis of the scattered photons, for q ∥ = 0.2 and 0.4 rlu and for both σ and π incident photon polarizations. As the hole doping is increased we notice two major effects: i) the peaks corresponding to the different orbital excitations broaden and increasingly overlap with each other and ii) the energy position of the resulting broad distribution moves to lower energy loss. In order to quantify this latter effect we determined the center of mass of the distribution of dd excitations and estimated the position of the lowest crystal field excitation (xy) by taking the second derivative of the spectra. We did not use the results of a multi-peak fitting procedure of the RIXS spectra of doped NBCO because it is affected by a very large uncertainty of the fitting parameters; in the case of AF NBCO, however, the energy of the transition to the xy state obtained with the second derivative method perfectly agrees with the value (-1.52 eV) from Ref. 9. The analysis is summarized in the bottom panel of Fig. 8: the average softening of dd excitations amounts to approximately 50 meV in the explored doping range and seems to be mostly caused by a substantial shift (150 meV) of the xy state to lower energy losses.
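The second-derivative estimate of the xy peak position can be implemented, for instance, as follows (the smoothing width and search window are illustrative choices, not the values used in the analysis):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def xy_peak_position(energy, intensity, smooth_meV=30.0):
    """Locate the lowest dd excitation (xy) from the minimum of the smoothed
    second derivative of the spectrum; 'energy' is a uniform grid in eV."""
    step_meV = abs(energy[1] - energy[0]) * 1000.0
    d2 = gaussian_filter1d(intensity, sigma=smooth_meV / step_meV, order=2)
    window = (energy > -1.8) & (energy < -1.2)   # search around the xy state
    return energy[window][np.argmin(d2[window])]
```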
In Fig. 9 we take advantage of the polarization resolution to further investigate this effect in AF and OP NBCO. We focus in particular on the RIXS spectra at q ∥ = 0.4 rlu in the cross-polarization channels: according to the calculated RIXS cross-sections, the πσ′ polarization channel enhances the contribution of the xy excited state and the σπ′ polarization channel that of the xz/yz states in this scattering geometry, implying that we can follow the evolution of the different states independently. The fact that the simulated RIXS spectra in panel (b) nicely reproduce the experimental data of AF NBCO in panel (d) supports the feasibility of this approach. Panel (f) shows that the main effect of doping on the xy and xz/yz excited states is broadening. On top of that, additional spectral weight on the low-energy side of the xy state can be seen in the πσ′ (blue) curve; on the contrary, the σπ′ (red) curve shows additional spectral weight on the high-energy side of the xz/yz states. The latter effect is smaller than, and opposite to, that for the xy excited state, so that the center of mass of the whole dd excitation distribution on average moves to lower energy losses. The RIXS spectra measured at q ∥ = 0.2 rlu (panels (c) and (e)) also provide evidence for the softening of the xy excited state. The shift of the crystal-field states can be explained by the partial screening of the (negative) oxygen charges by the doped holes, which reduces the effective crystalline electric field. That the effect is larger for in-plane (xy) than for out-of-plane (xz/yz) orbitals is consistent with the formation of Zhang-Rice singlets 57 , which mostly live in the CuO 2 planes, thereby providing a better screening of the in-plane oxygen charges.

Figure 7. In-plane momentum dependence of the polarization-resolved RIXS cross-sections within the single-ion model for excitations without orbital character. Top (bottom) panel assumes a ground state with pure x 2 −y 2 (3z 2 −r 2 ) symmetry. The scattering angle (2θ) has been fixed to 149.5 • .
V. CONCLUSIONS
In this work we provided a comprehensive polarization-resolved RIXS study of NBCO as a function of doping. We looked at a number of features, including phonons, single spin-flip excitations (magnons), bimagnons and dd excitations, and studied their full polarization dependence in RIXS. We confirmed that single-ion RIXS cross-section calculations reproduce most of the experimental observations, even including the polarization resolution of the scattered photons. By exploiting the differences in their RIXS cross-sections, we tracked the evolution of the various excitations as a function of doping. In particular, we reported a broadening and a shift of the center of mass of the whole distribution of dd excitations to lower energy losses. The analysis of the RIXS spectra in the cross-polarization channels allowed us to discriminate between the xy and xz/yz excited states and confirm that the shift of dd excitations is mainly driven by a softening of the xy state.
Finally, we emphasize that precautions should be taken in order to obtain a correct interpretation of polarization-resolved RIXS spectra. This was discussed with the introduction of the Stokes (and Poincaré-Stokes) parameters: in particular, it turns out that the decomposition into σ′ and π′ components is rigorous only when the scattered radiation is fully σ′ or π′ polarized, namely |S 1 | = S 0 (|P 1 | = 1). Single-ion model calculations show that this is the case for most of the excited states. In general, we believe that a systematic use of polarization-resolved RIXS could add important information on the nature of the excitations; for example, it may help to discriminate phonon modes with different symmetries. We note that only the knowledge of the polarization of the scattered photons permits a proper correction for self-absorption effects. This might be crucial when investigating small intensity differences, especially in the low-energy region of the RIXS spectra where phonons are observed. In addition, polarization resolution of the scattered photons mitigates issues related to the intrinsic broadening of the RIXS features upon doping, for which pushing energy resolution does not necessarily help, e.g. for magnons, bimagnons and electron-hole pair excitations overlapping in the same energy range or for crystal-field excitations, which broaden and merge into a single feature.

Appendix A: Stokes parameters and Müller matrix

Fully polarized electromagnetic radiation is described by the (complex) components of the electric field along the coordinate axes, say σ and π, perpendicular to the propagation direction k. These can be regarded as the components of a vector known as the Jones vector. The action of any optical element on the properties of the electromagnetic field is described by the so-called Jones matrix $\mathcal{J}$. For reflection, it reads

$$\mathcal{J} = \begin{pmatrix} \tilde{r}_\sigma & 0 \\ 0 & \tilde{r}_\pi \end{pmatrix},$$

and the Jones vector of the reflected radiation is therefore given by

$$\begin{pmatrix} E_{r,\sigma} \\ E_{r,\pi} \end{pmatrix} = \mathcal{J} \begin{pmatrix} E_\sigma \\ E_\pi \end{pmatrix}.$$

Alternatively, the polarization state of the radiation can be described by the four Stokes parameters, forming the Stokes vector S. The relation to the components of the Jones vector can be written in several, equivalent ways, e.g.

$$S_0 = |E_\sigma|^2 + |E_\pi|^2, \quad S_1 = |E_\sigma|^2 - |E_\pi|^2, \quad S_2 = 2\,\mathrm{Re}(E_\sigma E_\pi^*), \quad S_3 = 2\,\mathrm{Im}(E_\sigma E_\pi^*).$$

S 0 is the total intensity of the electromagnetic radiation, S 1 and S 2 describe the degree of linear polarization and S 3 that of circular polarization. In addition, one can define the Poincaré-Stokes parameters P, where $P_i = S_i/S_0$ ($i = 1, 2, 3$). In general, $S_1^2 + S_2^2 + S_3^2 \le S_0^2$ ($P_1^2 + P_2^2 + P_3^2 \le 1$), where the equality holds for fully polarized radiation, in which case the Poincaré-Stokes parameters define a point on the so-called Poincaré sphere of unit radius, as shown in Fig. 10.

Figure 10. Schematic representation of the Poincaré sphere. The Cartesian coordinates represent the three Stokes parameters S1, S2 and S3.
The propagation of electromagnetic radiation characterized by S through an optical element is described by the Müller matrix M, which is related to the elements of the Jones matrix. For reflection it reads

$M = \begin{pmatrix} \tfrac{1}{2}(r_\sigma + r_\pi) & \tfrac{1}{2}(r_\sigma - r_\pi) & 0 & 0 \\ \tfrac{1}{2}(r_\sigma - r_\pi) & \tfrac{1}{2}(r_\sigma + r_\pi) & 0 & 0 \\ 0 & 0 & \sqrt{r_\sigma r_\pi} & 0 \\ 0 & 0 & 0 & \sqrt{r_\sigma r_\pi} \end{pmatrix} \qquad \mathrm{(A6)}$

and its action on the Stokes vector S determines how the properties of the electromagnetic radiation change after reflection,

$\mathbf{S}_r = M\,\mathbf{S}. \qquad \mathrm{(A7)}$

Again, $S_{r,0}$ is the total intensity of the reflected electromagnetic radiation, and similarly for the other parameters.
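As a concrete illustration of this formalism, the short Python sketch below (our own; the function names and numerical reflectivities are illustrative assumptions, not values from this work) builds the Stokes vector from a Jones vector and propagates it through the reflection Müller matrix of Eq. (A6). The sign conventions for $S_2$ and $S_3$ correspond to one of the equivalent choices mentioned above.

```python
import numpy as np

def stokes_from_jones(E_sigma, E_pi):
    """Stokes parameters (S0..S3) from complex Jones components."""
    S0 = abs(E_sigma)**2 + abs(E_pi)**2            # total intensity
    S1 = abs(E_sigma)**2 - abs(E_pi)**2            # sigma/pi linear balance
    S2 = 2.0 * np.real(np.conj(E_sigma) * E_pi)    # +/-45 deg linear component
    S3 = 2.0 * np.imag(np.conj(E_sigma) * E_pi)    # circular component
    return np.array([S0, S1, S2, S3])

def mueller_reflection(r_sigma, r_pi):
    """Mueller matrix of Eq. (A6) for intensity reflectivities r_sigma, r_pi."""
    p = 0.5 * (r_sigma + r_pi)
    m = 0.5 * (r_sigma - r_pi)
    c = np.sqrt(r_sigma * r_pi)
    return np.array([[p, m, 0, 0],
                     [m, p, 0, 0],
                     [0, 0, c, 0],
                     [0, 0, 0, c]])

# Example: fully sigma-polarized light reflected by a mirror with
# r_sigma = 0.12, r_pi = 0.04 (illustrative numbers only).
S = stokes_from_jones(1.0 + 0j, 0.0 + 0j)
S_r = mueller_reflection(0.12, 0.04) @ S
P = S_r[1:] / S_r[0]            # Poincare-Stokes parameters P1, P2, P3
print(S_r, P, np.sum(P**2))     # fully polarized light stays on the unit sphere
```

For fully σ-polarized input the reflected Stokes vector is simply scaled by r_σ and the Poincaré-Stokes vector remains on the unit sphere, as expected.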
Appendix B: Error bars
In the following, we report the procedure adopted to calculate the error bars associated with the intensities of the polarization-resolved RIXS spectra. Since these spectra are derived indirectly from two independent measurements of the RIXS intensity, in the "direct" beam (I) and past the multilayer mirror (I_M), the error propagation is non-trivial. We calculate the error bars with the same approach used for spin-resolved photoemission with Mott detectors 47 .
The two RIXS spectra I and I_M are typically measured with different acquisition times (τ ≠ τ_M) in order to compensate for the low efficiency of the multilayer mirror. The total numbers I and I_M of photons incident on the two detectors are

$I = (n_\pi + n_\sigma)\,\tau = n\,\tau, \qquad \mathrm{(B1)}$

$I_M = (r_\pi n_\pi + r_\sigma n_\sigma)\,\tau_M = n_M\,\tau_M, \qquad \mathrm{(B2)}$

where $r_\pi$ ($r_\sigma$) is the multilayer mirror reflectivity for the π (σ) polarization channel and $n_\pi$ ($n_\sigma$) is the energy-dependent number of π (σ) polarized scattered photons hitting the detector per unit time.
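Numerically, Eqs. (B1) and (B2) form a 2×2 linear system for n_σ and n_π once the mirror reflectivities are known. A minimal sketch of the inversion and the Poisson error propagation is given here (our own illustration; the variable names and example numbers are assumptions, with $r_{\sigma,\pi} = r_0(1 \mp A)$ as defined in the main text):

```python
import numpy as np

def resolve_polarization(I, I_M, tau, tau_M, r0, A):
    """
    Recover per-channel photon rates (n_sigma, n_pi) and their Poisson
    error bars from the direct-beam counts I and the counts past the
    multilayer I_M (Eqs. B1-B2), with r_{sigma,pi} = r0 * (1 -/+ A).
    """
    r_sigma, r_pi = r0 * (1.0 - A), r0 * (1.0 + A)
    # Linear system: [1, 1; r_sigma, r_pi] @ [n_sigma, n_pi] = [I/tau, I_M/tau_M]
    M = np.array([[1.0, 1.0], [r_sigma, r_pi]])
    b = np.array([I / tau, I_M / tau_M])
    n_sigma, n_pi = np.linalg.solve(M, b)
    # Poisson statistics: Delta I = sqrt(I) and Delta I_M = sqrt(I_M),
    # propagated linearly through the inverse matrix.
    Minv = np.linalg.inv(M)
    db = np.array([np.sqrt(I) / tau, np.sqrt(I_M) / tau_M])
    dn = np.sqrt((Minv**2) @ (db**2))
    return (n_sigma, n_pi), dn

# Illustrative numbers only: 1e5 direct counts in 60 s, 9e4 counts in
# 600 s past a mirror with r0 = 0.1 and Sherman constant A = 0.25.
(ns, npi), (dns, dnpi) = resolve_polarization(1e5, 9e4, 60.0, 600.0, 0.1, 0.25)
print(ns, npi, dns, dnpi)
```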
Having introduced the Sherman constant A of the polarimeter, the reflectivity of the multilayer mirror can be written as $r_{\sigma,\pi} = r_0(1 \mp A)$ (see the main text for the definitions of A and $r_0$). In order to proceed with the calculation of the error bars of the polarization-resolved RIXS intensities, we define the degree of photon polarization

$P = \dfrac{n_\sigma - n_\pi}{n_\sigma + n_\pi}, \qquad \mathrm{(B3)}$

which characterizes the polarization state of the photons scattered by the sample. Considering that the total number of detected photons is given by I + I_M and assuming a Poisson statistical distribution of uncertainties, $\Delta I = \sqrt{I}$ and $\Delta I_M = \sqrt{I_M}$, where ΔI (ΔI_M) is the error bar on the intensity of the direct beam (the beam past the multilayer); the error bar for both polarization channels then follows by standard propagation. Rewriting $n_\pi$ and $n_\sigma(\omega_2)$ as $n_{\pi,\sigma}(\omega_2) = \tfrac{1}{2}(1 \mp P)\,n(\omega_2)$ leads to the final expression, Eq. (B8). From Eq. (B8) we notice that if the total number of accumulated counts is roughly the same for the direct beam and the beam past the multilayer, we have $I \approx I_M$, implying that $\tau_M \approx \tau/r_0$ and $\Delta I_{\pi,\sigma} \approx \sqrt{n(\omega_2)\,\tau}\,/(A\sqrt{2})$.
| 2019-02-14T16:09:57.000Z | 2019-02-14T00:00:00.000 | {
"year": 2019,
"sha1": "08151781be92954bdd3d748e54f2ed0413010564",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1902.05471",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "08151781be92954bdd3d748e54f2ed0413010564",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
253125546 | pes2o/s2orc | v3-fos-license | Dissociable effects of oxycodone on behavior, calcium transient activity, and excitability of dorsolateral striatal neurons
Opioids are the most common medications for moderate to severe pain. Unfortunately, they also have addictive properties that have precipitated opioid misuse and the opioid epidemic. In the present study, we examined the effects of acute administration of oxycodone, a μ-opioid receptor (MOR) agonist, on Ca2+ transient activity of medium-sized spiny neurons (MSNs) in freely moving animals. Ca2+ imaging of MSNs in dopamine D1-Cre mice (expressing Cre predominantly in the direct pathway) or adenosine A2A-Cre mice (expressing Cre predominantly in the indirect pathway) was obtained with the aid of miniaturized microscopes (Miniscopes) and a genetically encoded Cre-dependent Ca2+ indicator (GCaMP6f). Systemic injections of oxycodone (3 mg/kg) increased locomotor activity yet, paradoxically, reduced concomitantly the number of active MSNs. The frequency of Ca2+ transients was significantly reduced in MSNs from A2A-Cre mice but not in those from D1-Cre mice. For comparative purposes, a separate group of mice was injected with a non-Cre dependent Ca2+ indicator in the cerebral cortex and the effects of the opioid also were tested. In contrast to MSNs, the frequency of Ca2+ transients in cortical pyramidal neurons was significantly increased by oxycodone administration. Additional electrophysiological studies in brain slices confirmed generalized inhibitory effects of oxycodone on MSNs, including membrane hyperpolarization, reduced excitability, and decreased frequency of spontaneous excitatory and inhibitory postsynaptic currents. These results demonstrate a dissociation between locomotion and striatal MSN activity after acute administration of oxycodone.
Introduction
Opioids, in particular oxycodone, are currently the treatment of choice for moderate to severe pain (Kibaly et al., 2021). However, the increase in the number of oxycodone prescriptions has led to an opioid epidemic and a consequent increase in the number of opioid-related deaths (Vearrier and Grundmann, 2021). An intimate knowledge of the mechanisms of opioid actions in the central nervous system is a prerequisite for the design of rational therapies against opioid addiction.
Pain relief by opioids is mediated primarily by the µ-opioid receptor (MOR) subclass of G-protein-coupled receptors (GPCRs). In the case of oxycodone, this opioid also binds, albeit with much lower affinity, to κ- and δ-opioid receptors (Beardsley et al., 2004). Binding to MORs inhibits adenylate cyclase and cAMP production (Sesena et al., 2014). One of the main effects of opioids is to inhibit neurotransmitter release by blocking voltage-gated P/Q-type (Cav2.1) and N-type (Cav2.2) calcium (Ca 2+ ) channels (Eggermann et al., 2011). Inhibition of Cav2.1 and Cav2.2 channels via GPCRs is determined by direct interaction of Gβγ with the channel pore formed by the α1A (Cav2.1) or α1B (Cav2.2) subunit (Zamponi and Currie, 2013). In addition, opioids open G-protein-coupled inwardly rectifying potassium channels, resulting in hyperpolarization and an overall reduction of neuronal excitability (Law et al., 2000).
µ-opioid receptors are abundant in various brain regions, including the ventromedial (VMS) and the dorsolateral striatum (DLS). However, little is known about potentially differential effects of opioids on direct and indirect pathway medium-sized spiny neurons (MSNs), which can be identified by the selective expression of dopamine D1 and D2 or, more accurately, adenosine A2A receptors. In a previous study, we demonstrated differential effects of DAMGO, a selective MOR agonist, on spontaneous synaptic transmission and excitability of D1 and D2 dopamine receptor-expressing MSNs (Ma et al., 2012). Thus, while acute application of this opioid reduced the frequency of spontaneous excitatory and inhibitory postsynaptic currents in both regions, the effect was greater in the VMS, in particular in the nucleus accumbens (NAc) shell, where excitatory currents from D2 cells and inhibitory currents from D1 cells were decreased by the largest amount. DAMGO also increased cellular excitability in the VMS, as shown by a reduced threshold for evoking action potentials (APs). These results indicated that the VMS is a critical mediator of DAMGO effects (Ma et al., 2012). In a subsequent study, we examined the membrane and synaptic properties of MSNs in the NAc of mice trained to self-administer the opioid remifentanil. We found that prior opioid exposure did not alter the basic membrane properties nor the kinetics or amplitude of miniature excitatory postsynaptic currents (mEPSCs). However, when challenged with DAMGO, the characteristic inhibitory profile was reduced in D1- but not in D2-MSNs from mice receiving remifentanil, suggesting a D1 pathway-specific effect associated with the acquisition of opioid-seeking behaviors (James et al., 2013). While the NAc has received particular attention in addiction research, changes in the DLS could be implicated in the transition to habitual behavior after acquisition of drug-seeking (Yin et al., 2004; Malvaez and Wassum, 2018; Lipton et al., 2019). Thus, more studies on the role of the DLS, specifically designed to tease apart potentially differential effects of opioids on direct and indirect pathway MSNs, are needed, as both pathways are implicated in the control of voluntary actions (Cui et al., 2013).
Until recently, most electrophysiological studies aiming to address this question have used mouse brain slices. However, these in vitro reduced preparations do not allow correlation of electrophysiological changes with addiction behaviors. In recent years, new imaging techniques have provided a rich armamentarium to study brain activity and behavior simultaneously. Ca 2+ influx produced by action potential firing can be visualized with fluorescent Ca 2+ indicators and is commonly used as a proxy of neuronal activity. Two-photon laser-scanning microscopy (2PLSM) has traditionally been the method of choice to examine Ca 2+ transients evoked by APs. Usually, the head of the animal is fixed and the body rests on a styrofoam ball, which limits the rich behavioral repertoire and may alter neuronal activity (Donzis et al., 2020). Miniaturized microscopes (Miniscopes) are becoming a powerful tool to examine neuronal activity in more ethological, freely moving conditions (Ghosh et al., 2011; Stamatakis et al., 2021).
In the present study, we combined viral expression of a genetically encoded Cre-dependent Ca 2+ indicator, GCaMP6f, with the use of a Miniscope to image spontaneous Ca 2+ activity in direct and indirect pathway MSNs in the DLS, or in the cerebral cortex, of freely moving mice before and after systemic injection of oxycodone or saline as control. Following the in vivo studies, mouse brains were dissected and used for in vitro Ca 2+ imaging or electrophysiological recordings. Convergent data demonstrated overall inhibitory effects of oxycodone on MSNs from both direct and indirect pathways. In contrast, cortical pyramidal neuron activity was increased.
Mice and surgery
Mice aged 3-4 months were obtained from our breeding colonies at the University of California Los Angeles (UCLA). All experimental procedures were performed in accordance with the United States Public Health Service Guide for Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Use Committee at UCLA. Every effort was made to minimize pain, discomfort, and the number of mice used. Animal housing conditions were maintained under a standard 12 h light/dark cycle (light cycle starting at 6 AM and ending at 6 PM) and at a temperature of 20-26 °C. The animals had ad libitum access to food and water.
To examine the effects of opioids on direct and indirect pathway MSNs, D1- (n = 3) and A2A-Cre (n = 3) mice of either sex were injected bilaterally at stereotaxic coordinates relative to Bregma (in mm): +1 A/P, ±2 M/L, -3.3 D/V (dorsal striatum), with 1 µl of an adeno-associated virus (AAV) containing GCaMP6f. After 2 weeks, the cortical tissue overlying the striatum was aspirated to allow implantation of a GRIN lens (1.8 mm; Edmund Optics, Barrington, NJ, USA), which was secured with dental cement (Supplementary Figures 1B,C). One to two weeks later, a baseplate to hold the open-source UCLA Miniscope (V3) was affixed to the skull with dental cement. For comparison, to examine the effects of opioids on cortical pyramidal neurons (CPNs), a separate group of mice (3 months old) was injected with 0.5 µl of an AAV containing GCaMP6f expressed using either the pan-neuronal Syn promoter (n = 2 mice) or the CaMKII promoter, which is more selective for excitatory neurons (n = 2 mice), at stereotaxic coordinates relative to Bregma (in mm): +0.5 A/P, +1 M/L, and -0.6 D/V, corresponding to the M1 motor region (Supplementary Figure 1A). After 2 weeks, a thin layer of cortical tissue was aspirated to allow implantation of the GRIN lens (1.8 mm, see above). One to two weeks later, a baseplate to hold the Miniscope was affixed to the skull.
Behavioral assessments and Miniscope imaging
Mice were placed in a cylindrical behavioral chamber (30 cm in diameter and 30 cm high) after a Miniscope was attached to the baseplate. They were allowed to acclimate to wearing the Miniscope and moving freely in the chamber (1 h for three consecutive days). Ca 2+ transients acquired with a V3 Miniscope and behavioral (webcam) data were collected simultaneously for ∼5 min using the Data Acquisition Software (DAQ; UCLA, Los Angeles, CA, USA). Ca 2+ transients were captured at 20 Hz with maximum exposure and gain. Behavioral videos were captured between 20 and 22 Hz, depending on the webcam frame rate. After baseline data were acquired, mice were injected with either oxycodone (3 mg/kg, IP) or saline as control. The final volume ranged between 0.2 and 0.3 ml. Recordings were taken at 5 and 15 min after oxycodone or saline administration, but mainly data from the 15 min time point are reported. As we were only interested in the acute effects of systemic administration of oxycodone, each mouse was used only 1-2 times to avoid opioid receptor sensitization.
Behavioral and Ca 2+ transient analyses
The numbers of mice, videos, and cells used for the behavioral and Miniscope data analyses are shown in Supplementary Table 1.
Behavioral data were processed using the Python-based ezTrack video analysis pipeline (Pennington et al., 2019). Using the location tracking module, an animal's center of mass (COM) was tracked across each session. A reference frame was defined, and each frame was compared to the reference frame to locate the animal's COM. The COM was saved, and the module created both traces and a heat map of the final output. Distance traveled was calculated in the software by measuring a specific distance in the module (in pixels) and converting this pixel distance to actual length (cm) based on the dimensions of the behavioral chamber. Animal movement was also scored manually through visual examination of each frame of the behavioral videos, receiving a score of 0 (non-movement) or 1 (movement). Lastly, the percentage of movement was calculated by dividing the number of movement frames by the total number of frames for each behavioral video.
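In essence, the distance-traveled and percent-movement measures reduce to simple operations on the per-frame COM trace and the manual movement scores. A minimal sketch follows (our own illustration, not the actual ezTrack code; the pixels-per-cm factor is an assumed calibration derived from the chamber dimensions):

```python
import numpy as np

def distance_traveled(com_xy, pixels_per_cm):
    """Total path length (cm) from an (n_frames, 2) center-of-mass trace in pixels."""
    steps = np.diff(com_xy, axis=0)                 # per-frame displacement (pixels)
    return np.linalg.norm(steps, axis=1).sum() / pixels_per_cm

def percent_movement(move_flags):
    """Percentage of frames scored 1 (movement) out of all frames."""
    move_flags = np.asarray(move_flags)
    return 100.0 * move_flags.sum() / move_flags.size

# e.g., a 5-min session at ~20 Hz gives ~6000 frames of COM positions
com = np.random.default_rng(0).normal(size=(6000, 2)).cumsum(axis=0)
print(distance_traveled(com, pixels_per_cm=12.0))
```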
Ca 2+ transient activity from Miniscope video recordings was processed using MiniAn, an open-source Miniscope analysis pipeline (Dong et al., 2022). This pipeline performs background subtraction, rigid motion correction using the NoRMCorre algorithm, and source extraction using the constrained non-negative matrix factorization (CNMF) algorithm. MiniAn produces the estimated spatial footprint, the Ca 2+ activity trace, and the deconvolved activity trace of each neuron. To examine the same neurons across different recording sessions before and 15 min after drug administration, we used the Python-based program Cross-reg, which is part of the MiniAn package. This software aligns the spatial footprints of the neurons and considers a neuron to be the same across recordings if its footprints moved by less than a five-pixel threshold between sessions. The Cross-reg program then outputs the neurons that are matched between sessions, and across all sessions in total. A threshold was manually set for each Ca 2+ trace to identify peaks. The minimum threshold was set to the RMS of the trace and increased if this did not appropriately remove noise. Neurons with non-physiological Ca 2+ traces, e.g., continuous shifts of the baseline, also were manually removed. In addition, since after viral injection not only cell bodies but also the neuropil show fluorescence, we set up several criteria to select only objects that, according to the program and our best estimates, had a high probability of being neuronal somata. These included a circular or ovoid shape, an area ranging from 200 to 400 µm 2 (average ∼294 ± 23 µm 2 from a random sample of n = 10 cells), and only objects exhibiting changes in fluorescence (Donzis et al., 2020). Thus, in contrast to fiber photometry, which detects mainly changes in the neuropil (Legaria et al., 2022), the Miniscope allowed more selective measurements of neuronal activity after elimination of spurious objects.
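The event-detection step described above, thresholding each Ca 2+ trace at its RMS and counting threshold crossings, can be sketched as follows (our own simplified illustration; the real pipeline works on MiniAn/CNMF outputs and includes the manual threshold adjustments and morphological criteria described above):

```python
import numpy as np

def count_transients(trace, fs=20.0, thresh=None):
    """
    Count Ca2+ transient peaks in a single-cell fluorescence trace.
    trace : 1D array (e.g., a MiniAn activity trace); fs : frame rate (Hz).
    By default the threshold is the RMS of the trace, which in the actual
    pipeline is raised manually when it does not reject noise adequately.
    """
    trace = np.asarray(trace, dtype=float)
    if thresh is None:
        thresh = np.sqrt(np.mean(trace**2))           # RMS threshold
    above = trace > thresh
    onsets = np.flatnonzero(~above[:-1] & above[1:])  # upward crossings
    rate_hz = onsets.size / (trace.size / fs)         # events per second
    return onsets.size, rate_hz

# Synthetic demo: low-amplitude noise plus two square "transients"
trace = 0.02 * np.random.default_rng(2).standard_normal(2000)
trace[400:420] += 1.0
trace[900:920] += 1.0
print(count_transients(trace))                        # ~2 events, ~0.02 Hz
```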
Two-photon laser-scanning microscopy and electrophysiology in slices
After the last Miniscope imaging recording, some mice were sacrificed and brain slices containing cerebral cortex and striatum (350 µm) were obtained for electrophysiological recordings and Ca 2+ imaging in vitro using 2PLSM. Ca 2+ imaging was performed using a Scientifica 2PLSM equipped with a MaiTai laser (Spectra-Physics). To image Ca 2+ signals, the laser was tuned to a wavelength of 920 nm. The laser power measured at the sample was less than 30 mW with a 40× water-immersion objective lens (Olympus, Tokyo, Japan). Ca 2+ transients were sampled at a rate of 3.81 Hz using SciScan software (Scientifica, Uckfield, UK). Optical signal amplitudes were expressed as ΔF/F, where F represents the resting light intensity at the beginning of the optical trace and ΔF represents the change in peak fluorescence during the biological signal. These values were then multiplied by 100 to obtain percentage values. ImageJ (NIH) was utilized to process raw fluorescence values, which were then exported to Excel (Microsoft, Redmond, WA, USA) and Clampfit software to calculate %ΔF/F.
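For reference, the %ΔF/F computation amounts to the following sketch (ours; here F is estimated as the mean of the first few frames of the trace, one common way of taking the "resting light intensity at the beginning of the optical trace"):

```python
import numpy as np

def dff_percent(raw, n_baseline=20):
    """Percent dF/F: F is the resting intensity at the start of the trace."""
    raw = np.asarray(raw, dtype=float)
    f0 = raw[:n_baseline].mean()          # resting light intensity F
    return 100.0 * (raw - f0) / f0        # % dF/F
```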
Electrophysiological recordings
The effects of oxycodone on membrane and synaptic properties were examined in labeled and unlabeled MSNs. Only the intact (non-aspirated) side was used. In a separate group of control mice not used for Miniscope recordings, whole-cell patch clamp recordings were obtained from unidentified MSNs. Coronal slices containing cerebral cortex and striatum were perfused with oxygenated ACSF and maintained in the chamber at room temperature. Whole-cell patch clamp recordings from MSNs were obtained in voltage clamp mode using Cs-methanesulfonate as the pipette solution, or in current clamp mode using K-gluconate as the internal solution (Holley et al., 2022). After determination of basic membrane properties, including cell membrane capacitance, input resistance, and decay time constant, spontaneous synaptic activity was recorded. At -70 mV holding potential, most synaptic events are glutamatergic (spontaneous excitatory postsynaptic currents, sEPSCs) and mediated by α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors. At a holding potential of +10 mV, most events are GABAergic (spontaneous inhibitory postsynaptic currents, sIPSCs) and mediated by GABA A receptors. At this holding potential (+10 mV), after 3-5 min of recording in control conditions, oxycodone (1 µM) was added to the external ACSF solution. The recording at this potential continued for at least 8 min, and then the potential was returned to -70 mV to examine the effects of oxycodone on glutamatergic activity. Basic membrane properties were measured with the Clampfit program, and the frequency of spontaneous synaptic activity was calculated with the MiniAnalysis program.
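As an illustration of how the synaptic event statistics reported below (mean frequencies and cumulative inter-event-interval distributions, cf. Figure 6) can be derived from detected event times, a minimal sketch is given here (ours; MiniAnalysis performs the actual event detection):

```python
import numpy as np

def event_stats(event_times_s, duration_s):
    """Mean frequency (Hz) and empirical IEI cumulative distribution."""
    t = np.sort(np.asarray(event_times_s, dtype=float))
    freq = t.size / duration_s                  # mean event frequency
    iei = np.diff(t)                            # inter-event intervals (s)
    x = np.sort(iei)
    cdf = np.arange(1, x.size + 1) / x.size     # cumulative probability
    return freq, x, cdf

freq, x, cdf = event_stats([0.4, 0.9, 1.1, 2.5, 3.0], duration_s=5.0)
print(freq, x, cdf)
```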
Statistics
There were three levels of analysis in our study: mice, video recording sessions, and cells. Most animals were tested with oxycodone only once to prevent opioid receptor sensitization. One A2A mouse and one Syn mouse were tested twice to determine the reproducibility of the response to the drug. The effects of oxycodone were assessed by comparing the number of cells and their Ca 2+ transient frequencies per video (session) analyzed. Values are expressed as mean ± SEM. Statistical significance was calculated using SigmaPlot (v. 14). Data were analyzed by Student's t-tests, paired or unpaired, as well as one- and two-way ANOVAs with or without repeated measures (RM). Pairwise multiple comparison procedures used appropriate post-hoc tests. Differences between or among groups were considered significant if p < 0.05.
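A rough Python equivalent of these comparisons could look like the sketch below (ours; the study used SigmaPlot, and the numbers here are synthetic placeholders, not data from this work):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
pre = rng.normal(0.12, 0.03, size=8)    # illustrative per-session frequencies (Hz)
post = rng.normal(0.08, 0.03, size=8)   # same sessions, 15 min after drug

t, p = stats.ttest_rel(pre, post)       # paired t-test, alpha = 0.05
pvals = [p, 0.03, 0.20]                 # a family of pairwise p-values (illustrative)
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method='holm-sidak')
print(t, p, p_adj, reject)
```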
Mouse behavior before and after acute systemic injection of oxycodone or saline
A single dose of oxycodone (3 mg/kg, IP) was selected for these studies. This is a relatively high dose more likely to replicate both analgesic and addictive properties of oxycodone in mice. In a previous study, we found that this dose induces hyperkinetic behavior and, when repeated for three consecutive days, it also induces locomotor sensitization, which does not occur in mice lacking MORs in striatal neurons from the direct pathway (D1 receptor-expressing) (Severino et al., 2020). Another study found that doses between 2 and 4 mg/kg (subcutaneous) produce maximal analgesia starting at ∼3 min after systemic injection (Nasseef et al., 2019).
One week following baseplate implantation the mice were allowed to acclimate to the recording chamber and to the Miniscope. They quickly adapted to both, and the weight of the Miniscope did not perturb the ability of the mice to move and explore freely (Figure 1A). Before oxycodone administration, mice explored the behavioral arena, reared, and groomed normally. After oxycodone administration, typical signs of MOR activation occurred. These included the typical Straub tail starting ∼3 min after injection, increased locomotion after 5-6 min, and circling behavior (Figure 1B). In addition, the mice moved swiftly in a tiptoeing manner. Rearing episodes occurred at least once before oxycodone, whereas they were completely absent after oxycodone.

Figure 1. Behavior of an A2A-Cre mouse before (Pre) and 15 min after injection of saline; the movement patterns were similar. Traces of the same A2A-Cre mouse before (Pre) and 15 min after injection of oxycodone (3 mg/kg); the mouse continuously circled the outer perimeter of the arena after oxycodone. In addition, we observed a Straub tail ∼3 min after injection and an absence of rearing. (C) Graphs indicate the % of frames in which the mouse was moving (left graphs) before and 15 min after injection of saline or oxycodone. In saline, there was a statistically non-significant reduction in movement; after oxycodone, movement increased significantly. Middle graphs indicate total distance traveled during the recording session; the distance increased significantly after oxycodone, and the velocity (right graphs) increased likewise. Numbers within the bar graphs indicate the number of videos examined. Data were analyzed by two-way RM ANOVA with Holm-Sidak post-hoc t-tests; **p < 0.01, ***p < 0.001.
To determine the time course of oxycodone effects, as a first approximation, we took videos before (Pre) and at 5 and 15 min after systemic injection of oxycodone. We noticed that, while behavioral changes (e.g., hyperkinesia) were already apparent 5 min after oxycodone, changes in Ca 2+ activity were more progressive, and effects were more pronounced at 15 min compared with 5 min after oxycodone (Supplementary Figure 2). Thus, for subsequent statistical analyses we compared only Pre and 15 min after oxycodone or saline. This time point coincides with the maximal effects of oxycodone, based on its pharmacokinetics in rodents (Huang et al., 2005; Nasseef et al., 2019).
We first measured the total amount of time the mouse spent in movement versus non-movement before and 15 min after saline or oxycodone, regardless of Cre expression (D1- or A2A-Cre) or the brain region injected with the Ca 2+ reporter (striatum or cortex) (n = 5 sessions for saline and n = 8 for oxycodone). For that purpose, we determined the number of frames where movement was detected and divided it by the total number of frames to obtain a percentage of movement. The duration of each frame was 45-50 ms based on the webcam frame rate. There was a statistically significant interaction between treatment and time (two-way ANOVA, F(1,22) = 31.9, p < 0.001). The percent of time moving increased significantly after oxycodone administration (Pre versus 15 min, p < 0.001, Holm-Sidak post-hoc t-test), whereas after saline, if anything, there was a non-significant decrease in movement (Pre versus 15 min, p = 0.08, Holm-Sidak post-hoc t-test) (Figure 1C). The total distance mice traveled before and 15 min after oxycodone administration also was determined. Similar to percent movement, there was a statistically significant interaction between treatment and time (two-way ANOVA).

Oxycodone reduces the number of fluorescent medium-sized spiny neurons from direct and indirect pathways

After oxycodone injection, the overall fluorescence in the field of view was reduced significantly. The major contributor to this reduction was a decrease in the number of visible cells. In contrast, after saline injection only a small, non-significant decrease was detected (Figures 2A,B), which appeared to be correlated with less overall movement (see Figure 1C). When grouping together D1 and A2A MSNs, there was a statistically significant difference in mean values between Pre and post-injection conditions when the saline versus oxycodone treatments were taken into consideration (two-way ANOVA, F(1,26) = 10.6, p = 0.003). After oxycodone there was a statistically significant reduction in the number of visible neurons (p = 0.008, Holm-Sidak post-hoc t-test), whereas after saline there was no change (p = 0.1, Holm-Sidak post-hoc t-test).

Further, before oxycodone, and before or 15 min after saline injection, multi-neuronal Ca 2+ activity was tightly associated with movement (Figures 3A-C and Supplementary Figures 3A, 4). To determine this association, we calculated the frequency of Ca 2+ transients during epochs of movement versus non-movement. Movement was defined as a clear and measurable displacement of the body within the arena; small head movements were not included in the movement category. Regardless of treatment, prior to injection the mice spent most of the time in non-movement compared with movement (79 ± 5% versus 21 ± 5%). During episodes of immobility, the frequency of Ca 2+ transients was significantly reduced compared with episodes of movement (Supplementary Figure 3A). We then compared the effects of oxycodone on the frequency of Ca 2+ transients Pre and 15 min after injection. There was an overall reduction in frequency after oxycodone (D1 + A2A, p = 0.0048, Student's t-test). However, while the reduction in frequency observed in D1-MSNs was not statistically significant (p = 0.20, Student's t-test), the reduction in A2A-MSN Ca 2+ transient frequency was statistically significant (p = 0.021, Student's t-test) (Figure 3D).
This suggests that the overall reduction in the frequency of striatal neuron activity after oxycodone is driven primarily by MSNs constituting the A2A adenosine receptor-tagged indirect pathway. In agreement, even though the frequency of Ca 2+ transients was not significantly reduced in D1-MSNs after oxycodone and their activity still maintained a positive correlation between the number of Ca 2+ transients during movement versus non-movement, this correlation was lost in A2A-MSNs (Supplementary Figure 3A). We also noticed that after oxycodone there was a dissociation between Ca 2+ transient activity and behavior, which became more stereotyped compared to the more random behavior in Pre oxycodone conditions (see Figure 3B).
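The movement versus non-movement comparison above amounts to normalizing transient counts by the time spent in each behavioral state. A minimal sketch of this computation (ours, with synthetic inputs) is shown here:

```python
import numpy as np

def epoch_rates(event_frames, moving, fs=20.0):
    """
    Transient rate (Hz) during movement vs non-movement epochs.
    event_frames : frame indices of detected Ca2+ transients
    moving       : boolean array per frame (True = movement)
    """
    moving = np.asarray(moving, dtype=bool)
    ev = np.asarray(event_frames, dtype=int)
    t_move = moving.sum() / fs            # seconds spent moving
    t_rest = (~moving).sum() / fs         # seconds immobile
    rate_move = moving[ev].sum() / t_move
    rate_rest = (~moving)[ev].sum() / t_rest
    return rate_move, rate_rest

mv = np.zeros(6000, bool)
mv[1000:2200] = True                      # one synthetic movement bout
print(epoch_rates([1100, 1500, 3000], mv))
```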
Effects of oxycodone on Ca 2+ transient activity in cortical pyramidal neurons
To determine if opioid administration causes generalized depression of neuronal activity, and as a comparison with striatal changes, we also examined the effects of oxycodone on CPNs. Mice were injected with the Ca 2+ indicator using a Syn (n = 2) or a CaMKII (n = 2) promoter to more selectively infect excitatory CPNs. The number of CPNs infected with the Ca 2+ indicator was consistently higher than that in striatal MSNs. The overall frequency of Ca 2+ transients also was consistently higher than that of MSNs. Similarly, the number of CPNs infected with the Syn promoter was higher than that of CPNs infected with the CaMKII promoter (Figure 4). As changes induced by oxycodone in CPNs infected with either promoter were similar, and considering that CPNs are the most abundant cell type in the cerebral cortex, data were analyzed separately or pooled together.
Notably, the number of CPNs visualized Pre and 15 min after oxycodone injection did not change significantly when all CPNs (Syn + CaMKII) were considered (p = 0.577, Student's t-test) or when separated (Syn p = 0.618, CaMKII p = 0.723, Student's t-test), indicating that the depression of Ca 2+ activity observed in striatum did not occur in the cortex (Figures 4A,B). More remarkably, the frequency of Ca 2+ transients was significantly increased after oxycodone (all CPNs combined p = 0.003, Syn CPNs p = 0.028, and CaMKII CPNs p = 0.029, Student's t-test) (Figure 5), suggesting that the DLS is an unlikely substrate of the behavioral hyperactivity after oxycodone. Interestingly, the correlation between Ca 2+ transient activity and behavior was less evident in cortex compared to striatum. Thus, while the Ca 2+ transient frequency was slightly reduced during non-movement compared to movement, this correlation was lost after oxycodone, during which the number of Ca 2+ transients was practically equal during movement and non-movement (Supplementary Figure 3B). Some examples of individual MSNs and CPNs before and after oxycodone can be seen in Supplementary Figure 5. In addition, raw videos (without motion correction or background subtraction) illustrating effects of oxycodone in striatum and cortex are available in Supplementary Videos 1-6.

Figure 2. (A) Maximum intensity projection panels illustrate fluorescent medium-sized spiny neurons (MSNs) visualized with the aid of a Miniscope before (Pre) and 15 min after IP injection of oxycodone or saline. Notice the overall reduction in fluorescence after oxycodone in D1- and A2A-Cre mice. In contrast, fluorescence intensity was only slightly reduced after saline injection. (B) Graphs indicate that the main driver of reduced fluorescence intensity was a significant reduction in the number of MSNs detected by the Miniscope in both D1- and A2A-Cre mice. Saline injection induced a non-significant reduction in the number of neurons, likely reflecting reduced movement as shown in Figure 1C. Graphs on the left show the comparison between saline and oxycodone treatment after grouping together D1 and A2A MSNs (data were analyzed by two-way ANOVA). Graphs on the right show the effects of oxycodone with D1 and A2A MSNs separated. After oxycodone the reduction in the average number of fluorescent MSNs was statistically significant (for D1 MSNs, p = 0.049, and for A2A MSNs, p = 0.0066, Student's t-test). *p < 0.05, **p < 0.01. Numbers inside bars indicate the number of recording sessions.

Figure 3. (A) Traces represent multi-neuronal Ca 2+ transients recorded in D1- and A2A-Cre mice before (Pre) and 15 min after oxycodone. Each trace represents the activity of one medium-sized spiny neuron (MSN). Notice that the number of active cells is reduced after oxycodone. (B) Upward deflections indicate the mouse was moving. In control conditions the behavior appeared more random, whereas after oxycodone it became more stereotyped. (C) Some representative Ca 2+ transient traces of single MSNs. Notice the tight correlation between Ca 2+ transients and behavior, more noticeable in A2A Pre. After oxycodone injection, this correlation was lost, suggesting decoupling between striatal neuron activity and movement. (D) Graphs illustrate changes in frequency of Ca 2+ transients before and after injection of oxycodone or saline in striatal MSNs. When analyzed separately, D1 MSNs only displayed a discrete, non-significant reduction in frequency. In contrast, the frequency in A2A MSNs was significantly reduced. When grouped together, cells from D1- and A2A-Cre mice displayed a significant decrease in frequency after oxycodone. *p < 0.05, **p < 0.01, Student's t-test.
Slice electrophysiology confirmed Ca 2+ imaging findings in medium-sized spiny neurons

After completing the Miniscope recordings, some mice were sacrificed, the baseplate and lens were removed, and the brains were extracted and sliced to determine the location and extent of the viral injection. Visual and microscopic examination confirmed the location above the dorsal striatum, with no damage extending into this region. However, as the GCaMP6 viral injections in these mice were bilateral, only the intact side was used for slice electrophysiology (Supplementary Figure 1D). Most of the recordings were from fluorescent-positive cells (n = 7 from 6 different mice). In the case of D1- or A2A-Cre mice, if a cell was not fluorescent, it was considered as belonging to the other pathway, i.e., a negative cell from a D1-Cre animal was assumed to belong to the indirect pathway and vice versa. Data from Cre animals were pooled. In current clamp mode, oxycodone administration in brain slices caused a small membrane hyperpolarization (2-3 mV) in all but one MSN. This hyperpolarization was enough to cause an increase in inward rectification and a reduction in membrane input resistance. As a consequence, cell excitability decreased in all but one cell (7 ± 2 APs in control conditions and 5.2 ± 1.6 after oxycodone, with 1 s duration pulses) (Figure 6A). The difference, however, did not reach statistical significance (p = 0.17, paired t-test). Finally, similar to the voltage clamp recordings from intact animals described below, the spontaneous synaptic activity recorded at resting membrane potential was reduced significantly 10 min after oxycodone application (3.46 ± 0.4 Hz in control conditions and 2.48 ± 0.2 Hz after oxycodone, p = 0.024, paired t-test).
Electrophysiological data on the effects of bath application of oxycodone (1 µM) also were collected from intact WT mice (n = 5) not used for Miniscope imaging. Under infrared videomicroscopy, cells had the typical size and shape of striatal MSNs (n = 10). In addition, basic membrane properties measured in voltage clamp also corresponded to those of MSNs, with an average cell membrane capacitance of 136 ± 13 pF, input resistance of 47.6 ± 4.3 MΩ, and a decay time constant of 2.3 ± 0.2 ms. The spontaneous synaptic currents recorded at -70 mV (putative EPSCs) were reduced after acute application of oxycodone (3.2 ± 0.4 Hz in control and 2.0 ± 0.3 Hz after oxycodone, p = 0.004, paired t-test) (Figures 6B-D). At +10 mV, the average frequency of spontaneous synaptic currents (putative IPSCs) also was significantly reduced by acute application of oxycodone (3.1 ± 0.4 Hz in control conditions and 2.1 ± 0.3 Hz after the drug, p = 0.008, paired t-test). Overall, the electrophysiological data confirmed a generalized reduction of MSN excitability and synaptic inputs in the DLS.
Effects of oxycodone on Ca 2+ activity visualized with two-photon laser-scanning microscopy in striatal slices

As oxycodone increased locomotor activity, we expected to observe an increase in Ca 2+ transient activity in D1 MSNs. In order to confirm the results from Miniscope imaging, some slices from D1-Cre mice also were used for 2PLSM imaging after the in vivo recordings were completed. Because striatal MSNs in vitro do not fire APs spontaneously, due to highly hyperpolarized resting membrane potentials, cells were activated with the K + channel blocker 4-aminopyridine (4-AP, 100 µM). By blocking A-type K + currents, 4-AP increases neurotransmitter release and induces spontaneous membrane oscillations. Alternatively, APs were evoked by intracellular injection of depolarizing currents. In all cases, oxycodone caused a reduction in both the Ca 2+ transient frequency (from 5.75 events to just 1 event per 600 s, p = 0.05, paired t-test) and amplitude (52% reduction, p = 0.008, paired t-test) of MSNs in D1-Cre mice (n = 7 cells in 6 slices from 3 D1-Cre mice) (Figure 7 and Supplementary Videos 7, 8).
Discussion
The Miniscope technology applied to freely behaving animals is fast becoming the method of choice to correlate neuronal activity and behavior. It is also helping to elucidate the neurobiological mechanisms of addiction (Vickstrom et al., 2021). Ca 2+ transients are a bona fide representation of neuronal firing, albeit with inherent limitations due to the slow kinetics of fluorescent Ca 2+ reporters. Notwithstanding this limitation, the wealth of information provided by Miniscope Ca 2+ imaging is substantial. In combination with available Cre-expressing mice, this technology also allows examination of neuronal activity of discrete cell populations. In the present study, we used D1- and A2A-Cre mice to express the fluorescent Ca 2+ indicator in MSNs of either the direct or indirect striatal output pathways, respectively. At a dose of 3 mg/kg, oxycodone increased locomotor activity. This is in line with our previous study (Severino et al., 2020) and other studies (Liu et al., 2005). Given the increase in locomotor activity following oxycodone, our initial hypothesis was that cells of the direct pathway would be more active after oxycodone administration, whereas cells of the indirect pathway would be unaffected or perhaps inhibited. To our surprise, MSNs in both pathways demonstrated attenuated activity, as manifested by significantly reduced numbers of detectable cells after oxycodone. The frequency of Ca 2+ transients was significantly reduced in indirect pathway MSNs, whereas the reduction in direct pathway MSNs was more subtle and did not reach statistical significance. In addition, while the frequency of Ca 2+ transients was higher during movement versus non-movement before oxycodone, this correlation was lost in indirect pathway MSNs but not in direct pathway MSNs after opioid administration. Opposite effects of oxycodone on behavior and Ca 2+ transient activity suggest a dissociation between striatal activity and behavior. Interestingly, a pharmacological MRI study demonstrated that even a single dose of oxycodone (2 mg/kg, IP) results in a generalized reduction of functional connectivity in mice (Nasseef et al., 2019). Our results also suggest that other brain areas, including the cerebral cortex, are associated with increased locomotor activity after oxycodone. In particular, recent studies indicate that CPNs encode information related to the onset and offset of motor sequences, and this information is relayed to different striatal circuits (Nelson et al., 2021). In the case of opioid administration, it is likely that this information transfer is blunted by presynaptic inhibition of glutamate release by MORs.
Our electrophysiological and 2PLSM studies in slices corroborated primarily inhibitory effects of oxycodone on MSNs. Activity-dependent Ca 2+ transients, induced by depolarizing current pulses or 4-AP, were significantly reduced by oxycodone bath application. In addition, electrophysiological recordings in current clamp mode demonstrated that after drug application the membrane potential became more hyperpolarized, inward rectification increased, and the number of APs decreased. This is consistent with our previous studies showing that DAMGO, a selective MOR agonist, induces a rightward shift in the current-response curve of D1 and D2 (GFP-labeled) MSNs in the DLS (Ma et al., 2012), and could explain the reduced number of visible neurons after oxycodone administration. The effects of oxycodone on sEPSCs, i.e., reduced event frequency, are consistent with presynaptic regulation of neurotransmitter release by MOR activation. Indeed, regulation of glutamatergic transmission by presynaptic MORs has been conclusively demonstrated (Atwood et al., 2014b). For example, DAMGO induces a long-lasting reduction of electrically evoked EPSC amplitude in MSNs of the DLS, and a single in vivo exposure to oxycodone can disrupt MOR-induced long-term depression (LTD) (Atwood et al., 2014a). Further, MORs specifically located on thalamostriatal terminals (VGluT2-expressing) can inhibit glutamate release and, behaviorally, MORflox-VGluT2cre mice do not acquire conditioned place preference and fail to display oxycodone-induced locomotor stimulation (Reeves et al., 2021).

Figure 5. (A) Traces represent multi-neuronal Ca 2+ transients recorded in the cortex (layers II/III) of mice with viral injections of GCaMP6f expressed using Syn or CaMKII promoters before and 15 min after oxycodone. Each trace represents the activity of one neuron. Notice that, in contrast to striatum, the number of active cells was not reduced after oxycodone. (B) Upward deflections indicate the mouse was moving. As shown in the previous figure, in control conditions the behavior was random, whereas after oxycodone it became more stereotyped. (C) Representative Ca 2+ transient traces of single cortical pyramidal neurons (CPNs). After oxycodone, Ca 2+ transient frequency increased. (D) Graphs show changes in frequency of Ca 2+ transients before and after injection of oxycodone in CPNs. Cells from Syn and CaMKII mice increased their frequency after oxycodone when analyzed separately or when pooled together. *p < 0.05, **p < 0.01.

Figure 6. Spontaneous postsynaptic currents of MSNs held at -70 and +10 mV to electrophysiologically isolate putative excitatory [α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptor-mediated] and inhibitory (GABA A receptor-mediated) synaptic currents. Ten minutes after oxycodone bath application, spontaneous synaptic activity was reduced; this reduction is likely the result of presynaptic inhibition of neurotransmitter release. (C) Average frequency of spontaneous synaptic currents recorded at -70 mV and +10 mV. Numbers within the bar graphs indicate the number of neurons recorded. In both conditions, there was a statistically significant reduction in frequency (Student's t-test). (D) Average cumulative probability of inter-event intervals of spontaneous postsynaptic currents before and after oxycodone. A significant reduction in frequency occurred after oxycodone (two-way RM ANOVA, Bonferroni post-hoc test). **p < 0.01, ***p < 0.001.
However, VGluT2-expressing neurons also are present in other regions including, among others, the amygdala, hypothalamus, cerebellum, and cerebral cortex, implying that other pathways also could be involved.
Single-cell extracellular and intracellular electrophysiological recordings in vivo have demonstrated that, in the absence of movement, a large percentage of striatal MSNs remain silent and, if spontaneously active, they discharge at low rates of ∼3-5 Hz (Sandstrom and Rebec, 2003; Mahon et al., 2006). Similarly, Ca 2+ transient frequency is very low in the absence of movement, and even during movement this frequency remains low, probably due to the slow kinetics of GCaMP6. In agreement with other studies, the observed mean frequency in both D1- and A2A-Cre-labeled cells was less than 0.15 Hz (Klaus et al., 2017; Yu et al., 2018). As the firing rate of MSNs recorded electrophysiologically is considerably higher, it is likely that Ca 2+ transients do not reflect single APs but burst firing, which is the result of convergent excitatory inputs from cortical and thalamic afferents. Presynaptic inhibition of glutamate release by MORs may prevent convergent inputs from generating burst firing and, in consequence, lead to less frequent Ca 2+ transients.

Figure 7. Traces are from two medium-sized spiny neurons (MSNs) recorded in vitro after a D1-Cre mouse used for Miniscope recordings was sacrificed. Two-photon laser-scanning microscopy (2PLSM) was used to identify MSNs expressing the Ca 2+ reporter GCaMP6f. MSN activity was promoted by bath application of 4-AP (100 µM) and is indicated by spontaneous, rhythmic Ca 2+ transients. Ten minutes after oxycodone, spontaneous activity was severely reduced. See Supplementary Videos 7, 8.
Before oxycodone, Ca 2+ activity was positively correlated with movement. In contrast, after opioid administration, and in spite of increased movement, the number of active cells decreased and the frequency of Ca 2+ transients also was significantly reduced, at least in A2A MSNs, suggesting a decorrelation between MSN activity and movement. This result is reminiscent of previous findings showing reduced population Ca 2+ activity in MSNs from both direct and indirect pathways after cocaine IP injections, in spite of increased locomotion (Barbera et al., 2016). Interestingly, CPN numbers in our study did not change significantly after oxycodone and the Ca 2+ transient frequency increased significantly. This suggests that cortical neurons play an important role in behavioral hyperactivity.
Our study has a number of important limitations. One is that MORs are known to be enriched in the patch versus the matrix compartment in DLS. Since GCaMP6 is present in any neurons that express Cre recombinase, future studies are needed to target GCaMP6 in the patch compartment. Also, oxycodone can bind, albeit with much lower affinity, to other types of opioid receptors. However, in preliminary experiments we also tested the effects of fentanyl, a more selective MOR agonist, and observed similar reductions in the number of visible neurons and Ca 2+ transient frequency, suggesting that the main effects of oxycodone are mediated by MORs. We also need to emphasize that our study only examined acute effects of oxycodone in naïve mice and the outcomes might be different in the case of repeated injections of the drug (Iriah et al., 2019). In addition, the effects of oxycodone in MSNs of the NAc need to be examined as this region appears more sensitive to MOR activation than DLS (Ma et al., 2012). Finally, we need to consider that when imaging deep brain regions, the damage produced by aspiration of overlying tissue as well as by the lens itself can be extensive, even when using thin GRIN lenses.
Conclusion
Our combined imaging and electrophysiological study demonstrates a dissociation between motor behavior and striatal direct and indirect pathway MSN activity after systemic administration of oxycodone. Instead of an expected increase in Ca 2+ transient activity associated with increased locomotion, a reduced number of cells displayed Ca 2+ transient activity. The frequency of these transients also was reduced, primarily in indirect pathway MSNs, probably due to membrane hyperpolarization and reduced burst firing. In contrast, the frequency of Ca 2+ transients in projection neurons of the cerebral cortex increased significantly, suggesting an important role in behavioral activation. This implies that MOR activation hampers the communication along corticostriatal and thalamostriatal pathways, the main excitatory inputs onto MSNs, probably via presynaptic inhibition of glutamate release.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The animal study was reviewed and approved by the Institutional Animal Care and Use Committee at UCLA.
Author contributions
JB, CC, ML, CE, and PG contributed to conception and design of the study. JB, KO, CY, AP, and DY organized the database. JB and CC performed the statistical analysis and wrote the first draft and sections of the manuscript. All authors contributed to manuscript revision, and read and approved the submitted version.

SUPPLEMENTARY FIGURE 1
Example slices (350 µm) obtained from mice used for Miniscope recordings. (A) Cortical slice showing the site and spread of viral injection (CaMKII promoter) under infrared DIC optics (left panel) and under fluorescence microscopy (right panel). The injection corresponds to M1 motor cortex. (B) Example slice obtained from a D1-Cre mouse used for striatal Miniscope recordings. The left panel shows the extent of cortical tissue aspirated for the GRIN lens implant. The spread of the viral injection remained circumscribed to the striatum, although it was common to see fluorescence along the corpus callosum and in the tissue surrounding the GRIN lens. (C) Higher magnification of a striatal slice from a D1-Cre mouse used for Miniscope recordings, but on the intact (non-aspirated) side. This slice was used for electrophysiological recordings of identified medium-sized spiny neurons (MSNs). The cell indicated by the yellow arrow was patched and the recording is shown in Figure 6A.

SUPPLEMENTARY FIGURE 2
To determine the time course of oxycodone effects we examined behavior and Ca 2+ activity before (Pre) and 5 and 15 min after oxycodone. (A) When comparing Pre, 5, and 15 min after oxycodone injection there was a statistically significant increase in distance traveled among groups (one-way RM ANOVA, F(2,14) = 8.14, p = 0.005), with Pre versus 5 min (p = 0.005) and Pre versus 15 min (p = 0.028) significantly increasing, but 5 min versus 15 min (p = 0.63) showing no difference. When comparing Pre, 5, and 15 min after oxycodone injection there was a statistically significant increase in velocity among groups (one-way RM ANOVA, F(2,14) = 8.97, p = 0.003), with Pre versus 5 min (p = 0.004) and Pre versus 15 min (p = 0.017) significantly increasing, but 5 min versus 15 min (p = 0.71) showing no difference. (B) Ca 2+ activity, as reflected by the number of active cells, showed a progressive decrease with time. There was a statistically significant decrease in the total number of active cells in D1-Cre mice (one-way ANOVA, F(2,6) = 6.39, p = 0.03) observed between Pre versus 15 min (post-hoc Tukey test, p = 0.04). There was a statistically significant decrease in the total number of active cells in A2A-Cre mice (one-way ANOVA, F(2,9) = 13.50, p = 0.002), with a significant decrease observed between Pre versus 5 min and Pre versus 15 min (post-hoc Tukey test, p = 0.009 and p = 0.002, respectively). When combining D1-Cre and A2A-Cre data there was a statistically significant decrease in the total number of active cells (one-way ANOVA, F(2,18) = 18.8, p < 0.001), with a significant decrease observed between Pre versus 5 min and Pre versus 15 min (post-hoc Tukey test, p < 0.001). *p < 0.05, **p < 0.01, ***p < 0.001.
SUPPLEMENTARY FIGURE 3
Ca 2+ transient frequency as a function of movement versus non-movement epochs. (A) In striatum, the frequency of Ca 2+ transients before drug administration was consistently higher during movement compared with non-movement in D1- and A2A-Cre mice. After oxycodone, this relation persisted in D1- but not in A2A-Cre mice. (B) In the cerebral cortex, the correlation between behavior and Ca 2+ transient frequency was less evident than in striatum but still could be observed. After oxycodone, this relationship disappeared. Data were analyzed using Student's t-test, *p < 0.05.

SUPPLEMENTARY FIGURE 4
(A) Traces represent multi-neuronal Ca 2+ transients recorded in an A2A-Cre mouse before and 15 min after saline IP injection. Each trace represents the activity of one medium-sized spiny neuron (MSN). The number of active cells is similar before and after saline. (B) Upward deflections indicate the mouse was moving. In both conditions the behavior appears random and is tightly correlated with Ca 2+ transients. (C) Some representative Ca 2+ transient traces of single MSNs. (D) Bar graphs show the average frequency of Ca 2+ transients before and after saline IP injection. Saline injections did not affect Ca 2+ transient frequency in grouped MSNs (p = 0.53, Student's t-test).
SUPPLEMENTARY FIGURE 5
Traces exemplify one medium-sized spiny neuron (MSN) from D1-Cre or A2A-Cre mice and one CPN from Syn or CaMKII mice before and 15 min after oxycodone. In striatum, the same MSN showed reduced activity, whereas in cortex the same CPN displayed increased activity.
SUPPLEMENTARY VIDEO 1
Raw Miniscope video of Ca 2+ activity before (Pre) oxycodone in a D1-Cre mouse injected with floxed GCaMP6f.
SUPPLEMENTARY VIDEO 2
Raw Miniscope video of Ca 2+ activity 15 min after oxycodone in a D1-Cre mouse injected with floxed GCaMP6f.
SUPPLEMENTARY VIDEO 3
Raw Miniscope video of Ca 2+ activity before (Pre) oxycodone in an A2A-Cre mouse injected with floxed GCaMP6f.
SUPPLEMENTARY VIDEO 4
Raw Miniscope video of Ca 2+ activity 15 min after oxycodone in an A2A-Cre mouse injected with floxed GCaMP6f.
SUPPLEMENTARY VIDEO 5
Raw Miniscope video of Ca 2+ activity before (Pre) oxycodone in a mouse injected with Syn-GCaMP6f.
SUPPLEMENTARY VIDEO 6
Raw Miniscope video of Ca 2+ activity 15 min after oxycodone in a mouse injected with Syn-GCaMP6f.
SUPPLEMENTARY VIDEO 7
Two-photon laser-scanning microscopy (2PLSM) video of Ca 2+ activity before oxycodone application, in the presence of 4-AP (100 µM), in a D1-Cre mouse used for Miniscope recordings.

SUPPLEMENTARY VIDEO 8
Two-photon laser-scanning microscopy (2PLSM) video of Ca 2+ activity after oxycodone application, in the presence of 4-AP (100 µM), in a D1-Cre mouse used for Miniscope recordings. | 2022-10-27T14:13:48.280Z | 2022-10-26T00:00:00.000 | {
"year": 2022,
"sha1": "3e5bf5ec1d4209a39def52b83ade89e3878a6448",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "3e5bf5ec1d4209a39def52b83ade89e3878a6448",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54093417 | pes2o/s2orc | v3-fos-license | The Neurotoxic Effects of Artemether on the Cytoarchitecture of the Trapezoid Nuclei of Adult Male Wistar Rats (Rattus norvegicus)
In Man, artemether is given at 160 mg/kg body weight for three days in the treatment of malaria. This study investigated the effects of the corresponding dose of 1.23 mg/kg body weight of artemether, administered for a period of seven days, on the trapezoid nuclei and on behavioural functions in rats on day 7 after drug administration. This study observed no gross or morphological differences between the two groups of animals (control and experimental groups) on day 7 at the completion of the experimental procedure. A statistically significant increase in average body weight was observed in the control groups C1 (which received only standard diet and water) and C2 (which received 1.23 mg/kg body weight of normal saline intramuscularly in addition to standard diet and water), from 140 ± 19.65 g on Day 1 to 146 ± 19.90 g on Day 7 and from 151 ± 12.0 g on Day 1 to 156.2 ± 12.2 g on Day 7, respectively. There was a non-statistically significant apparent reduction in body weight in the experimental group E (which received an intramuscular injection of 1.23 mg/kg body weight of artemether), from 160 ± 9.0 g on Day 1 to 157.4 ± 8.0 g on Day 7. The assessment of brainstem nuclei showed a patchy chromatic appearance of neurons of the trapezoid nuclei in the experimental group, as against the normal vesicular appearance of neurons of the trapezoid nuclei in the control groups. The rats in the control groups C1 and C2 displayed normal balance and co-ordination, while rats in the experimental group E showed abnormalities of balance and co-ordination. Using the t-test at the 95% confidence interval (p < 0.05; critical t value = 2.26), no significant difference was observed between the average brain weights in the control groups C1 and C2 and the experimental group E.
INTRODUCTION
Qinghaosu (artemisinin) is an active antimalarial ingredient isolated from the leaves of a medicinal herb, Artemisia annua (Shen et al., 1996). The product has demonstrated marked schizonticidal activity against Plasmodium species, including the multi-drug resistant strains of Plasmodium falciparum (Klayman, 1985). Artemisinin is an antimalarial agent with a chemical structure unlike any other: it has no nitrogenous heterocycle (Schimid et al., 1983). It acts on the growth of the early "ring" stages of the trophozoites of Plasmodium falciparum and ensures rapid clearance of blood parasitaemia (Kombila et al., 1995). Chemical modifications of artemisinin (reduction plus etherification) have enabled more potent and more soluble derivatives to be obtained. Among these products is artemether (Paluther®), the methyl ether of dihydroartemisinin (Luo et al., 1984).
In 2006, the Federal Ministry of Health adopted artemisinin derivatives as the most potent antimalarial drugs for the treatment of malaria in Nigeria. The new policy of the Nigerian government also recommended that chloroquine-containing products no longer be used for the treatment of malaria. In spite of its toxic effects, chloroquine remained the mainstay of malaria treatment until the advent of resistant parasites. It is similarly important to establish the toxic effects of artemisinin through clinical and morphological studies to guide appropriate therapeutic use.
In the last two decades, more than two million patients have been treated with artemisinin or one of its derivatives (artesunate or artemether), predominantly in China and Southeast Asia and more recently in Africa (Shen et al., 1989). The general toxicity profile in experimental animals has been good. However, in all mammalian species tested to date, these compounds have produced an unusual, selective pattern of damage to certain brainstem nuclei, i.e., the precerebellar nuclei of the medulla oblongata, particularly those involved in auditory processing and vestibular function: the trapezoid nucleus, the gigantocellular reticular nucleus and the inferior cerebellar peduncle. In the rat, the target brainstem nucleus consistently and most severely affected is the nucleus of the trapezoid body (Li et al., 2002; Apichart, 2000; Brewer et al., 1994).
Changes in the affected neurons included loss of Nissl substance, perikaryal swelling, margination of the nucleus (nuclear eccentricity), nucleolar changes, and increased perikaryal eosinophilia with occasional clumping of eosinophilic debris. More severe cases had bilateral lesions in other brainstem nuclear groups and a subjective reduction in normal neuron numbers compared with vehicle controls (Brewer et al.; Apichart; Li et al.). All mice with neuropathologic changes also showed behavioural changes, as well as loss of coordination of balance and equilibrium (Apichart et al.). In some mice with gait disturbance, no corresponding histopathologic damage could be detected, suggesting that there might be a reversible component to artemether neurotoxicity (Apichart et al.).
The trapezoid nuclei are functionally related to the auditory and vestibular systems, which regulate body balance and equilibrium (West, 1995). The aims of this study, therefore, were to investigate and evaluate the neurotoxic effects of 1.23 mg/kg body weight of artemether on the trapezoid nuclei and on behavioural function on day 7 after the completion of the experimental procedure.
MATERIAL AND METHOD
Twenty adult male Wistar rats, weighing between 120 g and 180 g, were obtained from the animal house of the Department of Veterinary Physiology, University of Ibadan, Nigeria. The rats were acclimatized for a period of seven days in the Department of Anatomy, University of Ibadan, Nigeria. Ethical approval was sought and received from the Department of Anatomy of the University of Ibadan, with full observance of the rules guiding the use of rats in scientific studies. The rats were kept five to a cage and fed daily with standard rat diet purchased from Ladokun and Sons Ltd. in Ibadan. All rats received water ad libitum. The drug artemether (Paluther®) used in this study was obtained as an 8% solution for intramuscular injection, i.e., 80 mg of artemether preserved in 1 ml of Arachis (vehicle) oil, from May and Baker Nigeria Ltd of Oba Akran Avenue, Ikeja, Lagos State. Normal saline and urethane solution (a sedative) were obtained from Danax Pharmaceutical Company of Mokola, Ibadan, Oyo State, Nigeria.
The rats were divided into two groups of ten rats each: an experimental group designated Group E and a control group designated Group C. Group E comprised ten male rats, each further designated Ea to Ej. The total dosage regimen of artemether employed in the treatment of malaria in a 70 kg adult man is 480 mg/kg body weight. Thus, the rats in Group E (weighing between 120 g and 180 g) were each injected with a corresponding 1.23 mg/kg body weight of artemether for a period of seven consecutive days. No rat was injected with the Arachis oil used as the vehicle for preserving the drug (artemether), because Arachis oil had been shown in earlier studies to have no neurotoxic effect on rats (Li et al.) and was also not available as a drug product in Nigeria.
Rats in Group C were further subdivided into Groups C1 and C2, each comprising five rats. Each of the five male rats in Group C1 was coded a–e. Rats in Group C1 were fed with standard diet and water only; they were injected intramuscularly with neither artemether nor normal saline. Rats in Group C2 were each injected with 1.23 mg/kg body weight of normal saline for a period of seven consecutive days. This was done to confirm scientifically that the results observed in the experimental Group E were not a result of stress induced in the rats during the intramuscular injection of 1.23 mg/kg body weight of artemether. Normal saline is an isotonic medium and thus has no effect on body tissues; rats injected with 1.23 mg/kg body weight of normal saline are therefore exposed to the same level of induced stress as rats in the experimental Group E. Each rat was further designated C2a to C2e.
All rats were sacrificed on day 7 for neurohistopathologic examination. The rats were sedated by intramuscular injection of 1 mg of urethane solution (a sedative); urethane has no known neurotoxic effects on rat brains (Meneguz et al., 1999). The rats were then individually sacrificed using a sharp laboratory knife. A necropsy was performed and the head and cranium were carefully removed, avoiding pressure on the underlying brain. The head and exposed brain were immersed in fresh 10% formalin solution, and the brain remained in situ for seven days before its removal from the skull, to avoid swelling of the brain specimens and consequent enzymatic action on the brain tissues. Brainstem tissues were histologically investigated at three levels from midbrain to hindbrain: (A) midbrain to pons, (B) pons to the nucleus of the hypoglossal nerve, and (C) the nucleus of the hypoglossal nerve down to the hindbrain. Staining of the sections was carried out using the Cresyl Violet (VOGT's) method for Nissl substance (Carleton, 1967).
The gross parameters measured and the instruments used were: (i) the normal/abnormal morphological appearance of the brain; (ii) the weight of the animals, using a Swiss microwa balance (type 7720); and (iii) the weight of the brain, using a Kertz precision weighing balance.
The microscopic parameters measured and the methods used were: (i) the normal/abnormal morphological appearance of the trapezoid nuclei; and (ii) the density of the trapezoid nuclei, using a binocular Leitz microscope with a graticule attached to the eyepiece. Rats with a score of 0 were designated normal, with a steady gait; rats with a score of 0.2–0.4 were designated slightly abnormal; and rats with a score of 0.5–1.0 were designated abnormal, with an unsteady gait and poor balance coordination.
RESULTS AND DISCUSSION
Gross observations: animal appearance. No gross differences were observed between the two groups of animals on day 7 at the completion of the experimental procedure. The brain (with its component parts) of rats in both the control and experimental groups appeared morphologically normal.
Body weight (Table I): The average body weight in each of Groups C1, C2 and E was calculated on day 1 and day 7 of the experimental procedure. Using a t-test at the 95% confidence level (α = 0.05; critical t-value = 2.78), there was a significant increase in body weight between day 1 and day 7 in both control groups C1 and C2. The observed slight decrease in body weight, from 160 ± 8.0 g on day 1 to 157.4 ± 8.0 g on day 7, in the rats of Group E (Table I) could be due to the negative effects of 1.23 mg/kg body weight of artemether on normal metabolic processes, with consequent depletion of body protein in the rats of experimental Group E. However, using a t-test at the 95% confidence level (α = 0.05; critical t-value = 2.26), there was no significant decrease in body weight between day 1 and day 7 in the experimental Group E.
Brain weight (Table II): Using a t-test at the 95% confidence level (α = 0.05; critical t-value = 2.78), no significant difference was observed between the average brain weights of the control groups C1 and C2. Similarly, no significant difference was observed between the average brain weight of the control groups C1 and C2 (Table II) and that of the experimental Group E.
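The group-level comparisons above can be reproduced with a standard paired t-test. The sketch below is illustrative only: the paper reports group means and standard deviations rather than individual rat weights, so the per-rat values here are invented to match the reported C1 means (140 g on day 1 rising to 146 g on day 7).

```python
# Illustrative paired t-test for the day 1 vs day 7 body weights. The per-rat
# values are hypothetical (only group means/SDs are reported in the paper).
from scipy import stats

day1 = [118.0, 128.5, 136.0, 152.0, 165.5]   # g, assumed weights of the 5 C1 rats
day7 = [124.0, 134.5, 142.0, 158.0, 171.5]   # g, same rats one week later

# Paired (repeated-measures) t-test: each rat serves as its own control
t_stat, p_value = stats.ttest_rel(day7, day1)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("significant at alpha = 0.05:", p_value < 0.05)

# Reference threshold: with n = 5 pairs, df = 4 and the two-sided critical t is
critical_t = stats.t.ppf(0.975, df=4)        # ~2.776, the paper's "2.78"
print(f"critical t (df=4): {critical_t:.3f}")
```

With five rats per control group, df = 4 and the two-sided critical t at α = 0.05 is 2.776, matching the "2.78" threshold quoted in the text; the experimental group's ten rats give df = 9 and a critical t of 2.262, matching the quoted "2.26".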
Microscopic observations of the trapezoid nuclei.
Histological assessment of the brainstem nuclei showed damage to the trapezoid nucleus in the experimental group. The nuclei stained dark, as against the purple staining of normal cells. The neurons of the trapezoid nuclei in the experimental Group E were pachychromatic, unlike the open vesicular appearance seen in the control Group C. Such damage indicates neuropathologic lesions, chromatolysis and necrosis.
Density of damaged trapezoid nuclei. The density of damaged trapezoid nuclei was estimated in a field of X (0.0 to 1.0 mm) per section, over a total of five sections per animal, for each of the rats in the control groups C1 and C2 and the experimental Group E. The total calculated density per rat was divided by five in the control groups C1 and C2, and by ten in the experimental Group E, to obtain the group means. The average density of damaged trapezoid nuclei was calculated as 0.12 ± 0.01, 0.32 ± 0.01 and 1.28 ± 0.15 in the control groups C1 and C2 and the experimental Group E, respectively. Using a t-test at the 95% confidence level (α = 0.05; critical t-values = 2.78 and 2.26), no significant difference was observed between the densities of damaged trapezoid nuclei in the control groups C1 and C2, while a significant difference was observed between the control groups C1 and C2 and the experimental Group E.
The presence of damaged neurons of the trapezoid nuclei in the control Group C1 could be due to the effects of ageing or stress (Table III). Similarly, the presence of damaged neurons of the trapezoid nuclei in the control Group C2 (Table III) could be due to the effects of stress induced on the brains of the rats during the intramuscular injection of 1.23 mg/kg body weight of normal saline, leading to the expression of stress/heat-shock proteins such as c-fos, c-jun and hsp-70 (Marc et al., 1995). The increased density of damaged neurons of the trapezoid nuclei in the experimental Group E compared with the control groups C1 and C2 could be due to the histopathological effects of the intramuscular administration of 1.23 mg/kg body weight of artemether, which resulted in the neurons of the trapezoid nuclei appearing pachychromatic.
Assessment of balance and coordination. Rats in the control Groups C1 and C2 showed a normal display of balance and coordination. In the experimental Group E, however, the rats showed abnormalities of balance and coordination. All rats in the control Groups C1 and C2 had a score of 0, with steady, normal balance and coordination. In the experimental Group E, six rats fell three times and had a score of 0.6, while three rats fell twice with a score of 0.3, and one rat fell only once and had a score of 0.3. These observations could probably be due to the presence of lesions in the brainstem nuclei, i.e., the trapezoid nuclei associated with vestibular and balance functions.
This study concludes that the potential neurotoxicity of the artemisinin derivatives artemether and artesunate is of serious concern and should be considered in the clinical use of these drugs. Earlier studies observed that neither auditory evoked potentials nor any other neurophysiologic assessment has been validated as a sensitive predictor of neuronal damage by these drugs. It may never be possible to be absolutely sure that currently used treatment regimens with artemisinin derivatives are completely safe, as the loss of a few neurons would be impossible to detect (Van Vugt et al., 2000). Thus, it is advisable that artemether be employed only in cases of acute malarial infection. Oral forms of artemether should be employed in the treatment of malaria rather than the intramuscular or intravenous forms, which have been shown to produce more neurotoxicity in experimental animals (Gordi et al., 2004). More specifically, rectal administration of artemether should be employed in clinical use, as it has been shown to be as efficacious as other forms in clearing blood parasitaemia, and safer than other routes of administration (Ambroise-Thomas, 1999).
A standardized assessment of gait (balance and coordination) and fine coordination skills (as described by Apichart et al.) was recorded twice after the completion of the 7-day injection schedule. Movements of the rats along the horizontal edge of a 9 mm wide and 100 cm long brown wooden box, over five steps, were observed after the completion of the experimental procedure on Day 7. The following were noted and scored respectively: (a) whether the rat was unsteady, i.e., a body angle > 5° from the vertical axis (a score of 1); (b) hind legs slipping off the edge when walking for five steps (a score of 2); (c) whether the rat fell twice or more (a score of 3); or (d) whether it could not walk at all (a score of 4). There were four outcomes: normal throughout, reversible neurologic abnormality, irreversible abnormality, or death (Apichart et al.). The total score obtained by each rat was divided by the total obtainable score of 10, i.e., (1 + 2 + 3 + 4 = 10).
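For clarity, the scoring arithmetic just described can be written out as a small function. This is a minimal sketch of the scheme as stated in the text (component scores 1-4, normalised by the total obtainable score of 10), not the authors' own code; the function and argument names are our own.

```python
# Minimal sketch of the gait-scoring scheme described in the text.
def gait_score(unsteady: bool, hind_legs_slipped: bool,
               fell_twice_or_more: bool, could_not_walk: bool) -> float:
    """Return the normalised gait score (component scores 1-4, total obtainable 10)."""
    score = (1 * unsteady + 2 * hind_legs_slipped
             + 3 * fell_twice_or_more + 4 * could_not_walk)
    return score / 10.0

def classify(normalised: float) -> str:
    """Map a normalised score onto the categories used in the study."""
    if normalised == 0.0:
        return "normal (steady gait)"
    if normalised <= 0.4:
        return "slightly abnormal"
    return "abnormal (unsteady gait, poor balance coordination)"

s = gait_score(unsteady=True, hind_legs_slipped=False,
               fell_twice_or_more=True, could_not_walk=False)
print(s, "->", classify(s))   # 0.4 -> slightly abnormal
```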
Table I.
Mean body weight of rats in control groups C1, C2 and experimental group E on day 1 and day 7.
Table II.
Brain weight of rats in the control groups C1, C2 and experimental group E after the completion of the experimental procedure.
Table III.
Density of damaged trapezoid nuclei in the control groups C1, C2 and experimental group E. | 2018-11-30T14:29:45.331Z | 2006-12-01T00:00:00.000 | {
"year": 2006,
"sha1": "24a4a956f2fe6ac630a0ef504edf2a1cc64d5d43",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.cl/pdf/ijmorphol/v24n4/art03.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "24a4a956f2fe6ac630a0ef504edf2a1cc64d5d43",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Physics"
]
} |
269561891 | pes2o/s2orc | v3-fos-license | Design and Testing of a Compliant ZTTΘ Positional Adjustment System with Hybrid Amplification
This article presents the design, analysis, and prototype testing of a four-degrees-of-freedom (4-DoFs) spatial pose adjustment system (SPAS) that achieves high-precision positioning in 4-DoFs (Z/Tip/Tilt/Θ). The system employs a piezoelectric-driven amplification mechanism that combines a bridge-lever hybrid amplification mechanism, a double four-bar guide mechanism, and a multi-level lever symmetric rotation mechanism. By integrating these mechanisms, the system achieves low coupling, high stiffness, and a wide stroke range. Analytical modeling and finite element analysis are employed to optimize the geometric parameters. A prototype is fabricated, and its performance is verified through testing. The results indicate that the Z-direction feed microstroke is 327.37 μm, the yaw motion angle around the X and Y axes is 3.462 mrad, and the rotation motion angle around the Z axis is 12.684 mrad. The motion magnification ratio of the amplification modules can reach 7.43. Closed-loop decoupling control experiments for the multiple-input multiple-output (MIMO) system, using inverse kinematics and proportional-integral-derivative feedback controllers, were conducted. The results show that the Z-direction positioning accuracy is ±100 nm, the X- and Y-axis yaw motion accuracy is ±2 μrad, and the Z-axis rotation accuracy is ±25 μrad. The ZTTΘ design proved feasible and advantageous, demonstrating its potential for precision machining and micro-nano manipulation.
Introduction
With the rapid development of the modern manufacturing industry, the accuracy requirements for ultra-precision machining and inspection are becoming ever higher. Submicron- and even nanometer-level alignment systems are essential for applications such as wafer-level manufacturing and inspection, mass transfer and packaging of miniaturized semiconductor products, robotic micro/nanomanipulation, and precision optical device polishing [1-4]. However, traditional high-precision motion systems with 3-DoFs (XYZ) fail to account for pitch, yaw, and small angular deviations [5]. At the spatial scale, these errors pose critical challenges for the increasing miniaturization and integration of devices [6-8]. Moreover, relying solely on traditional flexure-based nanopositioning stages can provide submicron or nanometer accuracy, but it is hard to meet large travel-range requirements [9]. Therefore, developing a large-stroke nanopositioning system with multi-DoF levelling and deviation-compensation control is critical for the advanced submicron- and nanoscale manufacturing described above.
To satisfy the need for ultra-high-precision spatial pose adjustment, a number of motion systems have been proposed. According to the number of drive modules, these systems can be divided into two categories: (1) Three-branch-chain parallel systems: three points determine a plane, so the plane's position can be precisely controlled through the parallel connection of three chains.
Although this architecture has a simple structure, the accuracy of a single branch chain directly impacts the overall performance, necessitating the sacrifice of motion stroke (only a few tens of micrometers) to achieve high-precision branch chains [10-14].
(2) Multi-branch-chain differential systems: two or more non-intersecting lines define a surface. By manipulating two points that move independently along a straight line, the positions of the line and the surface can be adjusted differentially. This approach enhances accuracy through controllable differential motion and enables greater branch travel (up to the centimeter level). However, the presence of redundant points makes the motion logic and mechanical structure more complicated, which can result in poor stability of the motion stage [15-19].
Much research on multi-degree-of-freedom motion stages has been carried out. In stage design, Zhang proposed an ingenious sinusoidal corrugated flexure linkage featuring structural symmetry and independent planar motion guidance for the two axes, with a stroke of approximately 130 µm per axis, maximum cross-talk below 2.5%, and a natural frequency of 590 Hz [20]. Zhang designed a nanopositioning stage employing a self-damping moving-magnet actuator (SMMA) for long-stroke operation, supported by flexure guides; this system delivers 20 nm resolution within a ±5 mm motion range and maintains tracking errors below 0.1% of trajectory amplitude for 1 Hz sinusoidal and triangular commands [21]. Yang designed a long-stroke nanopositioning stage with annular flexure guides and a classical feed-forward PID (FFPID) controller, achieving a ±5 mm motion range, 20 nm resolution, and 20 nm positioning accuracy at the maximum output position [22].
To further improve stage performance, many novel mechanisms and methods have been proposed. Li proposed Compliant Building Elements (CBE) to create practical flexure layouts, allowing early-stage design flexibility by assembling CBE blocks like LEGO bricks [23]. Niu introduced a corrugated dual-axial mechanism with structural symmetry and independent planar guidance for the two axes, a stroke of around 130 µm per axis, maximum cross-talk below 2.5%, and an operating frequency of approximately 590 Hz [24]. Panas combined cross-pivot flexures to boost the stiffness, load capacity, and range of nanopositioning systems [25]. Al-jodah created a compact-range three-degrees-of-freedom (3-DOF) micro/nanopositioning mechanism with leaf springs and voice coil motors (VCMs) [26]. Ling proposed an extended dynamic stiffness modeling approach for concurrent kinetostatic and dynamic analyses of planar flexure-hinge mechanisms with lumped compliance [27]. Novel control and sensing strategies have also been proposed: Omidbeike presented a sensing method that separately measures linear and angular displacements in multi-axis monolithic nanopositioning stages, providing enhanced accuracy [28], and Kuresangsai applied a linear time-varying (LTV) finite impulse response (FIR) prefilter to a flexure-based X-Y micro-positioning platform, significantly reducing settling times from over 6 s to just 0.4 s [29].
However, few stages are directly applicable in fields such as wafer surface defect detection and Mini/MicroLED chip transfer and packaging, which must balance large travel and high precision. Specifically, three-branch-chain stages offer ultra-high precision and are often used for nanoscale inspection but have very small strokes, while multi-branch-chain stages suit the processing of large instruments. There is therefore a lack of a spatial pose adjustment system (SPAS) with hundred-micron motion stroke and hundred-nanometre accuracy to meet the needs of large-stroke, high-precision device fabrication and testing. With accuracy as the priority, a three-branch parallel structure is advantageous for this type of stage, so the key to developing such a pose adjustment system is to design a large-stroke, high-precision motion branch chain.
To cater for this requirement, this paper designs a ZTTΘ (Z/Tip/Tilt/Θ) SPAS with 4 DoFs. Taking 8-inch wafers or MiniLED backlight boards as the target application, the device diameter is designed to be 200 mm. Because a smaller height-to-diameter ratio of the table favours equipment integration and stability, the device height is only 75 mm. Considering the need for high precision and a large stroke, the SPAS is designed as a compliant structure with a simple and compact configuration and no motion friction [30,31]. The stage employs piezoelectric ceramics as actuators. To overcome the small stroke of piezoelectric ceramics, this paper proposes a bridge-lever composite compliant amplification mechanism. The high-precision motion of the designed SPAS is achieved by designing a MIMO PID controller. Performance testing shows that the Z-direction feed microstroke is 327.37 µm, the yaw motion angle around the X and Y axes is 3.462 mrad, and the rotation motion angle around the Z axis is 12.684 mrad; the Z-direction positioning accuracy is ±100 nm, the X- and Y-axis yaw motion accuracy is ±2 µrad, and the Z-axis rotation accuracy is ±25 µrad. Such stroke and accuracy performance proves that the design is feasible and advantageous, demonstrating the potential to provide large-stroke, high-precision displacements and operations in precision application areas such as wafer inspection and MiniLED chip transfer and packaging.
The primary contribution of this work is the development of a compliant 4-DoFs system with hybrid amplification modules, which achieves nanoscale positioning in 4-DoFs (Z/Tip/Tilt/Θ) to facilitate spatially accurate alignment. The rest of this paper is organized as follows: Section 2 presents the scheme design and operating principle of the SPAS. Section 3 models the integrated ZTTΘ SPAS. Section 4 covers size optimization and finite element analysis. Validation experiments and performance evaluation are provided in Section 5. Finally, the achievements and further work are summarized in Section 6.
Configurations and Working Principle
2.1. Structure of the ZTTΘ SPAS
As shown in Figure 1a, the SPAS is designed as a precision-guaranteeing device mounted on a large-stroke 3-DoFs (XYZ) macro motion stage. As shown in Figure 1b, the ZTTΘ SPAS consists of a Θ rotation module, three identical flexible connection modules, and three identical piezoelectric drive amplification modules. The stage's operation is controlled by a combination of motion branches, enabling micro feed motion in the vertical direction, two spatial sway motions, and rotational micromotion around the vertical axis. The design configuration of the critical components of the SPAS is shown in Figure 2a,b. The stage mainly consists of the Θ rotation module and three parallel Z-axis motion branches. These branches are arranged in a circle, spaced 120° apart around the central axis of the positioning stage, with the base fastened to the Θ rotation module. The Θ rotation module is connected to the flexible connection module, which in turn is connected to the piezoelectric drive amplification module; this module is fixed to the base via bolts. The Θ rotation module is driven by two piezoelectric ceramic actuators, while each piezoelectric drive amplification module is driven by a single piezoelectric ceramic actuator.
The ZTTΘ stage presented in this paper comprises 4-DoFs: a Z-axis feed micro motion, two spatial yaw motions (roll and pitch), and one rotational motion about the vertical axis, with an overall diameter of 200 mm and a height of 68 mm.
Design of ZTT Module
This study employs a hybrid ZTT module integrating dual four-bar displacement guidance mechanisms and lever-bridge hybrid displacement amplification mechanisms. The piezoelectric amplification module of the SPAS is shown in Figure 3a. Although the design of the lever-type amplification mechanism is simple, its non-compact nature makes it prone to lateral parasitic displacement. Conversely, the bridge-type amplification mechanism has a simple and compact structure, a high displacement amplification factor, and good linear output displacement, but low output stiffness and inadequate dynamic performance. The lever-bridge hybrid amplification mechanism, which combines the principles of lever amplification and bridge pressure-rod instability, retains the benefits of a simple and compact structure, a high displacement amplification factor, and good linear output displacement, while also improving the stiffness of the output end, thereby ensuring superior dynamic performance. To improve the decoupling ability and output stiffness of the mechanism, a double four-bar displacement guidance mechanism is integrated at the end of the lever-bridge hybrid displacement amplification module. A single parallel four-bar displacement guidance mechanism, which utilizes the elastic deformation of straight-beam flexure hinges, enables approximately linear motion in the vertical direction at the output end, but unavoidable coupling in the horizontal direction significantly compromises the positioning accuracy of the mechanism. The double four-bar displacement guidance mechanism has a symmetrical structure with fixed ends, effectively eliminating coupling errors and ensuring minimal horizontal displacement loss. As shown in Figure 3b, the red dashed lines represent the motion and deformation of the designed module.
Design of Θ Module
The Θ rotation module of the SPAS, shown in Figure 4a, utilizes a two-stage lever displacement amplification mechanism in combination with straight-beam flexure hinges. An initial displacement or force in the horizontal direction is amplified by the two-stage lever and generates rotational motion around the center point at the output end. Compared with a single-output implementation of the rotating mechanism, the use of two symmetrical, parallel-connected two-stage lever displacement amplification mechanisms enhances the stability of the Θ rotation module, resulting in improved stiffness for the entire module as well as improved horizontal stability of the stage. The simultaneous application of an initial force or displacement to the output stage causes rotational motion around its center point. However, the design of the connecting rod leaves underconstrained DoFs in the non-rotational directions of the output stage, causing coupling errors: coupling along the X and Y axes in the horizontal plane, along the Z axis in the vertical plane, and rotational coupling around the X and Y axes in space. As shown in Figure 4b, the blue dashed lines represent the motion and deformation of the designed module.
To mitigate the coupling error at the output end of the Θ rotation module and enhance the motion precision of the mechanism, a symmetric straight-beam flexure hinge is incorporated at the terminal output stage. This approach not only eliminates the coupling error but also enhances the output stiffness and motion accuracy, ensuring the overall dynamic performance of the mechanism and increasing its working bandwidth.
Static Modeling of ZTTΘ Stage
The symmetrical design of the piezoelectric-driven amplification module allows the analysis to begin with its left half. The comprehensive coordinate system and precise parameters are shown in Figure 5.
In the amplification module, the flexibility matrix $C_{A_i}$ ($i = 1$–$6$) represents the local flexibility matrix of point $i$ in each $A_i$ element. When calculating the output compliance matrix, it can be deduced that Hinge 2 has no significant impact and can be disregarded at this stage. Therefore, Flexure Hinges 1, 3 and 4 in the amplification module's branch chain $A_1 A_3 A_4$ are connected in series, and the local coordinate systems $A_i$–$xy$ of Hinges 1, 3 and 4 are referred to the coordinate system $A_4$–$xy$.
Converting the flexibility matrix of $A_i$–$xy$ ($i = 1, 3, 4$) to the coordinate system $A_4$–$xy$ follows the standard compliance transformation

$C^{A_4}_{A_i} = T^{4}_{i}\, C_{A_i}\, (T^{4}_{i})^{T}, \qquad T^{4}_{i} = R^{4}_{i}\, S(l^{4}_{i}),$

where $S(l^{j}_{i})$ denotes the translation matrix and $R^{j}_{i}$ denotes the rotation matrix [32], and $l_{A_i A_j,x}$ and $l_{A_i A_j,y}$ ($i = 1$–$6$, $j = 1$–$6$) denote the $x$ and $y$ components of the displacement vector $A_i A_j$, respectively.

The series flexibility of Flexure Hinges 1, 3 and 4 of the branch chain, expressed in the coordinate system $A_4$–$xy$, is the sum of the transformed hinge flexibilities, and the flexibility matrix of the branch chain $A_1 A_3 A_4$ in the global coordinate system $A$–$xy$ follows from one further transformation. The flexibility matrices of the remaining free-end local coordinate systems $A_i$–$xy$ ($i = 5, 6$) of the flexure hinges are transformed into the global coordinate system $A$–$xy$ in the same way.

The output flexibility matrix of the left half of the amplification module in the global coordinate system $A$–$xy$ is governed by the parallel connection between the dual four-bar displacement guidance mechanism and the lever-bridge hybrid displacement amplification mechanism; for branches in parallel, the stiffnesses (inverse flexibilities) add. The piezoelectric-driven amplifier module has a left-right symmetric structure, so rotating the output flexibility matrix of the left half counterclockwise by 180 degrees yields the output flexibility matrix of the right half. The output flexibility matrix of the complete piezoelectric drive amplification module, which combines the dual four-bar displacement guidance mechanism and the lever-bridge hybrid displacement amplification mechanism in the global coordinate system $A$–$xy$, then follows from the parallel combination of the two halves.

The double four-bar displacement guidance mechanisms on both sides of the output terminal stage of the amplification module are connected in parallel with the lever-bridge hybrid displacement amplification mechanism; the stiffness model of the entire module is shown in Figure 5. For the input side, the flexibility matrices of Flexure Hinges 1, 3 and 4 in the local coordinate systems $A_i$–$xy$ ($i = 1, 3, 4$) are transformed into the coordinate system $A_2$–$xy$; from these, the series flexibility matrix of the branch chain $A_3 A_4$ with Flexure Hinges 3 and 4 in the coordinate system $A_2$–$xy$, and the series flexibility matrix of the branch chain $A_1 A_3 A_4 A_5 A_6$ with Flexure Hinge 2 in the coordinate system $A_2$–$xy$, are assembled. The input flexibility matrix and the input stiffness $K_I^A$ of the piezoelectric drive amplification module follow directly.

The motion diagram of the left half of the piezoelectric drive amplification module is shown in Figure 6. The displacement amplification ratio, defined as the ratio of the output displacement to the input displacement, $R_A = u_{out}/u_{in}$, is derived from the compliance matrix and the displacement relationship.

As shown in Figure 7, the coordinate system of the Θ rotation module reveals the parameter variables of its straight-beam and straight-circular flexure hinges, which are similar to those of the piezoelectric-driven amplification module. Here, $C_{B_i}$ ($i = 1$–$7$) denotes the local flexibility matrix at each point of the module. Connected in parallel, the branch chain $B_1 B_2 B_3 B_4 B_5 B_6 B_7$ and the straight-beam flexure hinge form the upper half of the Θ rotation module in the global coordinate system $B$–$xy$, which yields the output flexibility matrix of the upper half in that frame. Due to the symmetrical structure of the Θ rotation module, the output flexibility matrix of the upper half can be rotated 180 degrees counterclockwise to obtain the output flexibility matrix of the lower half.
The output flexibility matrix of the Θ rotation module in the global coordinate system $B$–$xy$ is then the parallel combination of the two halves. The straight-beam flexure hinges on both sides of the output terminal stage of the Θ rotation module are connected in parallel with the two-stage lever-type displacement amplification mechanism. On the input side, the series flexibility matrix of the branch chain with Flexure Hinge 1 in the coordinate system $B_I$–$xy$ is assembled in the same way, from which the input stiffness of the Θ rotation module follows.

The dimensions of the two-stage lever displacement amplification mechanism within the Θ rotation module are presented in Figure 8a. This mechanism exhibits a pronounced amplification effect, producing a substantial output displacement at the terminal stage. In the simplified analysis diagram of Figure 8b, the straight-circular flexure hinge at the fixed end acts as a rotating pivot: its displacement is represented by a linear spring moving perpendicular to the rigid link together with a rotational spring turning about the pivot. In the actual calculation, the deformation of the straight-circular flexure hinge is evaluated through the flexibility coefficient matrix, from which the displacement amplification ratio of the mechanism is obtained.
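The compliance bookkeeping above lends itself to a short numerical sketch. The snippet below assumes one common planar sign convention for mapping a 3×3 hinge compliance matrix (DoFs $x$, $y$, $\theta_z$) between frames using a rotation matrix R and a translation matrix S(l); compliances add for series chains, and stiffnesses (inverse compliances) add for parallel branches. The hinge compliance values and offsets are placeholders, not the paper's optimised parameters.

```python
# Numerical sketch of series/parallel compliance assembly for planar flexures.
import numpy as np

def transform(C, angle, lx, ly):
    """Map a local compliance matrix into a frame rotated by `angle` (rad)
    and offset by (lx, ly) metres, under one common planar convention."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    S = np.array([[1.0, 0.0,  ly],
                  [0.0, 1.0, -lx],
                  [0.0, 0.0, 1.0]])
    T = R @ S
    return T @ C @ T.T

# Placeholder local compliance of one straight-circular hinge
C_hinge = np.diag([2e-8, 5e-7, 1e-3])   # m/N, m/N, rad/(N*m)

# Series chain (e.g. Hinges 1, 3, 4 referred to a common frame): compliances add
C_series = sum(transform(C_hinge, a, lx, ly)
               for a, lx, ly in [(0.0, 0.010, 0.000),
                                 (np.pi / 2, 0.020, 0.005),
                                 (np.pi, 0.030, 0.005)])

# Parallel combination with a second branch (e.g. the four-bar guide)
C_guide = transform(C_hinge, 0.0, 0.015, 0.010)
C_out = np.linalg.inv(np.linalg.inv(C_series) + np.linalg.inv(C_guide))
print(np.array2string(C_out, precision=3))
```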
Dynamic Modeling of ZTTΘ Stage
This paper uses the Lagrange equation method to establish the dynamic differential equations of the ZTT module (marked A) and the Θ module (marked B), respectively. The dynamic model of the left half of the piezoelectric drive amplification module is shown in Figure 9a, and that of the upper part of the Θ rotation module in Figure 9b. By simplifying the straight-circular and straight-beam flexure hinges into rotational torsion springs, the system reduces to a spring-mass system. The kinetic energy of the system is generated by the movement of the rigid links in the mechanism, and the potential energy by the elastic deformation of the straight-circular and straight-beam flexure hinges. The input displacement coordinates of the system are expressed as $(q_1, q_2)$, where $q_1$ is the displacement in the x-axis direction, $q_2$ is the displacement in the y-axis direction, and $m_k$ is the mass.
The potential energy of the system is the sum of the elastic potential energies stored in the flexure hinges, and the kinetic energy is the sum of the kinetic energies of the moving links. Substituting the kinetic and potential energies into the Lagrange equation

$\dfrac{d}{dt}\left(\dfrac{\partial L}{\partial \dot{q}_i}\right) - \dfrac{\partial L}{\partial q_i} = 0, \qquad L = T - U,$

yields the equations of motion, from which the undamped free-vibration frequency of the piezoelectric drive amplification module is obtained.
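A minimal numerical sketch of that final step: once the flexures are lumped into generalized stiffness and mass matrices K and M, the undamped natural frequencies follow from the eigenvalues of M⁻¹K (i.e. det(K − ω²M) = 0). The matrix entries below are placeholders, not the paper's identified parameters.

```python
# Undamped natural frequencies of a lumped spring-mass model (placeholder values).
import numpy as np

M = np.array([[0.12, 0.00],
              [0.00, 0.35]])          # kg, lumped masses for coordinates (q1, q2)
K = np.array([[ 9.5e5, -1.2e5],
              [-1.2e5,  4.8e6]])      # N/m, assembled from the torsion-spring hinges

w_squared = np.linalg.eigvals(np.linalg.solve(M, K))   # eigenvalues are omega^2
freqs_hz = np.sqrt(np.sort(w_squared.real)) / (2 * np.pi)
print(freqs_hz)   # undamped natural frequencies in Hz, lowest first
```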
Multi-Objective Optimization
According to the static and dynamic models, the main dimensional parameters of the piezoelectric-driven amplification module are the width $b$ of the straight beam, the dimensions of the straight-circular flexure hinge ($r_c$, $t_c$), the dimensions of the straight-beam flexure hinge ($l_s$, $t_s$), and the dimensions of the lever-bridge hybrid displacement amplification mechanism ($l_{AB}$, $l_{AC}$, $l_{OC}$, $l_{OD}$), as shown in Figure 5. Designing the performance of the SPAS essentially amounts to determining these key parameters, which can be viewed as a multi-objective optimization problem whose objective is the output displacement; the objective function is therefore set to the amplification ratio, as shown in Equations (28) and (29). The main module dimensions of the SPAS are optimized using a Genetic Algorithm.
(1) ZTT module: Limited by the dimensions of the piezoelectric ceramics, the overall thickness $b_a$ is set at 18 mm. Meanwhile, to ensure the Z-axis stiffness of the piezoelectric drive amplification module, $t_{a2}$ is set to 0.8 mm. The remaining seven dimensional parameters ($r_a$, $t_{a1}$, $l_a$, $l_{AB}$, $l_{AC}$, $l_{OC}$, $l_{OD}$) are used as design variables.
where $u_{out}$ is the output displacement, $K_I^A$ is the input stiffness of the amplification module, and $R_A$ is the amplification ratio of the module, all derived from the modeling in Section 3; $K_{p1}$ and $r_{p1,0}$ are the stiffness and nominal stroke of the selected piezoelectric ceramic PSt150/10/60 VS15, respectively. Taking into account the module's overall size, height, and wire-cutting gap, the ranges of the design variables and the obtained optimal solution are given in Table 1. The resulting displacement amplification ratio is 7.92, and the maximum theoretical output displacement is 323.29 µm. (2) Θ module: As shown in Figure 7, taking into account the dimensions of the piezoelectric ceramics as well as the compactness of the overall dimensions, some parameters are fixed in advance: $b_b$ = 12 mm, $l_b$ = 7.8 mm, $l_{b5}$ = 7.5 mm, $t_{b2}$ = 11 mm, $t_{b3}$ = 8 mm, and $t_{b5}$ = 12 mm. The remaining six dimensional parameters ($r_b$, $t_{b1}$, $l_{b1}$, $l_{b2}$, $l_{b3}$, $l_{b4}$) are used as design variables. Similarly, to achieve a larger output rotation angle, the displacement amplification ratio of the Θ rotation module from the modeling in Section 3 is selected as the optimization objective. The ranges of the six design variables and the obtained optimal solution are given in Table 2; the resulting amplification ratio is 3.24.
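The dimensional optimization can be sketched as follows. The paper uses a Genetic Algorithm; scipy's differential_evolution is used here as a readily available stand-in evolutionary optimizer, `amplification_ratio` is a toy surrogate for the analytical model of Section 3 (not the true objective), and the bounds stand in for Table 1's ranges.

```python
# Sketch of evolutionary optimization of the amplification ratio (toy surrogate).
import numpy as np
from scipy.optimize import differential_evolution

def amplification_ratio(x):
    r_a, t_a1, l_a, l_AB, l_AC, l_OC, l_OD = x
    lever = (l_AC / l_AB) * (l_OD / l_OC)               # ideal lever gain
    loss = 1.0 / (1.0 + 0.5 * t_a1 / r_a + 0.02 * l_a)  # crude compliance penalty
    return lever * loss

bounds = [(0.5, 2.0),    # r_a  (mm)
          (0.3, 1.0),    # t_a1 (mm)
          (5.0, 15.0),   # l_a  (mm)
          (3.0, 10.0),   # l_AB (mm)
          (10.0, 40.0),  # l_AC (mm)
          (3.0, 10.0),   # l_OC (mm)
          (10.0, 40.0)]  # l_OD (mm)

# Maximize the ratio by minimizing its negative
result = differential_evolution(lambda x: -amplification_ratio(x), bounds, seed=0)
print("best ratio:", -result.fun)
print("best design:", np.round(result.x, 3))
```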
Simulation
The model of the SPAS is established in the 3D design software UG NX 10.0, and its static and dynamic characteristics are analyzed with the finite element analysis software ANSYS Workbench 19.0.
In FEA, the fineness of the meshing affects the accuracy of the simulation, and the key to the performance of the compliant SPAS is its flexure hinges. To improve simulation accuracy, the mesh size of the flexure hinges is set to 0.5 mm; to improve speed and stability, the mesh size of the rigid bodies is set to 1.5 mm. The material used in this study is AL7075-T6 (elastic modulus E = 71.7 GPa, Poisson's ratio µ = 0.33, density ρ = 2810 kg/m³, yield strength σ = 503 MPa). The FEA results of the piezoelectric-driven amplifier module and the Θ rotation module are shown in Tables 3 and 4, respectively.
As shown in Figure 10a,b, the maximum output displacement of the piezoelectric-driven amplifier module is 303.22 µm, the amplification ratio is 7.43, and the maximum bending stress is 176.94 MPa. As shown in Figure 10c,d, the maximum output rotation angle of the Θ rotation module is 12.1789 mrad, the ratio is 2.98, and its maximum bending stress is 200.72 MPa, far below the allowable limit stress of 503 MPa for AL7075-T6. These results indicate that the designed modules will not undergo fatigue failure within the normal working range, thereby fulfilling the design specifications.

As shown in Figure 11a, when identical displacement excitation is applied to the three Z-axis motion branches, the terminal motion stage generates an overall Z-axis feed motion, with the simulated output Z-axis displacement reaching 303.22 µm. In Figure 11b,c, only one of the Z-direction motion branches is subjected to displacement excitation, while the remaining two are in a free state; consequently, the terminal motion stage produces yaw motion around the X and Y axes, with a simulated output swing angle of 3.196 mrad. In Figure 11d, applying symmetrical displacement excitation to the Θ rotation module results in rotational motion around the Z-axis, with a simulated output rotation angle of 12.179 mrad. The simulated coupling errors of the SPAS are shown in Table 5; owing to the symmetrical mechanism design, the coupling errors are very small.

As shown in Figure 12a, the first natural frequency of the amplification module is 334.5 Hz, indicating its motion state during routine operation. In Figure 12b, the first natural frequency of the Θ rotation module is 873.39 Hz, signifying its motion state under routine operation. The first mode deformations of the complete stage exhibit yaw motion, with the first mode in Figure 12c revolving around the Y-axis and the second mode in Figure 12d revolving around the X-axis; their respective natural frequencies are 84.265 Hz and 89.105 Hz. The results show that the designed device has good dynamic response performance.
Experimental Setup
As shown in Figure 13a, a prototype of the 4-DoFs SPAS was fabricated. The system consists of an AL7075-T6 SPAS driven by piezoelectric ceramic actuators (XMT, PSt150/10/60 VS15): three for the ZTT module and two for the Θ module. The measurement system comprises three capacitive displacement sensors (SYMC, NS-DCS14), deployed above the three Z-direction motion branches to detect the motion position of the SPAS. The system is mounted on a Winner Optics (WN01AL) stage to mitigate external vibrations caused by environmental factors.
The decoupling of the multiple-input multiple-output (MIMO) control of the system is accomplished using inverse kinematics and a closed-loop PID control strategy. The control strategy of the 4-DoFs SPAS is shown in Figure 13c. The input of the control system comprises the reference Z-direction feed micromotion, the yaw motions around the X-axis and Y-axis, and the rotational motion around the Z-axis. After inverse kinematics analysis, the corresponding driving voltage signals $d_1(t)$ and $d_2(t)$ are output through the closed-loop controller, thereby driving the ZTTΘ SPAS. The capacitive displacement sensor measurement system detects the actual displacement signal of the stage in real time and compares it with the reference signals $r_1(t)$ and $r_2(t)$; subtraction yields the error values $e_1(t)$ and $e_2(t)$. The four axes are then adjusted by their respective PID controllers to achieve closed-loop motion control of the four axes.
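The control structure described above reduces to an inverse-kinematics decoupling step followed by independent per-axis PID loops. The sketch below illustrates that structure; the decoupling matrix, gains, sampling time, and the crude first-order plant stand-in are all placeholders, not the paper's identified values.

```python
# Sketch of MIMO decoupling via inverse kinematics plus per-axis discrete PID.
import numpy as np

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical inverse-kinematics map: (z, tip, tilt, theta) -> 5 actuator strokes
# (three ZTT-branch actuators plus the two Θ-module actuators)
J_inv = np.array([[1.0,  0.8,  0.0, 0.0],
                  [1.0, -0.4,  0.7, 0.0],
                  [1.0, -0.4, -0.7, 0.0],
                  [0.0,  0.0,  0.0, 1.0],
                  [0.0,  0.0,  0.0, 1.0]])

dt = 1e-4
loops = [PID(kp=0.6, ki=120.0, kd=1e-4, dt=dt) for _ in range(J_inv.shape[0])]

reference_pose = np.array([100e-6, 0.0, 0.0, 2e-3])  # z (m), tip/tilt/theta (rad)
setpoints = J_inv @ reference_pose                   # per-actuator targets

measured = np.zeros(len(setpoints))                  # stand-in for sensor reads
for _ in range(2000):                                # one control episode
    drive = np.array([loop.update(sp - m)
                      for loop, sp, m in zip(loops, setpoints, measured)])
    measured = measured + drive * dt * 50.0          # crude first-order plant stand-in
print("final error:", setpoints - measured)
```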
Open-Loop Test
As shown in Figure 14, the first-order natural frequency of the amplification module obtained by sinusoidal frequency sweeping is 396 Hz, comparable to the 334.5 Hz obtained by simulation. The ZTTΘ workspace encompasses the Z-direction feed micro-motion stroke, the yaw motion stroke, and the rotation motion stroke. As shown in Figure 15a, the step voltage signals (0-10 V) applied to the three piezoelectric drive amplification modules are amplified by a factor of 15 by a piezoelectric ceramic voltage driver. The resulting output terminal displacements are measured by the capacitive displacement sensors.
In Figure 15b-d, the output displacement stroke of Module 1 is 328.5 µm, that of Module 2 is 326 µm, and that of Module 3 is 327.6 µm; the output displacements of the three modules are almost identical. Using the displacement transformation relationship, the actual Z-direction feed microstroke of the ZTTΘ SPAS is calculated as 327.37 µm. Additionally, the actual output yaw angle is 3.462 mrad.
Closed-Loop Test
The closed-loop motion tracking performance of the SPAS is assessed experimentally. The control model is established in MATLAB 2022b/Simulink and compiled into dSPACE for execution. The basic sampling time is 0.1 µs, and the PID controller parameters are specified in Table 6. To evaluate the stage's performance, tracking control experiments are first conducted on conventional Z-direction motion signals (Figure 16a); the reference signal, given in Table 7, lasts 2 s, and the maximum steady-state error after each step settles within 100 nm, giving a closed-loop positioning accuracy of 100 nm for the step signal. As shown in Figure 16b, a tracking test is performed on the Z-direction closed-loop sine signal of the positioning stage (reference signal in Table 7); the maximum steady-state trajectory-tracking error is kept within ±100 nm, confirming a ZTTΘ positioning accuracy of ±100 nm for the Z-direction feed micro motion of the SPAS. Similarly, as shown in Figure 16c, a tracking test on the closed-loop sine signal of the stage's yaw motion (reference signal in Table 7) keeps the maximum steady-state trajectory-tracking error within ±2 µrad, so the closed-loop positioning accuracy of the SPAS's yaw motion around the X-axis and Y-axis is ±2 µrad. Lastly, as shown in Figure 16d, a tracking test on the closed-loop sine signal of the stage's rotational motion (reference signal in Table 7) keeps the maximum steady-state trajectory-tracking error within ±25 µrad, giving a ZTTΘ positioning accuracy of ±25 µrad for rotation around the Z-axis.
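The steady-state accuracy figures above can be extracted from logged trajectories in a few lines. The sketch below builds a synthetic reference-and-response pair (placeholder amplitudes and ripple, not measured data) and reports the maximum steady-state error after discarding the transient, mirroring how a bound such as ±100 nm is assessed.

```python
# Sketch of steady-state tracking-error extraction from a logged trajectory.
import numpy as np

t = np.arange(0.0, 2.0, 1e-4)                               # 2 s record
reference = 150e-6 * np.sin(2 * np.pi * 1.0 * t)            # Z-direction sine (m)
measured = reference + 60e-9 * np.sin(2 * np.pi * 50 * t)   # synthetic response

steady = t > 0.5                                            # drop initial transient
error = measured[steady] - reference[steady]
print(f"max steady-state error: {np.max(np.abs(error)) * 1e9:.1f} nm")
```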
Performance Evaluation and Discussions
In high-precision operations and manufacturing, such as MicroLED chip production, accurate alignment of the spatial position during transfer, packaging, inspection, and repair is crucial for ensuring yield. To address this requirement, this paper presents a high-precision 4-DoFs (ZTTΘ) SPAS. An experimental evaluation of the system demonstrates precise positioning in four degrees of freedom in space, as evidenced by the performance parameters reported in Table 8. These results, obtained through a series of trajectory-tracking experiments, demonstrate that the ZTTΘ SPAS can effectively perform large-stroke, high-precision, multi-DoF pose adjustment in space, including yaw, lifting, rotation, and more. This capability makes the ZTTΘ SPAS suitable for use in MicroLED display equipment manufacturing and testing. Furthermore, topology and module optimization theories [33] can be employed to optimize the device's strength and mass, thereby enhancing its dynamic performance. This work contributes to the advancement of high-precision operation and manufacturing, underscoring the importance of precise alignment during these processes.
Conclusions
This paper presents the design, analysis, and prototype development of a 4-DoFs SPAS. The main achievements are summarized as follows: (1) Designed, optimized, and manufactured a ZTT module that combines a bridge-lever hybrid amplification with a dual four-bar guide mechanism, and a multi-stage lever rotation Θ module. Compared with traditional piezoelectric drive mechanisms, it exhibits low coupling, high stiffness, and large stroke. The experimental results indicate that the Z-direction stroke is 327.37 µm, the tip-tilt value is 3.462 mrad, and the yaw value around the Z axis is 12.684 mrad. The closed-loop positioning accuracy is ±100 nm in Z, ±2 µrad in tip-tilt, and ±25 µrad in yaw around the Z-axis. In conclusion, the theoretical derivations and experimental results fully demonstrate that the developed system can meet advanced submicron- and nanoscale-manufacturing requirements.
Figure 1.
Figure 1. Design scheme of the ZTTΘ SPAS. (a) SPAS composite macro-stage alignment system. (b) Working principle diagram.
Figure 2.
Figure 2. Structural design of the ZTTΘ SPAS based on flexures. (a) Overall schematic diagram. (b) Assembly diagram of each module.
Figure 3.
Figure 3. Design of the piezoelectric drive amplifier module. (a) Piezoelectric drive amplifier module. (b) Principle of the piezoelectric drive amplifier module.
Figure 4.
Figure 4. Design of the Θ rotation module. (a) Oblique view of the Θ rotation module. (b) Principle of the Θ rotation module.
Figure 5.
Figure 5. The local coordinate system of the piezoelectric drive amplification module.
Figure 7.
Figure 7. The local coordinate system of the Θ rotation module.
Figure 10.
Figure 10. FEA results. (a) Maximum output displacement of the piezoelectric-driven amplifier module. (b) Maximum bending stress of the amplifier module. (c) Maximum output rotation angle of the Θ rotation module. (d) Maximum bending stress of the Θ rotation module.
Figure 11.
Figure 11. FEA results. (a) Maximum output displacement along the Z-axis. (b) Maximum output yaw angle around the X axis. (c) Maximum output yaw angle around the Y axis. (d) Maximum output rotational angle around the Z axis.
Figure 12.
Figure 12. FEA results. (a) Natural frequency of the amplifier module. (b) Natural frequency of the Θ rotation module. (c) First natural frequency of the ZTTΘ stage. (d) Second natural frequency of the ZTTΘ stage.
Figure 14.
Figure 14. Sweep-signal experiment on the piezoelectric-driven amplifier module.
Figure 15.
Figure 15. Open-loop step-signal test of the piezoelectric drive amplifier module. (a) Input step voltage signal of the piezoelectric actuator. (b) Output displacement signal of the Z1 module. (c) Output displacement signal of the Z2 module. (d) Output displacement signal of the Z3 module.
Figure 16.
Figure 16. Closed-loop test of the positioning stage. (a) Trace of the staircase signal of the Z-direction motion. (b) Trace of the sinusoidal signal of the Z-direction motion. (c) Trace of the sinusoidal signal of the yaw motion. (d) Trace of the sinusoidal signal of the rotational motion.
(2)
Designed and built a compliant 4-DoFs SPAS driven by piezoelectric ceramics, which realizes four high-precision movements: Z-axis lifting, tip, tilt, and rotation. (3) With the developed system and control strategy, experiments including an open-loop test, closed-loop performance tests, and accuracy and travel-range tests were successfully carried out in detail.
Table 1.
The values of the optimization variables of the piezoelectric-driven amplifier module.
Table 2.
The values of the optimization variables of the Θ rotation module.
Table 3.
The FEA results of the piezoelectric-driven amplifier module.
Table 4.
The FEA results of the Θ rotation module.
Table 5.
Deformation and coupling errors.
Table 8.
Performance parameter test results of the ZTTΘ SPAS. | 2024-05-04T15:07:19.277Z | 2024-04-30T00:00:00.000 | {
"year": 2024,
"sha1": "bf519b30dab53ed5f89aa9137962ee4cc6de030d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-666X/15/5/608/pdf?version=1714467246",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2d9cc1fb4401243a9ce61fbc31d380bab4de66d7",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
1258276 | pes2o/s2orc | v3-fos-license | Heart failure among Indigenous Australians: a systematic review
Background Cardiovascular diseases contribute substantially to the poor health and reduced life expectancy of Indigenous Australians. Heart failure is a common, disabling, progressive and costly complication of these disorders. The epidemiology of heart failure and the adequacy of relevant health service provision in Indigenous Australians are not well delineated. Methods A systematic search of the electronic databases PubMed, Embase, Web of Science, Cinahl Plus, Informit and Google Scholar was undertaken in April 2012 for peer-reviewed journal articles relevant to the topic of heart failure in Indigenous Australians. Additionally, a website search was done to identify other pertinent publications, particularly government reports. Results There was a paucity of relevant peer-reviewed research, and government reports dominated the results. Ten journal articles, 1 published conference abstract and 10 reports were eligible for inclusion. Indigenous Australians reportedly have higher morbidity and mortality from heart failure than their non-Indigenous counterparts (age-standardised prevalence ratio 1.7; age-standardised hospital separation ratio ≥3; crude per capita hospital expenditure ratio 1.58; age-adjusted mortality ratio >2). Despite the evident disproportionate burden of heart failure in Indigenous Australians, the accuracy of estimation from administrative data is limited by poor indigenous identification, inadequate case ascertainment and exclusion of younger subjects from mortality statistics. A recent journal article specifically documented a high prevalence of heart failure in Central Australian Aboriginal adults (5.3%), noting frequent undiagnosed disease. One study examined barriers to health service provision for Indigenous Australians in the context of heart failure. Conclusions Despite the shortcomings of available published data, it is clear that Indigenous Australians have an excess burden of heart failure. Emerging data suggest that undiagnosed cases may be common in this population. In order to optimise management and to inform policy, high quality research on heart failure in Indigenous Australians is required to delineate accurate epidemiological indicators and to appraise health service provision.
Background
Indigenous Australians are known to suffer poorer health and lower life expectancy than their non-Indigenous counterparts. This life expectancy gap is attributable substantially to chronic conditions such as cardiovascular disease [1].
Heart failure (HF; synonyms: congestive heart failure, cardiac failure) is a 'complex and lethal clinical syndrome' [2] in which the heart is unable to provide blood flow adequate for the body's metabolic needs [3]. Although HF is traditionally conceptualised as an impairment of the heart's ability to pump sufficient blood into the circulation during systole, it is now recognised that left ventricular ejection fraction, a measure of systolic function, is preserved in many cases [4], and that this pathophysiological heterogeneity of HF may be influenced by the spectrum of underlying causes [5].
The antecedents of HF, especially hypertension [6,7] and coronary heart disease including myocardial infarction (MI) [8,9], are disproportionately common among Indigenous Australians. Importantly, these conditions tend to become manifest at a younger age than among non-Indigenous persons, resulting in a much greater burden of disease, and contributing to greater levels of disability in Indigenous populations [1]. Renal disease, which clusters with features of the metabolic syndrome as a predictor of cardiac illnesses, is prevalent at extremely high levels in some Indigenous populations [10,11]. Diabetes, frequently associated with this cluster of cardiovascular conditions, also occurs at a much higher rate among indigenous Australians [12], and may independently predispose to HF [13]. Moreover, Indigenous Australians are known to suffer an exceptionally high incidence of rheumatic fever, with the potential sequelae of chronic rheumatic heart disease and HF [14].
Regardless of the underlying cause, HF is an important cardiovascular problem in its own right, being associated with substantial disability, impaired quality of life and diminished survival [15]. Further, timely identification of HF with institution of evidence-based interventions diminishes symptoms and potentially prolongs life [16,17]. However, the epidemiological indicators of HF among Indigenous Australians are poorly delineated. Although administrative data on HF in the Australian population are published regularly, particularly by the Australian Institute of Health and Welfare (AIHW), the accuracy of such administrative data, and the validity of epidemiological comparisons between Indigenous and non-Indigenous populations derived from these data, are fraught, doubly in this instance given that there are caveats on both ascertainment of HF [18,19] and identification of Indigenous status [20]. Further, administrative data are generally not person-based; for example, re-admissions of the same patient cannot be distinguished [21]. Consequently, quality research specifically addressing HF in the Indigenous population is needed to verify indicator measures derived from administrative data.
Personal and psychosocial perspectives, based on qualitative research, are also important for understanding the effects of HF in the Indigenous population. Such information is a valuable adjunct to quantitative data in estimating the impact of the disease and to inform planning.
This review specifically explores publicly available information on HF in the Australian Indigenous population through: (1) a systematic search of the peer-reviewed literature and (2) a review of reports published on this topic based on analyses of administrative data by Australian federal and state/territory governments.
Search of peer-reviewed journal databases
A systematic search of the electronic databases PubMed, Embase, Scopus, Web of Science, Cinahl Plus, Informit and Google Scholar was undertaken, using search terms that comprised: (i) subject headings, specified by the respective databases related to heart failure or to Indigenous status (for example Medical Subject Heading [MeSH] in PubMed), and (ii) other synonyms for HF and Indigenous status, either listed by the electronic databases or generated by the authors. The search retrieved articles containing one or more terms relevant to both heart failure and Australian Indigenous (Aboriginal or Torres Strait Islander) status. A full list of search terms used is provided below (Additional file 1: Table S1). The search was restricted to articles published from 1990 onwards and to English language publications. Duplicates were identified and removed by automated search and manual scrutiny of the remaining list of citations.
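As an illustration only, the two-block structure of the search can be sketched as a PubMed-style Boolean query. The specific headings and field tags below are assumptions for the sketch, not the actual term list, which is given in Additional file 1: Table S1.

# Hypothetical sketch (Python) of the two-block Boolean search described
# above; the real term lists are provided in Additional file 1: Table S1.
hf_terms = ['"heart failure"[MeSH Terms]', '"heart failure"[tiab]',
            '"cardiac failure"[tiab]', '"congestive heart failure"[tiab]']
indigenous_terms = ['"Oceanic Ancestry Group"[MeSH Terms]',
                    'Aboriginal[tiab]', '"Torres Strait Islander"[tiab]',
                    'Indigenous[tiab]']

# Each concept block is OR-ed internally; the two blocks are then AND-ed,
# with the 1990-onwards and English-language restrictions appended.
query = ("(" + " OR ".join(hf_terms) + ") AND ("
         + " OR ".join(indigenous_terms) + ")"
         + ' AND ("1990"[PDAT] : "3000"[PDAT]) AND English[lang]')
print(query)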
The search strategy was modified for Google Scholar. Given the limited Boolean syntactical capacity of the Google Scholar search engine [22], a search with fewer, more general search terms was done. This retrieved a large number of citations (about 3000) including articles not elsewhere identified that contained 'heart failure' as a phrase in the full text, even if this was not present in the title, abstracts or keywords. The first 50 citations were screened manually for relevance and to ensure non-duplication of citations from other databases.
Search of websites for relevant reports
The Australian Government and state/territory Department of Health websites were searched from within the Google interface or using departmental website internal search engines for documents with the terms 'Aboriginal or Indigenous or heart failure or cardiac failure'. The Australian Institute of Health and Welfare (AIHW) website was independently searched for relevant articles, as was the Australian Indigenous Health InfoNet website, a clearinghouse specifically devoted to reports and information pertaining to the health of Indigenous Australians. Reference lists from articles and reports included in the search were also checked for relevant publications.
Inclusion criteria
Studies were considered for inclusion if they dealt with the Australian Indigenous population and included post-1990 data or qualitative evidence on one or more pre-defined heart failure-related issues: (1) prevalence or incidence, either population-based or within clinical groups or clinical service settings (such as acute coronary syndrome cohorts); (2) aetiology, risk factors or clinical presentation and pathophysiology; (3) co-morbidities; (4) mortality & survival; (5) quality of life; (6) therapeutic interventions; (7) health service utilisation (including medication adherence, primary care attendances, hospitalisations, cardiac rehabilitation); (8) health service delivery issues (including needs, access and barriers); and (9) costs related to HF diagnosis and care.
One author (JW) screened all of the references (n=127) by title and abstract. A large number of references were deemed not relevant on the basis of this initial screening. The criteria for exclusion on this preliminary screening (Table 1) were checked and accepted by other authors (JK and ST). The remaining references (n = 51) were reviewed jointly by two authors (JW and JK).
Although at the outset it was not proposed to include conference abstracts, in view of the scarcity of published information, one abstract was ultimately included to reflect work/research as yet unpublished.
Results
Of 145 journal articles or conference abstracts identified (Figure 1), 94 were excluded on the basis of preliminary screening (Table 1). Of the 51 remaining papers reviewed by two authors, 9 were eligible for inclusion, with a further two eligible articles published while the manuscript was under review (Table 2). Ten reports were also included (Table 3).
Prevalence or incidence, either population-based or within clinical groups or clinical service settings (such as acute coronary syndrome cohorts)
Peer-reviewed studies
There was only one peer-reviewed, population-based study of HF prevalence in the Australian Indigenous population, and none of incidence rates. McGrady and colleagues reported on the "Heart of the Heart" study, in which volunteering adult Aboriginal participants from 6 Central Australian communities were enrolled in 2008-09 for a comprehensive cardiac assessment. This included a researcher-administered structured questionnaire (medical history, drugs, smoking, alcohol intake and HF symptoms), medical record review, anthropometric measures (waist circumference, height and weight), blood pressure, cardiac and lung auscultation and fluid status assessment by a study cardiologist, and biochemistry (for assessment of renal function, random non-fasting blood glucose level, glycosylated haemoglobin (HbA1c) and B-type natriuretic peptide (BNP)) [25]. The prevalence of HF was 5.3% (95% CI 3.2-7.5%), with asymptomatic left ventricular dysfunction present in a further 13%. In 65% of those with HF detected, the disorder had not been previously diagnosed. The interpretability of the reported prevalence estimate in this study is limited by the absence of a non-Indigenous comparison group and uncertain representativeness of the participants in relation to the Central Australian Aboriginal community, although the age profile and prevalence of comorbidities are similar to findings of other whole-of-community medical record reviews.
Two retrospective cohort studies, both based on administrative linked data, reported the frequency of several comorbidities, including HF, among Indigenous subjects with coronary heart disease. In a study of patients admitted to Queensland public hospitals with acute myocardial infarction (MI) [23], Indigenous subjects had a higher prevalence of HF as a concurrent comorbidity (age-adjusted risk ratio 1.67; 95% CI 1.35-2.00). In another study of 28-day survivors of first-ever MI in Western Australia, current or 5-year history of HF was more common among Indigenous subjects than their non-Indigenous counterparts (males: 17% versus 13%; p=0.018; females: 31% versus 22%; p=0.003), even though the median age of Indigenous subjects was 13-14 years lower [24].
Government reports
The periodic National Aboriginal and Torres Strait Islander Health Survey (NATSIHS) provides data on self-reported health problems. In the 2004-05 NATSIHS, information was collected from a sample of 10,400 persons, constituting about 1 in 45 of the total Indigenous population [33]. Comparator non-Indigenous data were obtained from the 2004-05 National Health Survey (approximately 25,900 persons), intended to be representative of Australian households (but excluding subjects from Very Remote locations, unlike the NATSIHS). Survey samples included only the usual residents of private dwellings. The presence of 'oedema' was considered synonymous with HF in the self-reported data. In this survey, the overall prevalence of self-reported HF or oedema among Indigenous Australians was 1.0%, with age-standardised prevalence estimates of 17.4/1000 (95% CI 6.3-28.5) in Indigenous males and 28.8/1000 (95% CI 17.0-40.7) in Indigenous females. The precision of these estimates is likely to be poor, with a relative standard error of 25-50% reported for the male estimate. The Indigenous to non-Indigenous age-standardised prevalence ratio was 1.7 overall (males 1.9; females 1.6, not statistically significant for either sex, but significant for both sexes combined) [33].
[Fragment of Table 1 (criteria for exclusion on preliminary screening, with article counts): other non-anthropological meanings of 'indigenous' (e.g., as a synonym for 'intrinsic'): 10; review articles with no original data: 6; duplicate data: 2; case report data only: 6; total excluded: 94.]
Aetiology, risk factors, clinical presentation and pathophysiology
In the recent report by McGrady and colleagues, the key risk factors associated with HF in a Central Australian Aboriginal population were coronary heart disease, hypertension, diabetes mellitus, obesity, and a history of acute rheumatic fever or rheumatic heart disease (Table 2). Diabetes, hypertension and obesity (individually or in combination) were contributing factors in 48% of HF cases and 79% of those with asymptomatic left ventricular dysfunction, while 9% and 13% of these cases respectively had no clearly identified contributory risk factor [25]. Although the likely antecedents of HF in the Indigenous population (particularly coronary and rheumatic heart disease) are well discussed in the literature, HF was otherwise only mentioned incidentally in several studies of Indigenous Australians as a fatal complication (see below) of coronary [28] or rheumatic heart disease [29].
No studies quantified the contribution of pulmonary disease to HF in Indigenous Australians, or linked chronic obstructive pulmonary disease with HF in this population. However, co-morbid HF was reported in one retrospective cohort study of the role of human T-lymphotropic virus-1 (HTLV-1) infection in determining outcomes among Central Australian Indigenous adult subjects who had been hospitalised with bronchiectasis (n=89) [26]. HF was present in 22 patients and cor pulmonale was diagnosed in 11. HTLV-1 seropositivity was associated with worse outcomes. In addition to an increased likelihood of adverse radiographic indicators of pulmonary disease and increased bronchiectasis-specific mortality in HTLV-1 seropositive patients, HF (35% versus 11%; p=0.013) and cor pulmonale specifically (19% versus 3%; p=0.023) were both more common in HTLV-1 seropositive than HTLV-1 seronegative subjects.
The frequency of HF with preserved ejection fraction was addressed in only two published studies. In a study of HF patients identifying as Indigenous referred for echocardiography in north Queensland [27], the reported proportion with preserved ejection fraction (57%) was considered to be 'at the upper end of that expected' from studies of other populations. However, the study did not provide non-Indigenous comparison data. In the Central Australian study, 61% of those with HF had impaired left ventricular systolic function (ejection fraction <50%) and 39% had HF with preserved ejection fraction [25].
Co-morbidities
Despite the known endemicity of chronic diseases such as diabetes and renal dysfunction among Indigenous Australians, the only publication which dealt specifically with co-morbidities for HF per se in an Indigenous population was the work of McGrady and colleagues. They reported the following comorbidities as being significantly more prevalent in HF cases than non-cases in the Central Australian Aboriginal adult population: coronary artery disease (39%), diabetes (78%), hypertension (78%) and acute rheumatic fever/rheumatic heart disease (26%) [25].
Mortality & survival
Peer-reviewed studies
The peer-reviewed literature provided no population-based data on HF mortality among Indigenous Australians. However, deaths attributed to HF have been examined in the context of coronary heart disease and rheumatic heart disease (RHD). The Central Australian Secondary Prevention of Acute Coronary Syndromes (CASPA) Study was a retrospective audit of patients hospitalised with ACS in the Northern Territory [28]. After 2-year follow-up, the crude frequency of deaths attributed to HF was similar in Indigenous and non-Indigenous groups. However, Indigenous subjects had a substantially lower mean age on admission (50.1 vs. 59.3 years; p<0.001), despite which they were significantly more likely than non-Indigenous subjects to have died from any cause (30.0% vs. 17.8%; p=0.002). In the previously cited WA cohort of first-ever MI survivors [24], HF as a co-morbidity was independently associated with about double the risk of either recurrent MI admission or cardiovascular death in Indigenous and non-Indigenous subjects.
Carapetis et al. [29] examined the outcome of cardiac valve replacement in subjects with RHD in northern Australia. In this retrospective chart review with some prospective follow-up of 80 consecutive patients (70 of whom were Indigenous), there were 29 late deaths, including 27 considered consequent to the RHD, 12 of which were attributed to HF and 1 to 'pneumonia with HF'. The ages of the patients at the time of death were not reported.
Government reports
In a report specifically addressing HF, Heart failure...what of the future? [34], the AIHW provided National Mortality Database (NMD) data from South Australia, Western Australia, Queensland and the Northern Territory only, due to poor Indigenous identification in administrative data in other jurisdictions. Death rates attributed to HF were reported to have remained 'fairly static' between 1995-96/1997-98 and 1998-99/2000-01 among male and female Indigenous and non-Indigenous populations. However, age-standardised death rates (45 years and over) in the period 1998-99 to 2000-01, where HF or hypertensive heart disease was certified as the underlying cause, were nearly threefold higher in the Indigenous population (173 per 100,000 males and 160 per 100,000 females) than in the non-Indigenous population (60 per 100,000 males and 66 per 100,000 females). The study found that there was disproportionately high mortality among middle-aged Indigenous males. Indeed, the death rate among 55-64 year old males (47 per 100,000) was higher than among 65-74 year olds (34 per 100,000), although the rate rose markedly in the Indigenous male population aged 75 years and over.
In a more recently published AIHW report, again based on NMD data from the same four jurisdictions for the period 2002-04 [33], age-adjusted mortality attributed to HF was estimated to be over twice as high in Indigenous as non-Indigenous Australians overall, and 6.4 times as high in the 45-64 year age group. The overall standardised HF mortality ratio was more accentuated for the female (2.4) than the male Indigenous population (2.0).
Quality of life
There were no studies or reports dealing specifically with quality of life in HF per se among Indigenous Australians.
Therapeutic interventions
There were no studies or reports dealing specifically with therapeutic interventions for HF tailored and targeted for Indigenous Australians.
Health service utilisation (including medication adherence, outpatient attendances, hospitalisations, cardiac rehabilitation)
Peer-reviewed studies
Thomas et al. examined the conditions accounting for attendances at a community-controlled Aboriginal Medical Service (AMS) in Darwin [31]. HF was managed at 3.4% (CI 1.9-4.9%) of consultations, by an Aboriginal health worker (AHW) only (42.6%), an AHW together with a doctor (53.5%) or a doctor alone (3.9%). No non-Indigenous comparison group was examined in the study. However, the authors noted that in national data from the Australian Morbidity and Treatment Survey, HF had been managed at 1.9% of primary care (i.e., general practice) consultations. Another study from NSW, reported in a conference abstract, found that of 68 patients referred to an urban AMS specialist cardiology clinic, 3% had heart failure [30].
Other reports
Between 1999 and 2011, six government reports were published providing data on differences in hospital separation rates for HF between Indigenous and non-Indigenous people in various Australian jurisdictions. These reports were based on data derived from the National Hospital Morbidity Database (NHMD), and were characterised by separation-based rather than person-based analyses and incomplete (and varying) coverage of all states and territories (due to poor quality of data in some jurisdictions).
A Commonwealth Department of Health and Aged Care report (1999) estimated a total of 970 Indigenous and 39,305 non-Indigenous public and private hospital separations in Australia with an ICD-9-CM principal diagnosis code for heart failure. Neither crude nor standardised separation rate ratios were provided [37].
The 2003 AIHW report Heart failure...what of the future? analysed NHMD data from South Australia and the Northern Territory only (the NT providing public hospital data only) for two time periods, 1995-96/1997-98 and 1998-99/2000-01. Although limited in geographic and age (45 years and over) coverage, a fall in HF hospitalisation rates was reported for male and female Indigenous as well as non-Indigenous populations. In the later triennium, age-standardised hospitalisation rates where heart failure or hypertensive heart disease was the principal diagnosis were 1555 per 100,000 for the male Indigenous population (compared with 743 for the male non-Indigenous population) and 1579 per 100,000 for the female Indigenous population (compared with 541 per 100,000 in the female non-Indigenous population). This translates into a standardised rate ratio of about 2 for men and 3 for women. As with mortality rates, hospitalisation rates for Indigenous males were higher in the 55-64 year age group than older age groups [34].
Australian hospital separations (excluding Tasmania, ACT and private hospitals in the NT) from July 2004 to June 2006 were also reported by the AIHW, by Indigenous status, for the top 10 ambulatory care sensitive conditions [35]. For congestive heart failure, age-standardised rates were 6.6 separations per 1000 Indigenous people (95% CI 6.3-6.9), compared with 1.9 per 1000 for 'Other' persons, reflecting a standardised separation ratio of 3.4. Analogous data were published for the period July 2006 to June 2008.
More recently, a Productivity Commission report provided multijurisdictional National Hospital Morbidity Data on 2008-2009 hospital separations for selected chronic conditions by Indigenous status [38]. Tasmania, ACT and private hospitals in the NT did not contribute data. For congestive heart failure, the rates of separations per 1000 people, directly standardised using the Australian 2001 standard population, were 6.1 and 2.0 for patients identified as Indigenous and non-Indigenous respectively.
Additionally, the AIHW published 2008-2009 public and private hospital separation rates for all states and territories combined, by Indigenous status, in which the principal diagnosis was a condition for which hospitalisations are considered potentially preventable, including congestive heart failure [39]. The crude rates, adjusted in the original report for Indigenous under-identification, were 2.8 per 1000 population for Indigenous persons and 2.1 per 1000 for non-Indigenous persons (crude rate ratio = 1.33).
Despite differences in methodology with respect to age group inclusion, geographic coverage and diagnostic codes, these government reports show that Indigenous people in Australia have about 3 times higher hospitalisation rates than non-Indigenous people, when the different age distribution is taken into account. Average bed days per HF admission were lower in Indigenous compared with non-Indigenous patients [35,37].
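The gap between the crude ratio of 1.33 reported above and the age-standardised ratio of about 3 is a direct consequence of direct age standardisation. The following sketch uses invented two-stratum rates and standard-population weights chosen purely to illustrate the arithmetic; none of the numbers are taken from the reports.

# Direct age standardisation (Python): apply each population's
# stratum-specific rates to a common standard population.
standard_pop = {"25-44": 600_000, "45+": 400_000}   # assumed weights

# Hypothetical HF separations per 1,000 population in each stratum.
indigenous_rates = {"25-44": 4.0, "45+": 20.0}
non_indigenous_rates = {"25-44": 0.5, "45+": 5.0}

def standardised_rate(rates, standard):
    total = sum(standard.values())
    return sum(rates[age] * standard[age] for age in standard) / total

ind = standardised_rate(indigenous_rates, standard_pop)
non = standardised_rate(non_indigenous_rates, standard_pop)
print(f"{ind:.1f} vs {non:.1f} per 1,000; ratio = {ind / non:.1f}")

# Because the Indigenous population is concentrated in younger strata,
# where absolute rates are lower, a crude ratio computed on each
# population's own age structure can be far smaller than the
# standardised ratio computed on the shared weights above.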
The Bureau of Health Information in NSW has recently published two reports based on administrative data from that state [40,41]. The first examined patient admissions to public and private hospitals, and reported that 2% of the 'potentially avoidable' admissions of patients with HF occurred among Indigenous people, the same as the population proportion of Indigenous people in NSW [40]. This study reported crude proportions only (and thus did not take into account age differences) and was not person-based, hence could not identify recurrent admissions of the same person. The second report linked hospitalisation and mortality data in a cohort of individuals aged >45 years with evidence from hospital records of pre-existing HF. Aboriginal people were over-represented among those who had more than one admission with HF during the year of follow-up: 3% were Aboriginal, compared to 2% among those with one or no HF admissions [41]. While this study used person-based data, the proportion of the cohort identified as Aboriginal was not stated, and no adjustment was made for Indigenous under-identification. Thus, while the reports overall suggested by crude comparison that potentially avoidable admissions for HF occurred no more frequently among Indigenous compared with non-Indigenous people [40], recurrent admissions were more frequent in Indigenous people [41]. Additionally, the exclusion of people aged <45 years omits a substantial proportion of Aboriginal people at risk of HF due to high rates of ischaemic heart disease and RHD at younger ages.
The only reports covering primary care were the Bettering the Evaluation and Care of Health (BEACH) surveys, conducted periodically by the AIHW Australian General Practice (GP) Statistics and Classification Unit. These provide data on GP encounters (~1,000 GPs nationwide participate annually, with data on 100 consecutive encounters collected from each). Data from the BEACH survey on Indigenous HF have been reported in two AIHW publications. For the BEACH survey years 2002-03 to 2006-07 inclusive, the reported crude proportion of encounters (number per 100) at which HF issues were managed by GPs was 1.0 (95% CI 0.6-1.3) for Indigenous patients vs. 0.7 (95% CI 0.7-0.8) for 'Other' patients, with an age-standardised rate ratio of 2.7 for Indigenous versus 'Other' patients [35]. For the overlapping period April 2004-March 2005 to April 2008-March 2009 inclusive, the reported crude proportion of encounters at which HF issues were managed by GPs was 0.9 (95% CI 0.6-1.2) for Indigenous patients vs. 0.7 (95% CI 0.7-0.7) for 'Other' patients, with an age-standardised rate ratio of 2.6 for Indigenous versus 'Other' patients [36].
Health service delivery issues (including needs, access and barriers)
In a geo-mapping study of national HF services, Clark et al. reported lower service provision in rural and remote areas [32]. The spatial distribution of population, which included Indigenous status, was derived from Australian Bureau of Statistics Census data. HF prevalence was not measured directly but instead derived from European estimates, with Indigenous prevalence-weighting applied to communities, based on a combination of census data and AIHW-derived HF prevalence estimates.
Aspin and colleagues have recently reported findings from a qualitative study in which patients with chronic diseases (HF, diabetes and chronic obstructive pulmonary disease) were interviewed about the barriers and facilitators of access to health care and support [42]. Participants were recruited from Aboriginal Medical Services. Eleven of the 19 people interviewed had HF, with or without the other conditions. Those with HF were not distinguished from those with other chronic diseases, but issues were considered to be common across all the diseases. Culturally inappropriate services, racism, poor communication with health professionals and financial barriers were all impediments to service access, whereas support from family, the strength drawn from the Aboriginal community and regular access to good primary health care were regarded as assisting with participation in care.
Costs related to HF diagnosis and care
No peer-reviewed studies were identified on the absolute or relative costs of HF management among Indigenous Australians. In 2011, the AIHW published expenditure estimates based on National Hospital Morbidity Data on potentially preventable hospital separations, by Indigenous status, for both public and private hospitals in all states and territories during the period 2008-09. Patients identified as Indigenous accounted for 3.9% of the expenditure for HF overall ($14.5 million from a total of $372.8 million for all patients). This corresponded with a calculated approximate expenditure on HF hospitalisation of $26.70 per Indigenous person, compared with $16.90 per non-Indigenous person (crude Indigenous to non-Indigenous expenditure ratio = 1.58) [39]. In the analysis, a loading of 5% was added to the Indigenous patient costs to account for previously estimated excesses in comorbidity for similar Diagnosis Related Groups.
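The per-capita figures quoted can be checked arithmetically. In the sketch below, the population denominators are back-calculated from the published per-person figures and are therefore approximations rather than values stated in the report.

# Reproducing the AIHW crude expenditure ratio (Python).
hf_spend_indigenous = 14.5e6                  # dollars, from the report
hf_spend_total = 372.8e6                      # dollars, from the report
hf_spend_other = hf_spend_total - hf_spend_indigenous

# Denominators implied by the published $26.70 and $16.90 per-person
# figures (assumed here, not stated in the report).
pop_indigenous = hf_spend_indigenous / 26.70      # about 543,000
pop_non_indigenous = hf_spend_other / 16.90       # about 21.2 million

ratio = (hf_spend_indigenous / pop_indigenous) / (hf_spend_other / pop_non_indigenous)
print(f"crude expenditure ratio = {ratio:.2f}")   # 1.58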
Discussion
This comprehensive review of HF among Indigenous Australians reveals substantial current knowledge gaps that potentially hinder health service planning. Peer-reviewed journal papers as well as other publications sourced from administrative data were considered for inclusion, which was determined by relevance to a range of pre-defined subheadings under the broad rubric of HF.
Priority was given to peer-reviewed articles. An exhaustive, multiple-database keyword search strategy was used for the journal literature, maximising the sensitivity of citation retrieval. However, most papers identified were either not principally concerned with HF and/or provided local data with questionable generalisability. Despite abundant published research on the likely major antecedents of HF in this population (coronary disease, hypertension and rheumatic heart disease), few articles included explicit mention of HF, an essential inclusion criterion.
The search of applicable Australian government websites as well as the Google interface identified a number of reports that provided data comparing Indigenous with non-Indigenous populations; these dominated the publications identified. Web searches of this type are inherently less systematic than those of journal databases, and some grey literature may have been overlooked. The validity of government reports is compromised by the quality of administrative data with respect to ascertainment of HF, the accuracy of which varies depending on the indicator being measured [43,44] and by shortcomings in the identification of Indigenous status. Additionally, these data capture only hospitalisations and deaths, not the less severe end of the illness spectrum [45,46]. Despite these limitations, the indicators show substantial disparities in the occurrence of HF between Indigenous and non-Indigenous populations.
The only population-based Australia-wide estimate of prevalence of HF among Indigenous Australians identified in our search was derived from the 2004-05 NATSIHS survey [33]. An age-standardised prevalence of HF 1.7 times higher among Indigenous than non-Indigenous Australians was reported, but the precision of estimates (judging by wide confidence intervals) was poor, particularly for Indigenous males. Although the Australian Bureau of Statistics (ABS) made substantial efforts to optimise Indigenous participation and the accuracy of self-report in the survey, the latter was compromised by the potential for differential ascertainment of HF from Indigenous and non-Indigenous subjects and the conflation of non-specific 'oedema' with HF. The recent cross-sectional population-based study in several Central Australian communities, incorporating comprehensive cardiovascular assessment [25], detected HF in 5.3% (95% CI 3.2-7.5%) of study participants, with only 35% of these having a pre-existing diagnosis of this condition. Interpretability is limited by the absence of a non-Indigenous comparison group and by uncertainty regarding the representativeness of the participants, making comparison with whole-population data from the NATSIHS survey [33] difficult; nevertheless, there appears to be considerable under-diagnosis, at least in remote areas. The high proportion of newly detected cases in a young population (mean age 44 years [SD 14]) raises the possibility of a 'large, as yet unidentified, burden of HF in the [broader] Australian Aboriginal population' [25].
The higher prevalence of HF in Indigenous people is also seen in clinical populations, with two retrospective cohort studies in which HF was a significantly more common current or previously documented co-morbidity in Indigenous compared to non-Indigenous patients hospitalised for myocardial infarction [23,24].
Death data in government reports are an important although problematic source of information about HF among both Aboriginal and non-Aboriginal people. The AIHW publications with multijurisdictional mortality data report death rates due to HF in the Indigenous population as double to treble those of non-Indigenous Australians, with the ratio higher for the middle aged. However, analysis of deaths attributed to HF is inherently problematic, particularly from administrative data. It has been argued that HF is a mode rather than a cause of death, that it is inconsistently recorded in death certification, and that the redistribution from HF codes to other causes in death coding is poorly standardised to the underlying aetiology [47]. Indeed, deaths reported as HF-related in administrative data have been considered as constituting 'garbage' codes that obscure the epidemiology of underlying causes of cardiovascular mortality [43]. It is also unknown to what extent comparative mortality rates have been miscalculated because of misclassification of Indigenous status, although linked data studies indicate that Indigenous all-cause mortality rates are underestimated [48].
In the literature reviewed, HF was frequently mentioned in passing as a subsidiary endpoint or as a covariate in analysis without further elaboration or interrogation. Not unexpectedly, HF was often a concomitant or complication of coronary disease [23,24,28], rheumatic heart disease [29], and, in one report, chronic airway disease (bronchiectasis) [26]. The finding that the proportion of Indigenous HF in which ejection fraction is maintained was at the upper limit of that expected from studies of other populations [27] is consistent with a high prevalence of coronary and hypertensive heart disease and 'diabesity' in this population [5]. However, no studies were identified that specifically provided data on the distribution of underlying causes of HF in the Indigenous population nationwide, although the most recent Australian and Indigenous Burden of Disease reports describe the use of (unpublished) estimates from hospital data to redistribute the burden of HF to underlying causes [45,49]. Heart failure in a Central Australian adult Aboriginal population sample was strongly associated with well-recognised risk factors (most especially coronary disease, diabetes mellitus, hypertension, obesity and rheumatic heart disease or history of rheumatic fever) [25].
Consistent with international evidence [50], HF is a predictor of increased morbidity and mortality following myocardial infarction in both Indigenous and non-Indigenous Australians [24]. Nonetheless, there are no longitudinal follow-up data quantifying or examining the determinants of morbidity, quality of life, survival or mortality among Indigenous Australians with established HF. Given the rather bleak natural history of HF regardless of cause [15] and the known efficacy of timely interventions [16,17], optimal tertiary prevention tailored to the needs of the Indigenous HF population is likely to be useful in improving Indigenous outcomes.
The three-fold excess frequency of HF-related hospitalisations reported by AIHW for Indigenous Australians parallels the estimated disparity in death rates. Administrative data on hospitalisation are widely used for information about chronic illnesses such as HF, but are influenced by factors such as access to and quality of ambulatory care as well as underlying disease prevalence and severity [51]. Further limitations include the inter-jurisdictional variation in quality of Indigenous identification [35,36], the shortcomings of HF ascertainment from disease codes [18,44], and their inability to capture information on disease frequency in the community. In metropolitan settings, a HF-related code in administrative data indicates a clinically verifiable diagnosis of HF with a positive predictive value of >90% [46]. However, it is unknown to what extent this validation is applicable to rural and remote settings where access to specialist expertise and diagnostic technology is more limited and the proportion of Indigenous patients is generally higher. The sensitivity of HF detection in administrative records can be very poor, even in urban teaching hospitals [18].
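The distinction drawn here between a high positive predictive value and very poor sensitivity can be made concrete with a small numerical sketch; the counts below are invented solely for illustration and are not audit data from the cited studies.

# Hypothetical audit of hospital HF codes against clinical review (Python).
coded_and_confirmed = 95     # coded as HF, HF verified clinically
coded_not_confirmed = 5      # coded as HF, not verified (false positives)
missed_by_codes = 100        # clinically verified HF with no HF code

ppv = coded_and_confirmed / (coded_and_confirmed + coded_not_confirmed)
sensitivity = coded_and_confirmed / (coded_and_confirmed + missed_by_codes)
print(f"PPV = {ppv:.0%}, sensitivity = {sensitivity:.0%}")  # 95%, 49%

# A code can be highly trustworthy when present (PPV 95%) while the
# dataset still misses about half of the true cases, so administrative
# counts understate disease frequency in the community.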
No report identified for this review combined both age-standardised analysis and investigation of Indigenous identification, both of which are critical for interpretability. One report comparing Indigenous and non-Indigenous hospitalisations for HF excluded persons <45 years, the age range in which disparities appear to be the greatest [34]. The markedly lower hospitalisation rate ratio in crude data (1.33) compared to about 3 in age-standardised comparisons [39] highlights the differing age distributions of the populations and the early age of HF onset in the Aboriginal population. Meaningful comparison of Indigenous and non-Indigenous HF-related hospitalisations requires inclusion of adults across a wider age range. There is an interaction of Aboriginality with age; therefore, a single age-standardised rate over all adult ages will obscure variation in disparities across the lifespan. This can be addressed by comparing age-standardised rates separately for younger and older age groups, as has been done in a study of Indigenous incidence rates for myocardial infarction [52].
The listing by the AIHW of HF as among 'preventable' causes of hospitalisation highlights the role of primary care in preventing hospital admissions [51]. Only one peer-reviewed article reported general practice attendances by Indigenous Australians for HF management, based on attendances at a Darwin AMS [31], and one conference abstract reported attendances at a cardiology clinic within an AMS in Sydney [30]. Neither study included local non-Indigenous comparison data. Their generalisability to Indigenous populations elsewhere is also uncertain. The BEACH survey showed that the age-standardised proportion of GP visits for HF was 2.6-fold higher among Indigenous than 'Other' attendees. However, a sub-study in which Indigenous identification was specifically explored suggested considerable Indigenous under-identification, with 2.2% of patients in the sub-study identified as Indigenous, compared with 1.4% in the main survey [36]. Furthermore, it is difficult to interpret data on condition-specific proportions of health encounters such as primary care attendances, as they are not individual patient-based and depend on both the frequency of encounters for other conditions and variation in the condition-specific expertise offered by different providers. Between-population comparisons of such proportions are thus confounded by other morbidities, and cannot be taken at face value as indicators of disease frequency or disease-specific health behaviour.
As cardiovascular disease is largely preventable and linked to social determinants of health, these should be an important focus for health care interventions. Access to health care is an important moderator of health outcomes. Accordingly, in addition to epidemiological indicators, affordability, cultural appropriateness and geographical access to health services need to be investigated. Although the complex needs and many barriers that Indigenous people experience in accessing health services are well documented [28,53,54], only one publication was identified that had a strong focus on HF in this regard [42]. Late presentation is cited as a contributor to poor health outcomes for Indigenous Australians, such as in the setting of acute coronary syndrome [55], but there are no published data for this population on delayed presentation in HF specifically. The work of Aspin and colleagues [42] accords with the barriers and inequities already described for ACS, cancer, renal and mental health. Common themes include negative associations with hospitals as a result of the death of relatives, racism and cultural alienation; social, economic and cultural barriers; and competing priorities [56,57]. However, given the complexity of HF, further research could usefully identify specific HF-related issues in order to optimise its culturally appropriate management, including palliative care, in this population.
The overall lack of data on HF therapeutics in the Indigenous population is also noteworthy. Management interventions for HF often entail complexity that is particularly challenging for individuals in disadvantaged and remote communities. For example, chronic anticoagulation is problematic, given the requirement for rigorous monitoring to avoid potentially life-threatening adverse effects [58]. Disease management is impacted by the disproportionate burden of co-morbidities that accompanies HF in this population, with co-existing conditions being important determinants of quality of life, the complexity of medical interventions, and survival. Furthermore, biological differences between population groups in response to HF therapies have been noted in other settings [59]. This issue remains unexplored for the Australian Indigenous population.
Conclusions
The poor health outcomes experienced by Indigenous Australians can be attributed to socio-economic disadvantage and marginalisation and are manifest in a range of chronic conditions that include cardiovascular diseases [60]. Heart failure, a disabling and survival-limiting 'downstream' complication of these conditions, presents substantial challenges in its own right. Although this systematic review demonstrates that high-quality data are limited, better information is beginning to emerge, bolstering the evidence that Indigenous Australians have an excess burden of heart failure in comparison with their non-Indigenous counterparts.
Clearly, optimising the delivery of preventive and therapeutic interventions for HF in Indigenous Australians is predicated on a sound knowledge of the underlying causes and comorbidity burden. The likely antecedents and concomitants of HF in this population are well described: all elements of the cluster of hypertension, coronary heart disease, chronic kidney disease, metabolic syndrome and diabetes are known to be prevalent in substantial excess [6,7,10]. The substantial proportion of HF cases with preserved ejection fraction documented in North Queensland [27] and Central Australia [25] is consistent with this aetiological spectrum. Additionally, the prevalence of rheumatic valvular disease is extremely high among certain Indigenous populations, namely those in Northern Australia [61]. However, the role of these conditions in the pathogenesis of HF in Indigenous Australians has only been specifically examined in a single Central Australian study and its generalisability to the broader Australian context is uncertain [25]. The aetiological contribution of these risk factors to the burden of HF in the greater Australian Aboriginal population has not been formally quantified and constitutes an important gap in current knowledge.
Accurate data on indicators of HF among Indigenous Australians, as well as information on their access to and utilisation of health services for this problem, could inform better care and policy development but are limited by the scarcity of quality information identified in this review. Considering the constraints on administrative data, high-quality research is required to confirm epidemiological indicators of HF and to monitor trends adequately. Such data are now beginning to emerge [25] and reflect growing recognition of the huge health disparities in Indigenous Australians and commitment to Closing the Gap. The Australian Bureau of Statistics has publicised that the next NATSIHS will be the largest Aboriginal and Torres Strait Islander health survey to date and 'will expand on the 2004-05 survey by increasing the number of participants by 30%, collecting new information on exercise, diet (including bush foods) and measures of cholesterol, blood glucose and iron. For the first time, the ABS will directly measure obesity and blood pressure levels, as well as nutritional status and chronic disease' [62]. In view of the diversity of Indigenous populations and known geographical inequities in health service provision, local as well as whole or multi-jurisdictional epidemiological studies are desirable. In addition, qualitative investigations of the impact of HF and effective, culturally suitable approaches to HF management in Indigenous Australians are needed.
Additional file
Additional file 1: Table S1. Search Terms. | 2016-05-12T22:15:10.714Z | 2012-11-01T00:00:00.000 | {
"year": 2012,
"sha1": "8d2cf59308b6e042a198214fcec8025ad41d188e",
"oa_license": "CCBY",
"oa_url": "https://bmccardiovascdisord.biomedcentral.com/track/pdf/10.1186/1471-2261-12-99",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bce9dcb7a40c5c2115c77d4241b5cb83e5570f99",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255791460 | pes2o/s2orc | v3-fos-license | Cytogenetic relationships among Citrullus species in comparison with some genera of the tribe Benincaseae (Cucurbitaceae) as inferred from rDNA distribution patterns
Comparative mapping of 5S and 45S rDNA by the fluorescent in situ hybridization (FISH) technique is an excellent tool to determine cytogenetic relationships among closely related species. In this study, the number and position of 5S and 45S rDNA loci in all Citrullus species and subspecies were determined. The cultivated watermelon (C. lanatus subsp. vulgaris), C. lanatus subsp. mucosospermus, C. colocynthis and C. naudinianus (or Acanthosicyos naudinianus) had two 45S rDNA loci and one 5S rDNA locus, which was located syntenic to one of the 45S rDNA loci. C. ecirrhosus and C. lanatus subsp. lanatus had one 45S rDNA locus and two 5S rDNA loci, each located on a different chromosome. C. rehmii had one 5S and one 45S rDNA locus positioned on different chromosomes. The distribution of 5S and 45S rDNA in several species belonging to other genera in the Benincaseae tribe was also investigated. The distribution pattern of rDNAs showed a great difference among these species. The present study confirmed evolutionary closeness among cultivated watermelon (C. lanatus subsp. vulgaris), C. lanatus subsp. mucosospermus and C. colocynthis. Our results also supported that C. lanatus subsp. lanatus was not a wild form of the cultivated watermelon but instead was a separate crop species. In addition, the present cytogenetic analysis suggested that A. naudinianus was more closely related to Cucumis than to Citrullus or Acanthosicyos, but with a unique position, and may be a bridge between Citrullus and Cucumis.
Chomicki and Renner [4] proposed that three subspecies of 'Citrullus lanatus' are unrelated species and that there are seven species in Citrullus, including C. naudinianus.
Although various analyses have been carried out aiming to establish phylogenetic relationships among different Citrullus species, the conclusions are inconsistent and the progenitor of cultivated watermelon has not yet been determined [5][6][7][8][9][10][11]. Cytological [12] and cross-compatibility [13] observations favored C. colocynthis as the ancestor of cultivated watermelon. Several studies established that C. lanatus var. citroides was phylogenetically sister to cultivated watermelon [3,[14][15][16]. However, C. ecirrhosus has been found to have a closer relationship with cultivated watermelon based on sequencing analysis of cpDNA regions [17,18]. Jarret and Newman [9] proposed that C. rehmii might be ancestral to cultivated watermelon by the analysis of internal transcribed spacer sequence heterogeneity.
Mapping of 5S and 45S rDNA by the fluorescent in situ hybridization (FISH) technique is an excellent tool to determine phylogenetic relationships because their map positions can reveal similarities and differences between chromosomes of related species [19][20][21]. To date, the position and number of rDNA loci have been determined in more than 1000 plant species with FISH [22]. These studies showed that the number and position of the 5S and 45S rDNA were usually characteristic of a given species, genus or group [23], and that 5S and 45S rDNA tended to occupy similar chromosomes and positions in closely related species [24][25][26][27][28]. The chromosomal localization of 5S and 45S rDNA loci has been reported using the FISH technique in three C. lanatus subspecies [29]. That study [29] showed that there were one 5S rDNA locus and two 45S rDNA loci on chromosomes 4 and 8, with the 5S rDNA locus located syntenic to one of the 45S rDNA loci on chromosome 8, in C. lanatus subsp. vulgaris and C. lanatus subsp. mucosospermus, whereas C. lanatus subsp. lanatus contained one 45S locus on chromosome 4 and two 5S rDNA loci, one on chromosome 8 and the other on chromosome 11. The rDNA distribution has also been investigated in C. colocynthis and C. rehmii besides C. lanatus var. lanatus and C. lanatus var. citroides [30]. However, the chromosomes with rDNA sites were not identified due to the lack of chromosome identification markers in Reddy et al.'s study [30]. Recently, we successfully labeled specific chromosomes using oligonucleotide (oligo) libraries as probes in cucumber [31]. In this study, the distribution of the 5S and 45S rDNA in all Citrullus species, including three C. lanatus subspecies, was investigated, and the chromosomes with rDNA sites were identified using synthesized watermelon oligo probes, to gain insight into cyto-evolution at the chromosome level and resolve relationships among Citrullus species.
rDNA probes and oligo probes
The plasmids of 5S and 45S rDNA, which were cloned in the vector pUC18, were kindly provided by K. Arumuganthan, University of Nebraska. The 5S and 45S rDNA were labeled with digoxigenin-dUTP and biotin-dUTP via nick translation, respectively. Two oligo libraries were developed to identify cultivated watermelon chromosomes 4, 8 and 11 with rDNA sites reported by Guo et al. [29]. Library 1 contained 10000 oligos from watermelon chromosomes 4 and 8 at densities of 2-3 oligos per kilobase. Library 2 contained 10000 oligos from watermelon chromosomes 8 and 11 at densities of 2-3 oligos per kilobase. The region spanned by library 2 was located adjacent to the region spanned by library 1 on chromosome 8. Thus, the chromosome with overlapping signals must be chromosome 8 and the others are either 4 or 11. Two oligo libraries were also developed to identify melon chromosomes 8, 10 and 12. Library 1 contained 10000 oligos from melon chromosomes 8 and 10 at densities of 2-3 oligos per kilobase. Library 2 contained 10000 oligos from melon chromosomes 10 and 12 at densities of 2-3 oligos per kilobase. The region spanned by library 2 was located adjacent to the region spanned by library 1 on chromosome 10. Thus, the chromosome with overlapping signals must be chromosome 10 and the others are either 8 or 12. The oligo libraries were synthesized by MYcroarray (Ann Arbor, MI). Each synthesized oligo contained 48 bp of genomic sequence, a 5′ F primer, which included the T7 RNA polymerase promoter sequence, and a 3′ R primer. Amplification and labeling of the oligo libraries followed the protocol we developed previously [31].
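As a rough illustration of how 48-bp oligos could be tiled from a chromosome sequence at the stated density of 2-3 oligos per kilobase, a minimal sketch follows. The fixed step size and the gap filter are assumptions; the published libraries were designed and synthesized by MYcroarray and would additionally be screened for repeats and off-target hybridization.

# Hypothetical tiling of FISH oligos from a chromosome sequence (Python).
def tile_oligos(chrom_seq, oligo_len=48, per_kb=2.5):
    step = int(1000 / per_kb)            # ~400 bp between oligo starts
    oligos = []
    for start in range(0, len(chrom_seq) - oligo_len + 1, step):
        oligo = chrom_seq[start:start + oligo_len]
        if "N" not in oligo:             # skip assembly gaps
            oligos.append(oligo)
    return oligos

# Each selected 48-mer would then be flanked by the 5' F primer carrying
# the T7 promoter and by the 3' R primer for amplification and labeling,
# as described above.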
Fluorescence in situ hybridization (FISH)
FISH was performed according to published protocols [32]. Firstly, slides were hybridized with oligo library 1 (biotin-labeled) and oligo library 2 (digoxigenin-labeled). After the first round of probing and image capture, coverslips were removed carefully and the slides were washed three times in 1X PBS (phosphate-buffered saline) (5 min each). The slides were then dehydrated in an ethanol series (70, 90, and 100 %, 5 min each), denatured again in 70 % formamide at 80°C for 2 min, dehydrated in a second ethanol series, and reprobed with the 45S rDNA (biotin-labeled) and 5S rDNA (digoxigenin-labeled) sequences simultaneously. Biotin-labeled and digoxigenin-labeled probes were detected using a fluorescein isothiocyanate (FITC)-conjugated antibiotin antibody (Vector Laboratories) and a rhodamine-conjugated antidigoxigenin antibody (Roche Diagnostics), respectively. Chromosomes were counterstained with 4,6-diamidino-2-phenylindole (DAPI) in VectaShield antifade solution (Vector Laboratories). Images were captured digitally using a CCD camera (QIMAGING, RETIGA-SRV, FAST 1394) attached to an Olympus BX63 epifluorescence microscope. Gray-scale images were captured for each color channel and then merged. Final image adjustments were done with Adobe Photoshop (Adobe Systems).
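The final imaging step, capturing a gray-scale image per color channel and merging them, amounts to stacking single-channel arrays into one RGB composite. The sketch below uses NumPy; the image size and channel assignment are assumptions about the acquisition, not details taken from the protocol.

# Merging single-channel FISH captures into an RGB composite (Python).
import numpy as np

h, w = 512, 512
rhodamine = np.zeros((h, w), dtype=np.uint8)   # 5S rDNA / digoxigenin
fitc = np.zeros((h, w), dtype=np.uint8)        # 45S rDNA / biotin
dapi = np.zeros((h, w), dtype=np.uint8)        # DAPI counterstain

# Each gray-scale capture becomes one channel of the composite (R, G, B).
merged = np.dstack([rhodamine, fitc, dapi])
assert merged.shape == (h, w, 3)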
Chromosome identification using watermelon oligo probes in Citrullus species and A. naudinianus
It is difficult to distinguish each chromosome in watermelon because the chromosomes are relatively small and morphologically similar. Two oligo probes were developed to identify watermelon chromosomes 4, 8 and 11. As expected, the two oligo probes produced bright FISH signals on cultivated watermelon chromosomes 4, 8 and 11 (Fig. 1a1, a3). We also performed FISH on metaphase chromosomes in all Citrullus species, including three C. lanatus subspecies, and A. naudinianus in order to investigate the potential of the oligo probes for cross-species hybridization. The two probes generated FISH signals with similar intensity on three pairs of chromosomes in C. lanatus subsp. mucosospermus (Fig. 1b1, b3), C. colocynthis (Fig. 1c1, c3), C. lanatus subsp. lanatus (Fig. 1d1, d3), C. ecirrhosus (Fig. 1e1, e3) and C. rehmii (Fig. 1f1, f3), and on two pairs of chromosomes in A. naudinianus (Fig. 1g1, g3). The counterparts of cultivated watermelon chromosomes 4, 8 and 11 in the other Citrullus species or subspecies and A. naudinianus were also designated as chromosomes 4, 8 and 11 based on the homeologous relationship with the corresponding chromosomes of watermelon.
Chromosomal distribution of 5S and 45S rDNA in Citrullus species and A. naudinianus
To determine the distribution of 5S and 45S rDNA in Citrullus species and A. naudinianus, following the FISH analysis using the oligo probes, the slides were washed and reprobed with 5S and 45S rDNA sequences simultaneously. The results are summarized in Table 1. Consistent with previous results [29], the genome of cultivated watermelon contained one 5S locus (red) and two 45S rDNA loci (green) on chromosomes 4 and 8 (Fig. 1a2, a4). The 5S rDNA locus (red) was colocalized with the 45S rDNA locus (green) on chromosome 8 (Fig. 1a2, a4). The number and location of 5S (red) and 45S (green) rDNA in the genomes of C. lanatus subsp. mucosospermus (Fig. 1b2, b4) and C. colocynthis (Fig. 1c2, c4) were identical to those in the genome of cultivated watermelon (Fig. 1a2, a4).
Discussion
The genus Citrullus belongs to the Benincaseae tribe of the Cucurbitaceae family [1]. The Benincaseae is a relatively large tribe with 204-214 species in 24 genera [33]. The phylogenetic relationships among species in this tribe have been assessed using morphological [34,35], geographical [4,36], biochemical [37] or nucleotide data [4,36,38,39]. In this study, we investigated the distribution of 5S and 45S rDNA in several genera located on different branches of the phylogenetic tree of the tribe Benincaseae [36]. We found that L. siceraria and cultivated watermelon had exactly the same rDNA distribution. Similarly, C. sessilifolia had an identical rDNA distribution pattern to D. palmatus. This was in accordance with the results of molecular phylogenetic studies, which revealed a close relationship between the genera Citrullus and Lagenaria, and between the genera Coccinia and Diplocyclos [4,36].
Consistent with previous studies [29,30], the present FISH analysis demonstrated that the rDNA gene loci showed wide differences among the Citrullus species. C. lanatus subsp. vulgaris (cultivated watermelon) had the same number and location of 5S and 45S rDNA loci as C. lanatus subsp. mucosospermus and C. colocynthis, but differed from C. lanatus subsp. lanatus. Our results are concordant with assumptions that the progenitor of the cultivated watermelon might be C. colocynthis [12,13] and that C. lanatus subsp. mucosospermus is the recent ancestor of C. lanatus subsp. vulgaris [29]. Several reports based on nuclear and plastid data of the genus Citrullus also confirmed evolutionary closeness among cultivated watermelon, C. lanatus subsp. mucosospermus and C. colocynthis. For example, analyses of sequence variation at cpDNA regions also confirmed the relationship, because some of the C. lanatus var. lanatus accessions shared a unique substitution at the trnE-trnT region with all C. colocynthis accessions [40]. Schaefer et al. [36] also confirmed that the watermelon and C. colocynthis evolved from a common ancestor. Our results also supported Chomicki and Renner's findings [4], which revealed that C. lanatus subsp. lanatus was not a wild form or progenitor of the cultivated watermelon but instead was a separate crop species, domesticated independently. In addition, the present cytogenetic analysis showed that C. lanatus subsp. lanatus and C. ecirrhosus are more closely related to each other than they are to other Citrullus species. Based on the current FISH analysis and previous molecular phylogenetic [4] and cytogenetic studies [29,30], a scheme of the phylogenetic relationships among the Citrullus species is given (Fig. 4).
One of the most striking observations from the present results was that the number, size and location of 5S and 45S rDNA signals were exactly identical between A. naudinianus and C. metuliferus in the genus Cucumis. Moreover, the two melon oligo libraries generated distinct FISH signals in A. naudinianus. The number and location of rDNA and oligo probe signals were completely identical between A. naudinianus and C. metuliferus. However, unambiguous signals were not detected when melon oligo probes were hybridized to chromosomes of Citrullus species or when watermelon oligo probes were hybridized to chromosomes of Cucumis species. In addition, A. naudinianus had 24 chromosomes while the other Citrullus species had 22 chromosomes. In the genus Cucumis, C. sativus is the only species with chromosome number 14, whereas the rest of the Cucumis species have a basic number of 24 [41]. Our previous study also confirmed that A. naudinianus had closer genetic affinity with the genus Cucumis than with Citrullus by comparative genomic in situ hybridization [42]. All the above results suggest that A. naudinianus is more closely related to Cucumis than to Citrullus or Acanthosicyos. Interestingly, both watermelon and melon oligo probes generated distinct FISH signals on A. naudinianus chromosomes, although cross-genus hybridization was not feasible using oligo probes in other species. Therefore, A. naudinianus could be a bridge between Citrullus and Cucumis.
Consistent identification of individual chromosomes in a species is the foundation for successful cytogenetic research. FISH signals are reliable markers for chromosome identification. The most common FISH probes used in chromosome identification have been repetitive DNA elements or large genomic DNA clones [43][44][45][46][47]. Large genomic clones often contain dispersed repetitive sequences that cause high background signal in FISH and cannot be used as chromosome-specific FISH probes [48]. The watermelon chromosomes are relatively small in size and morphologically similar. Repetitive DNA elements were unavailable for differentiating all watermelon chromosomes. Recently, eleven BAC clones were used to differentiate the eleven watermelon chromosomes and assign linkage groups to their corresponding chromosomes [49]. Furthermore, using the eleven BAC clones, 5S and 45S rDNA loci were assigned to chromosomes 4 and 8 in the genomes of cultivated watermelon and C. lanatus subsp. mucosospermus, and chromosomes 4, 8 and 11 in C. lanatus subsp. lanatus [29]. However, in the present study we found that one of the 5S rDNA sites was on an unidentified chromosome rather than on chromosome 11, by using oligo probes to identify chromosome 11 in C. lanatus subsp. lanatus.
Conclusions
The present study confirmed evolutionary closeness among cultivated watermelon, C. lanatus subsp. mucosospermus and C. colocynthis. Our results also supported that C. lanatus subsp. lanatus was not a wild form of the watermelon but instead was a separate crop species. In addition, the present cytogenetic analysis suggested that A. naudinianus was more closely related to Cucumis than to Citrullus or Acanthosicyos, but with a unique position, and may be a bridge between Citrullus and Cucumis. In future, we will develop chromosome-specific oligo probes for identifying each watermelon chromosome. Furthermore, bulked oligo probes will be designed to cover an entire watermelon chromosome. Thus, interchromosomal rearrangements and karyotype evolution among related Citrullus species can be revealed in more detail through cross-species chromosome painting. This information will be useful for the collection and utilization of genetic resources for cultivated watermelon improvement.
Ethics
Not applicable.
Consent to publish
Not applicable.
Availability of data and materials
All data are contained within the manuscript.
Abbreviations

FISH: fluorescence in situ hybridization.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions

YHH designed the study, performed the analysis and wrote the paper. YX, JMW and ZYL participated in the design of the study, analyzed the data and critically revised the manuscript. KPL, YXW, HZ, YW and XML carried out the experiments and acquisition of original data. KPL analyzed the data and participated in writing. All authors read and approved the final manuscript. | 2023-01-14T14:57:48.879Z | 2016-04-18T00:00:00.000 | {
"year": 2016,
"sha1": "5e17893989eff640293733d654b6720a7491a69d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12862-016-0656-6",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "5e17893989eff640293733d654b6720a7491a69d",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": []
} |
12254155 | pes2o/s2orc | v3-fos-license | Embryonic and adult isoforms of XLAP2 form microdomains associated with chromatin and the nuclear envelope
Lamin-associated polypeptide 2 (LAP2) proteins are alternatively spliced products of a single gene; they belong to the LEM domain family and, in mammals, locate to the nuclear envelope (NE) and nuclear lamina. Isoforms lacking the transmembrane domain also locate to the nucleoplasm. We used new specific antibodies against the N-terminal domain of Xenopus LAP2 to perform immunoprecipitation, identification and localization studies during Xenopus development. By immunoprecipitation and mass spectrometry (LC/MS/MS), we identified the embryonic isoform XLAP2γ, which was downregulated during development similarly to XLAP2ω. The embryonic isoforms XLAP2ω and XLAP2γ were located in close association with chromatin up to the blastula stage. Later in development, both embryonic isoforms and the adult isoform XLAP2β were localized in a similar way at the NE. All isoforms colocalized with lamin B2/B3 during development, whereas XLAP2β colocalized with lamin B2 and apparently with the F/G repeat nucleoporins throughout the cell cycle in adult tissues and cultured cells. XLAP2β was localized in clusters on chromatin, both at the NE and inside the nucleus. The embryonic isoforms were also localized in clusters at the NE of oocytes. Our results suggest that XLAP2 isoforms participate in the maintenance and anchoring of chromatin domains to the NE and in the formation of lamin B microdomains. Electronic supplementary material The online version of this article (doi:10.1007/s00441-011-1129-2) contains supplementary material, which is available to authorized users.
Introduction
Lamin-associated polypeptide 2 (LAP2) proteins belong to the family of LEM domain proteins associated with the inner nuclear envelope (NE) and nuclear lamina (Dorner et al. 2007;Schirmer and Foisner 2007;Wagner and Krohne 2007;Zaremba-Czogalla et al. 2011). They are alternatively spliced products of a single gene and act as integral membrane or nucleoplasmic proteins (Harris et al. 1994). Six LAP2 isoforms have been identified in mammals (α, β, γ, δ, ε, ζ). The LAP2 proteins are widely expressed and evolutionarily conserved in vertebrates with up to 90% sequence identity in the regions responsible for the function of the protein. The N-terminal part (187 amino acids [aa]), which is common to all LAP2 isoforms, contains the LEM domain (aa 111-152), a structural motif responsible for interaction with BAF (barrier to autointegration factor; (Furukawa 1999;Shumaker et al. 2001), and the LEM-like domain (aa 1-50) interacting directly with chromatin (Cai et al. 2001). The binding sites for lamin B (aa 298-373), the germ-cell-less (GCL) protein (Nili et al. 2001) and HA95 protein (Martins et al. 2003) lie in the variable region of LAP2. Depending on the splicing pattern and presence of the transmembrane domain in the C-terminus, LAP2 proteins are integral membrane (β, γ, δ, ε) or intra-nuclear proteins (α, ζ) and play diverse roles in the cell nucleus (Shaklai et al. 2008). LAP2 expression differs in different cell types: LAP2β and γ are found in somatic cells, whereas LAP2α is intensively produced in proliferating cells and is the only form present in mature sperm (Alsheimer et al. 1998).
LAP2 proteins play an important role in mammals, in the attachment of chromatin to the NE and nuclear lamina filaments during interphase and in nuclear reassembly after mitosis. LAP2 proteins interact with A and B type lamins and, through the LEM domain, also with BAF by interconnecting lamin filaments with chromatin inside the nucleus (LAP2α, LAP2ζ; Shaklai et al. 2008) and at the NE (LAP2β, LAP2γ, LAP2ε, LAP2δ). LAP2α-lamin A/C protein complexes are important for the retention of retinoblastoma protein in the cell nucleus (Gant et al. 1999;Markiewicz et al. 2002;Pekovic et al. 2007;Yang et al. 1997).
LAP2β interacts directly with the mouse transcriptional repressor protein GCL (Nili et al. 2001) and, together with its binding partners, is able to repress the activity of the E2F5-DP3 transcription factor. LAP2β also interacts directly, through the same region as GCL, with histone deacetylase 3 (HDAC3) and thus might be involved in the regulation of chromatin organization (Somech et al. 2005). The N-terminal part of human LAP2β comprising the LEM domain and lamin-binding region (aa 1-408), and the common domain alone (aa 1-187), are able to block the formation of pronuclei in vitro (Gant et al. 1999).
In Xenopus, five cDNA sequences coding for LAP2 proteins have been identified and translated in silico into several isoforms. Two LAP2 isoforms of 557 and 518/519 aa contain a transmembrane domain, whereas the isoform of 409 aa lacks this domain (Gant et al. 1999; Lang et al. 1999). Three different polypeptides have also been recognized in immunoblots with anti-human MAN serum and with antibodies generated against aa 255-429 of the cDNA clone named XLAP2β (see Fig. 1c for details of the domain/exon structure of the cloned XLAP2 cDNAs; note that the isolated cDNA clone 3 possesses neither a typical lamin-binding domain nor a transmembrane domain). The in vitro expression of XLAP2β cDNA results in the synthesis of a 66-kDa polypeptide, confirming that this protein is indeed XLAP2β (Lang et al. 1999). The somatic XLAP2β polypeptide is the only one found in adult animals, whereas polypeptides of 86 kDa and 40 kDa are present in oocytes and eggs (Lang et al. 1999), although no data on the expression of the 40-kDa polypeptide during embryogenesis have ever been shown. The N-terminal common fragment of XLAP2, when added to an in vitro nuclear assembly reaction, inhibits chromatin decondensation and nuclear growth, similarly to the human N-terminal fragment (Gant et al. 1999). A later report also demonstrated the coprecipitation of XLAP2β protein with lamins B1, B2 and A from Xenopus A6 cells. The common N-terminal domain of XLAP2 (aa 1-165) interacts with BAF and BAF-DNA complexes. Moreover, bacterially expressed Xenopus LAP2 cDNA clones 2 (ω), 4 (β) and 3 bind to BAF with different affinities (Shumaker et al. 2001). The embryonic XLAP2ω and/or XLAP2γ proteins interact in vitro with a spindle assembly factor, the TPX2 protein, and participate through this protein in the proper assembly of postmitotic nuclei in the Xenopus in vitro nuclear assembly system (O'Brien and Wiese 2006). Little or no data on the subcellular localization and developmental regulation of the expression and distribution of particular XLAP2 isoforms have been reported so far. Moreover, no precise data are available on the identification of particular XLAP2 isoforms and on the relationships between isolated cDNA clones and identified peptides. In addition, the exact location of XLAP2 proteins in the cell nucleus and at the NE during development and in adult cells is unknown.
In this study, we have taken advantage of new affinity-purified antibodies raised in our laboratory against the common N-terminal fragment (aa 1-165) of the XLAP2 proteins (Salpingidou et al. 2008). By mass spectrometry, we have identified two early-development-specific isoforms: XLAP2ω (86 kDa) and a 40-kDa XLAP2. We have demonstrated that the subcellular localization of the embryonic XLAP2 proteins is similar to that of the adult isoform (XLAP2β) and that all isoforms associate similarly with anaphase chromatin. We have also demonstrated that XLAP2β is present at the NE and on chromatin inside the nucleus in the form of clusters and microdomains. Similarly, both embryonic isoforms, viz. XLAP2ω and XLAP2γ, locate at the NE in clusters forming microdomains. All three isoforms colocalize with lamin B2, lamin B3 and, apparently, Nup62 (an F/G repeat nucleoporin) during development and throughout the cell cycle.
Plasmids and cDNAs
Plasmid pET23a coding for the 165 N-terminal amino acids of XLAP2 as a His-tag fusion protein was the kind gift of Prof. K. Wilson, Baltimore, USA. Sequence comparison was performed on the following XLAP2 cDNAs: XLAP2β (AN: Y17861), XLAP2ω (AN: AJ514937) and XLAP2 clones 2-4 (AN: AF048815-AF048817).

Fig. 1 Identification of XLAP2ω and XLAP2γ polypeptides immuno-isolated from Xenopus egg extract. a Immunoblot from the immuno-isolation experiment with control and anti-XLAP2 Igs; staining with XLAP2-specific antibodies. Of the total material, 10% was loaded onto each lane (arrowheads positions of the 86-kDa XLAP2ω and 40-kDa XLAP2γ bands in control egg extract in the C lane and of isolated XLAP2ω polypeptide in the IP lane, C control egg extract, Ig-H heavy immunoglobulin chain, Ig-L light immunoglobulin chain, L loaded egg extract, W wash, FT flow through, IP immunoprecipitated proteins, M molecular-weight markers). b Fragments of silver-stained gel containing protein samples from the pull-down reaction (90% of the material was loaded). Gel pieces from boxes were excised and subjected to mass spectrometry analysis (LC/MS/MS). Molecular masses of reference proteins (in kDa) are shown. c Representations of XLAP2 protein isoforms translated in vitro from cDNA sequences found in Xenopus: XLAP2 clone 2 (AN: AF048815); XLAP2 clone 3 (AN: AF048816); XLAP2 clone 4 (AN: AF048817, Gant et al. 1999); XLAP2β (AN: Y17861, Lang et al. 1999); XLAP2ω (AN: AJ514937, Schoft et al. 2003). Identical regions are shaded similarly. Numbers above each representation indicate the amino-acid positions of domains. Sequences marked with letters are putative Xenopus-specific exons (Gant et al. 1999), which are absent from mammals and are positioned between exons 5 and 6 (A) or 8 and 9 of the mouse cDNA (Berger et al. 1996). Tryptic peptides identified by using LC/MS/MS are given below each representation. The 86-kDa polypeptide was identified as the translation product of two cDNAs: XLAP2 clone 2 (AF048815) and XLAP2ω (AJ514937). The 40-kDa polypeptide was determined to be a member of the XLAP2 family (Y17861, AF048815, AF048816, AF048817 and AJ514937).
Cells, tissues and embryos
Xenopus XTC cells were grown in 54% L-15 Leibovitz medium containing 10% fetal bovine serum, 2 mM L-glutamine, 10 I.U./ml penicillin, 10 μg/ml streptomycin and 0.025 μg/ml amphotericin B at 22-26°C under normal air conditions. Tissue samples (3 mm in diameter) were dissected out from male frogs and either snap-frozen in liquid nitrogen and stored at -70°C (for biochemical studies) or fixed for microscopy. Tissue extracts were prepared from samples powdered in a Retsch mill homogenizer in liquid nitrogen, boiled in extraction buffer and recovered by centrifugation. Embryos were directly homogenized in SDS sample buffer and boiled. Embryonic stages were classified according to Nieuwkoop and Faber (1994).
Antibodies and antibody purification
The anti-XLAP2 serum was produced in rabbit by immunization with the bacterially expressed, affinity-purified N-terminal (aa 1-165) fragment of the XLAP2 protein. The specific immunoglobulins were purified from whole serum on antigen covalently bound to resin (Rzepecki et al. 1998).
Gel electrophoresis and immunoblotting
Proteins were separated on 10% or 12% SDS-gels by polyacrylamide gel electrophoresis (PAGE) and electrotransferred onto nitrocellulose filters. Optical density measurements of the protein bands in immunoblots were performed with the BIO-PROFIL Bio-1D Windows Application V99.01. Protein content in XLAP2 bands was normalized according to the actin content in each lane.
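Since the Bio-1D output format is not reproduced here, the loading-control normalization can be illustrated with a minimal Python sketch; all optical-density values below are hypothetical placeholders, not measured data:

```python
# Minimal sketch of loading-control normalization: each XLAP2 band
# optical density (OD) is divided by the actin OD of the same lane,
# making band intensities comparable across lanes. Numbers are invented.

xlap2_od = [0.82, 0.61, 0.15]   # hypothetical XLAP2 band densities per lane
actin_od = [0.90, 0.88, 0.85]   # hypothetical actin densities (loading control)

normalized = [x / a for x, a in zip(xlap2_od, actin_od)]
print(normalized)  # relative XLAP2 amounts, corrected for loading
```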
Isolation of Xenopus egg extract
Egg collection and extract preparation were carried out according to Salpingidou et al. (2008).
Extraction of XLAP2 from cell nuclei isolated from XTC cells

XTC cells that had been cultured for 72 h were trypsinized, pelleted at 800g and washed briefly with ice-cold phosphate-buffered saline (PBS). All preparation steps were performed at 4°C. The cell pellet was homogenized in 2 ml isotonic buffer consisting of 10 mM TRIS-HCl pH 7.5, 5 mM MgCl2, 50 mM NaCl, 250 mM sucrose, 0.1 mM phenylmethane sulphonylfluoride (PMSF) and 10 μl/ml inhibitor cocktail (Sigma) in a glass Potter homogenizer with a tight pestle, and the nuclei were recovered at 2000g. Equivalents of 6.6×10⁶ nuclei were extracted for 20 min with 200 μl of 10 mM TRIS-HCl buffer pH 7.5, either alone or containing 1% (vol/vol) Triton X-100, 1 M NaCl, 1% (vol/vol) Triton X-100 plus 1 M NaCl, or 6 M urea. Samples were fractionated by centrifugation at 10,000g for 10 min into supernatant and pellet. Supernatants were supplemented with 50 μl SDS sample buffer, whereas pellets were homogenized in 150 μl SDS sample buffer and analysed by Western blotting. Of the supernatants and pellets, 10% were resolved on SDS-PAGE gels, transferred onto nitrocellulose filters and probed for XLAP2 protein.
Immuno-isolation of XLAP2 isoforms from Xenopus XTC cells and tissues
Frozen Xenopus liver pieces were powdered at liquid-nitrogen temperature and incubated on ice with a lysis buffer consisting of PBS pH 7.4 containing 0.05% Tween 20, 1 mM dithiothreitol (DTT), 0.5 mM MgCl2, 0.2 mM PMSF and 10 μl/ml inhibitor cocktail (Sigma). After sonication, the lysate was cleared of cell debris by centrifugation at 12,000g. The resulting supernatant was supplemented with 5% SDS and 20 mM DTT and heat-denatured. Proteins were precipitated with 10% trichloroacetic acid. The precipitate was then resuspended in immunoprecipitation buffer (Rzepecki and Fisher 2000) and used for protein isolation. Xenopus XTC tissue-cultured cells, 1.5×10⁷ per test, were pelleted at 800g, washed, resuspended in lysis buffer and treated as above. The low-speed supernatant fraction from Xenopus eggs was incubated on ice with gentle inversion after the addition of 1% Triton X-100, 0.02% SDS, 0.3 M NaCl, 0.2 mM PMSF and 10 μl/ml inhibitor cocktail. Centrifugation at 17,000g, 4°C, was carried out to produce a supernatant for immunoprecipitation. Immuno-isolation with specific affinity-purified anti-XLAP2 immunoglobulins, immobilized on Protein A Agarose (Sigma), was performed as described previously (Rzepecki and Fisher 2000).
For mass spectrometry, proteins immuno-isolated from eggs were resolved by "clean" SDS-PAGE and the gels were silver-stained. Gel fragments were excised from the gels in the region containing the protein band of interest and the samples were subjected to LC/MS/MS analysis.
Microscopic procedures

XTC cells were grown on coverslips, fixed with 4% paraformaldehyde in PBS and permeabilized with 0.5% Triton X-100 in PBS or 40 μg/ml digitonin in PBS. Small pieces of Xenopus tissues and staged embryos were fixed with 4% paraformaldehyde in PBS containing 0.5% Triton X-100 and paraffin-embedded. Tissue sections were prepared for immunohistochemistry and immunofluorescence. For imaging, two fluorescence microscopes were used: an LSM510 META confocal microscope with an FCS system (Zeiss) and an Olympus IX70 fluorescence microscope with a two-channel confocal laser scanning unit (FV500). Any brightness and contrast adjustments were performed in Adobe Photoshop or Zen 2007 (Zeiss).
Electron microscopy
Transmission electron microscopy (TEM) and immunogold labelling were performed as described previously (Salpingidou et al. 2008). Affinity-purified anti-XLAP2 IgGs were used for the immunogold staining of tissues, cultured cells and NEs. Images of somatic tissues were taken with a Jeol B 100 transmission electron microscope at magnifications of ×20,000 and ×40,000. XTC cells were either embedded in Araldite (Agar Sciences) for TEM or prepared for cryosectioning and immunogold labelling. XTC samples were examined under a Hitachi H7600 transmission electron microscope.
For scanning electron microscopy (SEM), oocytes were dissected out from hormone-stimulated female frogs. The nuclei were manually isolated, placed on silicone chips and processed for immunogold labelling as described by Allen et al. (2007). SEM was performed on Hitachi S-5200.
Statistical analyses of clusters in immunogold-labelled NEs of Xenopus oocytes examined by SEM

From each micrograph, immunogold label coordinates were digitalized by using Scion Image software and saved as data files with two variables (X and Y coordinates). The left, right, top and bottom boundaries were subsequently determined (maximum and minimum of the coordinate variables) and another data set with the same number of observations was created, with the X and Y coordinates being randomly assigned values from the uniform distribution within the same boundaries as in the original file. The basic logic behind this methodology is as follows: if the tendency of the empirical (observed) points to form clusters is stronger than that of points with randomly assigned coordinates (theoretical points), then a general clustering tendency exists in the observed data. The major obstacle in these statistical analyses is that the LAP2 signal is excluded from the nuclear pore complex (NPC) area, narrowing the space available for LAP2. We overcame this problem by introducing the same distance values between labels in the control as in the original micrograph. Data were analysed with SPSS 17 software by three methods: hierarchical clustering, the K-means method and Bacher's method. The statistical significance of the cluster analyses was calculated by using the following tests: the one-sided two-sample t-test for means, the non-parametric Z-test (Kolmogorov-Smirnov test) and the U-test (Mann-Whitney). Statistical significance was assumed at values of P<0.05.
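The randomization control described above can be reproduced outside SPSS. The sketch below is a generic illustration, not the authors' SPSS workflow: it draws the same number of points uniformly at random inside the observed bounding box and compares the nearest-neighbour distances of observed versus control labels with the Kolmogorov-Smirnov and Mann-Whitney tests; all coordinates in the example are fabricated.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import ks_2samp, mannwhitneyu

def nearest_neighbour_distances(points):
    """Distance from each point to its nearest neighbour."""
    tree = cKDTree(points)
    dist, _ = tree.query(points, k=2)   # k=2: the first hit is the point itself
    return dist[:, 1]

def clustering_test(observed_xy, seed=0):
    """Compare observed label coordinates with a uniform random control
    drawn inside the same bounding box (the null model described above)."""
    rng = np.random.default_rng(seed)
    lo, hi = observed_xy.min(axis=0), observed_xy.max(axis=0)
    control_xy = rng.uniform(lo, hi, size=observed_xy.shape)

    d_obs = nearest_neighbour_distances(observed_xy)
    d_ctl = nearest_neighbour_distances(control_xy)

    # Shorter nearest-neighbour distances than in the random control
    # indicate a general clustering tendency in the observed labels.
    ks = ks_2samp(d_obs, d_ctl)
    mw = mannwhitneyu(d_obs, d_ctl, alternative="less")
    return ks.pvalue, mw.pvalue

# Example with fabricated coordinates (two tight clumps of labels):
pts = np.vstack([np.random.default_rng(1).normal(10, 1, (20, 2)),
                 np.random.default_rng(2).normal(40, 1, (20, 2))])
print(clustering_test(pts))  # both P-values should fall well below 0.05 here
```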
Statistical analyses of immunogold labelling of Xenopus XTC cells examined by TEM
The relative labelling intensity for XLAP2β in four different regions inside the cell nucleus of XTC cells was calculated from cryosectioned and immunogold-labelled XTC cells according to the method described in Mayhew (2005). In total, 517 immunogold labels were counted from eight micrographs at the same magnification. The significance of the relative labelling intensity (RLI) was assessed by Student's t-test and was P<0.05 for all regions. The areas of the entire nucleus, the NE and the nuclear invaginations were calculated from the middle Z-section of confocal microscope images. The areas of the remaining two regions were calculated proportionally, also from the middle Z-section of the confocal microscope image, by assigning them the nucleus area remaining after subtraction of the NE and invagination areas. For the calculations of areas from confocal images, we used Zen 2008 software for the Zeiss LSM 510 confocal microscope.
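In Mayhew's (2005) scheme, the RLI for a compartment is simply the observed fraction of gold labels divided by the fraction expected from that compartment's share of the reference area. A minimal sketch follows; the counts and areas are invented placeholders, since only the totals are reported here:

```python
# Relative labelling intensity (RLI) in the sense of Mayhew (2005):
# observed fraction of gold labels in a compartment divided by the
# fraction expected from the compartment's share of the total area.
# Counts and areas below are placeholders, not the paper's raw data.

regions = {
    # region: (gold label count, area in arbitrary units)
    "chromatin at NE":                (260, 20.0),
    "chromatin at NE invaginations":  (60,  6.0),
    "intranuclear chromatin":         (150, 30.0),
    "nucleoplasm (no chromatin/NE)":  (47,  44.0),
}

n_total = sum(n for n, _ in regions.values())
a_total = sum(a for _, a in regions.values())

for name, (n, a) in regions.items():
    rli = (n / n_total) / (a / a_total)   # RLI > 1 means enrichment
    print(f"{name}: RLI = {rli:.2f}")
```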
Results
Characterization of antibodies and identification of the 40-kDa polypeptide as the XLAP2γ isoform by using the LC/MS/MS method

We generated rabbit antibodies against the common N-terminal 1-165-aa fragment of the XLAP2 protein. The specific immunoglobulins were affinity-purified by using the XLAP2 N-terminal fragment (Salpingidou et al. 2008; Supplemental Fig. 1). With this antibody, the previously described XLAP2β protein (66 kDa) was immunoprecipitated from Xenopus liver and XTC cultured cells, as confirmed by Western blot (not shown).
In order to identify additional XLAP2 polypeptides and link them to known cDNAs, we employed immunoprecipitation and mass spectrometry (LC/MS/MS). Proteins were first immunoprecipitated from Xenopus egg extracts by using the XLAP2 antibody and then immunoblotted with anti-XLAP2 (Fig. 1a) to detect possible LAP2 isoforms. Two polypeptides of 86 kDa and 40 kDa were consistently detected. Gel fragments containing these two proteins (Fig. 1b) were further analysed by LC/MS/MS. Our mass spectrometry data identified the 86-kDa polypeptide as clone 2/ω and the 40-kDa polypeptide as the translation product of a previously uncharacterized cDNA (see Discussion, Fig. 1c). According to the existing LAP2 nomenclature, the 40-kDa isoform should be named LAP2γ. The identified LAP2γ peptide is not a degradation product of XLAP2ω, since solubility assays confirmed that this protein contains the transmembrane domain, which is located at the C-terminal end of XLAP2γ (and of XLAP2ω). Our antibodies are N-terminus-specific, so a degradation product of that size, lacking the N-terminal part of the protein carrying all the epitopes, would not be recognized by them. Moreover, the 40-kDa protein possesses a transmembrane domain and reacts with LEM-domain antibodies, as has been described previously (Lang et al. 1999) in Xenopus oocytes and unfertilized eggs.
Notably, the clone 3 cDNA, which is the only known candidate for encoding the XLAP2γ protein, lacks the TM domain, exon C and the lamin-binding region characteristic of the major human LAP2 proteins. This confirms that it cannot be translated into the XLAP2γ protein.
XLAP2ω and XLAP2γ are expressed during early stages of development, whereas XLAP2β is expressed late in development and in adult tissues

Western blot analysis of XLAP2 protein levels during development demonstrated the presence of at least three differentially expressed isoforms (Fig. 2a). XLAP2ω and XLAP2γ occur in eggs and are expressed during development up to stage 41, with a maximum around stage 20. The continuous presence of XLAP2γ along with XLAP2ω is notable. Expression of the 66-kDa XLAP2β isoform is only observed from the neurula stage onwards and its levels reach a maximum from stage 34. In order to confirm the earlier observation that XLAP2β is the sole isoform after stage 41, we analysed stages 42 to 52 by using a chemiluminescent method to increase the sensitivity of Western blot detection (Fig. 2a, black and white section). This experiment confirmed that XLAP2ω and XLAP2γ were not present at later developmental stages.
Subcellular localization of XLAP2 proteins during development and in adult tissues
Immunofluorescence analyses indicated that the embryonic XLAP2 proteins (ω and γ) localized to the chromatin throughout the cell cycle during early development (up to the gastrula stage). At the morula stage, a significant amount of XLAP2 was located around the chromatin, frequently forming cloud-like structures inside the cell nucleus (Fig. 2b, arrow). XLAP2 was distributed irregularly but in an NE-like fashion in the cell nucleus (Fig. 2b, inset). At later stages of development, both embryonic XLAP2 isoforms were located typically at the NE, as easily visible at stage 20 (Fig. 2b) and through later stages (Fig. 2d, e). Irregular but complete NEs were noticeable at the blastula stage, with regular NEs at the gastrula stage (Fig. 2b'). Detailed immunofluorescence studies revealed that both embryonic XLAP2 isoforms were associated with karyomeres in interphase nuclei at the morula stage. Mitotic chromatin was also associated with embryonic XLAP2 proteins (Fig. 2c, 1st anaphase and 2nd interphase, respectively). From the blastula stage, both embryonic XLAP2 proteins were localized to the NEs that had formed around the interphase chromatin.

Fig. 2 Developmentally regulated expression and subcellular localization of XLAP2 isoforms in Xenopus. a Immunoblot analysis of XLAP2 proteins during development. Positions of the three major protein bands representing XLAP2 isoforms are marked, together with an actin band used as a loading control. The embryonic XLAP2 proteins, viz. XLAP2ω and XLAP2γ, are the only two isoforms present up to the gastrula stage, showing an accumulation of protein until stage 22 followed by a steady decrease up to stage 41 and disappearance from stage 44 (a, right). Expression of the somatic XLAP2β isoform is first detected at the gastrula stage and increases significantly from stage 28. The additional protein band of 76 kDa may be a degradation product of the 86-kDa isoform. Molecular masses of reference proteins (in kDa) are marked. b-f Subcellular localization of XLAP2 protein isoforms during Xenopus development and in XTC cells. Paraffin sections from Xenopus embryos and adult tissues were probed for XLAP2 (b, c) or costained for XLAP2 (red) and either lamin B2/B3 (d) or lamin B2 (e, f; green). DNA was stained with 4,6-diamidino-2-phenylindole (DAPI; blue). Embryonic samples show autofluorescence (mostly from the yolk platelets) that reveals the shape and size of each cell (e.g. easily seen in the red and blue channels from the morula to the gastrula stage). c In morula-stage embryos, XLAP2-containing fractions are associated with mitotic chromosomes (c.1) and karyomere formation is seen around individual chromosomes (c.2). Note the position of embryonic XLAP2 in membrane-like structures on the morula chromatin (arrow), the typical cluster-like location on the chromatin (arrowhead) and the XLAP2 cluster outside the chromatin (double arrowhead). In somatic cells, XLAP2β aggregates in between separating chromatids at early anaphase (c.3) and then associates with peripheral regions of chromatin in late anaphase (c.4; arrows peripheral regions of chromatids with associated XLAP2, arrowheads chromosome core regions not yet associated with XLAP2). d-f The XLAP2 proteins colocalize with B-type lamins in stage 26-28 (d) and stage 44 (e) Xenopus embryos and in adult tissues (f). Images represent brain tissue. Bars 100 μm (b, morula), 20 μm (b, blastula-stage 20), 5 μm (c), 20 μm (d), 10 μm (b', e, f)
The amounts of the embryonic XLAP2 proteins diminished during development but they still associated with mitotic chromatin. From stage 20, XLAP2ω and XLAP2γ localized generally to the NEs and colocalized with lamins B2/B3 and lamin B2 in all tissues tested (e.g. brain, ectoderm and somites; not shown). As demonstrated in Fig. 2d-f, XLAP2 proteins occurred in the brains of stage-26 and stage-44 embryos (Fig. 2d, e, respectively) and of adult Xenopus (Fig. 2f). During early anaphase in XTC tissue-cultured cells, XLAP2β was localized in between the chromatids and, in late anaphase, associated with the chromatin (Figs. 2c, 3, 4). The association of XLAP2β with peripheral chromatin was readily visible (Fig. 2c, arrows), whereas internal regions were still only weakly bound by XLAP2β (Fig. 2c, arrowheads). Detailed immunofluorescence analyses of the cell-cycle-dependent distribution of the XLAP2β isoform in XTC cells also revealed complete colocalization with lamin B2 (Supplemental Fig. 2). Punctate labelling of NPCs with m414 antibodies (anti-Nup62 and other F/G repeat nucleoporins) overlapped with the smooth, continuous LAP2 labelling of the NE at interphase (Supplemental Fig. 2).
XLAP2β is localized to the inner nuclear membrane of the NE but also to chromatin inside the cell nucleus and to NE invaginations

XLAP2β was ubiquitously expressed in all adult tissues tested but was absent from oocytes and eggs, in which the ω and γ isoforms were expressed (see Supplemental Fig. 1b). The location of the XLAP2β protein in brain and in muscle cells is demonstrated by TEM after immunogold labelling in Fig. 3a, b. XLAP2β was located mostly in heterochromatin regions next to the NE. Nevertheless, a small fraction of the protein (typically between 10% and 30% of the total label) was detected within chromatin microdomains inside the nucleus. This interpretation was supported by the immunofluorescence confocal microscopy data from brain tissue (Fig. 2f), in which XLAP2β was located predominantly in the NE of ependymocytes and glial cells (see also Supplemental Fig. 1e). As illustrated in Fig. 3b, immunogold labelling of the nuclei of muscle and muscle satellite cells demonstrated labelling of the heterochromatin regions next to the NE and at some distance from the NE. Solubility assays on XTC cells (Fig. 3e) and on liver and brain tissues (not shown) indicated that the only protein detected (66 kDa) behaved as an integral membrane protein, suggesting that the fraction of XLAP2β inside the nucleus possesses a transmembrane domain.
To resolve the question of the presence of XLAP2β in the nuclear interior, we performed immunogold labelling on tissue-cultured XTC cells serving as a model of somatic tissues. Immunogold labelling indicated that XLAP2β was associated with heterochromatin adjacent to the NE (Fig. 3f-j, arrowheads), although some clusters of XLAP2 protein were also present in the heterochromatin inside the nucleus (Fig. 3f-j, double arrowheads). Only a few of these were accompanied by membranes of NE invaginations in osmium-stained cryosections from XTC cells (see also Supplemental Fig. 3f, demonstrating a cluster of XLAP2 protein on chromatin inside the nucleus and its location in relation to NE membranes). In parallel, we performed confocal microscopy on XTC cells; this confirmed that the predominant fraction of XLAP2 was associated with the NE but that a small fraction of the protein was also detected inside the nucleus (see the three strong signals in the central Z-sections of Fig. 3d). These particular intra-nuclear signals probably represented long invaginations of the NE, because the signal was present in consecutive Z-sections from the NE and nuclear periphery to the centre of the nucleus. Thus, at least part of the intra-nuclear XLAP2 protein was associated with NE invaginations. A small fraction of XLAP2β protein, presumably associated with intra-nuclear chromatin, also seemed to be dispersed throughout the interior of the cell nucleus. This particular fraction, which could be visualized at early prophase, was located between the condensing chromatids (see Fig. 3c, arrowheads).
We further investigated XLAP2β locations by using statistical analyses. We analysed the distribution of immunogold-labelled XLAP2β protein in cryosections from XTC cells by using TEM. We examined the distribution of immunogold label (n=517 spots) from eight micrographs, divided the labels into four groups and calculated the RLI compared with the control, i.e. randomly generated labelling of the entire cell nucleus (see Materials and methods). The chosen regions were: chromatin at the NE, chromatin inside the nucleus associated with the NE, chromatin inside the nucleus not associated with the NE, and the nuclear interior with no chromatin and no NE. Table 1 reveals that the XLAP2β distribution is not random and is always associated with chromatin. However, a significant portion of the protein is associated with chromatin distant from the NE, inconsistent with a location in the membrane. Moreover, 72% of the calculated label in the first three groups is found in clusters of four or more labels, suggesting a microdomain-like distribution of XLAP2β both at the NE and on the intra-nuclear chromatin. In order to support this finding, we used confocal microscopy. We measured the intensity of the fluorescent XLAP2-specific signal in 0.5-μm-thick optical sections through the centre of five nuclei. The calculated relative intensities of the fluorescent signal from the NE, from the NE invaginations and from the nuclear interior were 165±11, 159±16 and 45±14, respectively. This correlated with the RLI values calculated from the TEM micrographs.
Two types of structures containing XLAP2β and lamin B2 were present inside the cell nuclei of XTC cells: typical invaginations with a visible lumen (Fig. 4a, arrow) and other structures without a visible lumen (Fig. 4a, arrowheads). XLAP2β in the NE, at the upper and the lower part of the cell nucleus, was not homogeneously distributed (Fig. 4b). This could represent a cluster-like structure containing XLAP2 at the NE. Some of these structures spanned the cell nucleus (e.g. a typical invagination and "non-lumenal" structures, Fig. 4b, arrows and arrowheads, respectively). In all invaginations with a visible lumen, XLAP2 colocalized with lamin B2 (Fig. 4d); the lumenal part was free of DNA (Fig. 4d, e) and stained with dihydrocholecalciferol (Fig. 4e), indicating the endoplasmic-reticulum-like structures present inside.

Fig. 3 Immuno-gold staining of ultrathin sections of embedded tissues with antibodies against XLAP2, followed by incubation with protein A conjugated to 8-nm gold particles (Cyt cytoplasm, NE nuclear envelope, Nuc nucleus). In brain (a) and muscle (b) tissues, XLAP2 (black dots) localizes mostly at the inner nuclear membrane (arrows) and peripheral heterochromatin (arrowheads) but is also present in intra-nuclear regions (double arrowheads). b Note that XLAP2 occurs both in the nucleus of the myotube (top) and in the muscle satellite cell (bottom). Bar 500 nm. c In early prophase of XTC cells, costaining for XLAP2 (green) and phospho-histone H3 (red) reveals that XLAP2β is located not only at the NE, but also in intra-nuclear loci (arrows). Bar 5 μm. d Confocal studies of interphase XTC tissue-cultured cells stained for XLAP2 demonstrate that XLAP2β is present in the NE and in invaginations of the nuclear membrane (strong dots inside the nucleus). Note that the region lying on the upper left side of the nucleus corresponds to the nucleolus area (see merge). Bar 10 μm. e Biochemical properties of the XLAP2β protein from XTC tissue-cultured cells. Analysis of XLAP2β polypeptide solubility in isolated cell nuclei, extracted with 10 mM TRIS buffer alone or containing Triton X-100 (TX), NaCl, Triton X-100 plus NaCl (TX+NaCl), or urea (6M Urea). Equivalent amounts of the supernatants and pellets were resolved on SDS-PAGE gels, transferred onto nitrocellulose filters and probed for XLAP2 protein. XLAP2-66 displayed the properties of an integral membrane protein involved in interactions with other protein components of the nuclear lamina, because it was solubilized in the presence of Triton X-100 together with NaCl. f-j Transmission electron microscopy (TEM) studies of XTC tissue-cultured cells confirm the NE localization of XLAP2 proteins. XTC cells were grown in culture, embedded in gelatin and cryo-sectioned for immuno-gold TEM. XLAP2 was detected with antibodies against XLAP2 followed by Protein A coupled with 5-nm gold particles. Gold-labelled XLAP2 was mostly localized at the nuclear periphery and inner nuclear membrane (INM, arrowheads) but intra-nuclear clusters (double arrowheads) were also present (f, j). g Higher magnification of the boxed area in f (Cyt cytosol, ONM outer nuclear membranes, Nuc nucleus). Bars 100 nm
Three-dimensional reconstructions of series of confocal images of a single nucleus (Fig. 4f, g) confirmed that such lumenal structures connected opposite sides of the nuclear membrane, that they were free of DNA (Fig. 4g), but that they contained F/G-repeat nucleoporins (Fig. 4f), suggesting the presence of NPCs on these structures. Only a few of the many internal non-lumenal structures containing XLAP2 also contained lamin B2 (Fig. 4c, d).
This could represent the internal chromatin-bound clusters of XLAP2 detected in TEM experiments.
Embryonic XLAP2 isoforms locate at microdomains at the inner nuclear membrane of the oocyte NE

Analyses of the distribution of the embryonic XLAP2 isoforms were performed by examining manually isolated oocyte nuclei with SEM following immunogold labelling. XLAP2ω and XLAP2γ were located at the inner NE membrane outside of the NPCs (Fig. 5a, b, yellow dots). This was in agreement with the detailed analyses of the TEM immunogold data from adult tissues and XTC cells, in which the XLAP2β location was not associated with NPCs (see Fig. 3a, b, f-j). The distribution of the embryonic XLAP2 isoforms was not uniform (see Fig. 5c, d) and could reflect the distribution of the nuclear lamina filaments underlying the inner nuclear membrane of the NE.
We performed hierarchical cluster analyses based on the Euclidean distance between labels and the single-linkage method, followed by a calculation of the potential cluster number by using the dendrogram method, counting at least three labels as a potential cluster (see Materials and methods; see also Supplemental Fig. 2). Statistical analyses of 356 immunogold labels from six micrographs revealed a non-random distribution of the label (Student's t-test, P<0.05). Our statistical approach allowed us to identify the most probable number of clusters in each micrograph. The example presented in Supplemental Fig. 3a contains five clusters.
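An equivalent single-linkage analysis can be sketched with SciPy. This is a generic re-implementation under assumed parameters (e.g. the cut distance), not the exact SPSS procedure used here; clusters are read off by cutting the dendrogram at a chosen distance and keeping only groups of at least three labels:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def count_clusters(xy, cut_distance, min_size=3):
    """Single-linkage hierarchical clustering of label coordinates.
    A group counts as a cluster only if it holds >= min_size labels,
    mirroring the 'at least three labels' rule used above."""
    Z = linkage(xy, method="single", metric="euclidean")
    labels = fcluster(Z, t=cut_distance, criterion="distance")
    sizes = np.bincount(labels)
    return int(np.sum(sizes >= min_size))

# Fabricated example: three clumps of gold labels plus background noise.
rng = np.random.default_rng(0)
clumps = [rng.normal(c, 2.0, (10, 2)) for c in (20, 60, 100)]
noise = rng.uniform(0, 120, (15, 2))
xy = np.vstack(clumps + [noise])
print(count_clusters(xy, cut_distance=8.0))  # number of clusters (>=3 labels) at this cut
```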
Discussion
The presence of several (up to five) isoforms of the XLAP2 protein has long been proposed in Xenopus. Since the discovery of the first XLAP2 cDNA clones, several reports have suggested the existence of the XLAP2 proteins β and ω (Lang et al. 1999) and of clones 2, 3 and 4 (Gant et al. 1999). Our mass spectrometry data (Fig. 1) have confirmed the identity of the XLAP2-86 polypeptide as XLAP2ω/clone 2 and of the 40-kDa polypeptide as XLAP2γ, but without answering the question of its original cDNA. In addition to the mass spectrometry results, we have found that our IgG precipitates XLAP2γ from egg extract. XLAP2γ also fractionates with the MP2/NEP-B (NE vesicle population B) fraction of membranes from the low-speed supernatant from eggs (Salpingidou et al. 2008) and its solubility, when extracted from eggs and from the membrane fraction of eggs, is identical to the solubility of XLAP2ω and XLAP2β from XTC cells (Fig. 3).
The expression pattern of the XLAP2 isoforms during Xenopus development, as demonstrated in our experiments, differs slightly from previous data (Lang et al. 1999). First of all, we demonstrate the presence of two (instead of one) early-development-specific isoforms: XLAP2ω and XLAP2γ (Fig. 2). In zebrafish, ZLAP2ω, the largest LAP2 isoform (84 kDa), is the only embryonic isoform discovered previously (Schoft et al. 2003) and it is also replaced late in development by the adult ZLAP2β isoform. Another difference is that, in Xenopus and zebrafish, all identified LAP2 proteins are integral membrane proteins, whereas in mammals at least two LAP2 proteins are "soluble" nucleocytoplasmic proteins.
The embryonic XLAP2 isoforms (ω and γ) are associated with chromatin and separated karyomeres at the morula and blastula stages (Fig. 2c.1, c.2). This resembles the intra-cellular localization of the zebrafish ZLAP2ω protein observed in early embryos up to the late gastrula stage (Schoft et al. 2003) and suggests similar roles for the embryonic LAP2 isoforms in lower vertebrates. The association of XLAP2ω and γ with mitotic chromatin decreases gradually during development up to the late gastrula stage. Notably, XLAP2β appears at this particular stage of development and is mostly associated with the NE and NE-associated chromatin. This confirms the in vitro data from native-gel shift assays showing that bacterially expressed LAP2 cDNA clones 2 (ω), 3 and 4 (β) bind to BAF and chromatin with different affinities (Shumaker et al. 2001). Xenopus LAP2 isoforms have additional exons, designated A, B and C (Fig. 1c), compared with mammalian LAP2s. XLAP2β cDNA lacks exon A. This may contribute to the different properties of the particular XLAP2 proteins in binding to chromatin (Shumaker et al. 2001).

Fig. 4 Intra-nuclear localization of XLAP2 as correlated to nuclear antigens and lipid membranes. XTC tissue-cultured cells were fixed with paraformaldehyde and probed for XLAP2 (green in b, red in e) or costained for XLAP2 (red in a, c, d) and lamin B2 (LAM B2, green in a, yellow in c, d) or costained for XLAP2 (green, f, g) and either p62-F/G nucleoporins (p62-Nup F/G, Ab 414, red, f) or phosphorylated histone H3 (red, g). Cellular membranes were visualized with dihydrocholecalciferol (DHCC) and DNA was counterstained with DAPI. a Various phenotypes of cultured cells presenting large nuclear channels (arrow) or nuclear dots (arrowheads) in which XLAP2 colocalizes with lamin B2. b Large channels (arrows) and smaller invaginations (arrowheads) are present on consecutive Z-sections of the cell nucleus from the bottom to the top. c, d XLAP2 staining correlates with lamin B2 and lipid membranes in the NE. The nuclear territory contains nuclear dots/invaginations that colocalize with lamin B2 (arrowheads) or show a single XLAP2 signal (double arrowheads), forming channels (arrow) concomitant with the membrane signal (DHCC). e Cell with a large nuclear channel lacking DNA but possessing a membrane signal (arrow). f, g Large nuclear invaginations or channels running from the bottom to the top of a nucleus. Images represent a single Z-section (0.25 μm thick) from the nucleus, with the lines marking the surface of the Z-axis cross section through the nucleus. The cross sections from seven consecutive Z-stacks are shown bottom and right. Note that the XLAP2 signal colocalizes with p62-F/G-repeat-containing nucleoporins (f) or with branches of the DNA-free tunnel in the nucleus (g). Bars 10 μm (a, b, e), 5 μm (c, d)

Taken together, the XLAP2ω/clone 2 and XLAP2γ isoforms are expressed in early embryos and associate with interphase and mitotic chromatin, with XLAP2ω/clone 2 possessing the highest binding activity with BAF and chromatin of all the isoforms examined; this might reflect an evolutionary adaptation to the extremely rapid cell divisions at this stage of development. In contrast, XLAP2β expression starts when the cell divisions are slower and interphase is more pronounced, concomitant with its nine-fold lower affinity of binding with BAF (Shumaker et al. 2001).
Fig. 5 Analyses of the distribution of XLAP2ω and XLAP2γ proteins on the inner surface of the nuclear envelope (NE) of the Xenopus oocyte by using immunogold labelling and scanning electron microscopy (SEM). Manually isolated nuclei from frog oocytes were transferred onto silicone chips. The inner side of the NE was manually exposed by gently removing the nuclear content and processing for SEM immunogold procedures. XLAP2 was detected with antibodies against XLAP2 followed by Protein A coupled with 10-nm gold particles (yellow dots gold particles associated with the immunocomplex bound to XLAP2 protein, NPC nuclear pore complexes). a, b Embryonic XLAP2 proteins localize non-randomly and show a tendency to form clusters and microdomains at the inner nuclear membrane. They also show a tendency to locate longitudinally; this might reflect the shape of the nuclear lamina filaments to which they are initially attached. c Typical cluster of XLAP2 proteins. d Typical region lacking XLAP2 protein. Bars 100 nm (a, b), 50 nm (c, d)

Table 1 Relative labelling intensity (RLI) of XLAP2β in four nuclear regions, calculated from the cryosections shown in Fig. 3f-j. Immunogold label (n=517) was counted from eight micrographs at the same magnification. The significance of the RLI was P<0.05 for all regions (Student's t-test). The area of the entire nucleus, nuclear envelope (NE) and nuclear invaginations was calculated from the middle Z-section of the confocal microscope image in Fig. 3c. The area of the remaining two regions was calculated proportionally, also from the middle Z-section of the confocal microscope image, after subtraction of the NE and invaginations (n/d not determined)

In adult tissues and in XTC cells, a fraction of XLAP2β has also been detected inside the nucleus, usually in clusters on heterochromatin but with no visible traces of NE invaginations (Figs. 3, 4, Table 1). Immunogold labelling of XTC cells (Fig. 3f-j) suggests that this intra-nuclear fraction is much greater than the possible amount of proteolytic degradation products detected in XTC cells. This suggests either the presence of a hydrophobic (lipid) fraction associated with the surface of chromatin, as first reported some time ago (Maraldi et al. 1992), or tubular NE invaginations (Ellenberg et al. 1997; Fricker et al. 1997; Gehrig et al. 2007), or both. The last seems the most probable, since our confocal microscopy data indicate the presence of an XLAP2β fraction inside the cell nucleus. This is especially visible during prophase, when the chromatin is condensing and relocating some chromatin components or adjacent proteins (Fig. 3c). Based on the TEM immunogold results (Table 1) and three-dimensional analyses of confocal sections through XTC cell nuclei (Fig. 4), this intra-nuclear fraction of XLAP2 can be divided into a membrane fraction associated with NE invaginations (and chromatin) and a non-membrane fraction associated with chromatin.
The calculated RLI data confirm that only some of the clusters of intra-nuclear XLAP2β are connected with NE invaginations (Table 1). The difference in labelling intensity between chromatin at the NE and at the invaginations might be the result of an incorrect assumption about the number of invaginations in single nuclei (three per nucleus, as in Fig. 3d). If we take into account only those invaginations with a visible lumen, make the appropriate corrections in the RLI calculations and assume one invagination per cell nucleus, we obtain an RLI value for chromatin at the invaginations similar to the value for NE-associated XLAP2. This correlates with the equal intensity of the signal from XLAP2β-stained NE and from XLAP2β-stained NE invaginations detected by confocal microscopy (Fig. 4b, f). The association of all isoforms of XLAP2 in clusters on chromatin and/or the NE is of great interest. It confirms that XLAP2 proteins are involved in the organization of chromatin structure by anchoring large protein complexes and thereby modifying chromatin structure on a domain-like scale rather than at the level of a single transcription unit. The second implication is the possible involvement of LAP2 protein clusters in forming discrete microdomains at the NE composed of either lamin A or lamin B. Newly imported into the nucleus, lamin A and B molecules are not incorporated uniformly into the nuclear lamina but form separate microdomains (Furukawa et al. 2009), reflecting the separate microdomains of lamin A and B present in the nuclear lamina (Haraguchi et al. 2008). Our data indicate that these lamin A and lamin B microdomains are formed and stabilized by interactions with LEM domain proteins of the inner nuclear membrane, such as XLAP2. Since we have observed full colocalization between XLAP2 isoforms and B-type lamins (Supplemental Fig. 2), we suggest that part of the lamin B microdomains in the Xenopus NE might be formed and maintained by LAP2 proteins (e.g. Fig. 4c, d). XLAP2 proteins have been found to coprecipitate with A- and B-type lamins in multiprotein complexes, so the specificity of a particular microdomain formed by XLAP2 might be determined by other proteins of the complex specific for either A- or B-type lamins. Our data also suggest that XLAP2 proteins are involved in linking epigenetically marked chromatin in a similar way to other integral membrane proteins of the NE inner membrane, e.g. the lamin B receptor protein (Makatsori et al. 2004). | 2014-10-01T00:00:00.000Z | 2011-02-24T00:00:00.000 | {
"year": 2011,
"sha1": "927670314830b53029b0e21260854212c91866af",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00441-011-1129-2.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "e4091805702357a3e59883c701612fe5903850a8",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
89347731 | pes2o/s2orc | v3-fos-license | Identification of strain types of some Beet necrotic yellow vein virus isolates determined in Northern and Central Parts of Turkey
Received: 17.11.2015; Accepted: 05.03.2016. Beet necrotic yellow vein virus (BNYVV), the agent of rhizomania disease, causes severe economic losses in sugar beet fields all over the world. The virus is transmitted by a plasmodiophorid vector, Polymyxa betae Keskin. Twenty soil samples, collected from sugar beet fields in northern and central parts of Turkey during surveys in 2004 and 2005 and known to be infested with viruliferous cultures of P. betae carrying BNYVV, were selected and used in this study. Sample selection was made according to the symptom expression of beet seedlings in preliminary bait plant tests and according to the locations of the soil samples, so that they accurately represented the region from which they were taken. Total RNAs were extracted from sugar beet plants grown in these soils and used to amplify RNA-2 (nt 19-1088) and RNA-3 (nt 50-1268) of BNYVV by the reverse transcription polymerase chain reaction (RT-PCR) method. Restriction fragment length polymorphism (RFLP) analysis of the PCR-amplified products showed that most of the BNYVV isolates studied were of the A-type strain; however, two isolates did not exactly match the band profile of the A-type strain. Additionally, the presence of the BNYVV RNA-5 component was investigated by RT-PCR using primers specific for the p26 coding region. Four samples belonging to three provinces were found to contain the RNA-5 segment (20%).
Introduction
Many soil-borne viruses are known to infect sugar beet (Beta vulgaris L.) worldwide. Among them, Beet necrotic yellow vein virus (BNYVV), the agent of rhizomania disease, is transmitted via the plasmodiophorid Polymyxa betae Keskin (Abe and Tamada, 1986) and causes severe economic losses in sugar beet fields all over the world. BNYVV is the type species of the genus Benyvirus and possesses a multipartite, positive-sense, single-stranded RNA genome with four or five components (Tamada, 1999). RNA-1 and RNA-2 are required for replication, assembly and cell-to-cell movement, RNA silencing suppression and vector transmission of the virus. RNA-3, which encodes a 25 kDa protein (p25), is responsible for pathogenicity and the production of typical disease symptoms in sugar beet roots (Chiba et al. 2008). The RNA-4-encoded p31 protein is more directly responsible for vector transmission and root-specific suppression of RNA silencing (Rahim et al. 2007; Peltier et al. 2008). RNA-5 encodes a 26 kDa protein (p26), which is known to be an additional pathogenicity factor (Tamada et al. 1989). BNYVV isolates containing RNA-5 have also been reported to be more pathogenic than isolates lacking RNA-5 (Tamada et al. 1996; Miyanishi et al. 1999).
Three major strain types of BNYVV have been identified using molecular analysis (Kruse et al. 1994; Koenig et al. 1995). The A-type strain, which is the most widespread (Schirmer et al. 2005), has previously been detected in Turkey (Kruse et al. 1994; Kutluk Yilmaz et al. 2007a), whereas the B-type strain is found mainly in Germany and France (Kruse et al. 1994; Koenig et al. 2008), Japan (Miyanishi et al. 1999), Iran (Sohi and Maleki, 2004), Belgium, the United Kingdom (Ratti et al. 2005) and China (Li et al. 2008). The P-type strain is closely related to the A type but contains a fifth RNA; to date it has been recorded in Kazakhstan (Koenig and Lennefors, 2000), France (Schirmer et al. 2005) and the United Kingdom (Ward et al. 2007). Other BNYVV isolates containing the RNA-5 segment have been proposed to be named the J-type strain, which possesses two deletions, at amino acid positions 77 and 227-229 of p26, compared with the P-type strain (Schirmer et al. 2005). The J-type is widely distributed in Asia, especially in Japan (Tamada et al. 1989) and China (Li et al. 2008), whereas it has been recorded in a single field in Germany (Koenig et al. 2008). More recently, J-type RNA-5-containing BNYVV isolates were also detected in Turkey (Kutluk Yilmaz et al. 2016).
Although increases in sugar beet yields have been obtained following some chemical treatments to control rhizomania disease, the use of these products is being phased out because of the health hazards and adverse environmental effects associated with their use. Alternative methods have been tried for the control of rhizomania disease (Burketova et al. 1996; Wang et al. 2003; Akca et al. 2005; Akca et al. 2014), but their efficiency is not sufficient to eliminate the BNYVV soil inoculum (Bragard et al. 2013). Sporosori of P. betae are able to persist in the soil for up to 25 years while remaining viruliferous (Abe and Tamada, 1986); therefore, the most reliable solution is the use of resistant sugar beet cultivars.
In the current study, twenty BNYVV isolates were selected according to their symptom expression and geographic origin from the northern and central parts of Turkey. The BNYVV strain types were determined using reverse transcription-polymerase chain reaction (RT-PCR) and restriction fragment length polymorphism (RFLP) analyses, and the relationship between the phenotypic appearance of BNYVV on the sugar beet bait plants and the strain types was investigated.
Bait plant technique
In this study, twenty BNYVV isolates belonging to six provinces (Amasya, Cankiri, Corum, Kastamonu, Samsun and Tokat), out of 156 BNYVV-infected ones (Kutluk Yilmaz and Arli-Sokmen, 2010), were selected according to their symptom expression and geographic origin in order to be used in molecular studies (Figure 1, Table 1). In the bait plant experiment, two 300 mL plastic pots were filled with each soil sample (20 samples) mixed with sterile sand (1 part soil: 1 part sand), and 10 sugar beet seeds of the rhizomania-susceptible cv. Arosa were sown in each pot for the bait plant test technique (Burcky and Buttner, 1985). The plants were grown under controlled conditions with a 12-h photoperiod at 20°C (night) and 25°C (day). All plants were watered generously every week with Hoagland's solution. After the sugar beet seedlings had grown for six weeks, leaf symptoms on the bait plants were recorded. At harvest, the plant roots were carefully washed in running tap water. The combined roots of each pot were then divided into two parts: one was used to test for the presence of BNYVV by ELISA and the other for RNA extraction. Plant material was stored at -20°C until used. Soil samples infested with BNYVV P-type and B-type, supplied by Claude Bragard (Universite Catholique de Louvain, Louvain-la-Neuve, Belgium), were used as reference material.
Enzyme linked immunosorbent assay (ELISA)
The bait plants were tested by double-antibody sandwich (DAS)-ELISA according to Clark and Adams (1977), using a specific polyclonal antiserum (Bioreba AG, Switzerland) to identify BNYVV. The optical density was measured at 405 nm using a microplate reader (Tecan Spectra II, Austria) and a sample was considered positive when its absorbance value at 405 nm was more than two times the mean of the negative control (Meunier et al. 2003).
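This positivity rule translates directly into code; the sketch below uses invented absorbance values and sample names, purely for illustration:

```python
# DAS-ELISA scoring as described: a sample is positive when its A405
# exceeds twice the mean of the negative controls. Values are invented.

negative_controls = [0.071, 0.065, 0.080]
samples = {"isolate_40": 0.910, "isolate_421": 0.162, "healthy": 0.074}

cutoff = 2 * sum(negative_controls) / len(negative_controls)

for name, a405 in samples.items():
    status = "positive" if a405 > cutoff else "negative"
    print(f"{name}: A405={a405:.3f} -> {status} (cutoff {cutoff:.3f})")
```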
RFLP analysis for the characterization of BNYVV strain types
Twenty isolates were used for RFLP analysis (Table 1). The primer pairs BNYVV-2F/BNYVV-2R and BNYVV-3F/BNYVV-3R were used to amplify RNA-2 (nt 19-1088) and RNA-3 (nt 50-1268), respectively, and amplification was followed by restriction digestion of the RNA-2 products with EcoRI and of the RNA-3 products with EcoRI, BamHI and StyI. Digestions were done as previously described by Kruse et al. (1994) and the restriction patterns were analyzed by electrophoresis in 1% agarose gels stained with ethidium bromide.
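The expected restriction patterns can also be predicted in silico. The sketch below uses Biopython's Restriction module on a short placeholder sequence; the actual RNA-2 (nt 19-1088) or RNA-3 (nt 50-1268) amplicon sequence of a given strain would have to be substituted to reproduce the diagnostic band profiles:

```python
# In-silico RFLP: predict fragment sizes of an RT-PCR amplicon for the
# enzymes used in strain typing. The sequence below is a short placeholder;
# substitute the real amplicon sequence of the isolate under study.
from Bio.Seq import Seq
from Bio.Restriction import EcoRI, BamHI, StyI

amplicon = Seq("ATGGAATTCGGCCTTAGGATCCTTCCAAGGTTGAATTCACCTGG")

for enzyme in (EcoRI, BamHI, StyI):
    fragments = enzyme.catalyse(amplicon, linear=True)   # digest the linear amplicon
    sizes = sorted((len(f) for f in fragments), reverse=True)
    print(f"{enzyme.__name__}: cut positions {enzyme.search(amplicon)}, "
          f"fragment sizes {sizes}")
```

Comparing the predicted fragment-size lists with the published A-type and B-type profiles would then indicate the expected gel band pattern for each enzyme.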
Detection of BNYVV
In a previous study, BNYVV was detected in 156 of the 510 sugar beet fields surveyed in the northern and central parts of Turkey (Kutluk Yilmaz and Arli-Sokmen, 2010). In the bait plant test with BNYVV-positive soil samples, sugar beet seedlings of the rhizomania-susceptible cv. Arosa showed different kinds of virus symptoms, such as chlorosis, leaf rolling+curling, green vein banding+distortion, necrotic lesions and chlorotic lesions+chlorosis, under growth room conditions (Table 1). These symptoms were not detected in all plants; some of the BNYVV-infected plants showed no visible symptoms. The presence of such symptomatic differences in the bait plant tests raises the question of whether different BNYVV strain types are present in this region. Therefore, a total of twenty BNYVV-infested soil samples were selected according to their symptom expression and geographic origin from the northern and central parts of Turkey. The rhizomania-susceptible cultivar (cv. Arosa) was grown in these soils. To confirm the presence of BNYVV in the bait plant samples, the DAS-ELISA test was done. All of the samples tested were positive for BNYVV in ELISA (Figure 1; Table 2).
RFLP analyses of BNYVV isolates
RT-PCR studies were done by using the primers specific to RNA-2 (nt 19-1088) and RNA-3 (nt 50-1268). RFLP analysis of the PCR products revealed that the BNYVV isolates in the region were of the A-type strain (Figures 2, 3 and 4), although two isolates (Nos. 421 and 186) did not exactly fit the band profile of the A-type isolates (Figure 3). Isolate 421 gave no visible symptoms on sugar beet seedlings (cv. Arosa) in the bait plant test, whereas isolate 186 showed green vein banding and leaf deformation (Table 1). Besides this, no BNYVV B-type infections were found in the region (Table 2).
Detection of BNYVV RNA-5
To test whether the samples contained the fifth RNA segment, they were subjected to RT-PCR analysis with BNYVV RNA-5-specific primers (Schirmer et al. 2005). Four BNYVV-positive samples (20%) produced the expected amplicon of 885 bp in PCR amplification using the BN5/F1 and BN5/R1 primers (data not shown).
In this study, RNA-5 was detected in Corum, Samsun and Tokat provinces (Figure 1 and Table 1). This result suggests that BNYVV isolates carrying the RNA-5 segment are present but not widespread in this region. The BNYVV isolates with RNA-5 (40, 103 and 150) showed light chlorosis and chlorotic lesions+chlorosis types of symptoms on sugar beet seedlings in the bait plant tests, whereas isolate 421 gave no visible symptoms (Table 1).
Discussion
In the current study, RFLP analysis was conducted for strain determination (A and B type) of BNYVV isolates selected according to their geographic origin and symptom expression (chlorosis, leaf rolling+curling, green vein banding+distortion, necrotic lesions and chlorotic lesions+chlorosis; Table 1) in bait plants grown in soils from the northern and central parts of Turkey (Figure 1). The restriction patterns obtained for all BNYVV isolates were identical to the A-type strain, although two isolates (421 and 186) did not exactly fit the band profile of A-type isolates (Figure 3, Table 2). These isolates may differ at polymorphic sites and could represent different variants of BNYVV; the polymorphism could be further investigated by sequencing. Interestingly, isolate 421 gave no visible symptoms on sugar beet seedlings (cv. Arosa) in the bait plant tests, whereas isolate 186 showed green vein banding and leaf deformation (Table 1). A-type BNYVV has previously been reported in Turkey (Kruse et al. 1994; Chiba et al. 2011; Kutluk Yilmaz et al. 2016) and in neighboring countries such as Iran (Sohi and Maleki, 2004) and Greece (Kruse et al. 1994; Pavli et al. 2011). No B-type BNYVV infections were found in the tested samples from the region, whereas B-type BNYVV has been recorded in Iran (Sohi and Maleki, 2004). Additionally, the BNYVV RNA-5 segment was detected in four samples from Corum, Samsun and Tokat provinces (Figure 1 and Table 2). In a previous study, Kutluk Yilmaz et al. (2016) indicated that BNYVV populations containing RNA-5 are highly widespread in sugar beet production areas in Turkey. Furthermore, nucleotide sequencing and phylogenetic analyses showed that the RNA-5 of Turkish BNYVV isolates is closer to the J-type BNYVV isolates than to the P-type isolates (Kutluk Yilmaz et al. 2016). The J-type is widely distributed in Asia (Tamada et al. 1989; Li et al. 2008) and has also been recorded in a field in Germany (Koenig et al. 2008).
Among the RNAs of BNYVV, the RNA-3-encoded p25 protein controls rhizomania symptoms in sugar beet roots and severe symptom expression in Chenopodiaceae hosts (Jupin et al. 1992). In addition, the severity of symptoms in sugar beet was increased by the additional presence of RNA-5 (Tamada et al. 1996; Miyanishi et al. 1999). In this study, two of the RNA-5-containing isolates (40 and 103) showed the chlorosis type of symptom on the susceptible cv. Arosa, whereas isolate 150 gave the chlorotic lesions+chlorosis type of symptom. Isolate 421 (with the RNA-5 segment) gave no visible symptoms (Table 1), and its restriction patterns did not exactly fit the band profile of A-type isolates (Figure 3). In a previous study, the chlorosis type of symptom was typically associated with BNYVV (Rush et al. 2006); those authors also indicated that some BNYVV-infected plants never exhibited foliar symptoms. Similarly, in the current study, some BNYVV-infected plants gave no visible symptoms (Table 1). On the other hand, it is known that high soil pH and lime (CaCO3) contents cause chlorosis by affecting nutrient sufficiency in soils (Kutluk and Arli Sokmen, 2007), who reported that increasing lime and exchangeable Mg contents of soils raised soil pH and promoted infections by BNYVV and Beet soil-borne virus (BSBV), both transmitted by the vector P. betae. There are very few published studies on the different infection phenotypes of BNYVV populations on sugar beet. Some researchers reported that resistant plants displayed a range of responses after rub inoculation, from no visible lesions to necrotic lesions at the inoculation site, whereas susceptible plants showed bright yellow lesions (Chiba et al. 2008). Consequently, the majority of the samples contained A-type BNYVV (80%), while in 20% of the samples A-type BNYVV with RNA-5 was identified. No interaction was found between symptom expression and strain types of BNYVV populations. Moreover, the RNA-5 segment did not appear to be directly associated with symptom severity on sugar beet seedlings (rhizomania-susceptible cv. Arosa) in the bait plants in this study.
"year": 2016,
"sha1": "1a91615c10523b7c16b4d7f160fcab5de5edc91c",
"oa_license": "CCBYSA",
"oa_url": "http://dergipark.org.tr/download/article-file/222134",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ae54b3d6560a13eca9a1a410d72f6dfce409be49",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
Mycophenolate mofetil-induced hyperlipidemia with cutaneous manifestations
Abstract
We report the case of a 49-year-old woman who developed biopsy-proven xanthomas on her hands and arms after initiation of mycophenolate mofetil therapy for systemic lupus erythematosus (SLE) and whose lesions subsequently resolved upon cessation of the medication.
INTRODUCTION
Xanthomas are lipid deposits in the connective tissue of the skin, tendons, or fasciae that are predominantly composed of foam cells, which are macrophages that have excessively phagocytized recently oxidized low-density lipoprotein cholesterol (LDL-C) particles. 1 Clinically, they are expressed as yellow-orange papules, plaques, or nodules. Their clinical presentation requires that the differential diagnosis includes cutaneous tumors. However, xanthomas are not neoplasms or cutaneous tumors. 2 These lesions represent an abnormal lipid transportation system in blood vessels and are associated with an increased risk of cardiovascular disease. 3 Therefore, xanthoma manifestation is a significant symptom that requires careful investigation to uncover the underlying etiology.
While not always the case, xanthomas are often due to an underlying primary or secondary hyperlipidemia. Examples of primary hyperlipidemia include the manifestation of xanthoma tuberosum and tendinosum in people with familial hypercholesterolemia or palmar crease xanthoma in familial dysbetalipoproteinemia. 4 Examples of secondary hyperlipidemia include underlying drug interactions or diseases that affect lipid metabolism such as hypothyroidism, nephrotic syndrome, and primary biliary cholangitis.
In drug-induced hyperlipidemia, large increases in total cholesterol (TC), LDL-C, and triglycerides of up to 40%, 50%, and 300%, respectively, have been reported. 5 Drugs that are well known to cause secondary hyperlipidemia include diuretics, beta-blocking agents, progestogens, combined oral contraceptives containing "second generation" progestogens, danazol, immunosuppressant agents, protease inhibitors, and enzyme-inducing anticonvulsants. 5 Immunosuppressant drugs that have an increased risk of inducing hyperlipidemia include cyclosporin, azathioprine, and corticosteroids with an increase in TC by 10%-40%, LDL-C by 0%-50%, HDL-C by 0%-90%, and triglycerides by 0%-70%. 5 Previous reports indicated that mycophenolate mofetil does not cause adverse effects on lipid metabolism, 5,6 but more recent articles suggest that there is an association between mycophenolate mofetil and hyperlipidemia. 7, 8 We report a case of a 49-year-old woman who developed biopsy-proven xanthomas on her hands and arms after initiation of mycophenolate mofetil therapy for systemic lupus erythematosus (SLE).
CASE REPORT
The patient is a 49-year-old female with a pertinent medical history of SLE who presented for evaluation of painful, non-pruritic, oozing lesions on her hands and arms. The patient denied any history of prior, similar lesions and noted that they first started to develop within days of initiation of mycophenolate mofetil therapy for control of her SLE.
They were described as indurated pink crusted papules, resembling hypertrophic lupus or small keratoacanthomas. A biopsy of a lesion showed focal epidermal spongiosis. Within the reticular dermis, there were numerous histiocytes with foamy cytoplasm interspersed by an infiltrate of lymphocytes with scattered neutrophils and rare eosinophils. Infectious etiology was ruled out with periodic acid-Schiff (PASF), tissue gram stain, and acid-fast bacterial culture failing to demonstrate fungal, bacterial, or mycobacterial forms. The diagnosis of eruptive xanthomas was made given the clinical history and pathologic correlation. The patient denied any familial history of heart disease, high cholesterol, or hyperlipidemia and was not on estrogen replacement therapy or any medications commonly associated with metabolic syndrome including systemic retinoids and antipsychotics. 5 A lipid panel revealed normal triglycerides (92), HDL (51), and elevated LDL (150).
Management of this patient included cessation of mycophenolate mofetil and initiation of a statin, which resolved both the abnormal LDL level and the cutaneous manifestations of disease. The patient has had no relapses of her condition, even after stopping simvastatin.
DISCUSSION
A variety of medications have been associated with drug-induced hyperlipidemia, including immunosuppressant drugs such as mycophenolate mofetil. The mechanism behind drug-induced hypercholesterolemia has yet to be elucidated. Nuclear receptor pregnane X receptor (PXR; NR1I2), a ligand-activated transcription factor primarily located in the liver and intestines, may play a role. 7 PXR activation has been shown to regulate many phase 1 and 2 drug-metabolizing enzymes and transporters, including cytochrome P450 (CYP) 3A4. 7 PXR is a unique nuclear receptor because it has a large selection of structurally diverse ligands that can activate this receptor, thereby affecting not only drug metabolism, but also processes that aim to maintain physiologic homeostasis such as inflammation, cellular proliferation, and importantly, glucose and lipid metabolism. 7 PXR has been shown to have adverse effects on glucose and lipid metabolism and blood pressure. 7 PXR agonists increase TC and LDL-C in the serum via CYP3A4 induction and increase endogenous cholesterol synthesis via an increased rate of incorporation of C-acetate into plasma cholesterol. 7 The association between mycophenolate mofetil and hyperlipidemia has been reported in recent years. In cardiac transplant patients on immunosuppressant therapy receiving 3 g/day of mycophenolate mofetil, hypercholesterolemia was present in 41% of the patients. 8 However, it was unclear to what extent this metabolic effect was due to other immunosuppressants co-administered with mycophenolate mofetil. Therefore, specific demonstration of mycophenolate mofetil-induced hyperlipidemia with sufficient deposition to cause xanthoma manifestations is a clinical rarity.
A possible explanation for mycophenolate mofetil-induced hyperlipidemia is PXR agonism. 7 One study found that mycophenolic acid (MPA), the active parent drug of mycophenolate mofetil via in vivo enzymatic activity, is a PXR activator that was observed to enhance CYP3A4 expression, 9 potentially increasing TC levels. 7 An additional explanation could be its anti-inflammatory effect. States of inflammation including infection decrease LDL-C. 7 Mycophenolate mofetil has anti-inflammatory effects, 10 so a resulting rise in LDL-C levels would not be unexpected. Future studies should continue to further define the metabolic effect that mycophenolate mofetil has when administered both individually and in conjunction with other immunosuppressant medications.
In conclusion, mycophenolate mofetil can induce hyperlipidemia and cutaneous manifestations of disease including xanthomas. Although there are limited data proving its adverse effect on lipid metabolism, this case continues to promote the significant association between mycophenolate mofetil and hyperlipidemia. While the exact mechanism is unclear, mycophenolate mofetil has shown the ability to increase TC levels possibly via its anti-inflammatory effects and MPA's PXR agonism. 7 Further investigation is necessary to better elucidate the benefit-risk safety profile of mycophenolate mofetil and to caution patients who may have underlying dyslipidemia and increased cardiovascular risk.
AUTHOR CONTRIBUTIONS
Rebecca M Yim: Conceptualization; writing - original draft; writing - review and editing. Vikram Sahni: Writing - original draft; writing - review and editing. Jason Mathis: Conceptualization; investigation; project administration; writing - review and editing.
FUNDING INFORMATION
None.
"year": 2023,
"sha1": "eee574415f8d0084fc2f9807417ab80ebb623d01",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "eee574415f8d0084fc2f9807417ab80ebb623d01",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Functional annotation of the vlinc class of non-coding RNAs using systems biology approach
Establishing the functionality of the non-coding transcripts encoded by the human genome is a coveted goal of modern genomics research. While this goal is commonly pursued with classical methods of forward genetics, integration of different genomics datasets in a global Systems Biology fashion presents a more productive avenue toward this very complex aim. Here we report the application of a Systems Biology-based approach to dissect the functionality of a newly identified vast class of very long intergenic non-coding (vlinc) RNAs. Using the highly quantitative FANTOM5 CAGE dataset, we show that these RNAs can be grouped into 1542 novel human genes based on analysis of insulators, which we show here do indeed function as genomic barrier elements. We show that vlincRNA genes likely function in cis to activate nearby genes. This effect, while most pronounced in closely spaced vlincRNA-gene pairs, can be detected over relatively large genomic distances. Furthermore, we identified 101 vlincRNA genes likely involved in early embryogenesis based on the patterns of their expression and regulation. We also found another 109 such genes potentially involved in cellular functions that also occur at early stages of development, such as proliferation, migration and apoptosis. Overall, we show that Systems Biology-based methods hold great promise for functional annotation of non-coding RNAs.
INTRODUCTION
Understanding pervasive transcription from the human genome and the resulting 'dark matter' RNA represents a fundamental challenge of contemporary biology. Due to their extensive impact in many regulatory networks, the universe of these RNAs has been called the 'computational engine of the cell' (1,2). We recently reported on a novel class of mammalian very large intergenic non-coding RNAs (vlincRNAs) (3,4) that comprises a significant fraction of the dark matter RNA in human cells (5). These transcripts were defined from poly-A minus RNAseq or total RNAseq datasets as regions of 50 kb or more of relatively abundant intergenic transcription that have no overlap with annotated genes (3,4). These transcripts can reach ∼1 Mb, with a median size of ∼83.3 kb (4.8 times larger than the median size of a known gene in the UCSC Genes database). At least 2147 unique vlincRNAs exist, spanning over 10% of the genome. Despite being initially identified as regions of transcription and thus potentially harbouring multiple transcripts, different types of evidence suggest that a vlincRNA can represent one large, independently regulated transcript. First, entire transcription units have evidence of RNA presence (3,4). Second, long-range RT-PCR analysis suggests the presence of a single transcript (3). Third, RNAi-mediated inhibition of vlincRNAs can be measured at large distances from the site of siRNA design (6). Finally, the 5 ends of a significant number of vlincRNAs associate with canonical RNA Pol 2 promoters, suggesting that they are regulated by the same mechanisms as known genes (3). Another property of vlincRNAs is highly cell-type-specific expression (4). In fact, a specific subset of vlincRNAs controlled by promoters containing sequences of endogenous retroviruses (LTRs) associates with cancers and pluripotent cells more than conventional transcripts encoding proteins (3). This suggests a tantalizing link between these two processes via this class of ncRNAs.
While RNA depletion experiments and human-mouse synteny properties suggest functional roles for some of these transcripts (3,6), the biological properties of most remain enigmatic. The sheer number and diversity of these RNAs, and the even larger number of possible vlincRNA-phenotype combinations, suggest the need for novel Systems Biology approaches to establish a global picture of the functionality of this and other classes of ncRNAs (7). The hallmark of such approaches is the integration of independent multi-dimensional genomics datasets to extract signal and reduce both technological and biological noise (5).
We endeavoured to test this approach on the FANTOM5 Cap Analysis of Gene Expression (CAGE) dataset, a unique resource based on 5 capture of transcripts and designed to pave the way for systems biology analysis. It combines a very large breadth of human tissues and cell lines of different origin with the highly sensitive and reproducible quantitation of single-molecule sequencing (SMS) (8,9). The accurate nature of RNA measurements on the SMS platform allows detection of relationships in the data that otherwise might go unnoticed in the context of general experimental noise (10).
In this work, we used 833 samples of the following origins: 399 normal (not of stem cell origin), 92 pluripotent (embryonic, adult, induced pluripotent stem, various stages of stem cell differentiation, progenitor), 332 cancerous and 10 immortalized, to show that expression profiles within the FANTOM CAGE dataset can facilitate the identification of known targets of ncRNAs. We combined these data with publicly available ChIPseq data and used a Systems Biology approach to annotate functional patterns of the vlincRNA class of ncRNAs. We show that, similar to a class of activating enhancer-like RNAs (RNA-a's), many vlincRNAs likely function in part by positively regulating nearby genes in cis (11,12). We find that vlincRNAs represent mostly standalone transcripts regulated separately from nearby genes. In large measure, this result stems from an analysis of the distribution of insulators--DNA elements initially discovered as genomic barriers, but whose function as such remained in question (13). We show that insulators do in fact act as genome barriers and use this result to annotate 1542 novel standalone human vlincRNA genes, of which 722 could be assigned to human promoters.
Statistical analyses and packages
Statistical analysis procedures were performed using the R language and environment for statistical computing (http://www.R-project.org/). Spearman correlation, Kolmogorov-Smirnov (KS) and Mann-Whitney-Wilcoxon one-sided and two-sided tests were used as indicated.
Analysis of different genomic tracks and their intersections was performed with the GenomicRanges Bioconductor package (18). The GOstats (2.30.0) (19) Bioconductor package was used for GO term enrichment analysis. The AnnotationDbi (20) and GSEABase (21) Bioconductor packages were used to produce custom GO annotations for vlincRNA genes.
Calculation of CAGE-based digital gene expression (DGE) of vlincRNAs and exons of known genes
All the analyses below use previously reported vlincRNAs (3) constructed using RNAseq data from 14 tissues and cell lines. The vlincRNA dataset consists of 2762 vlincRNAs whose strand we could define (originating from strand-specific RNA-seq data from the ENCODE project) and 1193 vlincRNAs without strand assignment, because the underlying SMS RNAseq used a cDNA synthesis protocol that was not highly strand-specific. Nevertheless, in this work we defined the strand of the 1193 vlincRNAs by simply comparing the number of reads aligned to the plus and minus strands of the genome in the source RNAseq library. Indeed, we found that DGE values based on CAGE and RNAseq tags from the sense strand of vlincRNAs always correlated slightly better than counts of tags from both strands (for example, 0.819 versus 0.804 for blood based on SMS RNAseq). Therefore, the counts (normalized by the total number of reads and the length of the genomic element) of CAGE tags within the boundaries of vlincRNAs and exons of known genes on the same strand served as the corresponding DGE values throughout the entire analysis presented in this work.
We thus derived a total of 3955 (2762 + 1193) vlincRNAs. Coordinates of these vlincRNAs sometimes overlap because they were found in different tissues or cell lines (3).
Coordinates of CAGE tags were overlapped with coordinates of genomic intervals corresponding to 729 723 exons of 80 922 UCSC Genes transcripts and 3955 vlincRNAs. Intervals corresponding to UCSC Genes exons and UCSC RepeatMasker rRNA repeats were excluded from the coordinates of vlincRNAs irrespective of the strand on which they overlapped. This was done to exclude any signal from exons of sense short protein-coding transcripts (<5 kb), which are allowed to be present inside vlincRNAs by definition (3), and signal from opposite-strand exons due to non-strand-specific cDNA synthesis. The number of aligned CAGE tags was calculated for each exonic or vlincRNA interval in a strand-specific fashion--only tags mapping to the same strand as the exons or vlincRNAs were counted. A tag was assigned to an interval if its 5 end mapped inside the interval or on its border. The total number of aligned tags was summed up across different sequencing channels corresponding to the same biological sample, and the number of tags was summed across exons of the same transcript to give the final number per transcript. Since vlincRNAs and known genes of various sizes were compared to each other in the downstream analysis, the DGE CAGE counts required normalization by the length of the corresponding genomic elements; this step did not significantly change the correlation between CAGE and RNAseq (data not shown). Therefore, the 'raw DGE' was normalized per 1 M (genes) or 100 M (vlincRNAs) 'informative reads' in each sample and per 1 kb of sequence length, following the standard RPKM (reads per kilobase per million) formula:

RPKM (genes) = raw DGE / (informative reads × length of interval) × 10^9
RPKM (vlincRNAs) = raw DGE / (informative reads × length of interval) × 10^11

where 'informative reads' is the total number of reads in the sample uniquely aligning to the reference genome on chr1-22, X and Y, and 'length of interval' is the length of the vlincRNA reduced by the total length of overlapping UCSC Genes exons and rRNA repeats (for vlincRNAs) or the sum of the lengths of the exons (for UCSC Genes transcripts).
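For clarity, the normalization above can be expressed as a small Python function; the scale factor is 10^9 for genes and 10^11 for vlincRNAs, matching the formulas in the text, and the example numbers are purely illustrative.

def dge_rpkm(raw_dge, informative_reads, interval_length, scale=1e9):
    # Normalize a raw CAGE tag count by library size and interval length
    return raw_dge * scale / (informative_reads * interval_length)

# Example: 240 tags over a 120 kb vlincRNA in a library of 8 million informative reads
print(dge_rpkm(240, 8_000_000, 120_000, scale=1e11))  # -> 25.0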
Calculation of RNASeq-based DGE for vlincRNAs and exons
The same procedure as described in the above section was used for the RNAseq data with one exception--a tag aligning across the border of an interval was counted as 0.5 tag for that interval.
RTPCR validation of the CAGE quantitation of vlincRNAs
For this analysis, 31 vlincRNAs were randomly chosen out of the 407 vlincRNAs previously found in K562 (3). PCR primers were designed approximately in the middle of the vlincRNA boundaries (Supplementary Table S1) using non-repetitive (as defined by RepeatMasker) regions of vlincRNAs. K562 cells were grown in RPMI-1640 medium with 10% FBS and 1% pen-strep to a density of ∼1 million cells/ml. Total RNA was isolated from fresh cells using the TrizolPlus system (Life Technologies) following the manufacturer's instructions. RNA concentration was assessed on a Qubit 3.0 fluorimeter (Life Technologies) using the Qubit RNA HS Assay Kit (Life Technologies, catalog Q32852) following the manufacturer's instructions. For DNaseI treatment, 20 µg of total RNA was mixed with 10 µl Turbo DNase Buffer (Life Technologies, catalog AM2238), 1 µl RNaseOUT (Life Technologies, catalog 10777-019) and 2 µl Turbo DNase (Life Technologies, catalog AM2238) and incubated for 30 min at 37°C in a total volume of 100 µl. After the DNaseI treatment, the RNA was purified using two rounds of AMPure bead (Beckman-Coulter, catalog A63880) purification with 1X volume of the beads following the manufacturer's instructions. For first-strand cDNA synthesis, 1 µg of the DNase-treated RNA was mixed with 80 ng of random hexamers (Life Technologies, catalog 48190-011) and 1 µl of 10 mM dNTPs in a total volume of 13 µl. The samples were denatured at 65°C for 5 min and placed directly on ice for 2 min. After that, 4 µl of 5X SuperScript III incubation buffer (Life Technologies, catalog 18080-044) and 1 µl of 0.1 M DTT (Life Technologies, catalog 18080-044) were added. The samples were incubated at 15°C for 20 min, then moved onto ice, and 1 µl of RNaseOUT (Life Technologies, catalog 10777-019) and 1 µl of SuperScript III (200 U/µl) (Life Technologies, catalog 18080-044) were added. The samples were moved back to 15°C and the program proceeded through the following steps: 25°C for 10 min, 40°C for 40 min, 55°C for 50 min, 85°C for 5 min and storage at 4°C. Then 1 µl of RNaseH (Life Technologies, catalog 18021-071) and 1 µl of RNaseIf (New England Biolabs, catalog M0243S) were added and the samples were incubated at 37°C for 30 min. The cDNA synthesis was performed in a SureCycler 8800 (Agilent Technologies). The cDNA was purified using one round of AMPure bead (Beckman-Coulter, catalog A63880) purification with 1.8X volume of the beads following the manufacturer's instructions. cDNA concentration was quantified on a Qubit 3.0 fluorimeter (Life Technologies) using the Qubit ssDNA Assay Kit (Life Technologies, catalog Q10212) following the manufacturer's instructions.
The real-time PCR reactions were performed using 10 ng of cDNA, 5 µl PowerUp SYBR Green Master Mix (Applied Biosystems, catalog A25742) and 500 nM of each forward and reverse primer (Supplementary Table S1) in a 10 µl reaction volume on an Agilent Technologies Stratagene Mx3005P cycler. The reaction conditions were as follows: step 1 (UDG activation) 50°C for 2 min, 1 cycle; step 2 (Dual-Lock™ DNA Polymerase activation) 95°C for 2 min, 1 cycle; step 3, 95°C for 15 s and 60°C for 1 min, 40 cycles; step 4 (melting curve analysis) 95°C for 1 min, 55°C for 30 s, 95°C for 30 s, 1 cycle. The specificity of amplification was confirmed by melting curve analysis for each primer pair. Each primer pair was assayed in triplicate. Analysis of the real-time data and Ct value extraction was performed using MxPro-Mx300P software v4.10 Build 389, Schema 85 with default parameters. Average Ct values were taken for further analysis. Each vlincRNA was normalized to the most abundant vlincRNA, ID-1102 (Supplementary Table S1), and the normalized values were then used to calculate Spearman correlations with SMS CAGE and SMS RNAseq.
Calculation of DGE of vlincRNAs excluding internal promoters or 5 ends of ESTs
Genomic intervals corresponding to exons of antisense or short sense UCSC known genes and RepeatMasker rRNA repeats were excluded from vlincRNA boundaries as described above. Additionally, intervals corresponding to 5 ends of ESTs extended by 1 kb in both directions were also excluded from vlincRNAs mapping to the same strand. Alternatively, promoters (16) extended by 1 kb in both directions were excluded from vlincRNAs overlapping them on either strand. RPKM values were then calculated as described above, with the length of each vlincRNA reduced by the lengths of the excluded intervals.
CAGE tags density calculation around annotated 5 ends of vlincRNAs
For each vlincRNA, a 10 kb interval around its 5 end was prepared: left boundary +/−5000 nt for each of 2068 top-strand vlincRNAs and right boundary +/−5000 nt for each of 1887 bottom-strand vlincRNAs. Each interval was split into 20 bins of 500 bp each. The number of aligned CAGE tags was calculated for each bin of each interval on the respective strand; a tag was assigned to a bin if its 5 end was located inside the bin. Summing across all vlincRNA intervals yielded the CAGE tag density in each bin. Before the analysis, three vlincRNAs were removed as outliers with extremely high tag density: chr12:49525312-49578581:minus, chr2:43290436-43449539:minus and chr14:77402830-77490884:minus. The vlincRNA datasets are described in (3) and (4); the ENCODE RNAseq data is described above.
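The binning step can be sketched in a few lines of Python with NumPy; the positions below are invented for illustration, and for bottom-strand vlincRNAs the offsets would be negated before binning.

import numpy as np

def bin_tags(tag_positions, five_prime_end, half_window=5000, bin_size=500):
    # 21 edges from -5000 to +5000 define twenty 500-bp bins around the 5' end
    edges = np.arange(-half_window, half_window + bin_size, bin_size)
    offsets = np.asarray(tag_positions) - five_prime_end
    counts, _ = np.histogram(offsets, bins=edges)
    return counts

total = np.zeros(20, dtype=int)
# (tag positions, vlincRNA 5' end) pairs -- illustrative coordinates
for tags, end5 in [([1000100, 1004950], 1000000), ([2000050], 2000500)]:
    total += bin_tags(tags, end5)
print(total)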
Correlation between CAGE and RNASeq DGE for vlincRNAs and known genes
For CAGE and SMS RNAseq data, RPKM DGE was counted as described above. For ENCODE RNAseq data, DGE was counted on SAM files converted from BAM files with samtools. The same DGE calculation procedure as for CAGE was used, but for normalization purposes the number of alignments in the SAM files mapping to chr1-22, X, Y and M was used.
DGE values were calculated based on the CAGE, SMS RNAseq and ENCODE RNAseq datasets for vlincRNAs and UCSC Genes transcripts with exonic lengths of 1000 nt or longer and used to calculate Spearman correlations between expression levels in the same cell lines (Table 1).
VlincRNA assignment to promoters and LTRs
A promoter (see above) from any of the three categories ('Active', 'Weak' and 'Poised') located within +/−5 kb of the transcriptional start site of a vlincRNA was assigned to that vlincRNA using the GenomicRanges Bioconductor package (18). Of the 3955 vlincRNAs, 1702 could be assigned to at least one such promoter. Promoters of 611 vlincRNAs overlapped with LTR repeats. Thus, we obtained three vlincRNA categories: 'LTR' (assigned to a promoter with an LTR), 'nonLTR' (assigned to a promoter without an LTR) and 'No promoter' (not assigned to a promoter).
Analysis of correlation in targets of miRNAs
A set of experimentally verified miRNA targets was acquired from miRTarBase (http://mirtarbase.mbc.nctu.edu.tw/) (22) (human subset), release 4.5 from 1 November 2013. The genomic boundaries of each pre-miRNA were extended by +/−1 kb to serve as a surrogate for the expression of the primary precursor of that miRNA. RPKM expression levels of all of the primary miRNAs, based on the +/−1 kb intervals, and of their targets were calculated using the FANTOM5 CAGE data on the set of 833 tissues. We then selected only miRNAs with non-zero expression in at least one of the 833 FANTOM5 samples, and required the same for their targets (Figure 3).
Correlation between nearby vlincRNAs and UCSC genes
Starting with the RPKM-normalized expression of 3955 vlincRNAs in 833 FANTOM5 CAGE samples and the expression of 80 922 UCSC Genes transcripts on the same dataset, we calculated a correlation matrix between these two datasets, in which columns represented vlincRNAs and rows UCSC Genes transcripts. In the matrix, each element is a Spearman correlation between the 833 expression values for a vlincRNA and a UCSC Genes transcript. In addition, a square matrix of Spearman correlations between the 80 922 UCSC Genes transcripts was calculated using a custom python script based on the SciPy and NumPy modules. For each vlincRNA-gene pair, the following configurations could be defined: a gene on the same strand located upstream of the vlincRNA ('same upstream'), a gene on the same strand located downstream ('same downstream'), a gene on the opposite strand with its 3 end closest to the 3 end of the vlincRNA ('opposite tail-to-tail'), and a gene on the opposite strand with its 5 end closest to the 5 end of the vlincRNA ('opposite head-to-head'). A nearby gene in each configuration falls into one of eight genomic distances from the neighbouring vlincRNA or UCSC gene: 0-1 kb, 1-5 kb, 5-10 kb, 10-20 kb, 20-30 kb, 30-40 kb, 40-50 kb and more than 50 kb. For each genomic distance and each configuration, the Spearman correlation between transcript pairs (vlincRNA-UCSC gene or UCSC gene-UCSC gene) was taken from the correlation matrix, and if there were several nearby genes, the median correlation was computed. Out of 3955 vlincRNAs, only seven had a nearby gene on only one side on either strand because of proximity to the border of a chromosome or chromosome arm; 219 out of 80 922 genes had no nearby gene on either strand on one side. Finally, median values for each distance and configuration were calculated for each vlincRNA and UCSC gene (Table 3, Supplementary Table S4).
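The correlation matrix itself can be computed efficiently by rank-transforming each expression profile and taking Pearson correlations of the ranks, which equals the Spearman correlation. The sketch below uses random matrices in place of the real expression tables.

import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)
vlinc = rng.random((5, 833))   # 5 vlincRNAs x 833 samples (illustrative)
genes = rng.random((7, 833))   # 7 transcripts x 833 samples (illustrative)

# Spearman correlation = Pearson correlation of row-wise ranks
vlinc_ranks = np.apply_along_axis(rankdata, 1, vlinc)
gene_ranks = np.apply_along_axis(rankdata, 1, genes)
spearman = np.corrcoef(vlinc_ranks, gene_ranks)[:5, 5:]  # 5 x 7 vlincRNA-gene block
print(spearman.shape)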
Insulator analysis
A combined set of genomic insulators was produced from insulator elements found in nine cell lines (see above). Two transcripts (vlincRNAs or UCSC genes) were considered separated by an insulator if any genomic insulator from the set occurred between them according to their hg19 coordinates, and not separated otherwise. Pairs of nearby transcripts defined above were split into two sets: nearby genes separated by insulators and nearby genes not separated. For these two sets, median Spearman correlations were calculated for each genomic distance and configuration for genes and vlincRNAs (Supplementary Table S4, Table 4).
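The separation test is a simple interval check. The sketch below requires the insulator to fall entirely within the gap between the two transcripts, one reasonable reading of the criterion above; the coordinates are invented.

def separated_by_insulator(tx_a, tx_b, insulators):
    # tx_a, tx_b and insulators are (start, end) tuples on the same chromosome
    gap_start = min(tx_a[1], tx_b[1])
    gap_end = max(tx_a[0], tx_b[0])
    return any(gap_start <= s and e <= gap_end for s, e in insulators)

insulators = [(15200, 15400)]
print(separated_by_insulator((10000, 15000), (16000, 20000), insulators))  # True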
Building vlincRNA genes
A vlincRNA was designated a standalone transcript if no UCSC Gene lacking an intervening insulator element was present within 50 kb in either the 5 or 3 direction. Overall, for 1753 vlincRNAs the nearest gene was over 50 kb away from both the 5 and 3 ends; 81 vlincRNAs were separated from the nearest gene by insulators on both the 5 and 3 ends; and 607 vlincRNAs had a gene over 50 kb away on one end and an insulator element on the other. Therefore, 2441 of the 3955 vlincRNAs were designated as standalone transcripts. The coordinates of the 2441 vlincRNAs were then merged in a strand-specific fashion to yield 1542 standalone vlincRNA genes with unique coordinates (Supplementary Table S6).
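The strand-specific merge is the classical interval-union step; a minimal sketch for one chromosome and strand, with invented coordinates:

def merge_intervals(intervals):
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:   # overlaps or touches the previous interval
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [tuple(iv) for iv in merged]

plus_strand = [(100000, 190000), (150000, 260000), (400000, 470000)]
print(merge_intervals(plus_strand))  # [(100000, 260000), (400000, 470000)]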
Analysis of interval overlap between different genomic datasets
The genomic overlap analysis was performed with the Bedtools suite version 2.22.1 using the 'overlap' function on sorted BED files containing the coordinates and genomic strand information of all regions in the two datasets to be compared. The same suite was then used to calculate the P-values of the overlapping events based on Fisher's exact test ('fisher' function). We used this procedure to overlap ENCODE promoters of the 1542 vlincRNA genes with 3043 'non-annotated stem transcripts' (NASTs) from Fort et al. (23), or vlincRNA genes with 60 mouse macroRNAs (24) mapped onto hg19 coordinates. The method relies on the distribution of the intervals throughout the entire sequenced portion of the genome. This assumption is applicable for NASTs, many of which mapped to introns of known genes, and for mouse macroRNAs. However, most vlincRNA genes are located in intergenic space defined in a strand-specific fashion and thus could overlap known genes on the opposite strand. To address this issue, we recalculated the theoretical P-value under the assumption of uniform distribution of vlincRNA genes in intergenic space with a method developed earlier and tested with randomly generated intervals (3). Three out of 1542 vlincRNA genes were found to overlap the mouse macroRNAs in genic space. In addition, 3.62 vlincRNA genes are theoretically expected to overlap randomly distributed macroRNA intervals in intergenic space. To get an upper-bound P-value estimate, we used a binomial distribution with 1542 trials, 15 successes (the actual number of vlincRNA genes overlapping the macroRNAs) and probability 0.0043 ((3.62+3)/1542) of success in each trial. The binomial P-value equals 0.0034, which is in general consistent with the Fisher's exact test P-value of 0.004 given by Bedtools.
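The upper-bound calculation reproduces directly with SciPy:

from scipy.stats import binom

n_trials, observed = 1542, 15
p_success = (3.62 + 3) / 1542          # per-trial overlap probability from the text
p_value = binom.sf(observed - 1, n_trials, p_success)  # P(X >= 15)
print(round(p_value, 4))               # ~0.0034, as reported above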
Transcription factor analysis
Significance of ChIPseq tag count differences between vlincRNA +/−5 kb flanking regions and random genomic intervals was assessed with simulation tests against 1542 random non-overlapping 10 kb regions in the hg19 genome (1000 permutations). Promoters assigned to vlincRNA genes (located within +/−5 kb of the 5 end) or UCSC Genes transcripts (located within +/−1 kb of the 5 end) were intersected with OCT4/POU5F1, SOX2 and NANOG binding sites to give ChIPseq+ or ChIPseq− status to the corresponding vlincRNA or UCSC Genes transcripts. Overall, 28 479 UCSC Genes transcripts could be associated with 20 255 promoters. Among them, 1419 promoters were associated with SOX2 transcription factor binding sites (TFBSs), 3961 with OCT4, 3426 with NANOG, and 795 promoters had all three TFBSs. Of the 722 vlincRNA genes associated with promoters, 98 have SOX2 binding sites, 138 have OCT4, 153 have NANOG and 65 have all three TFBSs (Supplementary Table S6).
GO analysis of UCSC genes correlating with vlincRNAs
For each of the 1542 vlincRNA genes, we obtained a list of GO terms enriched in UCSC Genes either correlating (Spearman correlation >= 0.35) or anti-correlating (Spearman correlation <= −0.35) with that vlincRNA gene using the GOstats (2.30.0) (19) Bioconductor package (Supplementary Figure S7). Only GO terms enriched with an unadjusted P-value threshold of <0.05 were recorded as associated with a particular vlincRNA gene; correlating and anti-correlating GO terms were recorded separately. Using the Bioconductor packages AnnotationDbi and GSEABase (see above), these correlation tables were used to produce a custom GO annotation subsequently used with GOstats for GO term enrichment analysis as described in the GOstats manual (http://www.bioconductor.org/packages/release/bioc/vignettes/GOstats/inst/doc/GOstatsForUnsupportedOrganisms.pdf). To obtain GO-enrichment analysis of different vlincRNA subsets (for example, LTR ChIPseq+), we used GOstats with the GO terms associated with all 1542 annotated vlincRNAs as background. Only biological process GO terms were considered. False Discovery Rate (FDR) correction was used to adjust the final P-values reported in Supplementary Table S8. Description of some columns in this table: 'Padj' = P-value adjusted for multiple testing; 'ExpCount' = number of upregulated (or downregulated) vlincRNAs with this GO term expected by chance; 'Count' = observed number of upregulated (or downregulated) vlincRNAs with this GO term; 'Size' = total number of vlincRNAs with this GO term.
VlincRNA subcellular localization analysis
Exponentially growing K562 cells were collected and fixed following the protocol recommended by Advanced Cell Diagnostics (ACD, Hayward, USA) for the RNAscope (25) assay of non-adherent cells. Cells were then cyto-centrifuged onto slides using a Thermo Scientific™ Shandon™ EZ Megafunnel™. Slides were air-dried for 20 min, immersed in 50% ethanol for 5 min, in 70% ethanol for 5 min and in 100% ethanol for 5 min, stored in fresh 100% ethanol at −20°C and shipped to ACD. Cell staining was performed by ACD using 20 double-Z target probe pairs hybridizing to a 1 kb region of the target RNA. The hg19 coordinates of the regions chosen for probes were chr8:91319651-91320841 and chr5:53627148-53628102 for vlincRNA genes #898 and #1501, respectively (Supplementary Table S6). The cells were counterstained with DAPI to image the chromatin and then coverslipped with a photoprotective solution. The cells were visualized on a Zeiss LSM 510 confocal microscope using typically 18 slices in the z dimension through the cell. FISH probes were imaged with excitation and emission filters optimized for Atto500 (red) and then re-imaged for DAPI (blue). The FISH and DAPI images were merged, and z-stacks were flattened as needed in Velocity software. The images shown were acquired at identical power and gain settings.
Measuring vlincRNA expression using CAGE
The entire analysis presented here depends on accurate measurement of the levels of protein-coding mRNAs and, especially, of non-coding vlincRNAs, since no one had previously attempted to measure such long transcripts using CAGE. The FANTOM5 CAGE data are based on obtaining short sequence tags from the 5 ends of transcripts containing a 5 cap using the Helicos SMS platform. The resulting 5 tags are mapped back to the genome and thus represent the 5 ends of coding and non-coding RNAs, if the latter contain a 5 cap. Overall, the 5 ends of vlincRNAs showed a pronounced association with CAGE tags, as shown by the cumulative distribution of all tags around all 5 ends of vlincRNAs (Figure 1A; see also Figure 1B and C for a specific example). On the other hand, the example in Figure 1D and E highlights a problem in the detection of vlincRNAs: reliance only on CAGE tags mapping around the 5 end of a transcript that falls within a repetitive region with a poor alignability score--for example, an LTR. In fact, a significant fraction of vlincRNAs associate with LTR-containing promoters (3), thus presenting a problem for accurately quantifying these transcripts with tags mapping only near 5 ends. The presence of repeats that frequently drive expression of other types of ncRNAs (23,26,27) makes this a more global problem. Therefore, we explored various CAGE tag counting strategies for vlincRNAs and for mRNAs by comparing them with SMS RNAseq data in selected samples. To our knowledge, this work provides the first direct comparison of CAGE versus RNAseq both performed on the SMS platform (28).
CAGE counts for different transcripts were obtained in two different ways: (i) '5 flanking'--summing up tags located within varying distances from 5 ends, the classical approach; and (ii) 'internal'--summing up tags within the boundaries of the transcript (Supplementary Figure S1). The CAGE signal, while enriched at the 5 ends of vlincRNAs, also covered the bodies of vlincRNAs fairly well and dropped in the flanking genomic regions (Supplementary Figure S1). In fact, the CAGE and RNAseq profiles looked similar--higher at the 5 ends and gradually decreasing towards the 3 ends--and the Spearman correlation between the CAGE and RNAseq distributions was indeed quite high: 0.947 (Supplementary Figure S1). If the CAGE signal represented only the sites of transcription initiation, the correlation with RNAseq would be low. Instead, this result argues that the internal CAGE tags predominantly represent sites of internal cleavage and capping rather than internal initiation. This observation encouraged us to compare whether quantitation using the internal CAGE tag counting method would give results similar to the RNAseq method.
To compare CAGE versus RNAseq in mRNA detection, for each known UCSC Genes transcript of 1000 nt or longer, we calculated four DGE CAGE values: for +/−1 kb, +/−500 bp and +/−100 bp around annotated 5 ends (the '5 flanking' counting method), as well as by summing up CAGE tags in all annotated exons (the 'internal' method). Noteworthy, integrating the tags within +/−500 bp has so far been the preferred method of estimating gene expression from CAGE-based data (8). These values were then compared to DGE values based on RNAseq counts calculated by summing up RNAseq reads mapping to the sense strand of annotated exons. Table 1 shows the corresponding Spearman correlations. Notably, for most samples the method of counting did not make a substantial difference, even though the correlation was consistently higher in the 'internal' mode. However, for the human leukemia cell line K562, the 'internal' method gave an appreciably higher correlation with RNAseq: for example, the corresponding values for the +/−500 bp '5 flanking' and 'internal' methods were 0.753 and 0.818. Supplementary Figure S2 shows the scatter plots of the internal CAGE versus RNAseq counting methods.
We also calculated CAGE DGE counts for vlincRNAs initially found in K562 and whole blood (3) using the two methods of counting: around the predicted 5 ends and inside the vlincRNA borders. Since our knowledge of vlincRNA 5 ends is less precise than for known genes, we used a +/−5 kb clamp around the 5 ends to estimate the CAGE DGE counts. The comparison was done against RNAseq counts obtained by summing up reads within vlincRNA bounds. The Spearman correlations with RNAseq were much higher for the 'internal' method of counting (Table 1); using window sizes similar to the ones used for known genes in the '5 flanking' method decreased the correlations even further (Table 1). Overall, for both tissues, the correlations between CAGE and RNAseq for vlincRNAs counted within the vlincRNA boundaries were >0.8 (Table 1), which made us confident that CAGE offers a reliable estimate of expression for this novel class of very long non-coding RNAs. The lower CAGE versus RNAseq correlation found in blood vlincRNAs (0.819 versus 0.838 in K562, Table 1) could be explained by their lower expression levels compared to K562 vlincRNAs (average expression of 0.208 versus 0.34, Mann-Whitney-Wilcoxon test P-value = 1.29 × 10 −10).
In addition, the 'internal' method was also significantly more sensitive in vlincRNA detection (Supplementary Figure S3). In five cell types, the internal method consistently detected more vlincRNAs with higher counts. This could be explained by one limitation of CAGE: the peak of the signal is limited to a very specific region of the corresponding transcript, and if such a region has properties that interfere with detection of reads--for example, sequence properties that interfere with sequencing or mapping--then no signal is obtained. Such a problem is illustrated in Figure 1D and E, where the 5 end of a vlincRNA falls within a repetitive region with a poor alignability score. The 'internal' counting method alleviates this problem.
Given the critical importance of accurate measurements for all downstream work, we did additional tests of the robustness of the internal CAGE counting method. One can imagine that the presence of overlapping transcripts could complicate accurate quantitation. Therefore, we first asked whether internal initiation sites would bias vlincRNA quantitation. To address this, we removed CAGE tags and RNAseq reads that fell within +/−1 kb of (i) promoters and (ii) 5 ends of ESTs annotated within the boundaries of vlincRNAs, and recalculated the vlincRNA expression levels using the internal method. For this and all other analyses in this paper, we use the promoter list derived from the chromatin state segmentation analysis (16,33). In neither case was the correlation between CAGE and RNAseq significantly affected: in whole blood, the correlation changed from 0.819 to 0.821 in (i) and to 0.809 in (ii), and in K562 it changed from 0.838 to 0.836 in (i) and to 0.854 in (ii). Second, we asked whether the presence of multiple known transcript isoforms can alter the correlation for known genes. In fact, the lowest correlation of 0.848 (compared to 0.871 for all genes) was observed in blood RNA for UCSC genes with one isoform, likely because of the low expression levels of such genes (data not shown). The CAGE versus RNAseq correlation for genes with more than two and more than five isoforms was 0.865 and 0.856, respectively. Overall, these results suggest that internal initiation or the presence of multiple isoforms does not skew our CAGE analysis.
To further validate our counting method, we compared CAGE data with ENCODE RNAseq data in their quantification of vlincRNAs. In addition to the difference in the methods of RNA analysis (CAGE versus RNAseq), the two datasets were obtained on different batches of cells, on different sequencing platforms (SMS and Illumina) and, in some cases, on different RNA sub-fractions. To account for these differences, we limited our analysis to cell lines for which we had SMS RNAseq data, so that we could establish the maximum correlation one could achieve if the comparison were limited to the RNAseq method. We thus first compared SMS RNAseq for whole cell total RNA for K562 and HepG2 with ENCODE RNAseq for K562 (whole cell total RNA) and HepG2 (whole cell long polyA-RNA). We then estimated the same correlations between CAGE (using the internal method) and ENCODE RNAseq and asked how close SMS CAGE came to SMS RNAseq in correlating with the ENCODE RNAseq samples. The correlations for K562 were 0.829 (SMS RNAseq versus ENCODE RNAseq) and 0.772 (SMS CAGE versus ENCODE RNAseq); the corresponding correlations for HepG2 were 0.746 and 0.7. While there was a drop in correlation due to the change in the method of analysis, this result suggests that SMS CAGE is quite close to SMS RNAseq in its correlation with the ENCODE data for vlincRNAs.
Finally, we validated the results of the internal CAGE counting method using real-time RTPCR (Supplementary Table S1). We estimated the relative expression levels of 31 vlincRNAs in K562 RNA using real-time RTPCR and compared these values with the internal CAGE and SMS RNAseq counting methods (Supplementary Table S1). The corresponding Spearman correlations were 0.802 (RTPCR versus CAGE) and 0.847 (RTPCR versus RNAseq).
All the results above support the notion that CAGE can be used for accurate quantitation of known genes and vlincRNAs. Because of the far superior correlations and better sensitivity for vlincRNAs, we adopted the 'internal' counting mode for the downstream analysis in this study.
Association of vlincRNAs regulated by retroviral promoters with cancer and pluripotency
The availability of the much broader FANTOM5 CAGE dataset (833 different samples) allowed us to comprehensively establish the association of vlincRNAs with LTRs in their promoters with malignancies and pluripotency (3). We previously found that vlincRNAs tend to be highly cell-type specific (3,4); therefore, we suspected that a metric relying on the median or average would not be informative for this analysis. Indeed, distributions of tumour/normal ratios based on the median or average expression of all vlincRNAs among 399 normal (excluding stem cells) or 332 cancer samples did not reveal any differences between LTR and nonLTR vlincRNAs (Supplementary Figure S4). The same effect was obtained when using the 95th or 98th percentiles of expression (data not shown). Only a comparison of distributions of ratios based on maximum expression values (maximum in tumours versus maximum in normal tissues) for each vlincRNA could separate normal and cancer tissues (Supplementary Figure S4, KS test P-value <2.2 × 10 −16). The distribution of maximums in the cancer tissues was consistently higher only for LTR vlincRNAs (Supplementary Figure S4). Therefore, we calculated a ratio of maximum expression in cancer versus maximum in normal tissues (RMECN) for each of the 1702 out of 3955 vlincRNAs that could be assigned to promoters (Supplementary Table S2). The median RMECN value was 1.706 for vlincRNAs with LTRs ('LTR') and 1.132 for those without LTRs in their promoters ('nonLTR') (KS test P-value = 1.308 × 10 −10; Mann-Whitney-Wilcoxon test P-value = 8.473 × 10 −11), showing the more significant upregulation of LTR vlincRNAs in the cancerous state compared to the nonLTR vlincRNAs. The main reason for the RMECN difference between the LTR and nonLTR vlincRNAs is the lower expression of the former in normal tissues rather than higher expression in cancers: the averages of the maximum expression values for LTR and nonLTR vlincRNAs were, respectively, 110.5 and 106.1 in cancers and 62.7 and 104.0 in normal tissues. This suggests that LTR-driven vlincRNAs are silenced in normal cells and then activated in specific cancers (and pluripotent stem cells--see below), while nonLTR vlincRNAs can be highly expressed in either tissue type.
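The RMECN metric reduces to a ratio of maxima; a minimal sketch for a single vlincRNA, with invented expression vectors (the epsilon guard is an implementation choice, not from the text):

import numpy as np

def rmecn(expr_cancer, expr_normal, eps=1e-9):
    # ratio of maximum expression across cancer samples to maximum across normal samples
    return np.max(expr_cancer) / (np.max(expr_normal) + eps)

print(rmecn(np.array([0.0, 3.4, 1.2]), np.array([0.5, 2.0, 0.1])))  # ~1.7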
A maximum-based metric could potentially be biased by a few outliers. Therefore, we asked in how many cancer cell lines at least one LTR-based vlincRNA reaches its expression maximum when all cancerous and normal samples are compared together. In fact, this happens in 106 out of 332 cancer cell lines (31.9%) compared to 68/399 normal samples (17%). The RMECN signal is therefore based on 106 samples, arguing against spurious effects of a few cancer cell lines. We next tested this by permuting the labels of cancer and normal samples, calculating RMECN for LTR and nonLTR vlincRNAs, and taking the ratio of the former to the latter. In each of 1 million permutations, the ratio of RMECN for LTR vlincRNAs to that of nonLTR vlincRNAs was lower than in the real data. We then removed the 10 cancer cell lines with the highest number of extremely expressed LTR vlincRNAs and repeated the permutation analysis; the result was the same (P-value of 10 −6).
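A minimal sketch of this label-permutation test on random stand-in data (shapes and proportions are illustrative; the real analysis used 1 million permutations):

import numpy as np

rng = np.random.default_rng(1)
expr = rng.random((100, 833))        # vlincRNAs x samples (stand-in data)
is_ltr = rng.random(100) < 0.36      # LTR vs nonLTR assignment (stand-in)
is_cancer = rng.random(833) < 0.4    # cancer vs normal labels (stand-in)

def ltr_to_nonltr_ratio(cancer_labels):
    # RMECN per vlincRNA, then median ratio of the LTR group to the nonLTR group
    rmecn = expr[:, cancer_labels].max(axis=1) / (expr[:, ~cancer_labels].max(axis=1) + 1e-9)
    return np.median(rmecn[is_ltr]) / np.median(rmecn[~is_ltr])

observed = ltr_to_nonltr_ratio(is_cancer)
perms = [ltr_to_nonltr_ratio(rng.permutation(is_cancer)) for _ in range(1000)]
print(np.mean([p >= observed for p in perms]))  # one-sided empirical P-value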
On the other hand, the higher RMECN value for LTR vlincRNAs could still be explained by relatively few vlincRNAs with very high maximum values. Therefore, we then asked whether the fraction of LTR vlincRNAs with maximum expression in a cancer sample is higher than the corresponding fraction of nonLTR vlincRNAs. Indeed, 406/611 (66.4%) LTR vlincRNAs have maximum expression in a cancer compared to 614/1091 (56.3%) nonLTR vlincRNAs (P-value = 2.3 × 10 −5, Fisher's exact test). The difference becomes even larger if we remove vlincRNAs that reach maximum expression in either stem or immortalized cell lines, with the corresponding values for LTR and nonLTR vlincRNAs of 393/588 (66.8%) versus 589/1047 (56.3%) (P-value = 1.6 × 10 −5, Fisher's exact test). All these results support the notion that the observed upregulation of LTR versus nonLTR vlincRNAs based on RMECN is not due to a few outlier samples or vlincRNAs.
Second, we used global metrics based on the relative mass of these ncRNAs. These metrics assume that all LTR-driven vlincRNAs behave as one family in each sample, and we tested how well they separate various types of biological samples. To make sure that promoters assigned to vlincRNAs could not associate with known genes, we focused on 1077 (out of 3955) vlincRNAs separated from known genes by at least 50 kb--we will refer to them as distal vlincRNAs. We calculated the normalized fractions of CAGE tags falling into either the LTR or nonLTR class, and used them to sort the 833 samples (Supplementary Table S3). Of the top 100 tissues ranked by their distal LTR-associated vlincRNA fraction, 86 (out of 332) were cancerous, 12 (out of 92) were either pluripotent or points of stem cell differentiation time courses, and only 2 (out of 399) were normal (expected 40, P-value 3.9 × 10 −24, Fisher's exact test). On the other hand, the same analysis based on the distal nonLTR-associated vlincRNA fraction did not enrich for cancers: of the top 100 tissues, 44 samples were cancers, 10 were of stem cell origin and 46 were normal (the corresponding P-value of cancer sample enrichment was 0.2133).
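The enrichment test above corresponds to a 2x2 Fisher's exact test, sketched here with SciPy using the counts from the text (86 of the 332 cancers among the top 100 of 833 samples):

from scipy.stats import fisher_exact

top_cancer, top_other = 86, 14
rest_cancer, rest_other = 332 - 86, (833 - 332) - 14
table = [[top_cancer, top_other], [rest_cancer, rest_other]]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(p_value)  # P-value for cancer enrichment among the top 100 samples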
Third, we calculated a different summary metric--the fraction of CAGE tags falling into these two classes, distal LTR- and nonLTR-associated vlincRNAs, relative to all (not just vlincRNA) informative (uniquely mapping nuclear non-rRNA) CAGE tags for each tissue (Supplementary Table S3). The corresponding median values for normal, cancerous and stem (any kind) tissues were 0.023%, 0.038% and 0.036% for the LTR-driven vlincRNAs (see distributions of these values in Supplementary Figure S5). Applying the Mann-Whitney-Wilcoxon test to this metric also showed that LTR vlincRNAs were indeed significantly enriched in cancers and stem cells relative to normal samples (Table 2). Interestingly, despite the small sample number, the 10 immortalized cell lines also had a significantly higher fraction of LTR vlincRNAs than normal samples (Table 2). In contrast, the fraction of nonLTR vlincRNAs did not distinguish the different types of tissues, underscoring a more constitutive presence of this subgroup of vlincRNAs (Table 2). Finally, the availability of two ES differentiation time courses (9) allowed a more direct proof of the association between LTR vlincRNAs and pluripotency. The fractions of distal LTR-driven vlincRNAs decreased as H9 Embryonic Stem Cell (ESC) differentiation progressed; they also dropped during HES3 differentiation (Figure 2). A similar decrease was not observed for the fraction of distal nonLTR vlincRNAs (Figure 2). These results show a progressive loss of expression of LTR-driven vlincRNAs, as measured by their relative mass, during the differentiation of ESCs and serve as additional evidence of their function during pluripotency. Altogether, these results establish the association of vlincRNAs with LTRs in their promoters with cancer and pluripotency based on 833 human samples.
CAGE analysis can measure correlation between non-coding RNAs and their targets
Many ncRNAs act by regulating other RNAs either directly or indirectly. Although evidence of this regulation should exist in the FANTOM5 dataset, relatively few long noncoding RNAs have a reliable list of associated targets, especially those of a novel class such as vlincRNAs. miRNAs, on the other hand, offer perhaps the best characterized list of targets of any class of ncRNA. We therefore tested whether analysis of the FANTOM dataset could detect the effect of ncRNAs on their targets, using miRNAs and their experimentally validated mRNA targets (22) as a positive control: we tested for correlation between the levels of the long primary precursors of miRNAs (pri-miRNAs) and the validated mRNA targets of the corresponding miRNAs. We counted CAGE tags in a +/−1 kb window around each pre-miRNA (on the sense strand) as an estimate of the corresponding pri-miRNA, and calculated the levels of the mRNAs by counting CAGE tags within the corresponding exons. We then selected the subset of primary miRNAs expressed in at least 1 of the 833 samples of the FANTOM5 dataset. The Spearman correlation between each of the 369 miRNAs and each of its targets was then calculated across the 833 tissues and compared to a control analysis conducted with random targets; only targets expressed in at least 1 of the 833 samples were used. The plots of median correlations for the real and random targets revealed an interesting picture. As expected, the correlations for the real targets had a strong negative tendency, significantly more displaced towards lower values than the random targets (P-value = 3.1 × 10 −6 , one-sided KS test) (Figure 3). Similar conclusions were reached by analysing the expression levels of the mature forms of miRNAs (data not shown).

[Figure 3. Distribution of Spearman correlations between experimentally validated and random targets of miRNAs. For each miRNA in miRTarBase, median Spearman correlations between its experimentally validated (red) and random (blue) targets were plotted for the primary forms of miRNAs. The P-values from the KS test that the distribution of the correlations for the real targets is either lower or higher than that for random targets are shown.]
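The control comparison can be reproduced in outline as follows; the toy expression profiles and the built-in anti-correlation of the "real" targets are assumptions used only to make the sketch runnable (the paper used miRTarBase targets and CAGE-derived profiles).

```python
import numpy as np
from scipy.stats import spearmanr, ks_2samp

rng = np.random.default_rng(0)
n_tissues, n_mirnas, n_targets = 833, 50, 20  # toy sizes; the paper used 369 miRNAs

real_medians, random_medians = [], []
for _ in range(n_mirnas):
    mirna = rng.random(n_tissues)              # stand-in pri-miRNA CAGE profile
    # "Real" targets are made weakly anti-correlated with the miRNA (toy model
    # of repression); random targets are independent profiles.
    real = -mirna[None, :] + rng.normal(0.0, 1.0, (n_targets, n_tissues))
    rand = rng.random((n_targets, n_tissues))
    real_medians.append(np.median([spearmanr(mirna, t)[0] for t in real]))
    random_medians.append(np.median([spearmanr(mirna, t)[0] for t in rand]))

# One-sided KS test that real-target correlations are shifted towards lower
# values; alternative="greater" means the CDF of the first sample lies above.
stat, p = ks_2samp(real_medians, random_medians, alternative="greater")
print(f"KS = {stat:.3f}, one-sided P = {p:.2e}")
```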
Thus, the data above suggest that the FANTOM5 dataset contains information on functional connections between long or short ncRNAs and their targets. While this conclusion was reached using a class of ncRNAs (miRNAs and their precursors) with a highly characterized mechanism of action and set of targets, it encouraged us to explore potential targets of vlincRNAs, a much less characterized and understood class of ncRNAs, using the similar correlation-based approaches described in the sections below.
Patterns of correlation of vlincRNAs with adjacent genes
Although vlincRNAs proximal to known genes could be involved in important biological functions, they could also represent un-annotated introns and exons of yet undiscovered extended isoforms of known genes (34,35). Therefore, we explored the cis correlation between vlincRNAs and their nearest genes as a function of configuration and distance (Table 3). Four possible configurations were tested: same-strand vlincRNA upstream or downstream of the nearest gene, and opposite-strand head-to-head or tail-to-tail configurations. Numbers of vlincRNAs and genes in each configuration are given in Supplementary Table S4. Eight different distance bins were used, ranging from 0-1 kb, representing adjacent vlincRNAs and genes, to >50 kb, where vlincRNAs likely represent standalone genes. As a control, we used correlations between known genes in the same genomic configuration and distance (Table 3).
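To make the pair classification concrete, the sketch below assigns a vlincRNA-gene pair to one of the four configurations and computes the genomic gap; the interval representation and the assumption of non-overlapping pairs are ours, not taken from the paper's code.

```python
# Intervals are (chrom, start, end, strand); pairs are assumed non-overlapping.
def pair_configuration(vlinc, gene):
    v_chrom, v_start, v_end, v_strand = vlinc
    g_chrom, g_start, g_end, g_strand = gene
    assert v_chrom == g_chrom
    vlinc_is_left = v_end <= g_start
    if v_strand == g_strand:
        # For a + strand gene, "upstream" means the vlincRNA lies to its left.
        upstream = vlinc_is_left if g_strand == "+" else not vlinc_is_left
        return "same_strand_upstream" if upstream else "same_strand_downstream"
    # Opposite strands: head-to-head when the two 5' ends face each other.
    if g_strand == "+":
        return "head_to_head" if vlinc_is_left else "tail_to_tail"
    return "tail_to_tail" if vlinc_is_left else "head_to_head"

def pair_distance(vlinc, gene):
    gap = max(vlinc[1], gene[1]) - min(vlinc[2], gene[2])
    return max(0, gap)

v = ("chr1", 10_000, 80_000, "-")
g = ("chr1", 90_000, 120_000, "+")
print(pair_configuration(v, g), pair_distance(v, g))  # head_to_head 10000
```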
The first observation concerned a relatively high Spearman correlation between vlincRNAs and known genes on the same strand. Adjacent vlincRNAs and genes (distance of 0-1 kb, Table 3) had the highest correlation, which decreased as distance increased (Table 3). The correlation peaked at 0.38-0.39, similar to the global correlation between exons of annotated genes and their corresponding introns (0.4) (36). By comparison, nearby annotated genes on the same strand did not show this pattern: their correlation ranged from 0.07 to 0.16 and changed very little with distance (Table 3), as would be expected for independently regulated units. Therefore, some vlincRNAs on the same strand and close to known genes likely represent yet un-annotated extensions of the latter (see below).
However, examination of the Spearman correlations between transcripts on opposite strands--which excludes the possibility that the vlincRNAs are un-annotated extensions--revealed an unexpected observation: a consistently higher correlation for nearby vlincRNA-gene pairs relative to gene-gene pairs of the same relative position and distance (Table 3). The correlations for tail-to-tail and head-to-head vlincRNA-gene pairs across all distance bins were 0.15 and 0.146, respectively, while the same numbers for the gene-gene pairs were 0.076 and 0.110 (Table 3). The higher positive correlations for vlincRNA-gene pairs were also statistically more significant than for gene-gene pairs in all configurations (Table 3).
While shared regulatory sequences in a bidirectional promoter could potentially explain the correlation in the head-to-head orientation, this trivial explanation cannot explain the higher tail-to-tail correlation. Rather, it suggests a different mechanism by which vlincRNAs affect expression of nearby genes in cis, a mechanism previously suggested for another class of ncRNAs--activating RNAs (11,12). It also suggested that some of the correlation between vlincRNAs and genes on the same strand could potentially be due to that mechanism rather than being trivially explained by vlincRNAs representing novel extensions of the adjacent genes. To our knowledge, this represents the first analysis of the global effect of a class of ncRNAs on their neighbouring genes. To explore this in more detail, we took advantage of insulators--genetic elements known to function at least in part as barriers between different chromatin domains (13).
Insulators support functionality of vlincRNAs as activators of adjacent genes in cis
Before investigating the behaviour of vlincRNAs and genes separated by insulators, we wanted to ensure that the CAGE dataset provided evidence for the function of insulators as genomic barriers. In fact, recent data have questioned the role of insulators as barriers between genes (13). We took advantage of the insulator elements identified based on chromatin state segmentation of ChIPseq data by Hidden Markov Model (ChromHMM) in nine cell lines (16,33). Spearman correlations between pairs of vlincRNAs and genes in different genomic configurations, separated and not separated by insulator elements, were calculated essentially as above. The same was done for pairs of known genes (Supplementary Table S4). The presence of an insulator element had a strikingly different effect on correlations between different types of nearby elements. As expected from a barrier element, insulators significantly reduced the correlation between adjacent head-to-head pairs of genes, from 0.315 to 0.0415 (Supplementary Table S4). It is worth noting that while correlation of gene expression was calculated among all 833 samples, the insulator elements were limited to the nine cell lines in which they were originally identified. Still, the observed effect suggests that an insulator functions in more than one cell line. Considering the high probability of co-regulation of two adjacent head-to-head genes by shared promoter elements, the ability of this type of element to nullify the co-regulation demonstrates that it functions as a genomic barrier. This property makes insulators quite useful in trying to separate expression correlations based on chromatin accessibility and/or promiscuous transcription from true biologically meaningful relationships.
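A minimal sketch of the separation test follows; treating a pair as "separated" when at least one insulator element falls entirely within the gap between them is our reading of the procedure, and the coordinates are toy values.

```python
import bisect

def separated_by_insulator(gap_start, gap_end, insulators):
    """insulators: list of (start, end) tuples on the same chromosome,
    sorted by start. Returns True if any element lies fully inside the gap."""
    i = bisect.bisect_left(insulators, (gap_start,))
    while i < len(insulators) and insulators[i][0] < gap_end:
        if insulators[i][1] <= gap_end:
            return True
        i += 1
    return False

insulators = [(1200, 1400), (5000, 5200)]  # toy ChromHMM insulator calls
print(separated_by_insulator(1000, 2000, insulators))  # True
print(separated_by_insulator(1500, 2000, insulators))  # False
```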
Insulators had little effect on all other configurations of known genes, including nearby gene-gene pairs on the same strand (Supplementary Table S4). While surprising at first, the reason for this unexpected behaviour became apparent when we explored expression levels of genes not separated by insulators. In all configurations, expression of UCSC Genes not separated by insulators was significantly lower than that of pairs separated by the elements. This was indicated by a much larger fraction of UCSC Gene pairs in which both members were not expressed in any of the 833 tissues (Supplementary Table S5). Thus, the presence of insulators correlates with expression of the loci they separate. Overall, this general characteristic caused the correlations between pairs of genes not separated by an insulator to be close to zero. Consistent with the function of insulators as genomic barriers, one would not expect their deployment between non-transcribed loci--indeed, this is what we observed (Supplementary Table S5). An important conclusion from all this is that if insulators separate genes, then they could serve to define genes as well (see below).
When analyzing all vlincRNA-gene pairs across all distance bins, separation by insulators clearly reduced the positive correlation between vlincRNAs and adjacent genes on the same strand, consistent with the genomic barrier function of the former (Supplementary Table S4). Separation by insulators also reduced the positive correlation for head-to-head vlincRNA-gene pairs on the opposite strand, while having little effect on tail-to-tail pairs (Supplementary Table S4). However, even when separated by insulators, vlincRNA-gene pairs always had higher correlations than gene-gene pairs in the corresponding genomic configuration and distance range (Supplementary Table S4, Table 4). These differences in correlations between vlincRNA-gene and gene-gene pairs were highly statistically significant in all configurations, including vlincRNAs and genes located on opposite strands (Table 4).
Strikingly, however, insulators did not reduce the highest correlations between vlincRNA-gene pairs found at close distances (Supplementary Table S4). For example, while the presence of insulators almost nullified the correlation between adjacent divergent head-to-head gene-gene pairs (0.315 versus 0.0415, Supplementary Table S4), it did not have the same effect on such vlincRNA-gene pairs. In fact, adjacent divergent head-to-head vlincRNA-gene pairs separated by insulators had a higher correlation than pairs not separated by insulators (0.495 versus 0.28875, Supplementary Table S4). Assuming that insulators work in the same manner on all adjacent divergent transcripts (coding and non-coding), this observation implies that there are specific effects of vlincRNAs on adjacent genes that cannot be explained by a shared chromatin environment. Moreover, the strong activating effect was also seen in the sense vlincRNA-gene pairs separated by relatively short distances, and it was not reduced by insulators (0-5 kb, Supplementary Table S4). The effect was also observed in the tail-to-tail vlincRNA-gene pairs (correlation range of 0.11-0.10), but it never reached levels as high as in adjacent vlincRNA-gene pairs of the other three configurations (correlation range of 0.36-0.495) (Supplementary Table S4). In fact, the difference in vlincRNA-gene and gene-gene correlation in pairs separated by insulators decreased with increased distance but always stayed positive in 'all' configurations, even at distances of >50 kb (Supplementary Table S3, Table 4). These results argue that the activating effect on nearby genes is a property of vlincRNAs in all configurations and even at longer genomic distances.
vlincRNAs represent novel human genes
Using these characteristics of insulators, we proceeded to define standalone ncRNA genes from vlincRNAs. A vlincRNA achieved standalone gene status if it had no gene on the same strand within 50 kb (from either its 5' or 3' end) or if it was separated from any nearby gene by an insulator. In this scenario, a standalone vlincRNA has to be separated by an insulator from 'all' genes within 50 kb on the same strand. Finally, we collapsed the intervals to obtain 1542 unique novel ncRNA genes (Supplementary Table S6), representing ∼72% of the 2147 original unique vlincRNAs described previously (3).
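The final collapse step can be sketched as a standard strand-aware interval merge; the tuple layout and the example coordinates below are assumptions for illustration.

```python
def collapse(intervals):
    """Merge overlapping intervals; each interval is (chrom, strand, start, end)."""
    merged = []
    for iv in sorted(intervals):
        last = merged[-1] if merged else None
        if last and iv[0] == last[0] and iv[1] == last[1] and iv[2] <= last[3]:
            # Same chromosome and strand, and overlapping: extend the last interval.
            merged[-1] = (iv[0], iv[1], last[2], max(last[3], iv[3]))
        else:
            merged.append(iv)
    return merged

print(collapse([("chr1", "+", 100, 500), ("chr1", "+", 400, 900)]))
# -> [('chr1', '+', 100, 900)]
```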
Moreover, 722 (46.8%) of the 1542 could be assigned to a promoter, further supporting their status as standalone genes. However, the promoter dataset used in this work comes from a limited number of tissues and thus likely does not contain promoters for all vlincRNAs. Therefore, we supplemented this analysis with the FANTOM CAGE clusters representing sites of transcription initiation identified from the 833 samples. As expected, an additional 256 of the 1542 (16.6%) vlincRNA genes had a CAGE cluster within 5 kb of the annotated 5' end. The remaining 564/1542 (36.6%) of vlincRNAs could still represent transcripts regulated by normal RNA Pol II promoters, and the failure to annotate them as such could be due to technical issues, for example regulation by highly tissue-specific promoters or the presence of repeated sequences within these promoters that would hamper their identification.
By definition, most vlincRNA genes do not overlap with UCSC Genes on the same strand; however, genes shorter than 5 kb can be present inside the boundaries of a vlincRNA (3). To estimate whether we are finding a truly novel set of transcripts, we compared them with the GENCODE v22 database. We required an overlap of over 50% of vlincRNA gene length with an annotation to label a vlincRNA as present in that database. The number of such overlapping vlincRNAs was 371/1542 (24.1%). These results show that most (75.9%) of the vlincRNA genes are novel standalone genes.
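The >50% length-overlap rule can be expressed as below; the coordinates are assumed examples, and a real analysis would typically rely on interval tools such as bedtools.

```python
def overlap_fraction(a_start, a_end, b_start, b_end):
    """Fraction of interval A covered by interval B (same chromosome assumed)."""
    inter = max(0, min(a_end, b_end) - max(a_start, b_start))
    return inter / (a_end - a_start)

# A 50 kb vlincRNA gene covered for 25 kb by a GENCODE annotation:
frac = overlap_fraction(10_000, 60_000, 35_000, 80_000)
print(frac, frac > 0.5)  # 0.5 False -> not labelled as already annotated
```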
We compared these vlincRNA genes with previously identified mouse macroRNAs (24). Of the 66 mouse macroRNAs, 60 could be mapped to the HG19 version of the human genome. Of the 1542 vlincRNAs, 15 overlapped the 60 mouse macroRNAs. The significance of the overlap (P-value 0.004) supports our earlier observation that vlincRNAs tend to be syntenically conserved between human and mouse, supporting their functionality (3). However, the fact that almost all vlincRNAs are novel underscores that genes encoding very long non-coding transcripts represent a prominent, still unexplored and under-appreciated feature of the mammalian genome.
LTR vlincRNA genes are controlled by the pluripotency transcription factors OCT4, NANOG, SOX2
The association between vlincRNAs and pluripotency described above encouraged us to explore regulation of these RNAs by TFs associated with maintenance of the pluripotent state. The OCT4, NANOG and SOX2 TFs are involved in the maintenance of the pluripotent state of human ES cells (37). To investigate this, we took advantage of the ChIPseq data for OCT4, NANOG and SOX2 binding sites from the H9 ES cell line (15). We found that flanking (+/−5 kb) DNA regions around annotated start sites of vlincRNAs were significantly enriched in binding sites of each of the three TFs relative to random genomic regions (P-value <0.001, Supplementary Figure S6). This suggested that these transcripts are indeed regulated by these TFs.
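One simple way to express such an enrichment test is an empirical comparison against random regions; the binomial null and the toy numbers below are our assumptions, standing in for whatever randomization scheme the authors actually used.

```python
import numpy as np

rng = np.random.default_rng(1)

def binding_enrichment_p(observed_hits, n_regions, background_rate, n_perm=10_000):
    """Empirical P-value that observed_hits of n_regions +/-5 kb flanks overlap
    a TF peak, given a per-region hit probability estimated from random
    genomic windows (assumed binomial null)."""
    null_hits = rng.binomial(n_regions, background_rate, size=n_perm)
    return (np.sum(null_hits >= observed_hits) + 1) / (n_perm + 1)

# Toy numbers: 120 of 722 vlincRNA flanks hit vs an assumed 5% background rate.
print(f"P = {binding_enrichment_p(120, 722, 0.05):.4f}")
```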
To explore this in more detail, we used the FANTOM5 CAGE time courses of H9 and HES3 ES differentiation (9). We first identified vlincRNAs associated with promoters that overlap binding sites for these TFs (15), as described in 'Materials and Methods' (Figure 4). If a TF regulates the vlincRNA promoters to which it binds, then the correlation of its expression level with the levels of the corresponding vlincRNAs should be higher than with the levels of vlincRNAs whose promoters it does not bind. To test this, we calculated the Spearman correlation of expression of each vlincRNA with expression of each of the three TFs in the time course of H9 or HES3 ES differentiation. We then compared the distributions of correlations between vlincRNAs whose promoters have and do not have evidence of TF binding. The 722 promoter-associated vlincRNA genes were also subdivided into 'LTR' and 'nonLTR' categories based on the presence or absence of an LTR sequence in their promoters. Thus, for each TF we had five categories of vlincRNA genes: (i) those that could not be assigned to a promoter; (ii) LTR and (iii) nonLTR vlincRNAs without the TF binding (the control group); (iv) LTR and (v) nonLTR vlincRNAs with the TF binding. The distribution of these correlations for each TF and the different groups of vlincRNAs is shown in Figure 5, and the corresponding P-values are given in Supplementary Table S7.
We found a surprisingly high correlation between the expression levels of LTR vlincRNAs bound by each of the three TFs and the levels of these TFs in both time courses (Figure 5). The correlation for this group of vlincRNAs was substantially higher than for all other groups--vlincRNAs without promoters, LTR or nonLTR vlincRNAs without the binding sites, and nonLTR vlincRNAs with the binding sites (Figure 5 and Supplementary Table S7). This was the case for each TF and in each of the two ES cell lines (Figure 5, Supplementary Table S7). The correlation for nonLTR vlincRNAs bound by TFs was also sometimes significantly higher than for the non-bound control (Supplementary Table S7, P-value < 0.05, one-sided KS), but the statistical significance was much lower than in the case of the LTR vlincRNAs (Supplementary Table S7). Because the vlincRNA and promoter datasets originated from different cell lines, conceivably some vlincRNAs without assigned promoters might also be regulated by the three TFs. However, this fraction has to be small because the correlation in this group of vlincRNAs was not significantly different from the control group (Figure 5, Supplementary Table S7).
These results indicate that the levels of nonLTR vlincRNAs are primarily controlled by means other than the levels of the three TFs, whereas the levels of LTR vlincRNAs depend to a significant degree on the levels of these TFs. Moreover, the correlation with the pluripotency-associated TFs cannot be explained by the general decrease of all LTR vlincRNAs during differentiation of ES cells (Figure 2), because the control LTR vlincRNAs without the TF binding sites did not exhibit this high level of correlation (Figure 5, Supplementary Table S7). We will therefore refer to the 101 LTR vlincRNA genes bound by at least one of the three TFs as 'pluripotency-associated vlincRNAs'.
We then repeated the same analysis with promoters of known genes with and without LTR elements. Strikingly, the Spearman correlation distribution for the known genes was different from that for vlincRNAs: the genes with LTRs and TF binding sites in their promoters did not exhibit the striking shift towards higher correlation observed for the corresponding vlincRNAs (Figures 5 and 6). Indeed, LTR ChIPseq+ vlincRNAs had a statistically significant shift towards higher correlations with the mRNAs for the three TFs compared to nonLTR vlincRNAs and LTR or nonLTR genes with binding sites (Supplementary Figure S9). The reason for this became apparent, however, when we plotted the strength of ChIPseq signals at the promoters of vlincRNAs and genes in the H9 ES cell line (Figure 7). It revealed a much stronger signal at the promoters of LTR vlincRNAs for each of the three TFs compared to nonLTR vlincRNAs and LTR or nonLTR promoters of known genes (Figure 7). This apparently strong interaction argues for a stronger level of control of LTR vlincRNAs by the TFs and thus explains the much higher correlation between the expression levels of the TFs and the LTR vlincRNAs that bind them, compared to any other group of vlincRNAs or genes.
Patterns of vlincRNA functionality based on co-expression with annotated genes
If a vlincRNA functions in a certain process, it is logical to suggest that it is also co-expressed with RNAs encoding other known functions involved in this process. Furthermore, some of these co-expressed RNAs might encode proteins (or other ncRNAs) that regulate vlincRNAs (like the pluripotency-associated TFs), or these RNAs might represent targets of the vlincRNAs themselves. Reciprocally, patterns of co-expressed genes can reveal which processes vlincRNAs could potentially be involved in. Therefore, we undertook the next logical step to functionally annotate vlincRNAs by examining whether they had a tendency to correlate with certain biological functions or processes (summarized in Supplementary Figure S7). As the first step, for each of the 1542 vlincRNA genes, we found annotated protein-coding transcripts positively or negatively correlating with it in the 833 samples, with Spearman correlation >0.35 or <−0.35, respectively. We then looked for enrichment of certain biological functions or processes among these protein-coding transcripts for each vlincRNA (Supplementary Figure S7). We recorded GO terms enriched with unadjusted P-value < 0.05, thereby establishing a list of GO terms for each individual vlincRNA. As a next step, we calculated enrichment of GO terms in particular subgroups of vlincRNAs, such as the 'pluripotency-associated vlincRNAs', relative to the terms associated with all 1542 vlincRNA genes. Such GO terms would be expected to represent the molecular functions with which a particular group of vlincRNAs correlates, positively or negatively.
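The subgroup-versus-background enrichment step can be sketched with a hypergeometric test; the counts below are invented placeholders, and the actual GO tool and multiple-testing correction used by the authors are not specified beyond the quoted adjusted P-values.

```python
from scipy.stats import hypergeom

def go_term_pvalue(k_subgroup, n_subgroup, k_all, n_all):
    """P(X >= k_subgroup): a GO term attached to k_all of n_all vlincRNA genes
    is seen k_subgroup times among a subgroup of n_subgroup genes."""
    return hypergeom.sf(k_subgroup - 1, n_all, k_all, n_subgroup)

# Placeholder counts: term on 20 of 101 pluripotency-associated vlincRNAs,
# versus 60 of all 1542 vlincRNA genes.
print(f"P = {go_term_pvalue(20, 101, 60, 1542):.2e}")
```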
This analysis revealed some interesting findings, shown in detail in Supplementary Table S8, which lists the top 10 GO terms for each category of vlincRNA. Most importantly, the 101 pluripotency-associated vlincRNA genes (LTR vlincRNAs with a binding site for at least one of the pluripotency TFs) had a strong positive correlation with GO terms representing early embryonic processes. For example, 'anterior/posterior axis specification', 'primitive streak formation', 'gastrulation with mouth forming second', 'midbrain development', 'axis specification' and 'gastrulation' had adjusted P-values ranging from 10 −4 to 4.4 × 10 −8 (Supplementary Table S8). This is consistent with the involvement of this category of vlincRNAs in embryonic development, as revealed above in the ESC differentiation time courses and by their regulation by the pluripotency TFs. It also validated this approach to the inference of vlincRNA functionality. This result cannot be explained by regulation of vlincRNAs and genes in the above categories by the same three pluripotency factors. Indeed, while the RNA levels of ChIPseq+ vlincRNAs and UCSC Genes do correlate with those of the three TFs in the stem cell differentiation time courses, as shown above, this correlation disappears once all 833 samples are included (data not shown). On the other hand, the relationship with the early embryonic GO categories was calculated based on the entire set of 833 samples.
The nonLTR vlincRNA genes with binding sites of the pluripotency TFs also had an interesting pattern of positively correlating functions, which could be classified as 'cellular-level' development functions dealing with cellular proliferation, migration and apoptosis, rather than the 'embryonic-level' functions found for LTR vlincRNAs. Cellular-level functions are represented by such terms as 'positive regulation of cell development', 'stem cell proliferation', 'regulation of locomotion', 'apoptotic process' and 'cerebral cortex cell migration', enriched with adjusted P-values ranging from 5.5 × 10 −6 to 2.4 × 10 −6 (Supplementary Table S8). In this respect, it is noteworthy that the nonLTR vlincRNAs also had a strong negative correlation with cellular adhesion categories (Supplementary Table S8), consistent with the positive categories involved in cellular migration and locomotion. On the other hand, the general enrichment of GO terms for vlincRNAs (LTR or nonLTR) without the pluripotency TF binding sites was much less significant than for those with the sites (Supplementary Table S8). One potential reason is that the vlincRNAs without binding sites for a particular pluripotency TF could represent mixtures of vlincRNAs with different functions. This underscores the enrichment of specific functions among vlincRNAs with pluripotency TF sites. It is therefore likely that extending such analysis to other TFs would reveal additional functional patterns of the remaining vlincRNAs.
[Displaced figure legend fragment: For the latter, promoters with unique coordinates were used in cases when one promoter could be assigned to different transcripts. Only promoters that had at least one ChIPseq read were used for this analysis.]

To exclude the possibility that the above-mentioned results could be explained by expression of other genes in the vicinity of vlincRNAs rather than by the vlincRNAs themselves, we repeated the GO analysis with vlincRNA genes that do not have a UCSC gene within 50 kb on either strand. The results obtained with the 39 (out of 101) LTR ChIPseq+ vlincRNAs confirmed the previous results. Of the top 10 GO terms, 6 were associated with early embryonic development: 'anterior/posterior axis specification', 'primitive streak formation', 'gastrulation with mouth forming second', 'gastrulation', 'axis specification' and 'midbrain development', with corresponding adjusted P-values of 1.89 × 10 −6 , 6.69 × 10 −6 , 1.54 × 10 −5 , 2.13 × 10 −5 , 6.97 × 10 −5 and 1.13 × 10 −4 . The corresponding number of nonLTR ChIPseq+ vlincRNAs was too small for a meaningful P-value analysis. Nonetheless, these results argue that the observed associations are due to expression of the vlincRNAs per se rather than of other genes in their genomic surroundings.
The majority of 'pluripotency-associated vlincRNAs' have not been previously associated with this function
Activation of ncRNAs derived from retrotransposon promoters and enhancers has been previously reported by us (3) and other groups (23,27). Specifically, aided by focusing CAGE analysis on a nuclear fraction rich in noncoding transcripts (38), Fort et al. identified a class of 'NASTs' involved in maintenance of pluripotency in human and mouse ESCs (23). We therefore investigated how many of our vlincRNA genes overlapped with the NAST CAGE clusters. As shown in Table 5 and Supplementary Figure S8, 7.3% (53 out of 722) of vlincRNA genes had a promoter with a NAST CAGE cluster. However, as expected considering the ESC-specific nature of NASTs, this fraction increased to 22.8% (23 out of 101) for the 101 'pluripotency-associated vlincRNAs' (LTR ChIPseq+). In addition, 10.1% of nonLTR ChIPseq+ vlincRNAs had a promoter with a NAST. Also, as expected, the fraction of ChIPseq− vlincRNAs that had NAST clusters in their promoters was significantly lower (Table 5, Supplementary Figure S8).
Since CAGE clusters could also occur within the bodies of transcripts, we then extended the analysis to include both promoters and internal regions of vlincRNAs (Table 5). Predictably, the overlap increased, and the 'pluripotency-associated vlincRNAs' still had the highest fraction of association with NASTs: 34.7% (35/101). Overall, 13.3% (96/722) of vlincRNA genes with assigned promoters overlapped a NAST cluster in the promoter or the body of the transcript (Table 5, Supplementary Figure S8). Finally, 8.4% (130/1542) of all vlincRNA genes had a NAST cluster either in the promoter or the gene body. While the association of every group of vlincRNA genes with NAST clusters was significant, either in promoters or in promoters plus gene bodies (Table 5, Supplementary Figure S8), as would be expected for bona fide transcripts detected by multiple methods, the most significant association was seen with the ChIPseq+ vlincRNAs, especially the LTR-driven ones. Still, for the remaining ∼2/3 of the 101 'pluripotency-associated vlincRNAs', the association with this biological function was revealed for the first time by this work.
vlincRNAs localize in the nucleus in a punctate pattern
Not much is known about the mechanisms of vlincRNA function. The only well-characterized member of this class is the VAD vlincRNA, involved in maintenance of cellular senescence (6). VAD modulates chromatin structure in cis and in trans and also affects expression of specific genes in trans (6). Results presented here also suggest that vlincRNAs could function in cis and potentially in trans, based on their correlation with nearby and distal genes. We wondered whether the vlincRNAs reported here and VAD might share additional properties that might indicate common functional mechanisms and provide additional evidence for grouping these transcripts into one class. One such property critical for RNA function is subcellular localization. VAD, the only vlincRNA for which subcellular localization data are available, has a fairly distinct punctate nuclear localization pattern (6). Therefore, we chose two LTR vlincRNA genes (#898 and #1501, Supplementary Table S6) for localization in K562 cells by RNA fluorescent in situ hybridization (FISH) with RNAscope technology (25) (Figure 8). Both vlincRNAs exhibited a highly localized punctate pattern in the nucleus, similar to that of VAD (Figure 8). Overall, ∼95% of the puncta for the vlincRNAs were nuclear, supporting nuclear localization of these RNAs, unlike the signal from the GAPDH probe, where the majority of the dots were outside of the nucleus (Figure 8). The two vlincRNAs were expressed in ∼60% of cells (63% for #1501 and 58% for #898), similar to the senescence-associated VAD, expressed in 73% of senescent cells (6). Among the expressing cells, the number of dots per cell ranged from 1 to >10, with the largest group of expressing cells (∼40%) having just one dot. However, 44.3% of cells had three dots or more, similarly to the VAD vlincRNA, which could also be localized to multiple (three or more) sub-nuclear locations in 16% of cells (6). Using the DAPI staining pattern of the nuclear DNA (blue) overlaid with the vlincRNA FISH (red), it is apparent that the punctate nuclear pattern of vlincRNA staining is lost in cells that are in the DAPI-bright metaphase-to-telophase stages. Further, in non-dividing cells, the vlincRNA punctate staining was almost invariably located in DAPI-dark euchromatic regions, suggesting that it emanates from, or rapidly localizes to, transcriptionally active regions. In summary, at least three vlincRNAs--the two reported here and VAD (6)--appear to localize in the nucleus with a similar distinct punctate pattern, suggesting potential general similarities in the mechanisms of action of at least some transcripts of this class. However, additional vlincRNAs would need to be tested to determine whether this is a common property of these transcripts.
DISCUSSION
The place occupied by vlincRNAs within the genome's architecture and their functional parameters constitute perhaps the most important questions one could ask about this class of ncRNAs. We show that the majority (1542 out of 2147) of vlincRNAs, even those adjacent to known genes, represent standalone ncRNA genes. This conclusion is based on several observations, most notably the analysis of insulators--genetic elements initially discovered as barriers separating genomic elements. While more recent work has questioned the function of these elements (13), here we show that insulators do indeed act globally as genomic barriers and separate standalone genic units. Using separation by insulators from existing annotations as one of the main criteria, we found that vlincRNAs represent at least 1542 new genes. This includes the identification of hundreds of vlincRNAs (452 out of 1542) adjacent to, and on the same strand as, known genes, yet representing separate genes. Even when separated by insulators, vlincRNAs are positively correlated with the expression of adjacent or nearby genes on the same or opposite strands. As described above, this effect was most striking for the adjacent head-to-head pairs. Assuming that the basic transcriptional and chromatin machinery is the same for known genes and vlincRNAs, the effect of insulators should be the same on any pair of transcriptional units. Thus, the positive correlation could not be explained by the shared chromatin environment, because vlincRNA-gene pairs separated by insulators always had higher correlation than gene-gene pairs. The local regulatory effect appears to peak at closer distances and to manifest itself most in the sense and head-to-head configurations of vlincRNA-gene pairs. Overall, the data presented here suggest a paradigm of regulation similar to the recently reported enhancer-like RNAs or activating ncRNAs (ncRNA-a's) (11,12). At present, we cannot totally exclude the possibility of the reciprocal effect, whereby adjacent genes positively regulate vlincRNAs. However, by analogy with the activating RNAs, it is logical to suggest that vlincRNAs fulfil similar functions and activate nearby genes in cis.
The positive regulatory effect of vlincRNAs may not be limited to nearby genes. For example, the VAD vlincRNA participates in the activation in trans of the INK4 locus, which contains anti-proliferative genes required for the senescence-associated proliferation arrest (6). Inhibition of the HELLP vlincRNA with siRNAs causes mostly downregulation of expression of its target genes (39), suggesting positive regulation of its targets. This would be consistent with the idea of rather diverse modes of ncRNA function (40,41), including as intelligent scaffolds guiding various processes associated with regulation of gene expression in the nucleus (42). In this case, the punctate pattern of nuclear localization could be revealing subnuclear domains where these vlincRNAs control expression of specific subsets of genes.

[Figure 8. In-situ localization of vlincRNAs using RNA-FISH. K562 human erythroleukemia cells were fixed and analysed by RNA-FISH with probes against GAPDH and two LTR vlincRNAs, vlinc 377 and vlinc 500, originally identified in (4) and corresponding to vlincRNA genes with IDs 1501 and 898, respectively, in Supplementary Table S6. RNA probes were labelled with ATTO red (red), hybridized and washed extensively prior to counterstaining of DNA with DAPI (blue). Confocal microscopy was used to obtain z-stacks of images that were then digitally flattened, and the red and blue channels were merged to produce the composite image.]
Perhaps the most notable result revealed here is that a systems biology-based approach integrating different datasets could distinguish subsets of a novel class of ncRNAs--vlincRNAs--based on predicted biological function. One such prominent subset likely functions in early embryonic development and includes vlincRNAs with LTR promoters and binding sites for any one of the three pluripotency-associated TFs. While we do not yet have direct genetic evidence, this conclusion relies on four independent lines of investigation. First, expression of this group of vlincRNAs is highest in pluripotent ES cells and downregulated during their differentiation. Pluripotent cells are known to have more active genomes (43); however, if downregulation of the LTR vlincRNAs during ESC differentiation simply reflected general chromatin silencing, it is not clear why nonLTR vlincRNAs would not follow the same trend. Second, the vlincRNAs have upstream promoters with consensus binding sites for well-established pluripotency TFs. Likewise, there is very strong evidence of regulation of vlincRNAs by these TFs, at least during the time course of ESC differentiation. While LTR vlincRNAs in general have a tendency to decrease during the time course of ESC differentiation, only those with the binding sites correlate strongly with the levels of these TFs. Furthermore, binding by the TFs per se does not necessarily equate with regulation, as shown by the nonLTR vlincRNAs. Thus, the presence of multiple features in their promoters--LTRs and binding sites--defines the functioning and regulation of these ncRNAs. Third, co-regulation of these 101 novel vlincRNA genes across all 833 samples with genes enriched in early embryonic functions provides additional strong support for their function. Finally, the promoters of the 101 LTR vlincRNA genes exhibit the strongest binding of each of the three TFs relative to other vlincRNAs and, even more importantly, to the known genes. Combined, these four lines of evidence argue against a spurious, leaky association of vlincRNAs with known transcripts, and are more consistent with highly regulated transcription of the vlincRNAs. However, additional experiments are required to completely rule out this possibility for all vlincRNAs.
These results add to the growing evidence that remnants of endogenous retroviruses have had an important and far-reaching impact on shaping our development as a species. The categories of genes correlating with pluripotency-associated vlincRNAs included early brain development, with intriguing possibilities of involvement of these RNAs in defining species-specific brain functions, as has been previously proposed (40,44).
The strength of associations with various functional GO terms for the 210 vlincRNA genes bound by the pluripotency TFs contrasts markedly with the relatively low significance of associations for the 512 vlincRNA genes not bound by these factors. The likely scenario is that the latter category contains a mixture of various functionalities yet to be revealed by a comprehensive approach similar to this one. However, this provides a good control for the functional enrichment observed for the vlincRNAs with the binding sites. Interestingly, the 109 nonLTR vlincRNA genes bound by the pluripotency TFs had different patterns of functions. While also related to development, they positively correlated with cellular (including stem cell) proliferation, migration and apoptosis, and negatively correlated with cell adhesion. This evidence supports our previous experiments, which revealed a much higher fraction of apoptosis after inhibition of nonLTR vlincRNAs compared to LTR vlincRNAs (3).
Functionality of ncRNAs is the coveted goal of modern genetics research. However, as discussed by Mudge et al. (7), efforts to reach this lofty objective require systems biology approaches rather than traditional forward genetics methods from the protein-coding world. These approaches must also aim to achieve integration between distinct large genome-scale datasets, even with the inherent issues with noise (5). Here we show the feasibility of this approach by identifying potential functional properties for 210 new vlincRNA genes. This includes the 101 novel genes encoding non-coding RNAs potentially involved in early development, representing a significant addition to our knowledge of molecular actors involved in this process. Additional experiments would be required to unequivocally prove their function and characterize mechanisms of action.
Overall, our results suggest that vlincRNAs represent a hitherto hidden layer of regulation involved in critical biological processes and disease, participating as yet unappreciated molecular components of these processes. As we show here, well-characterized TFs could have additional facets of their function mediated by vlincRNAs. In addition, these ncRNAs could represent functional missing links that connect hits from genome-wide association studies to phenotypes (45).
SUPPLEMENTARY DATA
Supplementary Data are available at NAR Online. | 2016-05-04T20:20:58.661Z | 2016-03-21T00:00:00.000 | {
"year": 2016,
"sha1": "18c280d8fb4aab549afe3a778e86f7133414c932",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/nar/article-pdf/44/7/3233/17438830/gkw162.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bec0b994efa1b649590773ccb5338c0b378b2bb9",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
52958309 | pes2o/s2orc | v3-fos-license | An Ultrasonication-Assisted Cobalt Hydroxide Composite with Enhanced Electrocatalytic Activity toward Oxygen Evolution Reaction
A catalyst for the oxygen evolution reaction (OER) was synthesized by depositing cobalt hydroxide on carbon black. Ultrasonication was applied during precipitation to improve the performance of the catalyst. The ultrasonication-assisted process refined the cobalt hydroxide particles from 400 nm to 50 nm and ensured the thorough incorporation of these particles with the carbon black substrate. The resulting product exhibited enhanced OER catalytic activity, with an onset potential of 1.54 V (vs. reversible hydrogen electrode), a Tafel slope of 18.18 mV/dec and a stable OER potential at a current density of 10 mA cm −2 , owing to the reduced resistance of the catalyst itself and the reduced electron transfer resistance.
Introduction
Clean and renewable energy sources urgently need to be developed to address the growing energy crisis and pollution problems. Hydrogen, a high-energy-density carbon-free energy source, has been considered a clean energy carrier, and its production has become one of the top research hotspots. Electrochemical water splitting is an effective approach to convert renewable energy sources into promising hydrogen energy [1][2][3][4][5]. This process involves a hydrogen evolution reaction at the cathode and an oxygen evolution reaction (OER, 4OH − → O 2 + 2H 2 O + 4e − in alkaline solution) at the anode. The OER is the major obstacle for water splitting because this process is kinetically sluggish, involving a complex four-electron transfer with high overpotential [6][7][8]. Considerable attention has been paid to developing efficient OER catalysts with low overpotential and high activity.
IrO 2 and RuO 2 have been demonstrated to be the best OER electrocatalysts [9,10]. However, their large-scale application is blocked by their high prices and limited natural reserves. The first-row transition metal oxides have been extensively investigated because of their natural abundance, low cost, environmental friendliness and high catalytic activity [11][12][13][14][15][16]. Studies have suggested that the oxy-hydroxides of nickel, cobalt and iron are promising electrocatalysts for the OER. Co 3 O 4 [17][18][19] and Co(OH) 2 [20][21][22], synthesized by different methods, have been widely investigated because of their high stability and high catalytic activity (theoretically, Co 3 O 4 can approach the OER activity of RuO 2 and IrO 2 ). However, the development of cobalt oxy-hydroxides with high electrocatalytic activity toward the OER remains a major challenge because of their poor electrical conductivity and limited exposed active sites.
To obtain an OER catalyst with better conductivity, transition metal oxide catalysts are mixed with conductive materials, such as carbon black, with the aid of polymeric binders in actual application.
However, the use of insulating binders can reduce the electrocatalytic performance [23,24]. An efficient strategy to overcome this shortcoming is to fabricate nanoparticles of OER catalysts and immobilize them directly onto conductive substrates. Fabrication of composites of cobalt oxy-hydroxides with functionalized carbon-based materials (e.g., heteroatom-doped graphene and carbon nanotubes) has received special interest. This approach can improve the poor conductivity of metal oxides, and the resulting catalysts also usually exhibit bifunctional catalytic activities [25,26]. The morphologies of the transition metal oxides also play a key role in the heterogeneous electrocatalytic reaction, because refining the particle sizes or increasing the porosity of the catalysts, and thereby their specific surface area, exposes more catalytically active sites to facilitate the OER [27,28].
Ultrasonication is a nonconventional technique that has been proven superior in terms of ease of operation, product selectivity and reduced reaction time. Ultrasonication induces the instantaneous generation, growth and collapse of micrometer-sized bubbles in the liquid medium, which is helpful for the synthesis of nanostructured materials. Recently, the use of ultrasonication in chemistry has grown significantly [29][30][31]. In the present work, we prepared cobalt hydroxide [Co(OH) 2 ] on carbon black by parallel flow precipitation with the assistance of ultrasonication. The Co(OH) 2 particles were refined and well incorporated with the carbon black substrate, and the resulting catalyst shows enhanced OER catalytic activity.
Synthesis of the Catalyst
A mixed solvent (pH 10) of 30 mL of deionized water and 30 mL of ethanol was placed in a water bath (50 °C) equipped with an ultrasonic generator (40 kHz). Then, 2 g of carbon black (VXC72R, Cabot) was dispersed into the solvent under ultrasonication and mechanical stirring. Afterward, 30 mL of Co(NO 3 ) 2 ·6H 2 O solution (1.0 mol L −1 , 0.03 mol) and 30 mL of KOH solution (2.0 mol L −1 , 0.06 mol) were added into the mixed solvent at 1.0 mL min −1 by two peristaltic pumps. Ultrasonication and stirring were then terminated, and the reaction system was kept in the water bath for another 10 h, followed by filtration, washing with deionized water until pH 7, and drying at 80 °C for 10 h. The resulting catalyst was denoted as ultra-Co(OH) 2 /C. As a control, Co(OH) 2 /C was prepared by the same procedure but without the assistance of ultrasonication during precipitation.
Electrochemical Measurements
The OER catalytic activity of the catalysts was evaluated by electrochemical measurements performed in 0.1 M KOH solution on a CHI760E workstation (Chenhua, Shanghai, China) with a platinum sheet as the counter electrode, a saturated calomel electrode (SCE) as the reference electrode, and a glassy carbon electrode (Φ = 0.5 cm) coated with the catalyst as the working electrode. The working electrode was prepared by pipetting 10 µL of catalyst dispersion (including commercial RuO 2 ; concentration of 10 mg mL −1 ) onto the electrode surface and drying naturally in air, giving a catalyst loading of 0.51 mg cm −2 . Electrochemical linear sweep voltammetry (LSV) was conducted at a scan rate of 5 mV s −1 . The OER potential was recorded by chronopotentiometry at a current density of 10 mA cm −2 . The catalyst dispersion was also dropped onto graphite paper and subjected to the chronopotentiometry test for 5 h, after which its micromorphology was observed. The electrochemical impedance spectrum (EIS) was recorded over a frequency range from 10 5 Hz to 0.1 Hz. All potentials were converted to the reversible hydrogen electrode (RHE) scale using the equation E RHE = E SCE + 0.2415 V + 0.0591 pH (pH 13).
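For readers who want to apply the same scale conversion, a minimal sketch of the stated relation is given below; the example reading of 0.53 V vs. SCE is an assumed illustration, not a value taken from the paper.

```python
def sce_to_rhe(e_sce_volts: float, ph: float = 13.0) -> float:
    """Convert a potential measured vs. SCE to the RHE scale using
    E_RHE = E_SCE + 0.2415 V + 0.0591 V * pH (relation quoted in the text)."""
    return e_sce_volts + 0.2415 + 0.0591 * ph

# Assumed example: a 0.53 V vs. SCE reading in pH 13 electrolyte.
print(f"{sce_to_rhe(0.53):.2f} V vs. RHE")  # -> 1.54 V
```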
Results and Discussion

Figure 1 shows the preparation of the Co(OH) 2 /C catalysts without/with ultrasonication, and the morphologies of the resulting catalysts. Once the solutions of Co(NO 3 ) 2 and KOH were dropped into the suspension containing carbon black, the Co 2+ ions immediately reacted with the OH − ions to form Co(OH) 2 precipitates, which were deposited onto the surface of the carbon black to yield the Co(OH) 2 /C catalyst.

During the initial stage of precipitation, the Co(OH) 2 particles were small and had greater surface Gibbs free energies. The particles spontaneously assembled to form bigger crystals under quiescent conditions. However, crystal growth was disrupted once ultrasonication was imposed on the reaction system. The SEM images in Figure 1 show hexagonal Co(OH) 2 crystals with sizes of approximately 400 nm in the Co(OH) 2 /C catalyst. These Co(OH) 2 particles were bigger than the carbon black particles, and the two were not well incorporated with each other. In contrast, in the ultra-Co(OH) 2 /C prepared with the assistance of ultrasonication, only spherical particles with sizes of less than 50 nm could be observed, and no obvious Co(OH) 2 particles with regular shapes were found.
These results indicate that the Co(OH) 2 crystals could not grow well because of the disturbance caused by ultrasonication during precipitation. The Co(OH) 2 precipitates existed as small particles adsorbed on the surface of the carbon black. Thus, better distribution and incorporation with the carbon black substrate was achieved in ultra-Co(OH) 2 /C. The electrocatalytic performance of the catalysts toward the OER was investigated in 0.1 M KOH solution. Figure 3a shows the linear sweep voltammetry (LSV) curves of the OER on the Co(OH) 2 /C, ultra-Co(OH) 2 /C and pure commercial RuO 2 catalysts. The OER onset potential of RuO 2 was approximately 1.52 V, while those of Co(OH) 2 /C and ultra-Co(OH) 2 /C were approximately 1.54 V. The similar potentials show that the OER catalytic activity of Co(OH) 2 was close to that of the benchmark RuO 2 catalyst. However, the OER current densities of the three catalysts differed. At the same potential of 1.66 V, the OER current density was 6.16 mA cm −2 on the ultra-Co(OH) 2 /C, greater than the 4.05 mA cm −2 of the Co(OH) 2 /C and the 2.95 mA cm −2 of RuO 2 . The improvement in the catalytic performance of ultra-Co(OH) 2 /C can be attributed to its smaller particle size and better conductivity.
The LSV curves were converted to Tafel plots and fitted by the Tafel equation (η = a + b lg j, where η is the overpotential, j is the current density and b is the Tafel slope) to examine the catalytic kinetics of the catalysts, as shown in Figure 3b. As expected, the ultra-Co(OH) 2 /C showed a Tafel slope of 18.18 mV/dec, less than the 18.92 mV/dec of the Co(OH) 2 /C and the 20.25 mV/dec of RuO 2 . Thus, the ultra-Co(OH) 2 /C could achieve a higher current density at a lower overpotential than the Co(OH) 2 /C and RuO 2 catalysts. Figure 4 presents the OER performed at a current density of 10 mA cm −2 , together with the SEM images of the two catalysts after catalyzing the OER for 5 h. The electrode potential of ultra-Co(OH) 2 /C remained stable at a lower level throughout the OER process. In contrast, the overpotential of the Co(OH) 2 /C catalyst was greater, and it increased with the reaction time. As shown in the SEM images, the hexagonal crystals of Co(OH) 2 in the Co(OH) 2 /C catalyst disappeared after the OER. The breakdown of the Co(OH) 2 crystals during the OER deteriorated the catalytic performance of the Co(OH) 2 /C catalyst.
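As a worked illustration of the fit, the sketch below estimates b from the Tafel relation η = a + b·log10(j) by least squares; the data points are assumed values chosen to give a slope near 18 mV/dec, not digitized from Figure 3b.

```python
import numpy as np

def tafel_slope(j_mA_cm2, eta_mV):
    """Least-squares fit of eta = a + b*log10(j); returns the slope b in mV/dec."""
    b, a = np.polyfit(np.log10(j_mA_cm2), eta_mV, 1)
    return b

# Assumed illustrative points (current density in mA cm^-2, overpotential in mV).
j = np.array([1.0, 2.0, 4.0, 8.0])
eta = np.array([300.0, 305.5, 310.9, 316.4])
print(f"Tafel slope ~ {tafel_slope(j, eta):.1f} mV/dec")
```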
These results indicate that the ultra-Co(OH) 2 /C catalyst has better OER catalytic performance than the Co(OH) 2 /C catalyst. This is because, owing to the disturbance from ultrasonication during precipitation, the Co(OH) 2 particles in the ultra-Co(OH) 2 /C catalyst were smaller than those in the Co(OH) 2 /C catalyst. Consequently, the Co(OH) 2 in the ultra-Co(OH) 2 /C catalyst had a greater electrochemical specific surface area and more exposed active sites to catalyze the oxygen evolution reaction. To further investigate the charge transfer kinetics during the OER, electrochemical impedance spectroscopy (EIS) was performed at a potential of 1.66 V, at which the OER proceeded on both catalysts. Figure 5a shows that the Nyquist curves of the Co(OH) 2 /C and ultra-Co(OH) 2 /C catalysts were composed of a small capacitive semi-circle in the high-frequency region and a large capacitive semi-circle in the low-frequency region. The smaller impedance of the ultra-Co(OH) 2 /C catalyst (Figure 5b) corresponded to better conductivity. Two peaks existed in the Bode plots of phase angle versus frequency, as shown in Figure 5c. Thus, the impedance spectra could be fitted by an equivalent circuit model with two time constants, as shown in Figure 5d. Herein, R s represents the resistance of the electrolyte between the reference and working electrodes, R f is the resistance of the catalyst itself, Q is the constant phase element in the catalyst, R ct is the electron transfer resistance in the OER process and C dl is the double-layer capacitance on the catalyst surface.
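A minimal sketch of such a two-time-constant model is given below; the series arrangement Rs + (Rf || CPE) + (Rct || Cdl) and the parameter values are our assumptions for illustration, and the paper's actual fit was done with dedicated EIS software, as noted below.

```python
import numpy as np

def z_model(f, rs, rf, q, n, rct, cdl):
    """Impedance (ohm cm^2) of the assumed circuit Rs + (Rf || CPE) + (Rct || Cdl),
    with the CPE impedance Z_CPE = 1 / (Q * (j*w)^n)."""
    w = 2 * np.pi * f
    z_cpe = 1.0 / (q * (1j * w) ** n)
    z1 = rf * z_cpe / (rf + z_cpe)          # catalyst-film time constant
    z2 = rct / (1 + 1j * w * rct * cdl)     # charge-transfer time constant
    return rs + z1 + z2

freqs = np.logspace(5, -1, 60)  # measured range: 1e5 Hz down to 0.1 Hz
z = z_model(freqs, rs=1.0, rf=3.205, q=1e-3, n=0.9, rct=5.487, cdl=0.0173)
# A real fit would minimize |Z_model - Z_measured| over (rs, rf, q, n, rct, cdl),
# e.g. with scipy.optimize.least_squares on stacked real/imaginary parts.
print(z[0], z[-1])
```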
The values of the equivalent circuit elements were fitted using ZSimpDemo software (3.30d, Ann Arbor, MI, USA). Among these elements, Co(OH) 2 /C and ultra-Co(OH) 2 /C have R f values of 4.063 ohm cm 2 and 3.205 ohm cm 2 , respectively, and corresponding R ct values of 8.448 ohm cm 2 and 5.487 ohm cm 2 . The reduced resistances reveal a smaller obstacle to the OER. Meanwhile, the C dl of ultra-Co(OH) 2 /C, 0.0173 F cm −2 , is greater than the 0.0073 F cm −2 of Co(OH) 2 /C. This result indicates the greater specific surface area and the greater number of active sites of ultra-Co(OH) 2 /C, which facilitate the OER.
Conclusions
An OER catalyst, ultra-Co(OH) 2 /C, was fabricated by a parallel flow precipitation method with the assistance of ultrasonication. The growth of regular hexagonal Co(OH) 2 crystals was controlled by ultrasonication, and their size decreased from approximately 400 nm to less than 50 nm. The smaller Co(OH) 2 particles expose more active sites on the catalyst surface and, upon mixing with carbon black, promote the formation of a more stable catalytically active film. For these reasons, the ultra-Co(OH) 2 /C showed better OER catalytic activity, with an onset potential of 1.54 V and a Tafel slope of 18.18 mV/dec, which ensured that the ultra-Co(OH) 2 /C worked at a more stable and lower potential in the actual OER process. The conductivity of the catalyst was also improved because of the smaller Co(OH) 2 particles and the better incorporation with carbon black. The resistance R f of the ultra-Co(OH) 2 /C itself and the electron transfer resistance R ct during the OER were reduced by 21% and 35%, respectively, compared to those of Co(OH) 2 /C prepared without the assistance of ultrasonication, ultimately leading to improved catalytic performance in the oxygen evolution reaction.
Conflicts of Interest:
The authors declare no conflicts of interest. | 2018-10-27T16:49:07.638Z | 2018-10-01T00:00:00.000 | {
"year": 2018,
"sha1": "3adb3619b03283be356aa1ded21e7d18b84d1f76",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/11/10/1912/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3adb3619b03283be356aa1ded21e7d18b84d1f76",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
227246409 | pes2o/s2orc | v3-fos-license | COVID-19: Short and Long-Term Effects of Hospitalization on Muscular Weakness in the Elderly
The COVID-19 pandemic has recently been the cause of a global public health emergency. Frequently, elderly patients experience a marked loss of muscle mass and strength during hospitalization, resulting in a significant functional decline. This paper describes the impact of prolonged immobilization and current pharmacological treatments on muscular metabolism. In addition, the scientific evidence for early strength intervention, neuromuscular electrical stimulation or the application of heat therapy during hospitalization to help prevent COVID-19 functional sequelae is analyzed. This review highlights the need to: (1) determine which potential pharmacological interventions have a negative impact on muscle quality and quantity; (2) define a feasible and reliable pharmacological protocol to achieve a balance between the desired and undesired effects of medication in the treatment of this novel disease; (3) implement practical strategies to reduce muscle weakness during bed rest hospitalization and (4) develop a specific, early and safe protocol-based care of functional interventions for older adults affected by COVID-19 during and after hospitalization.
Introduction
The World Health Organization (WHO) has declared the novel coronavirus SARS-CoV-2 (hereinafter COVID-19) disease a global pandemic. It threatens current lifestyles and is prompting irreversible consequences for health and for economic and social stability. This novel coronavirus can produce severe disease symptoms among people of all ages; however, it has been evidenced that older adults with multi-morbidity are at the highest risk of poor prognosis due to COVID-19 [1]. In fact, old age has been reported as a significant independent predictor of mortality in the COVID-19 pandemic. For example, the mortality rate for COVID-19 infection in the Chinese population was 21.9% in the >80 years age group, compared with 1.3% in the 50-59 age group [2,3].
Common complications from COVID-19 infection are acute viral pneumonitis evolving to acute respiratory distress syndrome (ARDS), acute kidney injury (AKI), proinflammatory hypercoagulable state with thromboembolic events, sepsis and cardiac injury. Complications are believed to derive from a pro-inflammatory state with cytokine release [4,5].
It is noteworthy that COVID-19-infected patients tend to experience prolonged hospitalization or intensive care unit (ICU) stays, with an average of three weeks in the ICU [6]. Commonly, bed rest is prescribed for infected patients in order to minimize metabolic demand and direct resources towards the recovery process. However, it has been evidenced that long periods of immobilization and rest in the hospital and ICU have a negative impact on several body systems. As an example, a period of four to six weeks of bed rest has been shown to cause muscle wasting, loss of muscle force generation capacity (6% to 40% of muscle strength) and changes in contractile proteins (muscle protein turnover), among others [7].
The sequelae of prolonged immobility on the musculoskeletal system of SARS patients have been previously documented [8]; however, little is known about the consequences for muscular function and deterioration associated with this novel coronavirus. Thus, it is possible to suggest that the convergence of certain drug therapy treatments and long periods of bed rest might worsen muscular weakness, accelerating functional decline in elderly patients infected with COVID-19 (Figure 1).
It seems important to keep in mind that the aging process is accompanied by an inherent sarcopenic process, characterized by an accelerated loss of muscle mass. Hence, the risk of suffering sarcopenia and frailty increases with age [9]. Unfortunately, there is no evidence to date about the impact of COVID-19 infection on this degenerative process and it is unclear whether patients under the age of 70 years will experience premature frailty as a consequence of the disease. In this novel coronavirus scenario, Abbatecola et al. [10] have recently coined a new concept named 'COVID spiraling frailty syndrome' as a crucial entity with a negative impact on elderly people, derived from the convergence of factors such as advanced age and the presence of comorbidities. Furthermore, correlations between the average hospital length of stay, doses of the pharmacological treatments and the consequent neuromuscular damage need to be explored further.
Thus, the purpose of this paper was to (1) determine which potential pharmacological interventions have a negative impact on muscle quality and quantity; (2) define a feasible and reliable pharmacological protocol to achieve a balance between desired and undesired medication effects in the treatment of this novel disease; (3) implement practical strategies to reduce muscle weakness during bed rest hospitalization and (4) develop a specific, early and safe protocol-based care of functional interventions for older adults affected by COVID-19 during and after hospitalization.
COVID-19: Impact of Pharmacological Treatment on Muscle Metabolism
Empirical therapies including glucocorticosteroids as immune-modulatory agents are being prescribed [11]. Moreover, in COVID-19 ICU-admitted patients, ventilatory support, renal support and hemodynamic stability are pursued under a regime that includes high-flow nasal cannula, invasive mechanical ventilation, vasoactive agents and continuous renal replacement therapy (CRRT). Noradrenaline is currently recommended as the first-choice vasoactive agent, followed by vasopressin [12]. Some of these drugs have a known interference with protein synthesis and muscular metabolism and may potentially contribute to physical function impairment in survivors of COVID-19 infection. For example, systemic glucocorticosteroids (GCs) have well-known anti-inflammatory and immunosuppressive properties. Therapy with GCs is applied in clinical practice in order to inhibit a cytokine storm and manage inflammatory-induced lung injury in COVID-19 patients [13]. Different treatment schedules are used depending on the patient's characteristics, clinical severity and the preference of the physician. In the clinical setting, the standard methylprednisolone-equivalent dosages range from 0.5 to 1 mg/kg, with a variable duration depending on the clinical response. The results from a preliminary trial suggest the benefit of using a low dose of dexamethasone, 6 mg/day, or 32 mg/day of methylprednisolone as an equivalent, in severely ill COVID-19 patients [14]. However, high doses (>1-1.5 mg/kg/day of methylprednisolone) are frequently administered in the clinical setting. For example, pulses of methylprednisolone ranging from 125 to 1000 mg/day for one to five days are commonly used in severe patients. Scientific evidence and clinical experience support this practice [15].
The delicate clinical condition of these patients, with the attendant risk of respiratory failure, warrants their use. However, as GCs usually induce a wide range of side effects, adverse outcomes derived from their use should be anticipated and managed accordingly.
Past studies have suggested that the use of systemic corticoids for treating acute lung injury during the 2003 SARS-CoV emergency contributed to the muscular weakness and decreased functional capacity observed in survivors at follow-up visits [16]. It is important to note that corticosteroid therapy can also exert muscular damage through an impairment in the electrical excitability of muscle fibers, a decrease in the number of thick filaments and a reduction in anabolic protein synthesis along with increased protein degradation [17]. Indeed, muscular weakness is a well-known chronic side effect observed in patients undergoing long-term treatment with GCs. It has also been evidenced that the administration of GCs over a short period of time can induce an early-onset myopathy in critically ill patients, which is characterized by a progressive weakening of several muscle groups.
Steroid-induced myopathy is most frequently observed in patients treated with >10 mg/day of prednisone or its equivalent for a few weeks [18]. Generally, the higher the dose, the more rapidly histological changes are likely to develop. A few authors have reported acute myopathy even with standard doses (40-60 mg/day of prednisone) and independently of the route of administration or the kind of corticosteroid [19]. As an example, Hanson et al. [20] reported findings of acute myopathy after five days of treatment with a dose equivalent to 60 mg per day of methylprednisolone. Furthermore, old age and several ageing mechanisms may increase the risk of steroid-induced myopathy. Thus, it is plausible that treatment with GCs may exert a role in muscle deterioration in older COVID-19 survivors. Unfortunately, further clinical investigations are needed to better define the relationship between the dose-response used in COVID-19 infected patients and the damage exerted on muscular quality.
Finally, it seems important to remark that certain pharmacological interventions used in the ICU environment may further exacerbate muscular damage. For example, the use of β-adrenergic vasoactive agents such as norepinephrine to treat circulatory failure has been independently associated with ICU-acquired weakness [21]. The use of neuromuscular blocking agents during mechanical ventilation has classically been considered a risk factor; however, it is not clear to date if they actually play a role in the development of muscular weakness [22].
COVID-19: Impact of Hospitalization and ICU on the Musculoskeletal System
It has been evidenced that the hospitalization rates for COVID-19 increase with age and that older adults are at the highest risk of hospital admission [23]. For many years, the negative functional consequences of prolonged hospitalization have been well recognized. In the context of COVID-19, recent epidemiological studies have reported an average in-hospital length of stay of 20 days [24,25] with an average ICU stay of three weeks [6]. This seems important because the number of days of bed rest during hospitalization or ICU stay is today regarded as a predictive factor for the deterioration of neuromuscular properties [26]. It has been evidenced that in young people, a bedridden period of three weeks has a greater negative impact on functional capability than 40 years of aging [27]. Remarkably, even a short period of absolute rest (up to 10 days) may trigger skeletal muscle wasting. Kortebein et al. [28] found a substantial loss of muscle strength and power (knee extension p = 0.004, knee flexion p = 0.003 and stair ascent power p = 0.01) after 10 days of bed rest in healthy elderly people (60-85 years old).
It should be noted that older patients are especially vulnerable to these changes, with a higher risk of functional dependence loss and cognitive decline after discharge [29]. In fact, it is likely that pre-frail patients admitted to ICU due to critical COVID-19 complications who undergo prolonged body immobilization (more than 10 days) experience a bilateral muscle weakness and decreased muscle strength with devastating consequences on functional capacity [26].
Ageing is accompanied by an inherent sarcopenic process characterized by an accelerated loss of muscle mass and function; however, prolonged bed rest with muscular disuse can precipitate this "catabolic crisis" causing skeletal muscle atrophy (low muscle mass and low muscle function) [30]. Thus, a premature functional disability is expected.
Studies have shown how absolute rest can lead to various adverse effects such as changes in total muscle mass, metabolic activity, muscle denervation and a loss of contractile force with increasing fatigue and reduced muscle strength [31,32] (Figure 1).
The aetiology of coronavirus-related muscle weakness in hospitalized older patients seems to involve several interdependent processes. Firstly, prolonged bed rest appears to be the predominant factor in COVID-19 patients. It is known that becoming temporarily bedridden, with a lack of muscular weight-bearing activities, is usually followed by an alteration in muscular protein homeostasis. This imbalance may occur quickly and be secondary to an accelerated muscle protein breakdown and a suppression of muscle protein synthesis [33]. This catabolic process is mediated by neurohormonal disorders and systemic inflammation. As a result, alterations in muscle mass and structure are commonly observed among these patients. Moreover, a reduction in the strength of fast-twitch fibers compared with slow-twitch fibers is also likely to happen, with a consequent deterioration in resistance capacity [26]. Secondly, other factors such as severe illness, sepsis, mechanical ventilation, parenteral nutrition and certain drug therapies involved in COVID-19 treatment might further accelerate this weakening process [34]. Additionally, other mechanisms have a compounding effect on neuromuscular performance impairment. For example, becoming temporarily bedridden has been further associated with muscle fiber denervation, neuromuscular junction damage and membrane hypoexcitability [26].
Other factors such as cellular bioenergetics are also involved. It is believed that mitochondrial dysfunction may play an important role in the physical function impairment observed in COVID-19 patients. Deterioration of muscle mitochondria, and therefore of the muscle's capacity to utilize O2, is only one part (and not the most important one) of the story. The main cause of the acute reduction of VO2max during bed rest is a marked reduction of cardiac output, i.e., of the capacity to deliver O2 to the working muscles. Cardiac output is markedly reduced because of a reduction in pre-load (to which the loss of muscle mass/muscle pump contributes substantially) and contractility of the heart. While muscle deterioration takes weeks to happen, at least in healthy individuals, the loss of VO2max resulting from a drop in cardiac output is very rapid (1% loss per day from day one of bed rest) [35].
Absolute bed rest, mechanical ventilation and the hyperinflammation status in elderly patients can trigger a reduction in muscular mitochondrial content and a decrease in phosphorylation enzyme activity. The reduction in patients' fatigue resistance and endurance capacity after recovery is attributed to these processes [36][37][38].
Remarkably, clinical practice and research articles point out that the virus itself can cause myopathic changes unrelated to pharmacological treatment or a critical illness state. Leung et al. reported muscle atrophy and focal necrosis in infected patients based on muscular tissue samples, suggesting that a few of these changes may be the consequence of the activation of local cytokine-mediated pathways alone [39]. There is increasing evidence that COVID-19 patients typically present with general weakness and myalgias that may persist even for weeks after the acute phase. Surprisingly, in both ambulatory and hospitalized patients, acute skeletal muscle damage, accompanied by high concentrations of creatine kinase, is sometimes the initial clinical manifestation of COVID-19 [40]. Furthermore, a clinical study during the SARS epidemic found elevated concentrations of serum creatine kinase in 32% of subjects [41].
The evaluation of muscle weakness is recommended to tailor the intervention strategies and to assess their effect. Such an evaluation covers both muscle function and muscle structure. Muscle function is underlined by the three concepts of muscle strength, muscle power and muscle endurance. Muscle strength is measured routinely in clinical settings, especially for the diagnosis of sarcopenia and frailty. Handgrip strength is the most widely used measure for the assessment of overall muscle strength [42]. It is an easy measure and only requires a handheld dynamometer. It is recognized to be easily applicable both in research and in clinical settings [43]. Additionally, muscle structure, muscle mass and muscle quality can be quickly assessed at the bedside with an ultrasound technique [44,45].
This scenario highlights the necessity of awareness of the harmful effects of prolonged rest and the aforementioned factors in order to prevent functional decline in frail and non-frail ICU-admitted or hospitalized patients who are infected by COVID-19.
COVID-19: Non-Pharmacological Strategies
Two phases have been identified in hospitalized patients. The first, acute phase is characterized by a respiratory syndrome; in the second, more prolonged phase, neuromotor sequelae and impaired functional status prevail. Muscular structure and neuromuscular function decrease exponentially with the combination of pharmacological treatment, extended bed rest and mechanical ventilation, as previously mentioned.
Preventative non-pharmacological actions have been proven to positively impact long-stay patients; however, early intervention is frequently underutilized. The negative health consequences of this scenario highlight the importance of an early health intervention in elderly people hospitalized with COVID-19.
Before implementing any interventions requiring close patient contact, a protocol must be established and adhered to in order to prevent virus transmission. For instance, all masks and materials, namely, machines, elastic bands and dumbbells, must be sterilized before and after use. Both the patient and the healthcare professional must follow general hygiene measures including using disposable tissues when coughing or sneezing, washing and disinfecting hands thoroughly and maintaining physical distance when possible. It is also imperative that healthcare professionals use single-use disposable gloves, wear a disposable fluid-resistant coverall/gown, a face mask respirator (FFP2/FFP3), eye/face protection, overshoes and protective headgear. In addition, cleaning and disinfection procedures should be applied in hospital rooms regularly in order to prevent and control the spread of pathogens. Specifically, health services should implement environment restructuring measures (separate equipment, create ground marks and deactivate digital access), provide hand sanitizers or washbasins throughout facilities, schedule specific cleaning times, schedule training times, encourage the use of individual bottles and observe air-conditioning specifications with respect to air exchange [46,47].
The following section provides scientific evidence of early functional beneficial interventions among hospitalized older adults.
Early Strength Intervention
Physical function reduction in patients hospitalized with chronic diseases is preventable; advanced age, acute and chronic disease and illness, functional limitations and deconditioning all contribute to older adults' vulnerability to functional decline during hospitalization [48]. As a result, the benefits of strength interventions during the hospital stay have been previously evidenced [49]. An early intervention during the hospitalization period has been associated with a better recovery [50]. In a review of 36 studies, Kalish et al. [51] found positive physical, psychological and social outcomes from mobilizing hospitalized adults.
Among the elderly, although there is no standardized strength exercise protocol, an intervention of low-to-moderate resistance training appears to be the most effective way to counteract the loss of muscle mass, strength and functional capacity [30]. Courtney et al. [52], in a study carried out with hospitalized elderly patients (>65 years old), found improvements after an exercise intervention. As part of the study, a feasible resistance exercise program was developed aimed at strengthening the muscles of the lower limbs such as the gluteus, quadriceps, hip flexors and hip adductors/abductors, among others [53]. It included several resistance band exercises completed with 2-3 sets of 10 repetitions (3-5 second contractions per repetition).
The positive effects of an early intervention in patients hospitalized with pneumonia have also been reported [54]. Therefore, early, controlled muscle mobilization and resistance training are of utmost importance among older adults hospitalized because of COVID-19 to counteract the negative impact on the neuromotor system and physical function.
Ideally, it should be implemented as soon as possible to obtain the maximum benefit. However, during the COVID-19 acute phase, there is a risk of rapid desaturation with changes in position that can potentially alter the ventilation/perfusion ratio and hemodynamic stability. Consequently, much consideration has been given to when to start these interventions, i.e., medical boards in various countries generally advise against these measures in the acute phase, especially in critical patients [55]. Nevertheless, a few institutions tend to start them during mechanical ventilation [56].
A recent scientific statement emphasized the importance of gradually starting rehabilitation therapy for patients affected by COVID-19 to avoid acute hypoxemia [6]. In patients with a severe respiratory condition reflected by bilateral interstitial radiological signs with PaO2/FiO2 < 300 or a low Glasgow Coma Scale level, a progressive introduction of passive mobilization exercises and frequent position changes against gravity should be attempted to prevent deconditioning. Clinical parameters (temperature, SpO2, SpO2/FiO2, dyspnea, respiratory rate, thoracic-abdominal dynamics, blood pressure, heart rate, peripheral perfusion and Glasgow Coma Scale) should be thoroughly monitored. Criteria for interrupting the therapy include a respiratory rate >40 or <5 breaths/min, SpO2 < 88%, systolic blood pressure >200 or <90 mmHg, diastolic blood pressure >110 or <60 mmHg, heart rate >110 or <40 bpm, onset of arrhythmias or angina pectoris, peripheral circulation deficits, agitation and neurological impairment [57].
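As an illustration only, the quantitative interruption thresholds listed above can be expressed as a simple screening check. The following Python sketch is not part of the cited protocol [57]: the function name, parameter names and alert wording are our own assumptions, the qualitative criteria (arrhythmias, angina, perfusion deficits, agitation, neurological impairment) are deliberately left out, and clinical judgment always takes precedence.

```python
# Hypothetical sketch of the numeric therapy-interruption thresholds cited above.
# Names and structure are illustrative assumptions, not a validated clinical tool.

def interruption_criteria_met(resp_rate, spo2, sys_bp, dia_bp, heart_rate):
    """Return a list of threshold violations suggesting therapy interruption."""
    alerts = []
    if resp_rate > 40 or resp_rate < 5:
        alerts.append("respiratory rate out of range (>40 or <5 breaths/min)")
    if spo2 < 88:
        alerts.append("SpO2 below 88%")
    if sys_bp > 200 or sys_bp < 90:
        alerts.append("systolic blood pressure out of range (>200 or <90 mmHg)")
    if dia_bp > 110 or dia_bp < 60:
        alerts.append("diastolic blood pressure out of range (>110 or <60 mmHg)")
    if heart_rate > 110 or heart_rate < 40:
        alerts.append("heart rate out of range (>110 or <40 bpm)")
    return alerts

# Example: an SpO2 of 86% and a heart rate of 118 bpm trigger two alerts.
print(interruption_criteria_met(resp_rate=22, spo2=86, sys_bp=130,
                                dia_bp=80, heart_rate=118))
```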
Routines including limb mobilization are essential to avoid side effects on the musculoskeletal system (pressure ulcers, critical illness myopathy, plantar flexor contractures or heterotopic ossification), especially for patients who have undergone prolonged prone positioning following ARDS [58]. A few institutions employ lower body ergometers to achieve passive leg movements; this could be of special interest wherever the nurse-to-patient ratio may interfere with clinical assistance [56].
When clinical improvement is attained, progressive resistance training (i.e., low to moderate intensity) performed with resistance bands, free weights or body weight in the post-acute phase can be considered a safe intervention for the elderly. Active mobilization (simple bed exercises, bedside sitting and standing), accompanied by progressive muscular strengthening, should be considered whenever the clinical situation allows it [59].
Neuromuscular Electrical Stimulation
The use of electrical stimulation is an emerging intervention strategy with positive effects in seriously ill hospitalized patients who are unable to perform resistance exercises. The effectiveness of this therapy has been evidenced in immobilized patients after a spinal cord injury, congestive heart failure and chronic obstructive lung disease [60]. In a recent paper, it was found that daily neuromuscular electrical muscle activation (seven days a week) induced muscle mass preservation in hospitalized geriatric patients; however, the results showed limited evidence of effects on whole-body functional capacity [61]. Rodriguez et al. [62] found positive effects on the strength of the biceps and quadriceps after a 13-day intervention of neuromuscular electrical stimulation in septic patients. Contrarily, Zinglersen et al. [63] found positive changes on functional measures in a combined short intervention in hospitalized elderly patients that were not attributable to neuromuscular electrical stimulation.
Despite the apparent safety of electrical stimulation in the initial stages of treatment, it is highly important to develop effective intervention protocols for hospitalized elders that can be combined with either dynamic mobilization or functional strength exercises.
Heat Therapy
The benefits of thermotherapy interventions have been previously evidenced in animal, in vitro and human studies. The application of heat to the whole body contributes to muscle recovery, attenuating cellular damage and protein degradation [64]. Moreover, heat therapy induces capillarization, muscle hypertrophy and mitochondrial biogenesis in skeletal muscle models [65]. Hesketh et al. [66] found positive microvascular adaptations after a six-week intervention of heat therapy in young sedentary participants; however, no changes in muscle architecture (mitochondrial density) were found. Heat therapy can be useful for muscular pain and body inflammatory responses, producing an analgesic effect for patients during hospitalization and ICU stays [67].
The novel scenario of COVID-19 and the constellation of the explained features should alert healthcare professionals to the possibility of an accelerated decline on physical function among the elderly. The use of concomitant interventions might be one of the major challenges for healthcare providers in order to ameliorate the functional decline of elderly patients, enhancing their recovery process and improving global survival rates.
COVID-19: Long-Term Consequences
The long-term consequences and sequelae of COVID-19 are still unclear. Interest in long-term outcomes after recovery from the SARS-CoV and MERS-CoV global epidemics has increased in recent years. Clinical investigations have reported that in the months following SARS-CoV discharge, elderly patients experienced a significant functional decline, reduced musculoskeletal fitness, poor cardiorespiratory fitness and a lower quality of life than control subjects (shortly after discharge and at 24 months follow-up) [68,69]. While the literature has not yet fully revealed the effect on people surviving the COVID-19 pandemic, a few authors suggest that elderly survivors will most likely have reduced health derived from both the cardiorespiratory dysfunction sequelae and the ICU or hospitalization stay [70], as seen in previous epidemics. A recent longitudinal study carried out in Wuhan has shown that three months after hospital discharge, 28.3% and 4.5% of survivors suffered from physical decline and myalgias, respectively [71].
Interestingly, it has been observed in clinical practice that after overcoming COVID-19 disease, both in ambulatory care and hospital settings, a vast number of elderly patients have reported prolonged general weakness and muscular fatigue for several days and weeks. This is especially noticeable in ICU survivors as they encounter a significant loss of muscle thickness and appreciable motor incoordination [72]. Special attention has been paid to the release of pro-inflammatory cytokines on the potential damage inflicted to the myoneural junction and muscle architecture.
Survivors after prolonged hospitalization and ICU stays generally sustain some degree of functional disability even years after the initial insult [73]. Motor dependence has been evidenced in 60-70% of patients after an ICU stay [74]. In addition, these patients frequently have a decreased ability to walk, impaired balance and reduced mobility as well as a neurocognitive decline, leading to a worsening in instrumental activities of daily living (IADL) and activities of daily living (ADL) performance [75].
Usually, a longer period of immobilization comes with a greater functional loss and slower rehabilitation after discharge. Moreover, the length of stay has been shown to have a positive correlation with a risk of falls and injuries [76]. Furthermore, the inflammatory process that frequently accompanies a more severe disease course needing bed rest further exacerbates this process, prompting frailty in pre-frail adults following a more rapid change in muscle wasting, neuromuscular damage and cognitive impairment [29,77].
These long-term consequences are expected to come at a significant additional cost with increased nursing home admission and hospital readmissions. Costs derived from pharmacotherapy, medical complications management and neuropsychological outcomes should be considered [73]. In relation to the above, it is also important to consider the wellbeing of caregivers such as burnout, time off work and financial hardships [78].
Conclusions
Older people diagnosed with a COVID-19 infection are at a higher risk of severe muscle weakness and atrophy that may have negative consequences on functional disability. Therefore, it is extremely important to: (1) determine which potential pharmacological interventions have a negative impact on muscle quality and quantity; (2) define a feasible and reliable pharmacological protocol to achieve a balance between desired and undesired medication effects in the treatment of this novel disease; (3) implement practical strategies to reduce muscle weakness during bed rest hospitalization and (4) develop a specific, early and safe protocol-based care of functional interventions for older adults affected by COVID-19 during and after hospitalization.
Conflicts of Interest:
The authors declare no conflict of interest. | 2020-11-26T09:06:07.484Z | 2020-11-24T00:00:00.000 | {
"year": 2020,
"sha1": "bbe4948b22bae1f543e5088da74b1736a11a19f4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/17/23/8715/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1512ac40c7f633fc54c765d7740c5b15047b2392",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1651214 | pes2o/s2orc | v3-fos-license | Systematic review of β blocker, aspirin, and statin in critically ill patients: importance of severity of illness and cardiac troponin
Non-cardiac critically ill patients with type II myocardial infarction (MI) have a high risk of mortality. There are no evidence-based interventions to mitigate this risk. We systematically reviewed the literature regarding the use of medications known to reduce mortality in patients with cardiac troponin (cTn) elevation due to type I MI (β blockers, statin, and aspirin) in studies of critically ill patients without Type I MI. All PubMed publications between 1976 and February 19, 2016 were reviewed. Search terms included: β blocker or aspirin or statin and intensive care unit (ICU) or critically ill or sepsis; 497 primary references were obtained. Inclusion criteria were as follows: (1) study population consisted of critically ill patients in the ICU with non-cardiovascular illnesses, (2) mortality end point, (3) severity of illness (or injury) was measured, and (4) the antiplatelet agent was primarily aspirin. Retrospective investigations, prospective observational studies, meta-analyses, systematic reviews, and randomized controlled trials were included; case reports were excluded. 25 primary references were obtained. The data were extracted and tabulated using data collection headings as follows: article title, first author/year/reference number, study type/design, population studied, outcome and intervention, and study question addressed. Evidence was not graded as the majority of studies were non-randomized (low-to-moderate quality). 11 studies were found through bibliography reviews for a total of 36 references. In conclusion, β blockers, statins, and aspirin may play a role in reducing mortality in non-cardiac critically ill patients. Benefit appears to be related to severity of illness, for which cTn may be a marker.
INTRODUCTION
Critically ill patients are defined as those having an acute impairment of one or more vital organ systems such that there is a high probability of imminent or life-threatening deterioration in condition. 1 Despite the significant resources provided for the treatment of critically ill patients in the intensive care unit (ICU), the short-term mortality rate remains high. 2 3 Elevated serum cardiac troponin (cTn) is a common finding in this population, including those patients who do not initially present as cardiac emergencies, with 61% prevalence in patients with sepsis alone. 4 Serum cTn is often measured to 'rule out' coronary arterial plaque rupture, or type I myocardial infarction (MI), 5 as a cause for observed hypotension, arrhythmias, or chest pain in critically ill patients, since treatment with urgent revascularization of the obstructed artery would be recommended.
Obstructive coronary disease, however, is not often found in critically ill patients with cTn elevation. 6 7 Rather, cTn elevation in this setting is thought to be due to myocardial oxygen supply/demand mismatch, categorized as a type II MI. 5 8 In this context, patients often lack the signs and symptoms of plaque rupture, which may be absent or masked by sedation or the underlying illness (figure 1). Importantly, even in the absence of obstructive coronary disease, elevated cTn correlates with the severity of illness and is an independent predictor of death, 4 6 9-25 although not all studies support this conclusion. [26][27][28][29][30] To further complicate the situation, comorbidities such as hypotension, renal failure, or sepsis increase the risk of coronary intervention which, if performed, may cause more harm than benefit.
While the existing literature is robust regarding interventions and treatments known to reduce mortality for patients who present with type I MI, it lags behind in identifying interventions that reduce mortality in critically ill patients with type II MI and no prior coronary disease, despite the elevated mortality risk in this group. A common strategy has been to treat with β blockers, statins, and aspirin while the patient recovers, assuming that obstructive coronary disease is present, and then pursue testing for coronary disease once the patient has stabilized (figure 1). Early mortality reduction with β blockers, aspirin, or statins may provide a window of opportunity for subsequent evaluation and treatment of clinically significant coronary disease once the patient has stabilized from the underlying illness and the risks of intervention are mitigated.
This strategy has not been tested prospectively in critically ill patients with type II MI because there is little data in the literature to show benefit. To address this gap in knowledge, our group performed a retrospective study that included nearly 20,000 medical and surgical patients who were admitted to Veterans Administration Medical Center ICUs from 2007 to 2008. 31 Our data revealed two important findings: (1) that cTn level correlated with the severity of illness, and (2) there was a 30-day mortality reduction in patients taking aspirin, β blockers, and/or statins in a troponin-dependent fashion. Aspirin and/or β blocker use was associated with a 30-day mortality reduction, but only if cTn levels were high (β blockers: OR=0.80 (0.68, 0.94), p=0.0077; aspirin: OR=0.81 (0.69, 0.96), p=0.0134). The 30-day mortality was reduced in patients taking statins, but only if there was no or mild elevation of cTn (OR=0.66 (0.53, 0.82), p=0.0003).
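For readers less familiar with the statistics quoted throughout this review, the sketch below shows how a crude odds ratio and its 95% confidence interval are derived from a 2x2 table. The counts are invented for illustration and are not data from the study above; the adjusted ORs reported in the cited studies additionally control for covariates via logistic regression.

```python
import math

# Hypothetical 2x2 counts, for illustration only (not data from any cited study):
a, b = 40, 360   # treated:   40 died, 360 survived
c, d = 60, 340   # untreated: 60 died, 340 survived

odds_ratio = (a * d) / (b * c)

# 95% CI via the standard error of the log odds ratio (Woolf's method).
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
# Prints: OR = 0.63 (95% CI 0.41 to 0.96); an OR below 1 indicates lower
# odds of death among treated patients.
```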
To the best of our knowledge, there is only one other report that demonstrated treatment benefit from medical intervention in a troponin-dependent fashion (see the 'Results' section). 11 Given the correlation between serum cTn and severity of illness that has been replicated in many studies, we extended our search to examine the impact of β blockers, aspirin, and statins in a critically ill population in which the severity of illness was also documented. Since β blockers and aspirin were associated with reduced mortality in patients who had high cTn but not in those with normal or intermediate levels of cTn, we hypothesized that β blockers and aspirin would also be associated with reduced mortality in patients with a comparably high severity of illness. Likewise, since statins were associated with reduced mortality in critically ill patients with normal or intermediate levels of cTn, we hypothesized that we would see a mortality benefit in critically ill patients with lower severity of illness. This is of great importance for several reasons. First, there are no known interventions that reduce mortality in critically ill patients with type II MI, a group at high risk of early death. Second, critically ill patients without cTn elevation may still benefit from statin use. All critically ill patients, therefore, may have mortality benefit from treatment with β blocker, aspirin, or statins.
The goal of this systematic review is to determine whether there is a literature base to support additional investigation of clinical treatment pathways using β blockers, statins, and/or aspirin in critically ill patients based on cTn or, in the absence of cTn levels, the severity of illness.
Data source and searches
Literature searches were performed using PubMed on February 19, 2016 and covered the literature from 1976 onward.
Study selection
Titles were scanned and abstracts reviewed using exclusion and inclusion criteria following PRISMA-P guidelines. 32 33 Search terms included: β blockers or aspirin or statin and ICU or critically ill or sepsis; 497 primary references were obtained. Inclusion criteria were as follows: (1) study population consisted of critically ill patients in the ICU with non-cardiovascular illnesses, (2) mortality end point, (3) severity of illness (or injury) was measured, and (4) the antiplatelet agent discussed was solely (7 studies) or largely (3 studies) aspirin. All languages were included. Retrospective investigations, prospective observational studies, meta-analyses, systematic reviews, and randomized controlled trials were included; case reports were excluded. Twenty-five primary references were obtained (figure 2).
Figure 1: Assessment of the critically ill patient with troponin elevation. There is a lack of evidence-based interventions in high-risk critically ill patients with troponin elevations and no (or uncertain) acute coronary syndrome (ACS). ICU, intensive care unit; CAD, coronary artery disease.
Data extraction and quality assessment
The data were extracted and tabulated using data collection headings as follows: article title, first author and year, study type/design, population studied, outcome and intervention and study question addressed. Evidence was not graded as the majority of studies were non-randomized.
Data synthesis and analysis
The articles were obtained in full length and read by two independent reviewers (FGR and MBC). Eleven additional studies were found through bibliography reviews for a total of 36 references.
Aspirin and antiplatelet agents
Systematic review of the literature investigating the role of aspirin in critical illness suggests a potential benefit in patients who are more severely ill (see online supplementary table S1). Mechanisms of benefit may be related to coagulopathy, which is associated with increased organ failure and death. 34 Furthermore, organ failure contributes cumulatively to mortality. 35 Therefore, interrupting the coagulation cascade has been examined as a method to limit organ failure in many investigations with different agents. Data regarding the antiplatelet agent aspirin are robust, but largely retrospective. None of the literature discussed here measured cTn; however, investigators measured and adjusted for severity of illness or, in the case of critically injured trauma patients, severity of injury. The data suggest that aspirin may mitigate the development of acute lung injury, [36][37][38][39][40] acute respiratory distress syndrome, 36 41 systemic inflammatory response syndrome, 42 and mortality, 37 42 43 but these investigations need to be confirmed in appropriate prospective clinical trials.
The largest investigation of aspirin in the critically ill is a propensity-matched retrospective study of 7945 patients at risk for SIRS, APACHE II Score 17-18 (1445 patients in each group; this APACHE II Score predicts a mortality of 26-30%). 42 Aspirin users had a significantly reduced mortality compared to non-users (10.9% aspirin users vs 17.2% non-users, HR 0.43, p<0.001), with benefit also in the sepsis-only group (27.4% aspirin users vs 42.2% non-users, 95% CI −18.8% to −8.6%). An APACHE II score of 17-18 is consistent with the report from Poe et al, 31 in which the patients who benefitted from aspirin use had a 30-day mortality of 30%, as well as the highest cTn. This supports the proposal that selected critically ill patients with a high severity of illness score may benefit the most from aspirin use.
Another retrospective, propensity-matched investigation of 1149 critically ill patients with high severity of illness (APACHE II 25-29, predicted mortality >50%) found that prehospital use of aspirin was associated with a decreased risk of acute respiratory distress syndrome (ARDS) (multivariate-adjusted OR, 0.659; 95% CI 0.46 to 0.94; p=0.023), 41 but only a trend towards decreased mortality in aspirin users (OR 0.697; 95% CI 0.47 to 1.03, p=0.075). Although other medications were recorded, β blockers as a class were not separately analyzed. Given the high risk of mortality in this patient population, it is possible that β blocker administration confounded this result. Similarly, in a study of patients with severe sepsis, 39 reduced incidence of ARDS/acute lung injury (ALI) was observed in the antiplatelet-treated group (OR 0.5, 95% CI 0.35 to 0.71, p<0.001), but no mortality benefit was demonstrated (OR 0.73, 95% CI 0.46 to 1.16, p=0.19). 39 In this investigation, APACHE III scores (55-57) are associated with an anticipated mortality of ∼15-20%, which would correlate to the 'intermediate cTn' group of Poe et al, a cohort that did not have an associated mortality benefit with aspirin. Once again, severity of illness may be predictive of outcome with aspirin use.
A substudy of the prospectively enrolled Glue Grant found benefit from aspirin. 37 High-risk trauma patients who received transfusions (a risk factor for postinjury pulmonary dysfunction) were enrolled. Of 839 patients, 128 patients were on antiplatelet agents; 66% aspirin alone, 20% 'other' antiplatelet agents, and 14% received aspirin and another antiplatelet agent. Despite the fact that patients on antiplatelet agents were older, had more comorbid illnesses and were more severely injured, patients on antiplatelet agents had significantly less multi-organ failure and lung dysfunction with a non-significant trend towards decreased mortality.
A retrospective study of 979 septic patients (mean APACHE II score 22-23, p=0.16) showed mortality benefit from aspirin. 44 Logistic regression analysis determined that being on aspirin or a non-steroidal anti-inflammatory agent (NSAID; ibuprofen, diclofenac, or indomethacin) was associated with decreased mortality (aspirin (ASA) OR 0.57, 95% CI 0.39 to 0.83, NSAID OR 0.5, CI 0.26 to 0.94); however, being on both agents eliminated the benefit of either agent (ASA and NSAID OR 1.12, CI 0.55 to 2.25), indicating that NSAID use must be considered in clinical trials.
Even patients with severe gastrointestinal bleeding may have a mortality benefit with aspirin. A total of 717 patients admitted for non-variceal upper gastrointestinal bleeding from 1993 to 2010 were studied. 45 The primary outcome was in-hospital mortality. Despite the fact that patients on ASA only were older and had more comorbidities, multivariate analysis showed that being on ASA only was an independent predictor of reduced in-hospital mortality (OR 0.26, 95% CI 0.13 to 0.53, p=0.0002).
A propensity-matched study of 3855 patients investigating aspirin use in patients at risk for acute lung injury found no benefit from aspirin (OR 0.67; 95% CI 0.44 to 1.01, p=0.055); however, the APACHE II score was relatively low, between 9 and 12 (approximate mortality risk 15%). 38 The reduced benefit may have been related to the reduced risk in this cohort. A retrospective study of mixed admissions to medical and surgical ICUs supports this assertion. 46 Patients with an APACHE II score of 21 or greater had a significant benefit from antiplatelet pretreatment as compared to patients with lower APACHE II scores. Severity of illness, or cTn levels if they are indeed a biomarker of severity of illness, may need to be considered when prescribing aspirin. 31 37

One nested cohort trial of 763 ICU patients (20% receiving aspirin) combined results from two RCTs and showed potential harm from aspirin therapy. 47 The authors did not provide a mechanism for this difference as compared with the considerable literature showing benefit; rather, they highlighted the need for prospective, randomized investigation.
In summary, systematic review of a series of large, retrospective studies of critically ill patients suggests that aspirin provides clinical benefit, particularly when the severity of illness is high, although this finding is not uniform. This supports the need for prospective, randomized, controlled investigations of aspirin in the critically ill, with careful evaluation of the possible role of severity of illness and troponin positivity.
Statins
Statins hold promise in the treatment of sepsis, primarily via immunomodulatory and anti-inflammatory mechanisms. 48 Systematic review of the literature describing statin use in the critically ill revealed a confusing story, but one that may be clarified by addressing severity of illness. Observational cohort investigations, retrospective and prospective, have demonstrated a potential benefit of statins in patients with sepsis, 49 but the bulk of data from prospective, randomized, blinded, placebo-controlled trials suggests otherwise. [50][51][52][53] The remainder of this discussion will focus on results from prospective randomized-controlled trials and their meta-analyses.
The conclusion from recent systematic reviews and meta-analyses is that there is insufficient evidence to support the use of statins to derive a mortality benefit in patients with severe sepsis. 54-57 A total of 1894 patients were involved in randomized, blinded, and controlled clinical trials of statin therapy in sepsis. Primary outcomes were 28-day mortality, 50 in-hospital mortality, 51 number of ventilator-free days for a maximum of 28 days, 52 hemodynamic parameters, 58 and IL-6 levels. 53 None of the trials showed mortality benefit except in patients who were prior users of statins (Kruger; n=77 patients, OR 0.17 (0.03 to 0.85), p=0.03). 53 Patients in these trials had a high severity of illness, with approximate APACHE II scores ranging from 19 to 25 and estimated mortality rates between 30% and 53%. It was concluded that severely septic patients do not benefit from statin use, although it is not clear if prior use impacts mortality. 53

Severity of illness may play an important role in determining whether a patient will benefit from statins, as suggested by a large retrospective cohort analysis 31 and one randomized, controlled, double-blind study. 31 59 Among 16,208 critically ill patients with a severity of illness comparable to an APACHE II score of 14 or less, logistic regression analysis showed that patients taking statins had an associated 30-day mortality benefit (OR 0.66; 95% CI 0.53 to 0.82, p=0.0003). 31 Patients with a higher severity of illness (equivalent to APACHE II score >19) did not have an associated mortality benefit with statin use (OR 0.99 (95% CI 0.82 to 1.19), p=0.91). This retrospective study could not adjust for prior statin use, however. It is possible that the entire effect observed may be related to prior statin use rather than severity of illness.
A randomized, placebo-controlled, double-blind investigation of statin use in septic patients on the ward showed that 40 mg of atorvastatin in statin-naïve patients prevented conversion of sepsis to severe sepsis (4% vs 24%, p=0.007). 59 The mean APACHE II score for this group correlated to an anticipated mortality rate of 15%.
This systematic review of prospective, randomized, controlled studies of statins in the critically ill suggests that statin use may benefit patients who are less severely ill on admission, supporting an immunomodulatory effect beneficial at early stages of illness. The impact of prior use of statins also needs to be clarified. It is possible that most benefit derives from prior use, but at least one randomized trial of septic ward patients suggests that statin-naïve patients may benefit, as well.
β blockade
Systematic review of the literature reveals that β blockade may reduce mortality in critically ill patients who are more severely injured or ill, or who have higher cTn levels on admission. Only one prospective, randomized, placebo-controlled clinical investigation of the impact of β blockade on mortality in the critically ill has been published. 60 A total of 154 patients who presented with hemodynamically unstable septic shock requiring norepinephrine were investigated. After careful resuscitation, the patients randomized to treatment were given intravenous esmolol. The primary outcome, a reduction in heart rate to 80-94 bpm maintained for 96 hours, was achieved; the mean reduction in heart rate in the esmolol group was 18 bpm (p<0.001). Surprisingly, at 28 days, the esmolol group had a mortality rate of 49.4%, whereas that of the control group was 80.5% (p<0.001). The authors pointed out that historically, patients with the highest heart rates were most at risk of death [61][62][63][64][65] and hypothesized that perhaps the highest-risk patients would benefit most from β blockade.
Several prospective observational studies suggest β blockers may reduce mortality in patients with traumatic brain injury (TBI). A non-randomized, prospective, observational study of 440 TBI patients was performed in which propranolol was given within the first 24 hours of ICU admission at the discretion of the trauma or neurosurgery attending doctors. 66 Propranolol, a non-cardioselective β blocker, was chosen based on a retrospective investigation of 1755 TBI patients which showed that there was no in-hospital mortality benefit from β blockade in general (427 patients, multivariable analysis OR 0.85, 95% CI 0.536 to 1.348). However, the 78 propranolol users (18%) alone had a significant mortality benefit (OR 0.199, 95% CI 0.043 to 0.92) despite being more severely injured. 67 In the prospective observational investigation, propranolol use was independently associated with reduced early mortality (OR 0.25 (0.08, 0.74), p=0.012). 66 A second prospective observational study of propranolol use with 38 moderate-to-severe TBI patients (28 propranolol group, 10 control patients) supported its safety, but did not show a mortality benefit (10% vs 10.7%, p=0.9). 68 This may reflect the fact that 80% of patients in the non-propranolol group had been given other forms of β blockade during their admission. Clearly, additional high-quality prospective investigations are necessary to show benefit of β blockade in TBI patients as well as to investigate the importance of cardioselectivity; however, this early work indicates that there is little harm. Support for lack of harm and possible benefit in TBI patients also comes from considerable retrospective work, [69][70][71][72] all of which suggests that β blockade has a protective effect despite higher risk injuries.
β blockade also appears to show mortality benefit in critically ill non-TBI trauma patients. 11 73 74 Martin et al 11 investigated the role of cTn elevation in 1081 trauma patients and asked whether β blockers and aspirin impacted outcomes. Consistent with the majority of similar investigations, the authors found increased mortality in trauma patients who also had elevated cTn (29% of patients with elevated cTn, mortality 16% in cTn negative vs 44% in cTn positive groups, OR 3.0, p<0.01). APACHE II score was an independent predictor of mortality ( p=0.001). Seven per cent of patients were administered β blockers, primarily to patients with elevated cTn (11% vs 6%, p=0.01). β blocker use was associated with a 50% reduction in mortality, but only in the group with elevated cTn (cTn elevated: 38% vs 16%, p<0.01, cTn not elevated: 14.3% vs 16%, p=0.77). Adjusting for multiple factors still resulted in a strong mortality benefit (OR 0.50; 95% CI 0.24 to 1.02, p=0.057). This is consistent with the work of Poe et al, 31 which showed similar benefit across the critical illness spectrum in patients with high cTn as compared to patients with normal or mildly elevated levels.
Considerable retrospective work shows an association between β blockade and reduced mortality in the non-traumatic critically ill. A large propensity-adjusted cohort study used Denmark's prospective, population-based medical database. 75 Thirty-day mortality was measured in 8087 ICU patients taking β blockers prior to admission as compared to non-users. Similar to other studies, despite the fact that β blocker users were older and had more comorbid diseases than non-users, there was a significant association of mortality benefit with cardioselective agents (OR 0.70; 95% CI 0.58 to 0.83). A retrospective cohort study comparing heart-rate lowering agents in patients admitted to the ICU in acute respiratory failure showed no mortality benefit among 188 chronic obstructive pulmonary disease (COPD) patients. 76 This population, however, had a severity of illness (APACHE II 19-20) that may have also benefited from aspirin use, if the work of Poe et al 31 is accurate. Mortality benefit, therefore, may have been masked by aspirin use, which was not addressed in this study. Additional retrospective cohort studies of non-critically ill patients with COPD support the safety of β blockade. [77][78][79] Three investigations showed early mortality benefit with β blocker use in non-critically ill patients with COPD 77 78 or respiratory arrest. 79 The majority of benefit was seen in patients with cardiovascular disease in a study of non-critically ill COPD patients 78 and respiratory arrest patients. 79 When patients with cardiovascular disease were excluded, a mortality benefit remained, adjusted HR 0.68 (95% CI 0.46 to 1.02). 77
SUMMARY AND CONCLUSIONS
Reasonable conclusions that can be cautiously extracted from this systematic review are as follows:
1. Cardiac troponin levels may be a biomarker for severity of illness and multiorgan failure, thus providing a potential method to rapidly identify those patients at greatest risk of death for whom β blockers and aspirin may have the greatest benefit.
2. Aspirin appears to reduce acute lung injury and occurrence of ARDS, and may also impact mortality in critically ill patients. The most severely ill may obtain the most benefit. This needs to be tested in prospective, randomized, blinded, and controlled investigations.
3. Prospective, randomized, and controlled investigations show that statins appear to have no benefit in reducing mortality or illness in severely septic patients. Some data suggest overall safety in septic patients. This impacts patients who have other indications for statins, such as cardiovascular disease. Close surveillance of liver and muscle biomarkers is recommended due to the potential for liver and muscle dysfunction in the severely ill.
4. Conclusions cannot yet be drawn regarding the benefits of statins in less severely ill patients. In this population, statins may reduce the severity of illness and may have a mortality benefit, but this also needs to be confirmed with appropriate prospective interventional trials.
5. In the one prospective, randomized, controlled study of β blockers in adequately resuscitated septic patients, β blockers appear to have significant mortality benefit, although there has not yet been such an investigation designed with a mortality end point. It is possible that cardioselectivity may be important, although esmolol (cardioselective) and propranolol (non-selective) have shown mortality reduction in small studies of patients with sepsis and TBI, respectively. Side-by-side prospective studies of both classes of β blocker would be necessary to address this question.
6. All medications must be recorded and considered in future studies of critical illness.

This systematic review cumulatively suggests that statin therapy may provide immunomodulatory benefits at early stages of illness before multiorgan failure develops, whereas aspirin and β blockers may reduce mortality in the sickest of patients for whom platelet aggregation, microvascular obstruction, and organ dysfunction may occur.
A treatment pathway that needs to be tested prospectively is summarized in figure 3. Potentially, all non-acute coronary syndrome (ACS) admissions to the ICU would trigger a baseline cTn measurement that could guide appropriate medical management. After excluding the possibility of ACS and assessing patient-specific risks and benefits, it is possible that the addition of β blockers, statins, and/or aspirin would reduce mortality in critically ill patients depending on the severity of illness, a surrogate marker being the admission cTn level. Prospective, observational studies addressing these questions must be developed. | 2017-07-12T20:57:34.875Z | 2017-01-30T00:00:00.000 | {
"year": 2017,
"sha1": "b4bec53c5c79b110ca5052ea4a29acaa64832920",
"oa_license": "CCBYNC",
"oa_url": "https://jim.bmj.com/content/jim/65/4/747.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "b38eec8b7cc473156bfab4f720f36ff8585eac2f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252422887 | pes2o/s2orc | v3-fos-license | A Comparative Evaluation of Dentogingival Tissue Using Transgingival Probing and Cone-Beam Computed Tomography
Background and Objective: Gingival biotype can be assessed using a variety of invasive and non-invasive procedures, such as direct probing, transgingival probing, ultrasound-guided approaches, and, for the more sophisticated, cone-beam computed tomography. The aim of this study was to evaluate gingival biotype in relation to transgingival probing and cone-beam computed tomography (CBCT). Materials and Methods: This study included a total of two hundred healthy individuals. Gingival thickness was assessed and measured from the right and left maxillary central incisor teeth using CBCT and transgingival probing of the attached gingiva. The measurements were analyzed with regard to tooth type (central incisor). Linear measurements for gingival biotype were measured using both methods. Correlations and differences between measurement methods were assessed. Results: The mean age of study participants was 32.49 ± 8.61 years. The radiographic measurements on CBCT were 1.34 ± 0.17 mm for the right central and 1.28 ± 0.21 mm for the left central. The transgingival probing measurements were 1.31 ± 0.18 mm for the right central and 1.22 ± 0.21 mm for the left central. Conclusion: As per the results of this study, there is a significant positive correlation between transgingival probing and CBCT measurements of gingival biotypes.
Introduction
Gingival phenotype is a term that describes clinical differences in the morphology of gingival tissue [1]. Two types have been defined, namely thick and thin, according to the thickness of the gingivae [1,2]. Gingival phenotype is measured due to its relevance to prognosis in cases involving inflammation and trauma across several dental disciplines. A proper analysis of the soft tissue biotype is critical for the effectiveness of restorative, periodontal, and dental implant treatments, as sufficient thickness of the soft tissue can prevent negative consequences following surgical operations, orthodontic therapies, and prosthodontic treatments [1][2][3][4].
Gingival biotype refers to the faciopalatal thickness of the gingiva [1]. Gingival biotype is considered "thin" if this measurement is 1.5 mm or less, and "thick" if it is 2 mm or more [4]. Gingival measurements, including breadth and thickness, vary significantly between individuals and are related to tooth form and shape [5]. Individuals with thin gingival tissues experience slightly higher levels of recession than those possessing wider and more robust gingiva. Moreover, in such individuals the masticatory keratinized mucosa is sparse in other parts of the oral cavity, particularly the hard palate, making coverage of exposed roots with harvested connective tissue more challenging [6].
Among the factors that may affect the prognosis of dental procedures, gingival biotype is a key source of concern. It has the potential to influence the end result of periodontal procedures, root planing surgeries, and the placement of dental implants. Different types of gingival biotypes react variably to inflammation as well as to surgical and restorative procedures. Consequently, identifying gingival tissue biotypes prior to treatment planning is crucial [7]. When treating patients who possess a thin gingival biotype, special care must be taken [8]. The value of having an appropriate biotype can be separated into two categories. Firstly, it improves initial wound covering by increasing vascularity, site protection, and tissue regeneration. Secondly, it is less susceptible to gingival/mucosal recession and mechanical discomfort, and it can form a barrier that covers restorative margins [9].
Gingival thickness can be measured using a variety of invasive and non-invasive procedures, such as direct probing, transgingival probing, ultrasound-guided approaches, and, as of recently, cone-beam computed tomography (CBCT). Periodontal probing-based gingival biotype assessment is a straightforward, somewhat objective, and clinically useful approach [7,9]. Chen et al. employed a digital voltmeter to describe two types of gingival biotypes present in natural dentition: thick and thin [10]. In a study by Becker and colleagues, the authors suggested three different periodontal morphotypes: scalloped, flat, and prominent scalloping of the gingiva. When the height of the bone interproximal to the midfacial height was measured, the following results were obtained: flat = 2.1 mm, scalloped = 2.8 mm, and pronounced scalloped = 4.1 mm [11].
The utilization of ultrasonic instruments is a non-invasive modality for assessing thickness and is a reproducible procedure [12], but its disadvantages include challenges in maintaining transducer directionality [13], lack of device availability [14], and high prices. These considerations may be to blame for the device not being a standard component of the clinician's arsenal. A simplified method for distinguishing thin gingival tissue from thick gingival tissue, based on the visibility of the periodontal probe from the gingival edge, has also been presented [15][16][17].
Recently, cone-beam computed tomography (CBCT) imaging has been used as an advanced diagnostic aid in assessments of the width of the oral hard and soft tissues [16]. Because of its greater diagnostic capacity, CBCT imaging techniques have been largely employed for scanning hard tissues. Unlike transgingival probing and ultrasonic devices, the CBCT approach provides a representation of the teeth, gingival tissues, and the remaining periodontal tissues. Furthermore, evaluations can be conducted multiple times on a single image acquired by gingival soft tissue CBCT imaging; that is not possible with other modalities. According to Fu et al. [17], CBCT gives precise measurements of the thickness of bones and labial soft tissues. They concluded that measurements obtained by CBCT might be a more objective method than direct measurements for defining the thickness of the hard and soft tissues.
In his study, Cao [11] applied three methods with respect to failing teeth to determine the width of the gingival biotype. For evaluation, he used eye inspection, probing of the periodontal tissues, and direct measurement. Before extraction, the biotype of the gingival tissues was classified, by visual inspection and evaluation with a periodontal probe, as either thick or thin. Following tooth extraction, the thickness of the gingiva was measured directly, using a tension-free caliper, to the nearest 0.1 mm [11].
CBCT can be used as a non-invasive method for assessing the biotype of gingival tissues and determining the thickness of facial gingiva and cortical bone. The purpose of this study was to assess the accuracy of CBCT in measuring gingival biotype when compared to the transgingival probing method in the aesthetic zone of maxillary teeth.
Methodology
This descriptive cross-sectional study included 200 dental implant candidates who were sent to a radiology clinic for CBCT scans. The ethical review committee of the Altamash Institute of Dental Medicine (number AIDM/ERC/06/2021/04) approved the study. Patients with at least two maxillary anterior teeth were chosen using convenience sampling, and their informed consent was obtained. Individuals possessing maxillary anterior teeth, practicing sufficient oral hygiene, without clinical symptoms of loss of attachment or inflammation, and who were systemically healthy were chosen for the study.
Patients with periodontal pockets, gingival recession, cervical abrasion, root caries, restoration, periapical disease, inflamed gingiva, tilted or rotated teeth, or fracture teeth were excluded from the study. Also excluded were women who were pregnant or nursing, patients with any history of trauma to the teeth, discoloration, or serious misalignment, and those who were taking any medication or receiving radiation treatment.
Clinical Examination
Clinically, transgingival probing was used to assess gingival biotype, i.e., the thickness of the gingiva, in maxillary anterior teeth, such as both central and lateral incisors. A local anesthetic of lidocaine with adrenaline 1:200,000 was used to provide an infiltration nerve block. Following anesthesia, an endodontic K file number 20 possessing a stopper was used for transgingival probing of the attached gingiva at the mid-labial region, equidistant between the marginal gingiva and the mucogingival junction. The K file was inserted perpendicularly to the gingiva and penetrated until it reached the bone. The gingival thickness was determined with a vernier caliper calibrated in millimeters and accurate to one decimal point, as in Figure 1.
Radiographical Examination
A Hyperion X9 digital imaging system with cone-beam computed tomography (CBCT) accompanied by NNT image software (v. 4.6, desktop version) and an LCD monitor with 1280 × 1024 pixel resolution was used to examine gingival tissue biotypes and measurements of the maxillary teeth of the anterior region.
Patients' heads and chins were stabilized during CBCT scans. The plane of occlusion was horizontal, and the center was at the mid-sagittal plane. In the patient's mouth, a plastic lip retractor was inserted such that the buccal mucosa and cheek did not come into contact with the teeth's facial aspects. The patients were encouraged to maintain their tongue in the lower half of the oral cavity, as described by Khan [15]. This technique is known as soft tissue cone-beam computed tomography (ST-CBCT). The soft tissue scanning mode was only used in the maxilla, with a mean exposure period of 11-12.3 s; images were taken in continuous mode at 60-65 kVp and 8-10 mA. The scan's field of view (FOV) was 11 × 8 mm, with a resolution of 300 μm. The teeth that were selected were seen in a sagittal cross-sectional view at the midline, relating to the long axis of the tooth for all measurements. When viewing a specific tooth, proper attention should be paid to ensure that the cross-sectional view, i.e., from the crown to the apex, is in one plane.
CBCT images were generated and analyzed using software (Veraviewepocs 3DF40 J.Morita MFG CORP. Kyoto, Japan). Subsequently, a senior radiologist analyzed and measured the gingival thickness of the maxillary anterior teeth on a CBCT sagittal section. The thickness measurements were taken at the mesial-distal midpoint of each tooth, as shown in Figure 2. All of the data were entered into a specially developed application.
Statistical Analysis
The data collected were entered and analyzed using the computer application SPSS version 23. Quantitative variables, such as age and radiographic measurements obtained via CBCT, were represented in the mean and standard deviation forms. Qualitative data, such as gender and transgingival measurement method, were represented as percentages and their frequencies. A Bland-Altman plot was computed to determine the association between the clinical method (transgingival probing) and the radiographic method (CBCT) for the diagnosis of gingival biotype. p-value ≤ 0.05 was considered to be statistically significant.
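A Bland-Altman analysis of this kind is straightforward to reproduce with standard scientific Python tooling. The sketch below is illustrative only: the paired measurements are invented and the study's actual SPSS workflow is not available.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical paired gingival-thickness measurements (mm) for the
# same teeth; these values are invented, not the study's data.
probing = np.array([1.20, 1.40, 1.10, 1.50, 1.30, 1.25])
cbct    = np.array([1.25, 1.38, 1.15, 1.52, 1.33, 1.28])

pair_mean = (probing + cbct) / 2       # x-axis: mean of both methods
diff      = probing - cbct             # y-axis: difference between methods
bias      = diff.mean()                # systematic bias
loa       = 1.96 * diff.std(ddof=1)    # half-width of 95% limits of agreement

plt.scatter(pair_mean, diff)
plt.axhline(bias, linestyle="--")
plt.axhline(bias + loa, linestyle=":")
plt.axhline(bias - loa, linestyle=":")
plt.xlabel("Mean of transgingival probing and CBCT (mm)")
plt.ylabel("Probing - CBCT (mm)")
plt.show()
```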
Results
A total of 200 systemically healthy subjects having maxillary central incisors and lateral incisors were included in this study to measure gingival biotype, or gingival thickness, using transgingival probing and CBCT. The difference in mean gingival thickness was not statistically significant when comparing the measurements obtained by transgingival probing and those obtained by CBCT imaging of the facial surface of teeth (p > 0.05), as shown in Figure 3. In addition, according to the thin and thick categories, no significant difference among methods was observed in gingival biotype thickness, as shown in Table 1.
The correlations between the measurement methods are shown in Figure 4. There was a significant positive correlation between transgingival probing and CBCT (r = 0.95; p < 0.01) for the right side. There was a positive and strong correlation between methods, and also for the thick and thin gingival categories (r = 0.94, p < 0.01 and r = 0.912, p < 0.01, respectively). Results were similar for the left side; a significant positive correlation between transgingival probing and CBCT was observed (r = 0.894; p < 0.01), as was a strong correlation for the thick and thin categories.
Intra-class correlation was also computed to observe the relationship between measurement methods; the results showed high consistency between methods (ICC = 0.94; 95% CI: 0.91-0.96; p < 0.01 for the right side and ICC = 0.87; 95% CI: 0.82-0.91; p < 0.01 for the left side). The agreement between methods is shown in a Bland-Altman plot (Figure 5).
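For completeness, the correlation and intra-class correlation reported above can be computed as in the minimal sketch below; the long-format table layout and the values are assumptions, and the ICC uses the third-party pingouin package rather than the study's SPSS workflow.

```python
import pandas as pd
from scipy.stats import pearsonr
import pingouin as pg  # third-party package providing an ICC routine

# Hypothetical long-format data: one row per tooth per method.
df = pd.DataFrame({
    "tooth":  [1, 1, 2, 2, 3, 3, 4, 4],
    "method": ["probing", "cbct"] * 4,
    "mm":     [1.20, 1.25, 1.40, 1.38, 1.10, 1.15, 1.50, 1.52],
})

# Pearson correlation between the two methods (wide format).
wide = df.pivot(index="tooth", columns="method", values="mm")
r, p = pearsonr(wide["probing"], wide["cbct"])
print(f"Pearson r = {r:.3f}, p = {p:.3g}")

# Intra-class correlation treating the two methods as "raters".
icc = pg.intraclass_corr(data=df, targets="tooth",
                         raters="method", ratings="mm")
print(icc[["Type", "ICC", "CI95%"]])
```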
Discussion
The biotype of gingival tissues can be determined through visual examination (direct), the probing of periodontal tissues, and direct measurements with endodontic spreaders, endodontic files, and calipers [17]. For research and clinical purposes, measurement of the buccolingual thickness of the gingival tissue is useful for examination mainly when the results are classed as "thick" or "thin". There are different non-invasive and invasive methods used to check gingival tissue thickness, for example, ultrasonic devices, probe transparency (Tran), and CBCT imaging [18][19][20].
The utilization of ultrasonic instruments to assess the width of gingival tissues is a noninvasive technology that has been demonstrated to be repeatable [21], but the disadvantages of this technique are the difficulty of maintaining transducer directionality [22], device inaccessibility, and high costs [23]. These considerations may be to blame for the device not being part of the clinician's conventional armamentarium. To distinguish the thin gingival biotype from the thick gingival biotype, a simplified method is used that considers the visibility of the periodontal probe from the gingival edge [24].
Recently, CBCT scans have been used as an improved diagnostic aid for assessing the thickness of the soft and hard tissues [20]. According to Fu [17] and Aisri [25], CBCT imaging gives precise measures of the thickness of both bones and labial soft tissues. They found that CBCT dimensions may be a more objective modality than direct measurements for defining the thickness of both soft and hard tissues. To investigate the thickness of the palatal mucosa, many studies have been carried out using transgingival probing, but only a few studies have been conducted that utilized soft tissue CBCT to assess the thickness of facial gingival tissue [4,8,26]. In this study, we evaluated the association between the thickness of the soft tissues of the teeth in the maxillary anterior region and the thickness of the gingival biotype, with the help of soft tissue CBCT and transgingival probing.
For this study, the age of participants ranged from 18 to 50 years, with a mean age of 35.13 ± 7.75 years. Most of the participants (28 individuals, or 70%) were between the ages of 18 and 40. Out of a total of 40 participants, 22 (55%) were females, and 18 (45%) were males, for a ratio of 1.2:1 between females and males. The correlation between transgingival probing and CBCT assessment of gingival biotype had a Spearman's correlation coefficient of 0.985 and a p-value of 0.0001; thus, it is statistically significant, with an r value of 0.401 [10]. Similarly, El Khalifa et al. discovered a substantial positive association between transgingival probing and CBCT assessments of gingival biotypes.
To date, there has been no specific definition of how a thick gingival biotype differs from a thin gingival biotype. Amongst the possible causes, it is noted that gingival thickness is measured at various vertical levels. Previously, invasive procedures were utilized to evaluate gingival thickness; direct measurement [26] was used but had several disadvantages, including its invasive nature, the lack of repeatability and precision, inappropriate angulation, and variability in the applied pressure. To circumvent such constraints, non-invasive approaches, such as ultrasonic devices [27] and cone-beam computed tomography [28], were developed; however, such modalities are technique-sensitive and have high costs. The accuracy of several techniques, including manual evaluation by the utilization of calipers after tooth extraction [29], measurement using a syringe with an endodontic depth marker, and CBCT radiography, is limited by the absence of reference objects. Recently, a modified radiographic technique [30] was presented by Alpiste-Illueca [31], who found that the crown width-to-crown length ratio and gingival width are contrasting morphometric measurements that can serve as surrogate dimensions for predicting the thickness of the gingival biotype at the cementoenamel junction.
Gürlek et al. [29] provided a simple technique for determining periodontal type that relies on the visibility of the periodontal probe through the free gingiva while the gingival sulcus is probed. The most frequently used approach for distinguishing thin and thick gingival biotypes is the visual evaluation of the visibility of the periodontal probe through the sulcus. If the periodontal probe can be seen through the gingival edge or sulcus, the gingival tissue biotype is characterized as thin. The capacity of gingival tissue to hide the color of any underlying material is required for producing attractive outcomes, particularly with regard to implants and restorative dentistry, and subgingival metals are extensively employed for this reason [31]. The simplest technique for detecting a thin gingival biotype is to use a metal periodontal probe in the sulcus to measure gingival width; the tip of the probe shows through thin gingiva [32]. Periodontal probing methods are regularly used during periodontal and implant treatments because they are less invasive than alternatives [13,20,33].
Both hard and soft tissues can be seen and measured with CBCT. Several authors observed that CBCT measures both soft and hard tissue with reliability and accuracy. They noted that CBCT images might be a more objective means of determining hard and soft tissue thickness than direct measurements [28,31,33]. CBCT offers a more accurate image of the tooth, gingiva, and other periodontal structures compared to ultrasonic devices and transgingival probing. Furthermore, the dimensions of a particular tooth may be measured several times with the same image acquired by ST-CBCT, which is not possible with other techniques [33].
Stein et al. [34] conducted a comparison analysis with a total of 60 participants and discovered links between the buccal bone and gingival tissue width. Nonetheless, the contrast in their study was not conducted at the same level. Instead, the gingival biotype thickness was assessed supracrestally, whereas the thickness of the bone was measured posterior to the alveolar crest. In comparison, La Rocca et al. [35] found no noteworthy link between the outcomes of CBCT imaging and transgingival probing in an in vivo investigation of 90 maxillary teeth, despite the fact that the comparison in their study was not conducted at an equivalent level. Despite such contradictory findings, and in spite of our study's small sample size, we found a strong positive connection between transgingival probing and CBCT measurements of gingival biotypes.
Conclusions
As per the findings of our study, there was a considerable positive relationship between transgingival probing and CBCT measurements of gingival biotypes. As a result, we urge that CBCT imaging be utilized to measure hard and soft tissue thickness; in addition, we recommend that gingival biotype be defined in all periodontal disease patients in order to deliver the predicted restorative and surgical treatment outcomes.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Written informed consent was obtained from the patients to publish this paper.
Data Availability Statement: Not applicable.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. | 2022-09-22T15:31:04.316Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "ed57e3b983f8ccff1957310f12a08634cfddae15",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1648-9144/58/9/1312/pdf?version=1663639471",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5fb4e9d338ee2f9535caf792190f994c67987d87",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
182380151 | pes2o/s2orc | v3-fos-license | Motivations, success, and cost of coral reef restoration
Coral reef restoration is an increasingly important part of tropical marine conservation. Information about what motivates coral reef restoration as well as its success and cost is not well understood but is needed to inform restoration decisions. We systematically review and synthesize data from mostly scientific studies published in peer‐reviewed and gray literature on the motivations for coral reef restoration, the variables measured, outcomes reported, the cost per hectare of the restoration project, the survival of restored corals, the duration of the project, and its overall spatial extent depending on the restoration technique employed. The main motivation to restore coral reefs for the projects assessed was to further our ecological knowledge and improve restoration techniques, with coral growth, productivity, and survival being the main variables measured. The median project cost was 400,000 US$/ha (2010 US$), ranging from 6,000 US$/ha for the nursery phase of coral gardening to 4,000,000 US$/ha for substrate addition to build an artificial reef. Restoration projects were mostly of short duration (1–2 years) and over small spatial extents (0.01 ha or 108 m²). Median reported survival of restored corals was 60.9%. Future research to survey practitioners who do not publish their discoveries would complement this work. Our findings and database provide critical data to inform future research in coral reef restoration.
Introduction
The world's coral reefs are under threat from stressors such as destructive fishing practices, overfishing, coastal development, and water pollution (Burke & Spalding 2011). These stressors are exacerbated by climate change (Hoegh-Guldberg et al. 2007;Hughes et al. 2017), which will increase the frequency of severe coral bleaching events and reduce the extent of healthy and mature reef assemblages globally in the future (Frieler et al. 2012). Active restoration (the process of actively assisting the recovery of an ecosystem that has been degraded, damaged, or destroyed; SER 2004) may be necessary when natural ecosystem recovery is precluded (Perrow & Davy 2002). Coral reef restoration is often regarded as a small-scale emergency action to buy time (Shaver & Silliman 2017) while we tackle climate change and poor water quality at larger spatial scales (Hughes et al. 2017;IPCC 2018). However, repeated restoration might be necessary to promote recovery of reefs after successive bleaching events. Under these circumstances, reefs providing sources for new coral larvae would play an important role in accelerating the recovery of damaged systems (Hock et al. 2017). In order to conserve coral reefs and maintain ecosystem functions in the face of external threats, we may even need to consider new interventions such as engineered habitats or assisted evolution .
When assessing the published scientific literature, the specific objectives of coral reef restoration projects are dominated by investigating the biological response of corals to transplantation (Hein et al. 2017), yet the underlying motivations are unknown (Bernhardt et al. 2007;Aradóttir et al. 2013). Motivations for restoration are defined as "answers to the question of why ecosystems should be restored" (Clewell & Aronson 2006). Similar to restoration of other marine coastal ecosystems (Bayraktarov et al. 2016), publicly available data on the cost and success of restoration techniques are sparse and inconsistently reported (Edwards & Gomez 2007;Iacona et al. 2018). Given the recent investments in coral reef restoration globally, knowing the cost of employing a specific technique and its likelihood of success is essential for planning and developing new restoration projects.
In this article, we review and analyze empirical results from mostly peer-reviewed published coral reef restoration projects (including some gray literature and personal communications) to explore the motivations for the projects using the framework developed by Clewell and Aronson (2006) (and used in evaluating terrestrial restoration by Hagger et al. 2017) and the ecological, economic, and social outcomes measured according to Wortley et al. (2013). We report on coral reef restoration success by synthesizing a database (available online doi:10.5061/dryad.sv798dm) containing information on cost, survival, project spatial extent, and duration for projects employing six common coral reef restoration techniques.
Data Collection
We conducted a systematic literature search using Web of Science (Core collection; Thomson Reuters, New York, New York, U.S.A.) and Scopus (Elsevier, Atlanta, Georgia, U.S.A.) and the title search terms "(coral reef* OR coral*) AND restor*", as well as "(coral reef* OR coral*) AND rehab*". The search returned 141 studies spanning 27 years (1991 to March 2018). An EndNote (Version X7.0.2; Thomson Reuters) search was then performed within the full text using the search terms (cost* OR feasib* OR surviv*). Additional information was gathered by following on citations, personal communication with three practitioners, and inspecting diverse restoration databases and webpages. After this screening, 87 studies remained. Information on the restoration technique employed, cost per hectare of the restoration project, project spatial extent and duration, and survival of restored corals was extracted as "observations." A data source may contain more than one observation, e.g. if a different coral species was examined or the restoration was carried out at multiple sites.
Data from plots and figures were extracted graphically by using WebPlotDigitizer (https://automeris.io/WebPlotDigitizer/). Data for costs and survival were calculated for the database as follows. Where only cost per coral colony was provided, calculations for the restoration cost per hectare were estimated by assuming a transplanting schedule with four coral colonies outplanted per m², or 40,000 coral transplants per hectare (Edwards & Gomez 2007), to convert cost per colony into cost per unit area. Based on a survival rate of restored organisms of 60.9%, 65,681 coral transplants would be required to populate 1 ha and yield 40,000 live corals. This median survival of restored corals was calculated from all observations reporting on survivorship in the database. Each survival value was averaged over the reported pre-transplant, transplant, and post-transplant survival per observation.
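The per-colony to per-hectare conversion is simple arithmetic; the short Python sketch below illustrates it (the function name and the example cost are hypothetical, not values from the database).

```python
def cost_per_hectare(cost_per_colony_usd,
                     density_per_m2=4.0,   # transplanting schedule (Edwards & Gomez 2007)
                     survival=0.609):      # median survival across the database
    """Estimate restoration cost per hectare from a cost per colony."""
    target_live_corals = density_per_m2 * 10_000        # 40,000 live corals per ha
    # Extra transplants are planted up front to offset mortality.
    transplants_needed = target_live_corals / survival  # ~65,681
    return cost_per_colony_usd * transplants_needed

# Example with a hypothetical cost of 10 US$ per transplanted colony:
print(f"{cost_per_hectare(10.0):,.0f} US$/ha")  # ~656,814 US$/ha
```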
All reported costs were adjusted for inflation in each respective country based on consumer price index (CPI) to a base year of 2010 prices. Data required for economic conversion were downloaded from the World Bank Development Indicators (The World Bank 2019a, 2019b, 2019c). For a subset of countries, CPI data were unavailable, and these observations were excluded from further analyses, with the original data retained in the database for reference. If the CPI for a particular year was unavailable, CPI information for the next closest year to the year of data collection reported in the study was employed. If the study did not report the data collection year, the year preceding the publication date was used. For restoration costs incurred in 2018, the previous year CPI (2017) was used for conversion. For studies where local currencies were reported, data were first converted to U.S. dollars using the foreign exchange rates from the Penn World Tables (Heston et al. 2012) and later adjusted to the respective country's inflation based on CPI to a base year of 2010 prices.
A second economic conversion was carried out to account for effects based on differences in the countries' income. This conversion was based on the gross domestic product as a function of purchasing power parity (GDP [PPP]). Therefore, we first converted U.S. dollar to the local value of one U.S. dollar in the developing country and then adjusted for inflation to obtain international dollar (Int$) at a base year of 2010 prices. The difference in cost from economic conversion based on CPI versus GDP (PPP) highlights the greater value (i.e. purchasing power) of one U.S. dollar relative to the official exchange rate of the foreign currency in countries with less developed economies.
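As a rough illustration of the two conversions described above, consider the Python sketch below; all rates and CPI values are placeholders, and the simplified PPP step mirrors the verbal description in the text rather than the full World Bank methodology.

```python
def to_usd_2010(cost_local, fx_local_per_usd, cpi_year, cpi_2010):
    """Local-currency cost -> inflation-adjusted 2010 US$ (CPI-based)."""
    cost_usd = cost_local / fx_local_per_usd   # Penn World Tables exchange rate
    return cost_usd * (cpi_2010 / cpi_year)    # adjust to 2010 price level

def to_int_2010(cost_usd, ppp_local_per_usd, fx_local_per_usd,
                cpi_year, cpi_2010):
    """US$ cost -> 2010 international dollars (Int$) via GDP (PPP)."""
    # Ratio > 1 means one US$ buys more locally than the market rate implies.
    purchasing_power = fx_local_per_usd / ppp_local_per_usd
    return cost_usd * purchasing_power * (cpi_2010 / cpi_year)

# Placeholder example: 1,000,000 local units, market rate 50 units/US$,
# PPP factor 20 units/US$, CPI 95 in the data year vs. 100 in 2010.
usd_2010 = to_usd_2010(1_000_000, 50, 95, 100)
print(f"{usd_2010:,.0f} US$ (2010)")                                  # ~21,053
print(f"{to_int_2010(usd_2010, 20, 50, 100, 100):,.0f} Int$ (2010)")  # ~52,632
```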
Motivations for Coral Reef Restoration
For restoration motivations, variables measured, and reported outcomes, one set of response variables was extracted from primary studies (publications describing original research) where one study describes one restoration project. An exception was the study by Edwards and Gomez (2007), which contained information on five restoration projects. We searched within the abstract and the introduction of each study for the primary, secondary, and tertiary reasons for the restoration project using categories adopted from Clewell and Aronson (2006) and Hagger et al. (2017). Motivations for each coral reef restoration project were classified as idealistic (social, political, or cultural reasons seeking atonement for environmental degradation or reconnection with nature), experimental (seeking experimental data to answer ecological research questions or improve restoration approach), pragmatic (focusing on the ecosystem services that can be derived from a restored coral reef into the future), legislative (related to legal and policy requirements of governments and other large institutions to recover ecosystems which have suffered environmental impacts, e.g. habitat loss from development, mining, or ship grounding), and/or biotic (motivations aligned with the desire to enhance biodiversity). The categories idealistic, experimental, pragmatic, legislative, and biotic are not necessarily mutually exclusive but comprise a typology that facilitates their systematic description (Clewell & Aronson 2006).
Variables Measured
Variables measured were grouped into "attribute categories" and "sub-attribute categories" (Table S1, Supporting Information) modified from the International Standards for Ecological Restoration by the Society of Ecological Restoration (McDonald et al. 2016). Variables measured during monitoring of the restoration site were categorized as related to ecosystem function and processes (e.g. survival, growth, and productivity) but also the physical environment of the restoration site (temperature, water currents, turbidity, and pH) and the level of threats (invasive species, predators, and physical damage). Note that the International Standards classify variables such as survival, growth, and productivity under the attribute "ecosystem function" while these would be categorized as variables measuring the biological response of the ecosystem to the restoration intervention in other studies (e.g. Hein et al. 2017).
Restoration Outcomes
We searched in the abstract, results, and discussion of each study to identify the type(s) of outcomes reported as results. These were categorized as ecological, economic, and/or social (sensu Wortley et al. 2013) to provide an understanding of the broader outcomes being addressed.
Coral Reef Restoration Techniques
Information on six major techniques for coral reef restoration was extracted from the studies reviewed: (1) Direct transplantation: harvesting of coral colonies or fragments from a donor site and the transplantation of these to a restoration site without the intermediate step of growing coral fragments in a nursery (Edwards & Clark 1998).
(2) Larval enhancement: flow-through facilities are used to aid fertilization and competent coral larvae are then seeded to a reef restoration site. The reared coral larvae can be either free swimming or attached to substrate (Chamberland et al. 2017).
(3) Coral gardening: involves two phases (Rinkevich 1995, 2000, 2005, 2008, 2014, 2015a, 2015b): (1) the collection of coral fragments from donor colonies and growing these in field-based (in situ) or land-based (ex situ) nurseries and (2) outplanting the nursery-grown corals to the degraded reef using materials such as epoxy, cement, or cable ties to attach the corals to the substratum (Montoya-Maya et al. 2016). The analyses were carried out for "Coral gardening" indicating the whole process using this technique for restoration and for its components "Collection and nursery phase" and "Transplantation phase" since this is how data were reported by the studies. Note that the individual components of the coral gardening approach cannot be considered as standalone restoration approaches. (4) Substrate addition (artificial reefs): utilizes structures deployed on the seabed to mimic characteristics of a natural reef and is often followed by active coral transplantation (Clark & Edwards 1994, 1995). (5) Substrate stabilization: damaged substrate (e.g. after storms or ship groundings on reefs) is restored to facilitate natural coral settlement and reef recovery (Lindahl 2003). (6) Substrate enhancement with electricity: mimics the chemical and physical properties of reef limestone deposition, by encouraging the precipitation of calcium and magnesium on artificial substrates using electrical currents (Goreau et al. 2003). The technique was originally developed by Wolf Hilbertz in the 1970s and has led to the commercial product Biorock.
Results
Of the 87 studies extracted, 71 were primary sources (publications describing original research) and 16 were secondary sources (studies referring to original research published elsewhere, e.g. reviews). The 71 primary sources reported on 75 separate restoration projects. A single study could contain more than one observation. A total of 335 observations were retrieved from all primary and secondary studies. The motivations and reported outcomes were recorded at the restoration project level taking into account primary sources only. One-third of the studies (n = 25) were from less economically developed countries (low income and lower middle income), 13 were from countries with an upper middle income, and the remaining studies were from high-income countries (n = 45) (Fig. 1). Four studies contained no information on the country in which the restoration project was carried out.
Reasons for Coral Reef Restoration
The dominant primary motivations for the restoration of coral reefs in the restoration projects reviewed were experimental reasons, such as to improve the restoration approach and answer ecological research questions, accounting for 65.3% of the projects (Fig. 2). A total of 17.3% of the projects carried out coral reef restoration because of an environmental impact such as hurricanes, ship groundings, or heavy storms and were inspired by legislative reasons. Restoration for biodiversity enhancement (biotic reasons) was the motivation for 10.7% of the projects. Pragmatic reasons for enhancing ecosystem services such as fisheries production motivated 4.0% of the projects. Social reasons (idealistic), e.g. restoration for education purposes or awareness of the general public and restoration for biodiversity offsetting (legislative), were negligible (<2.0% of the projects). The secondary and tertiary motivations, where reported, were dominated by experimental reasons to restore coral reefs (Fig. 2).
Variables Measured and Reported
The majority (70.7%) of projects measured variables relating to ecosystem function (Fig. 3), including survival of restored corals, growth, and productivity (Fig. 4). This was followed by species composition (25.3%) indicated by metrics such as species present, richness or distribution, and physical conditions (2.7%) e.g. water temperature, salinity, turbidity, and water currents. The number of projects reporting on social aspects of restoration, such as measuring environmental awareness or community engagement was negligible (1.3%).
Coral Reef Restoration Outcomes Reported
The majority (76.0%) of coral reef restoration projects reported an ecological outcome only (Fig. 5). Some studies reported on social and ecological outcomes (13.3%), followed by ecological and economic outcomes (8.0%), while the fewest studies reported on all three outcome categories (2.7%). All projects reported at least an ecological outcome. Economic outcomes alone were never reported.
Out of a total of 335 observations, 17.3% (or 29.9% of 87 studies) reported on project cost and 14.3% (or 27.6% of the studies) described what restoration components these costs included. From all 58 observations (26 studies) reporting on cost, 37.9% of the observations (or 38.5% of the studies) reported on both capital (planning, purchasing, land acquisition, construction, financing) and operating (maintenance, monitoring, and equipment repair/replacement) costs highlighting that the cost values reported here are likely lower than the real total project cost.
Costs, survival rates, and project spatial extent varied substantially among the six restoration techniques (Table 1, Fig. 6). The median cost for substrate addition (artificial reef) was highest, at 3,300,000 ± 44,000,000 US$/ha (n = 10; median ± SD; Table 1). The cheapest restoration technique was the collection and nursery phase component of the coral gardening approach, with a median cost of 28,000 ± 20,000 US$/ha (n = 5; Table 1) and a survival of 52.1 ± 31.5% (n = 85; Fig. 6B). The variability in cost estimates was large and spanned five orders of magnitude (from 6,000 to 143,000,000 US$/ha) between the different restoration techniques (Table 1). The five case studies reporting on project duration of substrate stabilization projects lasted longest, with 2.0 ± 3.6 years, while studies employing artificial reefs enhanced by an electrical field had the shortest project duration, with 0.3 ± 0.3 years (n = 10; Fig. 6C). The techniques of larval enhancement (n = 3) and substrate addition (n = 12) were deployed over the largest spatial extents, with median values of 1,500 ± 849 m² and 2,342 ± 5,270 m², respectively (Fig. 6D). The 12 observations reporting on the coral gardening approach in its entirety and the six observations on building artificial reefs enhanced by electrical fields were reported for the smallest spatial scales and had median values of 12 ± 3,854 m² and 5 ± 0 m² of reef restored, respectively (Fig. 6D). Because the area of coral reef that is ultimately restored depends on the restoration schedule and is not directly related to the spatial area covered by a coral nursery, our reported observations on the collection and nursery phase of the coral gardening approach do not contain area estimates (Fig. 6D).
The median reported cost from all observations (n = 59) using the gross domestic product as a function of purchasing power parity (GDP [PPP]) was 72,861 ± 18,810,155 Int$/ha (at base year 2010). Costs in Int$/ha for the different coral reef restoration techniques are shown in Table 2. Motivations for coral reef restoration differed between cost ranges. We separated the 27 observations from studies that reported both on motivations and cost into ranges of (1) <100,000 US$/ha, (2) between 100,000 and <1,000,000 US$/ha, and (3) >1,000,000 US$/ha. The primary motivation for (1) was to improve the restoration approach and answer ecological research questions (9 out of 12 observations). For (2), motivations covered biodiversity enhancement (n = 1), biodiversity offset (n = 1), improvement of the restoration approach (n = 2), and restoration after environmental impact (n = 2). For (3), motivations were biodiversity enhancement (n = 1), improvement of the restoration approach (n = 3), and restoration after environmental impact (n = 5).
Discussion
Here we present information on the most commonly applied coral reef restoration procedures from published literature for which motivations, variables measured, cost, success, project spatial extent, and duration have been reported. The techniques described here are not exhaustive (e.g. the method of micro-fragmentation [Forsman et al. 2015;Page et al. 2018] has not been considered), but are in broad agreement with those described by the coral reef restoration review by the Australian Reef Restoration and Adaptation Program (Boström-Einarsson et al. 2018). Coral gardening is among the most commonly used methods to restore coral reefs and consists of two subsequent phases (Rinkevich 1995, 2000, 2005, 2008, 2014). However, few studies report on the whole process, from harvesting coral fragments from donor colonies and growing them in intermediate nurseries to outplanting corals to the degraded reef. Therefore, we report on the overall approach and on each phase individually. The coral reef restoration projects assessed in this study are dominated by experimental motivations inspired by improving restoration approaches and answering questions of ecological concern. Our findings suggest that there is still ongoing development in the methods used to restore coral reefs. However, given that those charged with the task of carrying out large-scale coral reef restoration may rarely publish restoration results and most studies in the literature examined here were conducted by academics, we may be missing a substantial body of knowledge related to coral reef restoration. Bridging the divide between academic researchers and practitioners is a challenge in many conservation fields (Sunderland et al. 2009;Milner-Gulland et al. 2010); doing so will help advance the field of marine restoration ecology.
The majority (71%) of restoration projects in this review measured variables which can partly inform on ecosystem function (Fig. 3) such as survival of restored corals, growth, and productivity. This concurs with Hein et al. (2017) who described that 88% of studies in the published literature measured survival and/or growth of restored corals as an indicator of coral reef restoration effectiveness. In comparison, a study on terrestrial restoration suggested that measurement of at least diversity, vegetation structure, and ecological processes would be necessary to assess the long-term persistence of an ecosystem (Ruiz-Jaen & Mitchell Aide 2005). It is questionable whether measuring item-based success indicators such as survival of restored organisms will be able to provide any information about the holistic ecological perspective of restoring ecosystem function and services.
Although many studies (Yap 2000;Van Diggelen et al. 2001;Epstein et al. 2003;Hernández-Delgado et al. 2014) recommend the assessment of social, economic, and cultural factors when assessing the effectiveness of coral reef restoration, this study confirms that the numbers of studies assessing social or economic factors have been negligible or nonexistent. Reported coral reef restoration outcomes in this study were largely ecological (76%) and rarely focused on economic (e.g. job creation, obtaining higher fishing yields) or social aspects (community involvement, awareness, and environmental education) of the restoration project. This is in agreement with the study by Wortley et al. (2013) showing that over 90% of ecological restoration projects spanning many terrestrial ecosystems, as reviewed from the scientific literature, reported on ecological attributes, while only 1% reported on social, economic, and ecological attributes altogether. This may reflect the relatively recent history of coral reef restoration and the small scale of the projects. There is a need to assess variables related to ecosystem services provided by the restoration of coral reefs which have positive economic impacts. Future restoration projects could make this a priority, particularly if social and economic outcomes (e.g. environmental education, changes in perception/behavior due to awareness, community empowerment and stewardship, creation of jobs, capitalization of ecosystem services such as tourism, fisheries, coastal protection, etc.) are used as justification for these projects.
Our review of published scientific literature, reports and other available sources shows a large variation in cost reported for the different techniques used in coral reef restoration. While the median reported cost over all observations is around 400,000 US$ to restore 1 ha of coral reef habitat, costs reported for the techniques most commonly applied ranged between 6,000 and 4,000,000 US$/ha. Because coral reef restoration studies rarely report on cost in a consistent manner, it is important to note that the values provided here are estimates only. Reporting on the total cost of coral reef restoration projects can be challenging. This is especially true because not all restoration projects are carried out for the same reasons or have the same specific objectives. For costs that have been reported, there is a large variation within and between intervention type, location, and study species. Estimates of total restoration cost (including both capital and operation costs to account for the different components of a restoration project) require a standardized approach when reporting on cost. Similar issues with extracting restoration costs from the literature were also noted for other marine coastal ecosystems, e.g. seagrass, mangroves, saltmarsh, and shellfish (Bayraktarov et al. 2016). Coral reefs, as particularly charismatic ecosystems, attract volunteers willing to participate in the restoration work which adds to the difficulty to account for the real cost of labor. A productive approach for further work would be to design a survey to elicit information from coral reef restoration practitioners to collate total restoration costs in a standardized way. Ideally, guidelines for standardized reporting on costs for management interventions need to be followed (e.g. by Iacona et al. 2018) in order to compare restoration to other conservation interventions.
Survival of restored organisms is often reported as an item-based success indicator by studies on marine coastal restoration (Bayraktarov et al. 2016). Here, survival differed between the life stages of corals at which the restoration technique is targeted, with high survival rates (>50%) for coral gardening and low survival rates (<25%) where techniques are focused on coral larvae. Yet, survival has been assessed only in the short term (1-2 years after restoration) and the number of long-term studies such as Haisfield et al. (2010) and Garrison & Ward (2012) is small. Previous work has highlighted that survival of restored organisms is not a good indicator for the overall project success (Bayraktarov et al. 2016) because of the different life histories among species and lack of standardization between restoration projects. In addition, coral reef restoration literature is likely biased toward studies reporting on success (Bayraktarov et al. 2016), while only a few studies have described failures (e.g. Fox 2004;Fadli et al. 2012;Cooper et al. 2014).
Here we propose an avenue toward setting clear objectives and quantifying success in achieving those objectives. Ideally, overall restoration project success should be measured as progress toward the specific project objectives. These should be specific, measurable, achievable, realistic, and time-bound (SMART) (McDonald et al. 2016). At least one of the objectives should aim at returning the ecosystem function and resilience of an ecosystem through restoration (Ruiz-Jaen & Mitchell Aide 2005). The project would also need to account for the effect of environmental stochasticity (extreme events such as floods, heat waves, hurricanes), which is why the selection of the site for restoration is critical. Of equal importance are social factors related to whether local communities will adopt and maintain the restored ecosystem-thus an early involvement of local stakeholders is crucial for overall restoration success into the future (McDonald et al. 2016). The fact that biological responses to the restoration intervention such as survival together with growth and productivity continue to be the dominant variables measured, as we report in this review, may indicate that coral reef restoration is still at an early phase (Hein et al. 2017).
With a median restored area of 0.01 ha (108 m²), the coral reef restoration reviewed here has been mainly practiced on small, experimental scales. However, restoring corals at larger scales is possible (e.g. 0.52 ha and 0.15 ha; Montoya-Maya et al. 2016;Chamberland et al. 2017). Because studies in this article are mainly research-based, they have inherent limitations such as the period of funding. This may be one of the reasons why the projects reported here tend to be relatively short and over small experimental scales. For instance, a review that evaluated non-peer-reviewed sources and surveyed practitioners reported a median restored area of 300 m² (Boström-Einarsson et al. 2018) in comparison to the 108 m² reported here. This gap may be closed by gathering data from those charged with the task of carrying out large-scale coral reef restoration by either a survey or other means.
Implications for Practice
Coral reef restoration has been practiced at small experimental scales while a transition to large, whole reef system scales is poised to occur. Innovation enabling coral reef restoration at an industrial scale is already happening by harvesting wild coral spawn slicks (Doropoulos et al. 2019) and employing assisted evolution to make restored corals resilient to climate change (van Oppen et al. 2017). A fast advancement of coral reef restoration is driven by international agreements toward restoration such as the Aichi Biodiversity Target 15 and Sustainable Development Goals by the Convention on Biological Diversity, The Bonn Challenge, or the recently announced United Nations 2021-2030 Decade on Ecosystem Restoration. These agreements stimulate increased funding for restoration. In Australia, for example, funding of AUD100 million has recently been allocated toward the research and development phase of the Reef Restoration and Adaptation Plan, which aims to develop, trial, and deploy coral reef restoration interventions for the Great Barrier Reef (GBR) (https://www.gbrrestoration.org/the-program).
While this is a substantial investment, the amount may appear rather small when taking into account the size of the GBR (35 million hectares) and the median cost of 400,000 US$ to restore 1 ha of coral reef as we found from the published literature. Although this figure may seem discouraging at first, the range of costs observed in this study differs by orders of magnitude between restoration techniques, which represents an important source of information for the coral restoration community and could be used to target future provision of limited funds. Further research and development in the coral restoration space should focus on interventions that are both inexpensive and successful. A wide range of costs within intervention categories offers hope that costs can be minimized to a level where restoration projects become achievable at scale.
This review reveals that there is a large variability in the cost reported for different coral reef restoration techniques in the primary literature, reports, and published data repositories. These costs ranged from a median value of 28,000 US$/ha for the collection and nursery phase of the coral gardening approach to 4,000,000 US$/ha for substrate addition to build an artificial reef. Cost values were relative estimates only and did not always reflect the total project cost. Survival of restored corals, often reported as the major variable monitored, ranged between 17.3% and 77.5%, depended on the life history stage that the technique targeted, and was largely documented for short time frames only (1-2 years after restoration). Most published projects were motivated by the desire to elicit experimental data for the purpose of increasing our knowledge or improving the methodological approach, and reported an ecological outcome, while social and economic aspects of the restoration were rarely assessed. The published literature may be biased toward experimental small-scale restoration projects carried out by academics while large-scale projects by restoration practitioners may not be captured; this remains to be assessed and is an important area of future research. A comparison of costs and success between the interventions used in coral reef restoration could be enabled by reaching out to those charged with the implementation of coral reef restoration projects and eliciting their knowledge through a survey on cost and success.
| 2019-06-07T21:33:40.917Z | 2019-06-17T00:00:00.000 | {
"year": 2019,
"sha1": "99115e2fe85164dd67b945a0d702b79ac1d3c4b5",
"oa_license": "CCBYNC",
"oa_url": "https://eprints.qut.edu.au/199169/1/57395460.pdf",
"oa_status": "GREEN",
"pdf_src": "Wiley",
"pdf_hash": "c1bceddb12abab5dd4bef8c8e057851f474c3173",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
231642508 | pes2o/s2orc | v3-fos-license | Naphthalene: irritative and inflammatory effects on the airways
Objective This cross-sectional study determined whether acute sensory irritative or (sub)chronic inflammatory effects of the eyes, nose or respiratory tract are observed in employees who are exposed to naphthalene at the workplace. Methods Thirty-nine healthy and non-smoking male employees with either moderate (n = 17) or high (n = 22) exposure to naphthalene were compared to 22 male employees from the same plants with no or only rare exposure to naphthalene. (Sub)clinical endpoint measures included nasal endoscopy, smell sensitivity, self-reported work-related complaints and the intensity of naphthalene odor and irritation. In addition, cellular and soluble mediators in blood, nasal lavage fluid (NALF) and induced sputum (IS) were analysed. All measurements were carried out pre-shift on Monday and post-shift on Thursday. Personal air monitoring revealed naphthalene shift concentrations up to 11.6 mg/m3, with short-term peak concentrations up to 145.8 mg/m3, and 1- and 2-naphthol levels (sum) in post-shift urine up to 10.1 mg/L. Results Acute sensory irritating effects at the eyes and upper airways were reported to occur when directly handling naphthalene (e.g., sieving pure naphthalene). Generally, the naphthalene odor was described as intense and unpleasant. Habituation effects or olfactory fatigue were not observed. Endoscopic examination revealed mild inflammatory effects at the nasal mucosa of exposed employees in terms of reddening, swelling and abnormal mucus production. No consistent pattern of cellular and soluble mediators in blood, NALF or IS was observed which would indicate a chronic or acute inflammatory effect of naphthalene in exposed workers. Conclusions The results suggest that exposure to naphthalene induces acute sensory irritative effects in exposed workers. No (sub)chronic inflammatory effects on the nasal epithelium or the respiratory tract could be observed under the study conditions described here.
Introduction
Naphthalene (CAS-Nr. 91-20-3) is a white crystalline solid that evaporates at room temperature, has a characteristic tar-like odour, and can be smelled at concentrations as low as 0.44 mg/m3 (0.084 ppm) (Amoore and Hautala 1983). Naphthalene is mainly used for the synthesis of phthalic anhydride. However, the use of naphthalene as a pore-forming agent in the manufacture of abrasives is an important niche application. There, naphthalene is used openly rather than in closed production cycles. Naphthalene is also a component of tobacco smoke and a residue in tar-containing building products.
Exposure to naphthalene at the workplace or via the environment occurs primarily via inhalation. As soon as naphthalene is taken up, it is mainly metabolized to 1- and 2-naphthol by cytochrome P450 (CYP) monooxygenases, and further to various dihydroxynaphthalenes and quinones.
A wide variety of effects of naphthalene have been previously described. For example, local P450 metabolism in the airways and the formation of electrophilic metabolites, followed by activation of transient receptor potential ankyrin 1 channels (TRPA1) on trigeminal nerve endings, is a potential pathway by which naphthalene can elicit sensory irritation responses in the upper airways and the eyes (Chiu et al. 2012; Lanosa et al. 2010). Naphthalene might also induce neurogenic inflammation, which is thought to be a physiological sign of the transition from a reversible stimulation of the trigeminal nerve fibers to an adverse health effect (Brüning et al. 2014). Because trigeminal nerve endings are able to release neuropeptides (e.g., Substance P) that regulate immune cell responses and are also equipped with cytokine receptors (e.g., IL-1β, TNF-α receptors) that can respond to inflammatory mediators, prolonged exposures to naphthalene might also trigger responses from the immune system. This kind of bidirectional neuro-immune crosstalk has recently been described for pain and inflammatory diseases (Pinho-Ribeiro et al. 2017).
Currently, there is inadequate evidence in humans for the carcinogenicity of naphthalene, and it is grouped in category 2B by the International Agency for Research on Cancer of the WHO (IARC 2002). However, naphthalene causes tumors in the nose and lung of rodents after inhalation (NTP 1992, 2000). The respiratory and olfactory epithelia of the nose have been found to be particularly sensitive. Essentially, cytotoxic effects and chronic inflammatory reactions have been suggested as decisive factors for the observed carcinogenic effects in the respiratory tract of rats and mice (Bailey et al. 2016). Although the relevance of naphthalene-induced tumors in rodents remains largely unclear for humans due to considerable anatomical and physiological differences (Morris and Shusterman 2010), the basic mechanism by which naphthalene causes these tumors in rodents (chronic inflammation of the respiratory tract) is also valid in humans.
To study whether acute and (sub)chronic local irritant and possibly inflammatory effects on the nasal epithelium in humans can be observed after occupational exposure to naphthalene, we investigated workers in five production plants of the abrasive industry. This collective was chosen because naphthalene is used openly in the abrasive industry and, at the same time, the workplaces are characterized by low co-exposures to other workplace contaminants, thus limiting the influence of potential confounding factors on the health outcome. These assumptions have been confirmed specifically for our collective. For example, previously published data on exposure to naphthalene in the collective presented here (Weiss et al. 2020) revealed high naphthalene shift concentrations up to 11.6 mg/m3, with short-term peak concentrations up to 145.8 mg/m3 (e.g., sieving pure naphthalene). In addition, biological monitoring revealed 1- and 2-naphthol concentrations in post-shift urine up to 10.1 mg/L. In contrast, past measurements by air sampling in three out of the five companies suggested only low exposures to dust, with the inhalable fraction ≤ 5.5 mg/m3 and the respirable fraction ≤ 1.0 mg/m3 (unpublished company data). In addition, in two out of the five companies concentrations of crystalline silica up to 100 µg/m3 were measured.
Here, we report the work-related complaints about odor annoyance and eye, respiratory and mucous membrane irritation during work, and the odor perception and sensory irritation before and after work. (Sub)clinical signs of irritation, inflammation and damage to the nasal mucosa were investigated by nasal endoscopy, whereas smell sensitivity was assessed by the Sniffin' Sticks test. Furthermore, changes in humoral and cellular compositions of blood, nasal lavage fluid (NALF) and induced sputum (IS) were examined.
Study design and subjects
A cross-sectional and cross-week design was chosen (Fig. 1), assessing health complaints and effects, with a focus on sensory irritation and nasal inflammation, in 61 employees of 5 companies (Germany: 3; Austria: 2) of the abrasive industry on Monday pre-shift and Thursday post-shift. Overall, the study design offered the possibility to assess acute effects post-shift on Thursday and (sub)chronic effects pre-shift on Monday.
Based on previously published data on exposure assessment in air, results of biological monitoring, and workplace descriptions and working histories, the participants could be divided into a highly exposed (n = 22) group, a moderately exposed (n = 17) group, and a reference group (n = 22) (Weiss et al. 2020). The highly exposed group included employees directly exposed to naphthalene while mixing, sieving, moulding, or pressing naphthalene-containing formulations. The moderately exposed participants worked in post-processing areas located in close proximity to workstations with open handling of naphthalene. The persons in the reference group had not been directly working with naphthalene for the past 10 years and either worked in spatially separated offices or, as evidenced by exposure assessment, in post-processing areas with no or minuscule exposure to naphthalene. All investigated persons were male non-smokers, to exclude confounding effects of tobacco smoke, a well-known respiratory irritant. For this purpose, smoking status was pre-assessed by both questionnaire and urinary cotinine. Persons with urinary cotinine concentrations above 100 µg/L were considered smokers (Haufroid and Lison 1998) and excluded from the study (prior to group assignment). Atopy status was determined serologically, testing specific immunoglobulin IgE antibodies in the serum against a variety of environmental allergens (sx1 Phadiatop, Phadia, Uppsala, Sweden).
Work-related complaints
Self-reported work-related complaints were examined by a structured questionnaire pre-shift on Monday. The questionnaire comprised 21 symptoms, including both specific irritative effects at the eyes, nose or throat and non-specific symptoms (e.g., headache). The list of symptoms was chosen from the subtest of the Swedish Performance Evaluation System (SPES, Iregren et al. 1997). Instead of a 6-point rating scale with values ranging from 'not at all' (0) to 'very, very much' (5), subjects were simply asked "Do you experience the following symptoms at work or immediately afterwards?" (Yes/No). The ratings were combined with those on self-reported complaints of ocular irritation (burning eyes, dry eyes, watering eyes), nasal irritation (itching nose, dry nose, running nose, stuffy nose, frequent sneezing, nosebleeds), pharyngeal symptoms (coughing spells, shortness of breath), and general complaints (headache, dizziness, nausea, and perspiration). To verify whether complaints were work-related or could be attributed to other causes (e.g., asthma, allergies, wearing contact lenses), the study subjects were also asked whether there was an improvement or absence of complaints after work, on the weekend, or during a vacation, and whether the complaints had also occurred on other occasions.
Perception ratings of odor and irritation
Chemosensory perception ratings were assessed pre-shift on Monday and post-shift on Thursday by using 'labeled magnitude scales' (LMS) (Green et al. 1996). Participants were asked "Please indicate whether and to what extent you currently experience the indicated sensation". Olfactory descriptors were used to rate different percepts: odor intensity, annoyance, and nausea. Trigeminal descriptors were used to rate eye (burning, tickling) and nasal irritation ('sneeze', prickling, sharp, pungent). The LMS rating was administered via personal computer monitor. Each descriptor could be rated by six categories with quasi-logarithmic spacing between each category including 'barely detectable' (10), 'weak' (55), 'moderate' (165), 'strong' (355), 'very strong' (530), and 'strongest imaginable' (1000).
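For reference, the category-to-value mapping of the LMS can be written out directly; the small lookup below is a convenience sketch only, not code from the study.

```python
# LMS verbal categories mapped to the quasi-logarithmically spaced values
# quoted above.
LMS_VALUES = {
    "barely detectable": 10,
    "weak": 55,
    "moderate": 165,
    "strong": 355,
    "very strong": 530,
    "strongest imaginable": 1000,
}

print(LMS_VALUES["moderate"])  # 165
```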
Furthermore, a naphthalene odor sample was evaluated using the polarity profile method (Sucker and Hangartner 2012), also known as semantic differential scaling. The scale consisted of 29 pairs of adjectives (X-Y) describing different sensory experiences (e.g., strong-weak, cold-hot, pleasant-unpleasant, etc.). Each pair was rated on a 7-point scale with values from -3 ('extremely X') via 0 ('neither X nor Y') to +3 ('extremely Y'). The obtained profile of the naphthalene odor was compared to established representative profiles of "fragrance" and "stench" (Sucker and Hangartner 2012) to rate the perceived naphthalene odor profile for each subject. To exclude a potential odor sensitivity bias, the Chemical Sensitivity Scale (CSS) (Nordin et al. 2003, 2004) was used on Monday pre-shift.
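As a rough illustration of the polarity-profile comparison, the sketch below correlates a 29-item adjective rating vector against reference profiles with the Pearson product-moment correlation; the profile vectors are synthetic placeholders, since the published reference profiles for "fragrance" and "stench" are not reproduced here.

```python
# Toy polarity-profile comparison: ratings on the 29 adjective pairs
# (-3..+3) are correlated against reference profiles via Pearson's r.
import numpy as np

rng = np.random.default_rng(0)
naphthalene_profile = rng.integers(-3, 4, size=29)   # toy subject rating
stench_profile = rng.integers(-3, 4, size=29)        # placeholder reference
fragrance_profile = -stench_profile                  # placeholder reference

r_stench = np.corrcoef(naphthalene_profile, stench_profile)[0, 1]
r_fragrance = np.corrcoef(naphthalene_profile, fragrance_profile)[0, 1]
print(f"similarity to 'stench': r = {r_stench:.2f}, "
      f"to 'fragrance': r = {r_fragrance:.2f}")
```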
Otorhinolaryngological examination
The examination of the nose and throat (ENT) was performed Monday pre-shift and Thursday post-shift. Endoscopy of the nasopharyngeal area was performed using a 30° rigid endoscope (Storz®, Tuttlingen, Germany) to examine the mucosa on both sides. Recorded images were evaluated by two experts independently and double-blinded, i.e., blinded to the other expert's rating and to the participant's exposure group. Reddening and swelling of the nasal mucosa and the degree to which the nasal mucus production was serous or purulent were graded from 0 to 2 (not present: 0; moderate: 1; pronounced: 2). Results for each nasal side (left vs. right) were first recorded separately and then combined to obtain a "total endoscopy score" ranging from 0 to 16. The level of inflammation increases with an increasing score (Soler et al. 2016; Poletti et al. 2018).
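The scoring scheme translates directly into a small computation; a sketch assuming the grading exactly as described (four endpoints, 0-2 per nasal side, summed over both sides to 0-16):

```python
# Illustrative computation of the "total endoscopy score": four endpoints,
# each graded 0-2 per nasal side, summed over both sides to a 0-16 score.
ENDPOINTS = ("reddening", "swelling", "serous", "purulent")

def total_endoscopy_score(left: dict, right: dict) -> int:
    """Sum the 0-2 grades of all four endpoints over both nasal sides."""
    return sum(side.get(endpoint, 0) for side in (left, right)
               for endpoint in ENDPOINTS)

left = {"reddening": 1, "swelling": 1, "serous": 0, "purulent": 1}
right = {"reddening": 2, "swelling": 1, "serous": 0, "purulent": 0}
print(total_endoscopy_score(left, right))  # 6 out of a possible 16
```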
Smell sensitivity was assessed with the Sniffin' Sticks odour threshold test (Burghart Medizintechnik, Wedel). For this purpose, n-butanol was presented in 16 increasing concentration steps. Normosmia was defined as a score ≥ 6.75, hyposmia as a score between 1.25 and 6.5, and anosmia as not being able to identify the highest concentration (score = 1) (Hummel et al. 1997).
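The published cut-offs amount to a simple classification rule; a minimal sketch assuming the thresholds quoted above:

```python
# Classification of Sniffin' Sticks threshold scores using the cut-offs
# given above (Hummel et al. 1997).
def classify_smell(threshold_score: float) -> str:
    if threshold_score >= 6.75:
        return "normosmia"
    if threshold_score > 1:          # 1.25 .. 6.5 in the published scoring
        return "hyposmia"
    return "anosmia"                 # unable to detect the highest concentration

for score in (8.0, 4.25, 1.0):
    print(score, "->", classify_smell(score))
```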
Inflammatory markers
Blood samples (serum), nasal lavage fluid (NALF) and induced sputum (IS) were collected Monday pre-shift and Thursday post-shift. NALF was obtained as described earlier (Raulf-Heimsoth et al. 2000). IS was collected by inhalation of an isotonic (0.9%) saline aerosol, generated by an ultrasonic nebulizer, for 10 min. All samples, once taken, were immediately cooled, sent to IPA, and processed on the same day. The samples were analysed for their composition of cellular and soluble markers to investigate early signs of irritation/inflammation on the systemic level (serum) and on the local level, such as the nasal mucosa (NALF, IS). The concentration of Club cell secretory protein (CC-16) was determined using a sandwich ELISA from BioVendor (Brno, Czech Republic). In addition to total cell numbers and differential cell counts of NALF and IS cells, concentrations of IL-8, MMP-9, IL-6, TIMP-1, 8-iso-PGF2α and LTB4 were determined by immunoassays based on monoclonal or polyclonal antibodies, according to the recommendations of the manufacturers (Raulf-Heimsoth et al. 2011). C-reactive protein (CRP) was determined in serum samples obtained on Monday pre-shift only.
Statistical analysis
Statistical evaluation was performed with SPSS (v. 22.0) and GraphPad Prism (v. 5.04). A non-normal distribution of the data was assumed. Descriptive statistics (arithmetic mean, median, standard deviation, minimum and maximum) were calculated. Depending on the outcome parameter (ordinal-scaled or interval-scaled), Fisher's exact test or the Kruskal-Wallis test was used to compare the three exposure groups (reference, moderately, highly exposed). To detect changes during the week, both the Mann-Whitney test and ANOVA were applied to compare the pre-shift results on Monday vs. the post-shift results on Thursday. To calculate the relationship between naphthalene exposure and effect parameters, Spearman's rank correlation coefficient (r_S) was used. To assess the relationships between exposure and endpoints of acute irritative and inflammatory effects, personal shift measurements (air) and 1- and 2-naphthol in post-shift urine on Thursday were used, whereas naphthol levels in pre-shift urine on Monday were used to assess the relationship between exposure and (sub)chronic effects. For the evaluation of the polarity profiles, the similarity of the naphthalene odor profile to the representative profiles of the concepts "fragrance" and "stench" was calculated with the Pearson product-moment correlation (r). Overall, the significance level (p = 0.05) was corrected using the Bonferroni method.
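As an illustration of this workflow (not a reproduction of the study's results), the sketch below runs a Kruskal-Wallis comparison of three synthetic groups with a Bonferroni-corrected significance level, using SciPy implementations of the named tests:

```python
# Kruskal-Wallis comparison across three synthetic exposure groups with a
# Bonferroni-corrected alpha; all data are synthetic and illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
reference = rng.normal(2, 1, 22)     # synthetic endpoint values
moderate = rng.normal(3, 1, 17)
high = rng.normal(4, 1, 22)

h, p_kw = stats.kruskal(reference, moderate, high)   # 3-group comparison
n_tests = 21                                          # e.g. 21 symptoms tested
alpha_corrected = 0.05 / n_tests                      # Bonferroni correction
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_kw:.4f}, "
      f"significant at corrected alpha = {alpha_corrected:.4f}: "
      f"{p_kw < alpha_corrected}")
```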
Characteristics of the study group
The characteristics of the study groups are summarized in Table 1. The duration of exposure to naphthalene was comparable in the two exposure groups (p = 0.392). No increased frequency of atopy (p = 0.569) or respiratory allergy (p = 0.777) in the two exposure groups compared to the references could be observed. The same was true for chronic diseases (p = 0.363), nasal diseases (p = 0.419), and diseases of the respiratory tract (p = 1.000). The three groups differed only slightly in terms of age (p = 0.042), i.e., those in the highly exposed group were on average 7 years younger than those in the other two groups. The employees in the highly exposed group also described themselves as less sensitive to odors and chemicals than those in the other two groups (p = 0.006).
Exposure assessment (air and biological monitoring) in all groups has been previously outlined in detail (Weiss et al. 2020). In brief, the median naphthalene concentration in air (Thursday) in the highly and moderately exposed groups was 6.30 mg/m3 and 0.59 mg/m3, respectively, whereas it was 0.13 mg/m3 in the controls and thus well below the former occupational exposure limit of 0.5 mg/m3 in Germany (AGS 2011) (Table 2). In individual cases, especially during sieving of pure naphthalene, short-term measurements revealed concentrations up to 145.8 mg/m3 in the group of highly exposed workers. Similar differences were observed for the sum of 1- and 2-naphthol in post-shift urine samples (again Thursday) of the highly (1256 μg/g creatinine) and the moderately exposed group (108 μg/g). In contrast, the median level was 10 μg/g in controls and thus well below the reference level of the general population in Germany (DFG 2019).
Work-related complaints and perception ratings
Significantly more work-related eye (p = 0.001) and nasal complaints (p = 0.016) were reported in the highly exposed group (eye: n = 14; nose: n = 12) compared to the reference group (eye: n = 5; nose: n = 4). Additionally, a higher percentage of nasal, but not eye, complaints was also reported in the moderately exposed group (eye: n = 2; nose: n = 10) compared to the reference group. No differences were found between the groups in terms of pharyngeal complaints (p = 0.869) or other, more general complaints (e.g., headaches) (p = 0.365). Employees in the highly exposed group attributed their eye complaints to the naphthalene exposure, whereas employees in the reference group attributed theirs to visual display unit (VDU) work. Employees in the highly exposed group further explained that eye and nose irritation were noticeable only when handling naphthalene directly. Positive correlations between work-related eye and nasal complaints and exposure were observed. For example, the correlation of eye complaints with naphthalene in the air was r_S = 0.400 (p = 0.001; n = 61), and with post-shift biomonitoring results r_S = 0.410 (p = 0.001; n = 61). A similar result was obtained when the product of exposure duration (years in this work) and the naphthalene concentration in air (in mg/m3) was used as an estimate of the cumulative exposure to naphthalene (r_S = 0.337; p = 0.008; n = 61). This evaluation is based on the assumption that the exposure of each study subject is relatively constant over time. For nasal complaints, the correlation with the naphthalene concentration in the air was r_S = 0.290 (p = 0.023; n = 61), with the post-shift biomonitoring results r_S = 0.333 (p = 0.009; n = 61), and with the exposure index r_S = 0.173 (p = 0.182; n = 61).
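The cumulative exposure index used here is simply exposure duration multiplied by air concentration; a sketch on synthetic data, correlating it with a toy complaint score via Spearman's rank correlation (all values illustrative):

```python
# Exposure index = years of exposure x air concentration, correlated with a
# toy complaint score via Spearman's rank correlation; synthetic data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
years = rng.uniform(1, 25, 61)                 # years in this work
conc_mg_m3 = rng.lognormal(0, 1, 61)           # naphthalene in air (mg/m3)
exposure_index = years * conc_mg_m3            # assumes constant exposure over time

complaints = (exposure_index > np.median(exposure_index)).astype(int) \
             + rng.integers(0, 2, 61)          # toy complaint count
r_s, p = stats.spearmanr(exposure_index, complaints)
print(f"Spearman r_S = {r_s:.3f}, p = {p:.3f}")
```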
After the end of the shift, generally fewer complaints were reported, albeit not a complete absence of complaints (Table 3). Sensory irritation at the nose and eyes and odor intensity/annoyance were rated with values between 'barely detectable' and 'weak'. However, seven employees still reported moderate to strong eye or nose irritation. Three employees were even very annoyed by the naphthalene odor and reported 'strong' and 'strongest imaginable' odor intensity/annoyance.
The naphthalene odor was judged as a stench (r = 0.93) and the opposite of a fragrance (r = − 0.78). The odor was also described as intensive (median 6, range 2-7) and unpleasant (median 6, range 3-7). No differences between the exposure groups were found and no habituation effect to the naphthalene odor during the study week could be identified (values not shown).
Otorhinolaryngological examination
The endoscopic examination of the nose revealed no (sub)clinical signs of irritation, inflammation or damage of the nasal mucosa on Monday (p = 0.365) (Table 4). On Thursday, mild inflammatory effects were observed in the two exposure groups compared to the reference group (reference vs. moderately exposed: p = 0.0001; reference vs. highly exposed: p = 0.022), whereas no differences were found between the moderately and the highly exposed group (moderately vs. highly exposed: p = 0.145). However, the results revealed no dose-response relationship in terms of an increasing number of inflammatory effects or increasing severity with increasing exposure. The effect of measurement time was statistically significant [F(1,57) = 22.653, p = 0.0001], as was the exposure group effect [F(1,57) = 4.281, p = 0.018] and the interaction effect [F(2,57) = 4.490, p = 0.015]. The correlation of the total endoscopic score measured on Thursday with the corresponding naphthalene concentration in the air was r_S = 0.340 (p = 0.008; n = 60), with the post-shift biomonitoring results r_S = 0.286 (p = 0.027; n = 60), and with the exposure index r_S = 0.287 (p = 0.026; n = 60).
Table 4. Cross-week changes of clinical findings of the nasal mucosa (reddening, swelling) and mucus production (purulent, serous) (total endoscopic score); comparison of study groups (median; min-max).

Total endoscopic score:
  Time      Reference (n = 22)   Moderately exposed (n = 17)   Highly exposed (n = 22)
  Monday    1.5 (0-11)           2.5 (0-12)                    3.0 (0-8)
  Thursday  2.5 (0-9)            5.0 (3-13)                    4.5 (0-13)

A separate look at serous and purulent mucus production showed that, generally, purulent secretion was most often observed. Purulent secretion occurred in half of the moderately (n = 9) and highly (n = 11) exposed employees, whereas it occurred only in one third of the reference group (n = 5). However, purulent secretion was generally at a low level. Altogether, the maximum endoscopic scores were not higher than "2", with one exception of "3" in the highly exposed group, on a scale between 0 (no effects at all) and 16 (in case all four endpoints were present at the highest score of 2 on both sides of the nose). Looking at the results of the Sniffin' Sticks test, only one employee, from the reference group, exhibited a significantly impaired olfactory response (anosmia). A total of 13 employees had a decreased sense of smell (hyposmia). No elevated prevalence of chronically impaired olfactory response was observed in the moderately (normosmia: n = 15) or highly exposed (normosmia: n = 17) group compared with the reference group (normosmia: n = 16) (p = 0.695).
Inflammatory markers in serum, NALF and sputum
No significant differences between the exposed groups and the controls could be observed for CC16 in serum, either on Monday before starting work (p = 0.452) or on Thursday after the end of the shift (p = 0.074). However, all CC16 values were lower on Thursday post-shift compared to Monday pre-shift (Table 5). This difference was somewhat more pronounced in the highly exposed group (p = 0.0001), compared to the reference (p = 0.014) and the moderately exposed group (p = 0.008). The effect of measurement time was statistically significant [F(1,54)].

Cellular and humoral parameters in NALF and IS are summarized in Tables 5 and 6, with the exception of IL-6, which was below the limit of detection in more than 50% of the collected serum, NALF and IS samples (data not presented). In NALF, the total cell count, the epithelial cell number, and the level of neutrophils did not differ by exposure and showed no changes during the study week. Concentrations of total protein, 8-iso-prostane, LTB4, Substance P, IL-8, MMP-9 or TIMP-1 were unaffected, and no statistically significant exposure or cross-week effects were observed. Similar to NALF, no differences could be observed for the measured cellular and humoral parameters in IS, either between the three groups or between Monday pre-shift and Thursday post-shift. The data also did not indicate an exposure-dependent increase of CRP.
Discussion
This cross-sectional study was conducted at workplaces in the manufacture of abrasive materials, where naphthalene is used in open processes as a pore-forming compound and is the most important hazardous substance. The goal was to evaluate the irritative and inflammatory effects of naphthalene on the eyes and the airways of exposed workers by using a series of endpoint measures, including self-reported work-related complaints, intensities of odor and irritation perception, nasal endoscopy, smell sensitivity, and inflammatory markers in serum, NALF and IS. A previous report on this cohort revealed that naphthalene in air was high and that, specifically in working areas with direct handling of naphthalene, short-term peak concentrations up to 145.8 mg/m3 could be observed (Weiss et al. 2020).
The results were compared to controls who were working in spatially separated parts of the manufacturing areas or in separate office buildings. It was not possible to find controls without any naphthalene exposure at all, due to the wide dispersion of naphthalene in these plants. Nevertheless, as the naphthalene concentrations in the three exposure groups differed by at least two orders of magnitude, the assessment of exposure-dependent effects was possible.
The odor of naphthalene was assessed as intense and unpleasant by the workers. There were no obvious habituation effects. While handling naphthalene directly (e.g., sieving pure naphthalene), the exposed subjects described acute sensory irritating effects at the nose and the eyes during work. Once exposure stopped and workers left their workplace they did not report any acute sensory irritating effects anymore.
In the general population, approximately 20% of people exhibit hyposmia or anosmia (Desiato et al. 2020), which is largely due to an increased prevalence of olfactory loss with aging (Oleszkiewicz et al. 2019). In our study, the number of employees with decreased olfactory function (21%) therefore did not exceed the expected numbers. With regard to nasal inflammation, the degree of reddening and swelling of the nasal mucosa and abnormal mucus production (serous or purulent secretion), represented by the total endoscopic score, was rated by the ENT specialists as only slight to moderate compared to the controls.
Our results showed that the serum concentration of CC16, a sensitive biomarker of lung injury (Heldal et al. 2013), was lower on Thursday post-shift compared to Monday pre-shift. This was seen in all three exposure groups, but with a more pronounced effect in the highly exposed group. Although the CC16 levels did not differ significantly between the three groups, neither on Monday pre-shift nor on Thursday post-shift, the values tended to be lower in the highly exposed group. CC16 is released by epithelial cells into the serum as a result of acute exposures to chemicals. In contrast, at chronic exposures with subsequent tissue damage, the concentrations tend to be low. For example, decreased CC16 levels in serum have been reported after chronic exposure to cigarette smoke, following smoke-induced Club cell toxicity (Hermans and Bernard 1999; Raulf et al. 2017). However, the interpretation of slightly lower CC16 levels in workers highly exposed to naphthalene should be treated with caution, because CC16 levels are controlled by various factors and conditions rather than by exposure alone (LaKind et al. 2007). For example, CC16 in serum is influenced by, among others, its production rate by Club cells, the permeability of the pulmonary epithelial barrier (diffusion rate from NALF into serum) and its renal clearance by glomerular filtration (LaKind et al. 2007).
Overall, although there was an indication of mild inflammatory changes, no consistent pattern of (inflammatory) effects was seen, neither in the moderately nor in the highly exposed group. Particularly with regard to the various biomarkers, there was no difference in the cellular and mediator composition between the moderately and highly exposed groups. Compared to our results, the changes in the concentrations of biological markers (signs of inflammation) found in collectives with a known exposure to a strong irritant or toxic pollutant such as tobacco smoke, and also in collectives with known lung and/or airway diseases (e.g., chronic obstructive pulmonary disease (COPD), asthma, pneumonitis, lung fibrosis), are by far more striking.
One of the strengths of our study is the cross-week design, including single-shift exposure assessments, which allowed the evaluation of both (sub)chronic and acute health effects due to exposure to naphthalene. The exposed employees had, with only few exceptions, worked for many years at the companies, so that any long-term effects of naphthalene could be clarified. Furthermore, all study subjects also served as their own controls, as they were examined pre-shift on Monday rather than post-shift on Thursday only. Therefore, it was also possible to analyze any short-term effects intra-individually.
Another strength of our study is the limitation of potentially confounding factors such as exposure to other chemical irritants. Naphthalene is considered the only relevant exposure in the abrasives industry. Nevertheless, we carefully evaluated the overall exposure situation at these workplaces, with a particular focus on dust and on organic substances (such as phenol and aldehydes) that may also cause sensory irritation. Exposure to inhalable and respirable dust, especially from ceramic grain or silica, was possible in the mixing and sieving areas, and thus specifically in the group with high exposure to naphthalene. This grain is essential in the production of ceramic grinding wheels and therefore unavoidable at these workplaces. In addition, specifically in the group with moderate exposure to naphthalene, exposure to dusts from hardened grinding wheels was possible. However, these particles were neither water-soluble nor chemically reactive. It is noteworthy that the majority of particles, according to the measured mass concentrations, were inhalable but not respirable. Past measurements by air sampling in three companies revealed low exposures to respirable dust (≤ 1.0 mg/m3). In addition, crystalline silica at low concentrations (≤ 100 µg/m3) was only detected in two companies. We also examined the results of the clinical examinations and of the biological markers specifically for one plant where no polymer grinding wheels were produced. The results did not differ from those of the total cohort. Finally, former company measurements of phenol and formaldehyde, substances that may cause sensory irritation, showed concentrations below the current occupational exposure limits in Germany (formaldehyde: 0.3 ppm; phenol: 2 ppm). All in all, the influence of dust and other well-known sensory irritants is considered of minor relevance for the outcome of our study, thus confirming naphthalene as the primary exposure in our collective.
Finally, we carefully checked additional confounding factors which might have influenced the outcome of our study. This included, among others, work activities at home (e.g., painting or welding), the presence of acute colds or other airways diseases, and other critical medical conditions which may influence sensory irritation and nasal inflammation. In addition, outliers (in terms of extreme values) were checked for whether the respective subject showed any distinctive feature or for methodological errors. None could be found.
The major limitation of our study is the small number of participants. This limited number was a direct result of our primary effort to aim for study participants who are almost exclusively exposed to naphthalene. Such 'monoexposures' are rare to find even within the industrial sector of abrasive materials. Naphthalene is only used in the production of high-quality abrasives which, in turn, is only carried out by few companies in Europe with a total workforce of about 200 employees. In addition, it was necessary to carry out our studies in non-smokers only.
Another shortcoming which might influence the results is a 'healthy-worker effect', i.e., individuals who are susceptible to acute and (sub)chronic effects of sensory irritants such as naphthalene have left the workplace prematurely, whereas those who are less sensitive (or insensitive) remain. Indeed, our results show that employees in the highly exposed group reported themselves as slightly less sensitive to odors and chemicals compared to those in the other two groups. However, according to the information provided by the company doctors, the overall fluctuation of the workforce was extremely low. In addition, olfactory fatigue could not be observed in those participants regularly exposed to naphthalene suggesting that, in contrast to rodents (West et al. 2003), tolerance does not mask relevant effects in our study group. Consequently, no significant 'healthy-worker effect' in our study could be verified.
In summary, because reliable results from epidemiologic studies were lacking, a cross-sectional study was conducted at workplaces in abrasives production where naphthalene is the only relevant chemical exposure. Our results in humans suggest exposure-related eye and nasal complaints due to naphthalene in the highly and moderately exposed groups. Sensory irritation and odor perception ("stench") were almost exclusively limited to the highly exposed group and associated with direct exposures to naphthalene while mixing and sieving. In contrast, no consistent pattern of nasal or pharyngeal irritant and inflammatory effects could be observed, neither with nasal endoscopy nor with inflammatory markers in serum, NALF and sputum. | 2021-01-19T15:03:21.684Z | 2021-01-19T00:00:00.000 | {
"year": 2021,
"sha1": "40b5422899994d8d57580dc98deecbfb65e96684",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00420-020-01636-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "40b5422899994d8d57580dc98deecbfb65e96684",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53989905 | pes2o/s2orc | v3-fos-license | OPTICAL-MICROPHYSICAL PROPERTIES OF SAHARAN DUST AEROSOLS AND COMPOSITION RELATIONSHIP USING A MULTI-WAVELENGTH RAMAN LIDAR, IN SITU SENSORS AND MODELLING: A CASE STUDY ANALYSIS
A strong Saharan dust event that occurred over the city of Athens, Greece (37.9° N, 23.6° E) between 27 March and 3 April 2009 was followed by a synergy of three instruments: a 6-wavelength Raman lidar, a CIMEL sun-sky radiometer and the MODIS sensor. The BSC-DREAM model was used to forecast the dust event and to simulate the vertical profiles of the aerosol concentration. Due to the mixing of dust particles with low clouds during most of the reported period, the dust event could be followed by the lidar only during the cloud-free day of 2 April 2009. The lidar data obtained were used to retrieve the vertical profiles of the optical properties (extinction and backscatter coefficients) of aerosols in the troposphere. The aerosol optical depth (AOD) values derived from the CIMEL ranged from 0.33-0.91 (355 nm) to 0.18-0.60 (532 nm),
Introduction
Atmospheric aerosols have a large impact on the planetary radiation budget and are thought to exert a net cooling effect on climate (Andreae, 1995; Ramanathan et al., 2001; Heinold et al., 2007; Levin and Cotton, 2009; Ramanathan and Feng, 2009; Lohmann et al., 2010). The cooling effect associated with anthropogenic aerosol is thought to partially mitigate greenhouse gas warming, but estimates of the radiative forcing pattern still remain complex and highly uncertain, owing to the large spatio-temporal variability of aerosol dust and its complex interaction with atmospheric constituents, radiation and clouds (Satheesh et al., 2005; Forster et al., 2007; Min et al., 2009). As mineral dust accounts for about 75 % of the global aerosol mass load and 25 % of the global aerosol optical depth (Kinne et al., 2006) and is a key player in Earth's climate (Mahowald et al., 2006; Balkanski et al., 2007; Bierwirth et al., 2009; Otto et al., 2009; Müller et al., 2011), affecting precipitation (Yoshioka et al., 2007), it is very important to quantify its effects on Earth's radiative forcing, both in the short-wave (0.3-4 µm) and long-wave (4-50 µm) spectral regions (Sokolik and Toon, 1999; Sokolik et al., 2001). For the assessment of the radiative effects of dust, it is imperative to obtain accurate data on the vertical profiling of its optical and microphysical properties, as well as its chemical composition, around the globe. Although specific dust experiments (e.g. SAMUM 1 and 2) (Ansmann et al., 2011, and references therein) also focused on the estimation of the radiative effects of dust, their results had a rather regional (Saharan region) and limited temporal coverage; therefore, systematic vertical profiles of aerosol optical-microphysical and chemical data around the globe are still missing, and open questions about the aerosol role in climate remain.
The gap concerning the vertical profiling of the aerosol properties can be filled by the synergy of systematic lidar measurements (ground-based, airborne and space-borne, to derive the optical-microphysical properties) and of in situ measurements (to derive the optical-microphysical-chemical properties) (e.g. Kandler et al., 2009; Weinzierl et al., 2009; Lieke et al., 2011), which are able to provide much of the information required to constrain models and reduce the uncertainties associated with radiative forcing estimates. Although in situ airborne measurements are information-rich, they remain extremely expensive and limited in both time and space.
Our study aims to fill this gap in the vertical profiling of dust aerosol properties, focusing on the retrieval of vertical profiles of the optical and microphysical properties and the composition of aged dust aerosol particles associated with a strong Saharan dust event, as they interact with anthropogenic particles in the lower free troposphere over an urban site (Athens, Greece). In the following sections we first present the dust forecasting model, instrumentation and methodology used for retrieving the aerosol properties (Sect. 2). An analysis of the dust event then follows, with an emphasis on the days with optimal lidar retrievals (Sect. 3). We provide final remarks and a summary of the work carried out in this paper in Sect. 4.
The NTUA 6-wavelength Raman lidar system
The National Technical University of Athens (NTUA) lidar system is located on Campus in the city of Athens (37.97° N, 23.79° E, 200 m a.s.l.) and has been continuously operating since the initiation of the EARLINET project (Bösenberg et al., 2003) in February 2000. The compact 6-wavelength NTUA Raman lidar system (Mamouri et al., 2008) is based on a pulsed Nd:YAG laser emitting simultaneously at 355, 532 and 1064 nm. The lidar signals are detected at six wavelengths: 355, 387, 407, 532, 607 and 1064 nm. The system has been quality-assured by performing direct intercomparisons, both at the hardware (Matthias et al., 2004) and software levels (Böckmann et al., 2004; Pappalardo et al., 2004).
To obtain reliable and quantitative lidar aerosol retrievals, several techniques and methods have to be combined. The standard backscatter lidar technique is appropriate for retrieving aerosol parameters mostly for small aerosol optical depths (AOD < 0.2-0.3 in the visible), assuming a reference height in an aerosol-free area (e.g. the upper troposphere). Under such conditions, the Klett inversion technique (Klett, 1985) is used to retrieve the vertical profile of the aerosol backscatter coefficient (b_aer) at the respective wavelengths during daytime. The resulting average uncertainty in the retrieval of b_aer (including both statistical and systematic errors, corresponding to a 30-60 min averaging time) in the troposphere is of the order of 20-30 % (Bösenberg et al., 1997). To overcome this large uncertainty, the Raman N2 lidar technique was adopted, using the methodology of Ansmann et al. (1992). Since the Raman lidar signals are quite weak, the Raman technique is mostly used during nighttime, when the atmospheric background is low.
In the case of the Raman technique, the measurement of the elastic backscatter signals at 355 and 532 nm, as well as that of the N2 inelastic backscatter signals at 387 and 607 nm, respectively, permits the determination of the extinction (a_aer) and backscatter (b_aer) coefficients independently of each other (Ansmann et al., 1992) and thus of the extinction-to-backscatter ratio, the so-called lidar ratio (LR = a_aer/b_aer), at both wavelengths (355 and 532 nm). The LR values depend on the chemical composition of the aerosols (absorption characteristics) and on their size distribution and shape characteristics (Ansmann et al., 2003), while the other lidar-derived parameters at wavelength pairs λ1 and λ2 (in nm), such as the Ångström backscatter-related (ABR) and extinction-related (AER) exponents (Ansmann et al., 2002), depend on the particle size and shape and on the wavelength dependence of the absorption coefficient, respectively (Ansmann et al., 2003). The relative errors of b_aer and a_aer, of LR, and of ABR and AER are mainly due to the presence of noise in the received lidar signal. Additionally, the lidar backscatter profile must be calibrated at a reference height region with negligible aerosol scattering (i.e., with only Rayleigh scattering). This uncertainty in the calibration region in the upper aerosol-free troposphere (at 355-532-1064 nm) may lead to further errors. Finally, by assuming that all errors are random and uncorrelated and assigning reasonable uncertainties to the input parameters mentioned above and to the lidar overlap function, the remaining systematic uncertainties are of the order of 10-20 % for b_aer and 10-15 % for a_aer (Ansmann et al., 1992; Mattis et al., 2002). Therefore, the corresponding uncertainty in LR is of the order of 14-25 %, while that in ABR and AER is of the order of 20-35 % and 20-30 %, respectively. To reduce the relative errors in the vertical profile of b_aer (to about 15-20 %) retrieved before local sunset time (∼19:00 UT), when the Klett technique is used, we applied the LR values retrieved by the Raman technique (Ansmann et al., 1992) during the nighttime period of the same day as the Saharan dust event (2 April 2009).
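As a minimal numerical sketch of the quantities just defined, the snippet below computes a lidar ratio and an Ångström exponent from a pair of coefficients; the coefficient values are illustrative placeholders, not measurements from this study.

```python
# Lidar ratio LR = a_aer / b_aer and Angstrom exponent from coefficients at
# two wavelengths; all numeric values are illustrative.
import numpy as np

def lidar_ratio(a_aer, b_aer):
    """Extinction-to-backscatter ratio (sr)."""
    return a_aer / b_aer

def angstrom_exponent(x1, x2, lam1, lam2):
    """ABR/AER exponent from coefficients x1, x2 at wavelengths lam1, lam2 (nm)."""
    return -np.log(x1 / x2) / np.log(lam1 / lam2)

a355, a532 = 1.5e-4, 1.0e-4   # extinction coefficients (1/m), illustrative
b532 = 2.0e-6                 # backscatter coefficient (1/(m sr)), illustrative
print(f"LR(532) = {lidar_ratio(a532, b532):.0f} sr")           # -> 50 sr
print(f"AER(355/532) = {angstrom_exponent(a355, a532, 355, 532):.2f}")  # -> 1.00
```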
The CIMEL sun-sky radiometer
The sun-photometric observations reported in this paper were performed by a CIMEL sun-sky radiometer, which is part of AERONET (http://aeronet.gsfc.nasa.gov) (Holben et al., 1998). The instrument is located on the roof of the Research Center for Atmospheric Physics and Climatology of the Academy of Athens (37.99° N, 23.78° E, elevation: 130 m). The site is located in the city center, 10 km from the sea. This sun-photometric station is operated by the Institute for Space Applications and Remote Sensing (ISARS) of the National Observatory of Athens (NOA). The CIMEL data used in this study provide information about the columnar AOD, the aerosol size distribution, aerosol microphysical properties, and the Ångström exponent (α). The inverted aerosol size distributions refer to aerosol radii ranging from 0.01 µm to 15 µm. The expected accuracy of the AERONET inversions is of the order of 15-25 % for radii greater than 0.5 µm and 25-100 % for radii less than 0.5 µm. The AERONET data product descriptions and accuracy, along with the technical specifications of the CIMEL instrument, are given in detail in Holben et al. (1998, 2006) and Smirnov et al. (2000).
The MODIS instrument
The Moderate Resolution Imaging Spectroradiometer (MODIS) was launched in December 1999 on the polar-orbiting Terra spacecraft and since February 2000 has been acquiring daily global data in 36 spectral bands from the visible to the thermal infrared (29 spectral bands with 1 km, 5 spectral bands with 500 m, and 2 spectral bands with 250 m nadir pixel dimensions). The MODIS aerosol products are only created for cloud-free regions. The columnar AOD values are retrieved by MODIS at 550 nm (http://modis-atmos.gsfc.nasa.gov/products.html) for both ocean (best) and land (corrected) (Tanré et al., 1997; Kaufman and Tanré, 1998; Levy et al., 2007; Russel et al., 2007; Remer et al., 2008; Redemann et al., 2012). The main sources of uncertainty in the retrieval of the AOD in this case are instrument calibration errors, cloud-masking errors, and incorrect assumptions on surface reflectance and aerosol-size distribution selection (Remer et al., 2005; Levy et al., 2010). Pre-launch analyses suggested that one standard deviation of retrievals would fall within ±(0.03 + 0.05 AOD) over ocean and ±(0.05 + 0.15 AOD) over land. These error bounds, derived pre-launch, are referred to as the expected error (EE) (Remer et al., 2005; Levy et al., 2010; Kleidman et al., 2012).
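The EE envelopes quoted above translate into a simple acceptance test; a sketch, assuming the envelopes exactly as published:

```python
# Check whether a MODIS AOD retrieval falls within the pre-launch expected
# error (EE) envelope of a ground-truth (e.g. AERONET) value.
def within_expected_error(aod_modis: float, aod_truth: float, over_land: bool) -> bool:
    ee = 0.05 + 0.15 * aod_truth if over_land else 0.03 + 0.05 * aod_truth
    return abs(aod_modis - aod_truth) <= ee

print(within_expected_error(0.48, 0.40, over_land=True))   # True:  |0.08| <= 0.11
print(within_expected_error(0.48, 0.40, over_land=False))  # False: |0.08| >  0.05
```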
To minimize the uncertainties of the MODIS AOD product, several validation studies have been performed during pre-launch and post-launch procedures, comparing the AOD measurements against ground-based instrumentation (Chu et al., 2003; Remer et al., 2005; Misra et al., 2008; Papadimas et al., 2009; Prasad and Singh, 2009). More recently, Levy et al. (2010) performed a global evaluation of the MODIS Collection 5 (C005) dark-target aerosol products over land, showing that more than 66 % (one standard deviation) of MODIS-retrieved AOD values compared to AERONET-observed values within an expected error (EE) envelope of ±(0.05 + 15 %), with high correlation (R = 0.9). According to the same authors, Terra's global AOD bias changes with time, underestimating by ∼0.005 after the year 2004. However, although validated globally, MODIS-retrieved AOD does not fall within the EE envelope in all regions of the planet (Levy et al., 2010).
In this study we used MODIS C005 data, which were recently evaluated and validated for the Greater Mediterranean Basin (29.5° N-46.5° N and 10.5° W-38.5° E) against 29 AERONET stations, as described by Papadimas et al. (2009). The same study found that when comparing C005 to C004 data, the correlation coefficient increases from 0.66 to 0.76 and the slope of the linear regression fit from 0.79 to 0.85, whereas the offset decreased from 0.12 to 0.04 and the scatter of compared data pairs from 0.15 to 0.12. On the other hand, they found a significant decrease of AOD values over land (by 25.8 %) for AODs > 0.2. However, the MODIS C005 data still overestimate/underestimate the AERONET AOD values smaller/larger than 0.25, but to a much smaller extent than the C004 data. More precisely, collocated MODIS retrievals were evaluated against data from our AERONET station, and the good agreement revealed justifies the MODIS retrieval for the day under study. For that day, the scattering angle of the MODIS observation used for comparison with AERONET was 133.26° and the aerosol type retrieved over land was equal to 2.
The BSC-DREAM dust model
The Barcelona Supercomputing Center - Dust Regional Atmospheric Model (BSC-DREAM) (Nickovic et al., 2001) has been delivering operational dust forecasts over North Africa, the Mediterranean and the Middle East, and over Asia, in recent years (currently at www.bsc.es/projects/earthscience/DREAM/). The model simulates the 3-dimensional field of the dust concentration in the troposphere. The dust model takes into account all major processes of the dust life cycle, such as dust production, horizontal and vertical diffusion, advection, and wet and dry deposition, while chemical aging and aerosol-cloud interactions are not taken into account. The model also includes the effects of the particle size distribution on aerosol dispersion. The model numerically solves the Euler-type mass partial differential equation by integrating it spatially and temporally. The dust production is parameterized using near-surface turbulence and stability as well as soil features. The dust production mechanism is based on viscous turbulent mixing close to the surface and on the soil moisture content.
In BSC-DREAM, for each soil texture type, fractions of clay, small silt, large silt and sand are estimated, with typical particle size radii of 0.73, 6.1, 18 and 38 µm, respectively. For the present study, the BSC-DREAM simulation is initialized with 24-hourly (at 12:00 UTC) updated NCEP (National Centers for Environmental Prediction) 0.5° × 0.5° analysis data, and the initial state of the dust concentration in the model is defined by the 24-h forecast from the previous-day model run (because there are not yet satisfactory three-dimensional dust concentration observations to be assimilated). The resolution is set to 0.33° × 0.33° (∼50 km) in the horizontal, while in the vertical the model domain extends up to 15 km height within 24 layers. For long-range dust transport studies, only the first two size classes (0.73 and 6.1 µm) are relevant for the analysis, since their lifetime is longer than about 12 h.
Derivation of the aerosol microphysical and chemical properties using models
The measured vertical profiles of the aerosol backscatter and extinction coefficients at multiple wavelengths can be inverted to derive profiles of particle microphysical parameters (Müller et al., 1999; Veselovskii et al., 2002, 2009; Osterloch et al., 2011). However, this kind of inversion is an ill-posed problem requiring regularization (Engl et al., 2000); therefore, "unique" (unambiguous) solutions will never be available. Furthermore, an application of this technique to dust needs to account for the particles' non-sphericity, given that backscattering by irregularly shaped particles is weaker than by equivalent-volume spheres (Mishchenko and Hovenier, 1995). To address this issue, Mishchenko et al. (1997) suggested approximating the dust particles with a mixture of polydisperse, randomly oriented spheroids, as these can mimic the aerosol optical properties. Dubovik et al. (2006) included the spheroid model in the AERONET retrieval algorithm, while Merikallio et al. (2011) studied the applicability of spheroidal model particles for simulating the single-scattering optical properties of mineral dust aerosols. Veselovskii et al. (2010) introduced the spheroid model into the lidar retrieval of dust particle physical properties, by assuming that aerosols are a mixture of spheres and randomly oriented spheroids with a size-independent shape distribution. This assumption is applied to all particles. However, we must keep in mind that for the fine mode the optical properties of spheres and spheroids are very close, and important differences occur only for the coarse mode. Besides, the output result is not very sensitive to the exact type of shape distribution (not shown), justifying the proposed assumption of a size-independent shape. Moreover, our numerical simulations demonstrate that for a 10 % uncertainty of the input optical data (backscatter and extinction coefficients) the dust particle volume density and effective radius can be estimated to within 30 %.
The unknown shape of the aerosols remains an unresolved issue, although progress has recently been made both in the case of Saharan dust (Gasteiger et al., 2011a) and volcanic ash (Gasteiger et al., 2011b), by assuming mixtures of absorbing and non-absorbing irregularly shaped mineral dust particles and spheroids, respectively. For dust particles in SAMUM-2, Gasteiger et al. (2011b) found that irregularly shaped dust particles with typical refractive indices, in general, have higher linear depolarization ratios than the corresponding spheroids, which improved the agreement with the observations. However, these models remain too complicated and time-consuming for implementation in an inversion algorithm, so in our study we use a simplified spheroidal model, which allows a reasonable estimation of the dust particle parameters.
In this paper the microphysical properties of the aerosols in the lower free troposphere, inside the dust layer, were retrieved using the regularization technique (Veselovskii et al., 2002, 2004, 2010), which uses as input the vertical profiles of the aerosol extinction coefficient a_aer (at 355-532 nm) and backscatter coefficient b_aer (at 355-532-1064 nm) retrieved from the elastic (during daytime only b_aer was retrieved, from the elastic channels) and Raman (during nighttime b_aer and a_aer were retrieved independently) backscattered lidar signals (obtained at 5 different wavelengths: 355-387-532-607-1064 nm). The inverted aerosol microphysical properties are the effective radius (r_eff), the total number (N), surface area (S) and volume (V) concentrations, as well as the real and imaginary parts of the particle refractive index (m_R and m_i, respectively), within different layers in the lower troposphere (1.8-3.5 km height a.s.l.). In our approach we do not consider the spectral dependence of the refractive index, or the chemical composition of the aerosol particles. Thus, the retrieved values of the refractive index are averages with respect to the size and spectral range considered (355-1064 nm). Additionally, in our retrieval we consider m_R to be in the range 1.33-1.65, m_i in the range 0-0.02, and the aerosol particle diameters in the range 0.15-20 µm. The uncertainty of the m_R and m_i retrieval is of the order of ±0.05 and ±50 %, respectively, according to Veselovskii et al. (2010); the corresponding uncertainty of the retrieved values of the effective radius, volume and surface density is about ±30 %. Finally, the uncertainty of the number density estimation is about 50 % (Veselovskii et al., 2010).
We have, of course, to clarify here that the inverse problem (using lidar data to retrieve the aerosol microphysical properties) in our formulation is underdetermined: the set of lidar measurements within a single atmospheric layer is limited to 5 data points (3 b_aer and 2 a_aer), and this is not sufficient to uniquely describe the properties of the aerosol. Therefore, we fit the observations and identify not a unique solution but a family of solutions instead. Specifically, a series of solutions is generated using different initial guesses, different aerosol assumptions and different settings of a priori constraints. Each single solution is obtained using the regularization technique. Then the individual solutions corresponding to the smallest residuals are averaged, and the result of the averaging is taken as the best estimate of the aerosol properties. This approach has been demonstrated to provide a rather adequate retrieval of aerosol properties (Veselovskii et al., 2009).
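As a toy illustration of the regularized-inversion idea (and not the actual retrieval code, which uses spheroid scattering kernels), the sketch below inverts synthetic data with a Tikhonov smoothness penalty for several regularization strengths and averages the solutions:

```python
# Toy Tikhonov-regularized inversion: 5 "optical data" points constrain a
# 20-bin size distribution, so a smoothness penalty is needed; solutions
# for several regularization strengths are averaged. The kernel matrix K
# is a random stand-in, not real Mie/spheroid kernels.
import numpy as np

rng = np.random.default_rng(3)
n_data, n_bins = 5, 20                    # 3 backscatter + 2 extinction channels
K = rng.random((n_data, n_bins))          # placeholder forward kernels
true_sd = np.exp(-0.5 * ((np.arange(n_bins) - 8) / 3.0) ** 2)  # toy size dist.
g = K @ true_sd * (1 + 0.1 * rng.standard_normal(n_data))      # noisy "data"

L = np.diff(np.eye(n_bins), 2, axis=0)    # second-difference smoothness operator
solutions = []
for lam in (1e-3, 1e-2, 1e-1):            # family of solutions, then averaged
    A = K.T @ K + lam * L.T @ L
    solutions.append(np.linalg.solve(A, K.T @ g))
best_estimate = np.mean(solutions, axis=0)
print(best_estimate.round(2))
```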
The inverted refractive index (which corresponds to in situ conditions, i.e. includes aerosol water), along with the water vapor profiles obtained by Raman lidar over Athens and the temperature and relative humidity profiles obtained by radiosonde, was incorporated into the thermodynamic model ISORROPIA II (Fountoukis and Nenes, 2007) to determine the aerosol composition. The model treats the thermodynamics of aerosol containing K, Ca, Mg, NH3/NH4, Na, SO4/HSO4, HNO3/NO3, HCl/Cl and H2O. ISORROPIA-II can predict composition for the "stable" (or deliquescent path) solution, where salts precipitate once the aqueous phase becomes saturated with respect to a salt, and a "metastable" solution, where the aerosol is composed only of an aqueous phase regardless of its saturation state. ISORROPIA-II was executed in "reverse" mode, where the known quantities are T, RH and the concentrations of aerosol K, Ca, Mg, NH4, Na, SO4, NO3 and Cl. The output provided by ISORROPIA-II is the aerosol phase state (solid only, solid/aqueous mixture or aqueous only) and the speciation in the gas and aerosol phases. The model has been evaluated with ambient data from a wide range of environments (including "dust-rich" ones) (Moya et al., 2001; Zhang et al., 2003; San Martini et al., 2006; Nowak et al., 2006; Metzger et al., 2006; Fountoukis et al., 2009), while its computational rigor and performance make it suitable for use in large-scale air quality and chemical transport models. Some examples of such 3-D models that have implemented ISORROPIA-II are GISS, CMAQ, PMCAMx, GEOS-Chem, and ECHAM/MESSy (Adams and Seinfeld, 2002; Yu et al., 2005; Pye et al., 2009; Karydis et al., 2010; Pay et al., 2010; Pringle et al., 2010).
In order to use ISORROPIA in combination with the Raman lidar data, an assumption concerning the aerosol composition has to be made, mainly due to the absence of air mass samples within the layers under study. This is further complicated by the fact that the aerosol conditions in the Mediterranean, and especially over Southern Greece, are complex (Lelieveld et al., 2002), due to the presence and mixing of aerosols of various origins (marine, Saharan dust, biomass burning events, long-range and/or local pollution) (Sciare et al., 2003; Karageorgos and Rapsomanikis, 2007; Koulouri et al., 2008; Sciare et al., 2008; Pikridas et al., 2010; Terzi et al., 2010; Theodosi et al., 2011). Considering such complexity in aerosol modeling is often important; the output, however, may be subject to considerable uncertainty.
In our procedure, a typical composition of sulfate, ammonium sulfate and mineral dust aerosols was first considered. ISORROPIA was run forward to compute the complex refractive index for each candidate aerosol composition, using as input the relative humidity and the temperature within an aerosol layer. Finally, the aerosol composition (a mixture of sulfate, ammonium and mineral dust) whose refractive index (both real and imaginary parts) is closest to the one estimated by the inversion model is taken as the most acceptable composition.
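A minimal sketch of this matching step is given below. It does not run ISORROPIA; instead it assumes a simple volume-weighted mixing rule and placeholder component refractive indices (illustrative values, not the ones used in the study) to show how the candidate composition closest to the inverted refractive index can be selected.

```python
import numpy as np

# Placeholder component refractive indices at 532 nm; the actual values
# come from the forward ISORROPIA runs and are not reproduced here.
COMPONENTS = {'sulfate': 1.43 + 0.000j,
              'ammonium sulfate/organics': 1.53 + 0.005j,
              'mineral dust': 1.55 + 0.003j}

def closest_composition(m_retrieved, step=0.05):
    """Scan volume mixtures of the three components and return the one
    whose volume-weighted refractive index is closest to the value
    retrieved by the inversion (a stand-in for running the
    thermodynamic model forward for each candidate composition)."""
    names = list(COMPONENTS)
    best, best_err = None, np.inf
    fractions = np.arange(0.0, 1.0 + 1e-9, step)
    for f1 in fractions:
        for f2 in fractions:
            f3 = 1.0 - f1 - f2
            if f3 < -1e-9:
                continue  # fractions must sum to one
            m_mix = (f1 * COMPONENTS[names[0]] +
                     f2 * COMPONENTS[names[1]] +
                     f3 * COMPONENTS[names[2]])
            # Distance in the complex plane between candidate and retrieval
            err = abs(m_mix - m_retrieved)
            if err < best_err:
                best, best_err = dict(zip(names, (f1, f2, f3))), err
    return best, best_err
```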
In situ measurements of aerosol properties
In situ sampling of dust aerosols mixed with urban-like ones was performed to infer the mass concentration and the composition of dust particles near the ground. However, under the generally complex aerosol conditions prevailing over S. Greece, as previously mentioned, near-surface particle measurements cannot be directly compared to lofted aerosol lidar data, although this could make sense under special conditions (e.g. under very strong Saharan dust events, where the mineral constituents largely dominate the aerosol composition in a homogenized lower troposphere). Our sampling site was installed at the NTUA Campus, at the top of a building at 14 m height above ground level (located 200 m a.m.s.l.), and included PM 10 continuous concentration monitoring by a TSI Dustrak 8520 and TCR TECORA aerosol sampling. The sampling procedure and the elemental composition determinations are described in detail in Remoundaki et al. (2011).
Briefly, PM 10 sampling for elemental composition determination and SEM-EDX (Scanning Electron Microscope-Energy Dispersive X-ray) analysis was carried out using a TCR TECORA (Sentinel PM) operating at 38.33 l min −1 , constructed and calibrated in order to comply with European Standard EN12341 for standard sampling of PM 10 . The sampling device operates with an autonomy of 16 samples charged in a charging cassette by programming the sampling span and duration. Aerosol samples were collected on 0.45 µm nuclepore membranes. Twelve samples were collected from 27 March to 2 April 2009. From 28 March to 2 April, two 3-h samples per day were collected: one starting at 06:00 UTC and the second starting at 11:00 UTC, in order to correspond to the urban activity maxima. This 3-h time span during the two urban activity maxima (beginning and end of the working day) was also selected in order to avoid sampling interruption due to filter clogging.
Sampling material and filter-keeping petri dishes were pretreated by soaking in dilute nitric acid solution and thorough rinsing with ultra-pure water (18 MΩ cm), and dried under the laminar flow hood of the laboratory. In order to determine PM 10 concentrations, the nuclepore membranes were weighed before and after sampling according to the procedure described in Annex C of EN12341 (EN12341, 1999), using a Mettler Toledo MS105 with a resolution of 10 µg in the air-conditioned weighing room of the laboratory. The pre-weighed membranes were charged to the filter supports and sampler cassette under the laminar flow hood. Filter blanks and blank field samples were also prepared and analysed together with the samples. These filters were also weighed according to the same procedure as described before. The elemental composition determinations were carried out using the EDXRF (Energy Dispersive X-Ray Fluorescence) technique (SPECTRO XEPOS bench top XRF spectrometer, SPECTRO A.I. GmbH) with a Pd end window X-ray tube. NIST standard SRM 2783 was used for spectrometer calibration verification. The elements Si, Al, Fe, K, Ca, Mg, S, Ni, Cu, Zn, Mn, and Ti were determined. SPECTRO X-LAB PRO was used for value normalization and error correction. The method detection limits were 100 ng cm −2 for Mg, 20 ng cm −2 for Al and K, 10 ng cm −2 for Ca, Ti, Fe, 5 ng cm −2 for Si, Mn, 2 ng cm −2 for Ni, and 1 ng cm −2 for S, Cu and Zn. The estimated precision of the method ranged between 0.1 % and 30 % for individual elements, being <5 % for most of them.
Case study: 27 March-3 April 2009 dust episode
This case study concerns an intense Saharan dust outbreak, which lasted for eight days (27 March to 3 April 2009) and affected most of the Eastern Mediterranean and Balkans. The ground and remote sensing instruments were operated continuously during this period (although the NTUA Raman lidar was operated only during the end of the episode, when clouds were dispersed and aerosol optical depths were low enough to permit sampling by the laser beam). The MODIS and CIMEL instruments did not provide aerosol optical depth between 28 March and 1 April, due to extensive cloud cover. Figure 1 presents the BSC-DREAM predictions of total dust in size classes between 0.1 and 10 µm (in g m −2 ) over the European continent at 12:00 UTC. Superimposed on the same figure are the corresponding hourly forecasted wind vectors at the 3000 m height level. From this figure we see that during the studied period Athens is influenced by high values of dust loadings (up to 1.5 g m −2 ) from 29 March to 3 April. The maximum dust load was predicted to occur over Athens on 28, 29 March and 2 April (Fig. 1). These large amounts of dust particles originated from the Saharan region and approached Greece after passing over the Mediterranean Sea. Cluster analysis of back-trajectories for air masses arriving in Athens suggests that they may often experience interaction with maritime aerosol before reaching the Greater Athens Area (GAA), as indicated by Markou and Kassomenos (2010). The back-trajectories (Fig. 2) show that the air masses passed at low heights over the Saharan desert (within the Planetary Boundary Layer: PBL), so they were enriched with dust particles. Subsequently, they traversed the central Mediterranean region and moved anti-cyclonically over Greece. More precisely, the air masses ending at 2000 m height at 15:00 UTC were enriched with Saharan dust particles near the source, passing at about 850 m a.s.l. (about 100-120 h before their arrival in Greece), while those ending at 3500 m nearly touched the surface (some 70-80 h earlier); thus, they should contain much higher dust loads (Fig. 2, left). On the other hand, the air masses ending at 3500 m at 19:00 UTC were enriched twice with dust particles while passing over the Sahara (about 150 and 30 h before) (Fig. 2, right). In both cases desert dust particles had the opportunity to mix with marine aerosols and anthropogenic haze (from Italy and the Balkans/Black Sea areas) accumulating over the Mediterranean Sea, which is a common issue in that area (Lelieveld et al., 2002).
Aerosol backscatter and Raman measurements at 355, 532 and 1064 nm were performed by the NTUA Raman lidar system over Athens only under cloud-free conditions in the studied period. Thus, we will focus on aerosol profiles obtained on 2 April. Figure 3 shows the time-height cross section of the range-corrected backscatter lidar signal (in arbitrary units: AU) obtained at 1064 nm from 13:42 to 20:49 UTC from 300 up to 6000 m a.s.l., after the cloud dissipation. According to the lidar measurements, the entire lower troposphere shows a deep, pronounced and aerosol-rich layer extending from the ground up to 3500-4000 m height. More specifically, two thin and distinct aerosol layers are visible. The first layer was located around 2000 m, while the second one was found between 3200-3700 m. Indeed, the BSC-DREAM model indicates the transfer of Saharan dust particles over Athens (Fig. 1). These particles are mostly confined between 2000 and 4000 m height (Fig. 4) and have very high forecasted dust concentrations, of the order of 400 µg m −3 at 3 km. These dust heights are also in full accordance with the output of the HYSPLIT model, indicating the arrival of dust-rich air masses over Athens originating from the Saharan desert and then passing over Algeria and Tunisia (Fig. 2). This kind of aerosol structure indicates the presence of aerosols of different origins, as similarly observed during the various campaigns previously cited, such as SAMUM 1 and 2 (Weinzierl et al., 2009; Engelmann et al., 2011; Tesche et al., 2011a), PRIDE, SHADE, DODO1 (McConnell et al., 2008) and DABEX (Heese and Wiegner, 2008; Pelon et al., 2008).
Both aerosol layers detected by the lidar slightly descended to lower altitudes with time and became diluted during the afternoon hours (from 13:42 UTC to 16:00 UTC), although they remained present around 2000 m and 3200-3700 m. The PBL height during daytime reached about 1400 m around 14:00 UTC, while during the afternoon and evening hours it descended down to 500 m a.s.l. around midnight, in full accordance with the closest radiosonde profile data (not shown here). The highest values of the range-corrected backscatter lidar signal within the PBL (shown by the red color in Fig. 3), under stable relative humidity values, indicate the possibility of dust presence also near the ground, mixed with locally produced aerosols (e.g. from anthropogenic sources) (Balis et al., 2006). This was confirmed by Remoundaki et al. (2011), since the mass concentration of PM 10 particles measured in situ at 14 m a.g.l., from 27 March to 2 April, showed that during this dust event, when the aerosol-rich air masses touched the ground, the aerosol mass concentrations exceeded 140-160 µg m −3 from 30 March to 1 April. During the noon and early afternoon hours of 2 April, PM 10 concentrations at the ground reached 60-70 µg m −3 , consistent with the lidar data concerning the detection of the arrival of the dust layers over Athens (Fig. 3). The BSC-DREAM model correctly simulated the existence of a dust layer centred around 3 km (at 18:00 UTC) and extending up to 4000-4500 m height (Fig. 4), although it did not correctly simulate the aerosol mass concentration near the ground (25 µg m −3 measured versus 10 µg m −3 simulated, while the annual mean PM 10 mass concentration in Athens in 2009 was of the order of 26 ± 1 µg m −3 ; YPEKA, 2010). This is because BSC-DREAM simulates only the dust-related aerosol profiles and not those related to urban air pollution sources.
In Fig. 5 the corresponding vertical profiles of the aerosol optical properties (a aer , b aer , LR, ABR and AER), as well as the water vapour mixing ratio (WVMR), are presented, along with the respective error bars. The averaging time of the lidar signals for the retrieved vertical profiles is approximately 3 h and the vertical range resolution is of the order of 15 m. We note here that the lower height of our aerosol retrievals is around 1500 m a.s.l., due to the overlap height of our lidar system, which is of the order of 1200-1500 m, depending on the wavelength used. Based on the aerosol extinction and backscatter profiles shown in Fig. 5, the presence of particles (between 17:40-20:40 UTC) extends mainly up to 3500 m, coinciding with decreased values (1.11 ± 0.02) of the ABR 532/1064 (red line in Fig. 5) between 1500 and 4000 m, indicating the presence of rather small particles (0.1 µm < diameter < 1 µm) (Müller et al., 2003; Ansmann et al., 2002; Tesche et al., 2011b). The corresponding water vapour vertical profile derived by the NTUA Raman lidar showed that the mixing ratio remained of the order of 4 g kg −1 inside the dust layer. Moreover, the relative humidity (RH) profile obtained by radiosonde at a nearby location (about 15 km away) showed RH values of 79-80 % around the 3000 m height region, which could lead to probable mixing of the dust particles with humidity.
The mean LR values found over Athens inside the referred aerosol layers, in the height range between 1800 and 3500 m, were 83 ± 7 sr (355 nm) and 54 ± 7 sr (532 nm), while the ABR 355/532 and AER 355/532 values were 1.17 ± 0.08 and 1.11 ± 0.02, respectively (Table 1). Indeed, the aerosol optical properties presented in Fig. 5 support our view about mixing of dust with other particles, since our measured LR, ABR 355/532 and AER 355/532 values are higher than those for pure dust, which are close to 53 ± 7 sr (355 nm), 55 ± 7 sr (532 nm), 0.2 ± 0.2, and 0.0 ± 0.2, respectively, according to Tesche et al. (2009a) (see Table 1). Therefore, our LR and AER values indicate a mixture of dust (AER 355/532 ∼0) and other particles, such as continental haze and smoke (AER 355/532 ∼1.4-2) (see references in Table 1 and also Ansmann et al., 2001; Franke et al., 2003; Müller et al., 2003, 2004, 2011; Amiridis et al., 2009b, 2012; Burton et al., 2012). This is further supported by the MODIS hot spot fire product (http://modis.higp.hawaii.edu/cgi-bin/modis/modisnew.cgi), indicating biomass burning activity along the air mass trajectory path given in Fig. 2. Figure 6 shows the temporal evolution of the CIMEL sun-sky radiometer AOD at eight wavelengths and the Ångström exponent (α) over Athens for the period 27 March to 3 April 2009. The value of α is derived according to the Ångström power law, using the 440, 670 and 870 nm channels (e.g. Eck et al., 1999; Holben et al., 2001). From the almucantar sky radiance measurements (see also http://aeronet.gsfc.nasa.gov) at the four highest wavelengths, an inversion algorithm (AERONET version 2), as described by Dubovik et al. (2002 and 2006), retrieves a large set of optical and microphysical aerosol parameters. In Fig. 6, the MODIS AODs at 550 nm are additionally presented (white squares), along with the BSC-DREAM dust AODs at 550 nm (upper panel). CIMEL data between 29 March and 1 April are lacking owing to excessive cloud cover over the city of Athens.
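The Ångström power-law derivation mentioned above amounts to a linear fit in log-log space. A small sketch follows; the AOD values used in the example are illustrative, not measured ones.

```python
import numpy as np

def angstrom_exponent(wavelengths_nm, aods):
    """Least-squares Angstrom exponent from AODs at several wavelengths,
    using the power law AOD(lambda) ~ lambda**(-alpha)."""
    x = np.log(np.asarray(wavelengths_nm, dtype=float))
    y = np.log(np.asarray(aods, dtype=float))
    slope, _ = np.polyfit(x, y, 1)
    return -slope

# Illustrative dust-day values at the 440/670/870 nm channels
# give a low exponent (~0.36), as expected for coarse desert aerosol.
alpha = angstrom_exponent([440.0, 670.0, 870.0], [0.60, 0.52, 0.47])
```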
Moreover, the MODIS data for the GAA (10 km × 10 km over the station) on 29 March (not shown) showed that the evolution of the event was well captured by the BSC-DREAM model. The BSC-DREAM AOD data (Fig. 6) showed the quick arrival of the desert plume over the Athens station on 29 March and then on 2 April. The highest CIMEL AOD was registered on 2 April, with a value of 0.90 ± 0.05 (340 nm). The desert dust plume was also visible on 3 April, while the following days (from 4 April) showed a clear weakening of the event as the desert plume quickly moved away, as shown by the AOD, which dropped back to background levels of around 0.3 at 340 nm, in accordance with the mean annual AOD (0.29 ± 0.18) reported for Athens at 440 nm by Gerasopoulos et al. (2011). We have to mention here that the same authors report that the mean AOD at 500 nm for Athens is 0.34 during stagnant conditions where anthropogenic haze dominates (Gerasopoulos et al., 2011). Therefore, background AODs are slightly lower than those reported for local haze conditions, and much lower than those for dust conditions (AOD > 0.4). The temporal evolution of the Ångström exponent (440-870 nm) for the same time period is also shown in the lower panel, with low values (from 0.4 to 1.0) for 2 and 3 April, in inverse correspondence with the high AOD for desert aerosols. One of the characteristics of the desert dust episodes in our area (Balis et al., 2004) is the high variability shown by both parameters during each day.
As can be seen in Fig. 6, on 2 April between 10:00 and 14:00 UTC we observed high AODs due to thick dust layers which were advected to the observation location (as shown in the lidar data). For the same day, the mean daily volume aerosol size distribution (not shown) exhibited two modes, although the relative importance of the modes depends on the prevailing aerosol type: an accumulation or fine mode with particle radius below 0.6 µm, and a coarse mode with particle radius between 0.6 and 15 µm. In this case, we expect a predominant coarse mode under desert dust conditions. The mode radii and volume concentrations were analysed in order to characterize the aerosol dust evolution. The evolution of the desert dust is clear in the coarse-mode fraction.
Table 2 presents the percentage contribution of dust and sulfates to PM 10 at near-ground level. The detailed calculations have already been presented in Remoundaki et al. (2011). From this table, it can be seen that the dust contribution was at the level of 15 % before the arrival of the Saharan dust and increased significantly during the dust event, reaching 65 % on 31 March and 79 % on 1 April. The sulfate (SO 4 2− ) contribution was at the expected levels for the city of Athens (Karageorgos and Rapsomanikis, 2007; Theodosi et al., 2011) and presented a maximum on 29 March, when southerlies (responsible for long-range transport of particles of crustal origin) were simultaneously present with west winds charged with aerosol particles from local urban and industrial emission sources (e.g. the oil refineries of Aspropyrgos located in the WNW-NW sector) (Remoundaki et al., 2011). Finally, both dust and sulfates represent a significant fraction of PM 10 and account in some cases for more than 50 % of the PM 10 mass.
Indeed, the EDX analysis of all particles sampled (Remoundaki et al., 2011) during the reported period (27 March to 3 April 2009) revealed that aluminosilicates (clays) were predominant. The presence of illite was clear in many cases, while quartz particles were rare and very difficult to detect. Dust particles were very rich in calcium, which is distributed among calcite, dolomite, sulfates and Ca-Si particles (e.g. smectites). Iron oxides were often detected. These results confirm those reported on the elemental composition of the dust and on the origins of the air masses, which first started from the Western Sahara and passed over northern Algeria on their way to Greece. These findings are also in very good agreement with the literature on the characterization of Saharan particles and their relationship to their origins (Coude-Gaussen et al., 1987; Avila et al., 1997; Blanco et al., 2003; Coz et al., 2009; Rodríguez et al., 2011).
Using the aerosol backscatter profiles at 355, 532 and 1064 nm and the corresponding aerosol extinction profiles at 355 and 532 nm, we calculated the aerosol microphysical properties with the retrieval code for spheroid particles (Veselovskii et al., 2010) applied to the lidar data of 2 April. As mentioned previously, the retrieval algorithm represents the aerosol as a mixture of spheres and spheroids. However, without using the particle depolarization ratio in the retrieval, the spheroid volume ratio (SVR) is underestimated, which leads to an underestimation of the real part of the refractive index (Mishchenko and Hovenier, 1995; Veselovskii et al., 2010). Thus, it is more accurate to suggest that the majority of the particle volume in the considered height range is related to non-spherical particles. This assumption is justified by the HYSPLIT trajectories, which suggest that most of the particles in the coarse mode are associated with dust. The finer-mode particles (and a fraction of the coarse dust) are expected to mix with anthropogenic pollution and sea salt; this, together with aerosol water, will undoubtedly make the particles more spherical.
We selected to retrieve the aerosol properties at four different layers for the period between 17:40-20:40 UTC: layer 1 (1910-2070 m), layer 2 (2284-2850 m), layer 3 (2960-3100 m) and layer 4 (3140-3420 m). The retrieved particle volume size distributions dV/dlnr for the four considered layers are shown in Fig. 7 (left-hand graph). Moreover, the integral particle parameters, such as V, S, N, r eff and m R and m i , are summarized in Table 3. From the particle size distribution (PSD) analysis shown in Fig. 7 we can conclude that the fine mode of the PSD is centered at 0.13 µm, while the coarse mode is centered near 1 and 2 µm. More specifically, at lower heights (layer 1) the fine mode prevails, but at higher altitudes the contribution of the coarse mode becomes more important. The fine mode containing the small particles determines the integral particle number density. In our case, the fine-mode particles decrease with height, thus resulting in N decreasing from 1700 cm −3 in layer 1 to 700-800 cm −3 in layers 3 and 4. On the other hand, the mean effective radius rises from 0.22 µm to around 0.32 µm between layer 1 and layers 3 and 4. In fact, the in situ aerosol sampling near the ground revealed that near the end of the Saharan dust event on 2 April the dominant particle diameter was smaller than 2 µm (Remoundaki et al., 2011), which is consistent with the retrieved entire aerosol volume size distribution (Fig. 7). Additionally, in Fig. 7 (right-hand graph, in blue color) we show the aerosol size distribution (total column) measured by the CIMEL sun-sky radiometer at different hours (from 05:55:42 UTC to 13:30:45 UTC), where two main aerosol classes are found: those of the fine mode (around 0.15 µm radius, in agreement with the aerosol characterization from direct-sun AERONET data presented in Basart et al., 2009) and those of coarse-mode particles (around 1-2 µm radius).
Furthermore, if we divide the integrated CIMEL data (at 13:30:45 UTC) by the dust layer thickness of about 3.5-4 km, we obtain maxima of dV(r)/dln(r) of the order of 5.5-6.2 and 15-17 µm 3 cm −3 for the fine and coarse mode, respectively. These maximum values are quite close (especially the fine-mode ones) to the maximum size distribution values retrieved from the lidar data (5.5-12 and 7-10 µm 3 cm −3 for the fine and coarse mode, respectively; Fig. 7). Moreover, our retrieved aerosol size distribution and the one from CIMEL are comparable, with radii centered on 0.13 µm (fine mode) and 1 to 2 µm (coarse mode). Although there is some difference, especially in the coarse-mode (around 1-2 µm radius) particles, temporal variability and non-concurrent measurements between the retrievals could account for it. Additionally, the retrieved real part of the refractive index is of the order of 1.47 for layers 1 and 2, indicating mixing of dust with urban-like sulfate and organic carbon aerosols (Sokolik et al., 1993; Ebert et al., 2004; Raut and Chazette, 2008; McConnell et al., 2010), while for layers 3 and 4 it increases up to 1.52, indicating even stronger mixing of dust with organic carbon aerosols and urban-like sulfate over Athens (Ebert et al., 2002; Petzold et al., 2009). The imaginary part of the refractive index in all layers is m i = 0.007 ± 0.0035, indicating aerosol that is internally mixed with slightly absorbing dust (Patterson et al., 1977; Sokolik et al., 1993; Sokolik and Toon, 1999; Ebert et al., 2004; T. Müller et al., 2009; Kandler et al., 2009).
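The column-to-layer conversion used above is simple unit arithmetic; the sketch below spreads a columnar CIMEL dV/dln(r) value uniformly over an assumed layer depth. The 0.06 µm³ µm⁻² input value is illustrative, not a reported one.

```python
def column_to_layer_mean(column_value_um3_per_um2, layer_km):
    """Convert a columnar sun-photometer volume size distribution value
    dV/dln(r) (in um^3 per um^2 of column cross-section) to a layer-mean
    concentration (um^3 cm^-3), assuming the aerosol is spread uniformly
    over a layer of the given thickness."""
    layer_cm = layer_km * 1.0e5                   # km -> cm
    per_cm2 = column_value_um3_per_um2 * 1.0e8    # per um^2 -> per cm^2
    return per_cm2 / layer_cm

# e.g. a coarse-mode columnar value of 0.06 um^3 um^-2 spread over a
# 3.75 km deep layer gives ~16 um^3 cm^-3, of the same order as the
# values quoted above.
print(column_to_layer_mean(0.06, 3.75))
```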
It is interesting to note that the latest available CIMEL data for 2 April, obtained at 13:30 UTC over Athens (not shown here), gave a columnar refractive index of the order of 1.53-1.55 (real part), while the imaginary part ranged from 0.009i to 0.015i. These are column values obtained during the Saharan dust event several hours before the lidar sampling; they are typical of mixtures of silicate particles with sea salt (Ebert et al., 2002). Moreover, the retrieved mean columnar value of the effective radius was 0.21 µm (at 13:30:45 UTC), which compares very well with the value (0.22 µm) retrieved from the lidar data in dust layer 1 (between 17:40-20:40 UTC), but less well with those retrieved from layers 2-4 (2.28-3.42 km), as shown in Table 3. Indeed, layers 2 to 4 (17:40 to 20:40 UTC) are related to the strong dust layer which appeared around 3.5 km (from 13:42 to 16:00 UTC) (see Fig. 3); therefore, they are probably associated with larger dust particles.
The final data set of aerosol optical and microphysical properties, along with the water vapour profiles, was incorporated into the ISORROPIA II model (Fountoukis and Nenes, 2007) to provide a possible dry chemical composition that is consistent with the retrieved refractive index values (Table 3). Of course, due to the complexity of the aerosols probed over Athens, a unique "solution" for the chemical composition cannot be provided, unless in situ airborne data are available for direct comparison to validate our model results. For the aerosols located at layer 1, we derived a chemical composition of about 50-60 % sulfate, 15-25 % organic carbon (OC) and 15-35 % mineral dust. At the second layer, the model showed a lower concentration of sulfates and a slight increase of OC in comparison with the first one. Specifically, the retrieved chemical composition was of the order of 32-52 % sulfates, 28-36 % OC and 12-40 % mineral dust. For the aerosols located at layer 3, we derived a chemical composition of about 18-38 % sulfate, 52-62 % OC and 0-30 % mineral dust. For the aerosols located at layer 4, a chemical composition of about 16-36 % sulfate, 54-64 % OC and 0-30 % mineral dust was estimated by the model. At the two upper layers the retrievals showed a higher concentration of OC, which correlates well with the high values of the refractive index (of the order of 1.52) and is consistent with the enrichment of organics in the aerosol that is often seen in the free troposphere (Heald et al., 2005). These findings, which indeed indicate mixing of mineral dust aerosols with sulfate and OC ones (typical of urban air pollution sources and biomass burning), are in accordance with the lidar data presented in Fig. 3, where the dust aerosol layers around 2000-3500 m (between 17:40-20:40 UTC) are diluted above the PBL through mixing with locally produced aerosols. Table 3 summarizes the optical, microphysical and chemical properties of aerosols retrieved at the four specific layers (1st to 4th layer), as well as the RH (%) at each layer, for 2 April 2009. Regarding the percentage of mineral dust (34.5 %) in PM 10 shown in Table 2, we see that the findings from the chemical analysis are at the upper limit of the values derived by the ISORROPIA II model for the lower atmospheric layers (1st and 2nd layers, located from 1.9 to 2.85 km a.s.l.). Given that OC is not measured, 34.5 % is in reality the upper limit of the dust concentration.
On the other hand, the percentage of SO 4 contribution to PM 10 near the ground (see Table 2) was much lower than the values derived by the ISORROPIA II model for the lower atmospheric layers, but it agreed quite well with the lower limit of the values derived by the ISORROPIA II model for the upper atmospheric layers (3rd and 4th layers, located from 2.9 to 3.42 km a.s.l.). Given that the filter integrates over a longer period than the lidar (Remoundaki et al., 2011), the average chemical composition differs from the retrieval; furthermore, changes in acidity (due to the uptake of gas-phase species) can modify the aerosol composition (Seinfeld and Pandis, 2008).
Conclusions
In this manuscript, we attempted to combine experimental data (multi-wavelength Raman lidar data, comprising 3 aerosol backscatter and 2 extinction profiles, together with in situ measurements for the chemical characterization of the sampled aerosol) and models (a microphysical inversion and a thermodynamic one) to infer the particle optical and microphysical properties, as well as a possible chemical composition. During a strong Saharan dust event that occurred over Athens (27 March to 3 April 2009), selected measurements were performed to obtain the optical properties of the dust particles in the lower free troposphere. A hybrid regularization technique was used to derive the mean microphysical properties of the dust particles, while the thermodynamic model ISORROPIA II was used to provide possible aerosol chemical properties at four selected dust layers between 1.9 and 3.4 km. AOD values derived from the CIMEL sun-sky radiometer ranged from 0.33-0.91 (340 nm) to 0.18-0.60 (500 nm), while the LR values retrieved from the Raman lidar ranged from 75-100 sr (355 nm) and 45-75 sr (532 nm). Moreover, ABR 355/532 and ABR 532/1064 values of about 0.9-1.9 and 1.0-1.45 were observed, respectively, while the AER 355/532 ranged within 0.8-1.5. Inside the selected dust layers, the mean AER 355/532 ranged within 0.85-1.25, while the mean LR values were 83 ± 7 sr (355 nm) and 54 ± 7 sr (532 nm) and the mean ABR 355/532 and AER 355/532 were 1.17 ± 0.08 and 1.11 ± 0.02, respectively. The higher mean value of AER 355/532 (1.25) observed at the lower atmospheric layer indicates the presence of smaller particles, compared to the higher layers where larger particles are present, since their AER 355/532 values were much lower (0.85-0.94). The presence of pure dust can easily be excluded, since pure dust is characterized by AER and ABR values lower than 0.5 (Tesche et al., 2011a). The presence of smaller particles at the lower layer, under this strong dust event, indicates a possible mixing of haze and dust, while the larger particles at greater heights may indicate the mixing of dust with coarser particles. Indeed, inside the four selected dust layers the aerosol refractive indices ranged from 1.47 (±0.05) + 0.0070 (±0.0035)i to 1.52 (±0.05) + 0.0070 (±0.0035)i, while effective radii ranging from 0.22 ± 0.06 to 0.33 ± 0.10 µm were retrieved.
In the first two lower atmospheric layers, the inferred possible contribution of dust to the observed optical properties was estimated to vary from 12-40 %, with a quite important sulfate contribution from anthropogenic haze of 32-60 % and an OC content (15-36 %) originating from urban combustion and/or biomass burning activities (Prosmitis et al., 2004; Sillanpää et al., 2005). At the two higher layers the inferred possible contribution of dust to the observed optical properties was lower (0-30 %), as was the sulfate contribution (18-36 %). The latter, combined with the much higher OC content (52-64 %), indicates a possible mixture of higher levels of biomass burning smoke with dust and fewer continental haze particles. This, along with our retrieved aerosol optical and microphysical properties, is in agreement with Raman lidar and in situ observations reported from DABEX (Heese and Wiegner, 2008) and from SAMUM-2 in Cape Verde by Ansmann et al. (2011) and Tesche et al. (2011a) for mixtures of dust and smoke particles. Furthermore, in situ airborne aerosol sampling together with multi-wavelength Raman lidar measurements should be performed to further evaluate the procedure proposed in this study.
Fig. 1.
Fig. 1. Dust loading (in g m −2 ) over Europe in the period between 27 March and 3 April 2009, as estimated by the BSC-DREAM forecast model (12:00 UTC). The wind field pattern is also shown for the 3000 m height level.
Fig. 4.
Fig. 4. Forecast of the vertical profile of the dust concentration (in µg m −3 ) over Athens, Greece for 2 April 2009, at 18:00 UTC, using the BSC-DREAM model.
Fig. 5.
Fig. 5. Vertical profiles of the aerosol optical properties (extinction and backscatter coefficients, lidar ratio and Ångström backscatter- and extinction-related exponents), as well as of the water vapor to dry air mixing ratio (g kg −1 ) (with error bars), as retrieved by the NTUA Raman lidar over Athens on 2 April 2009 (17:40-20:40 UTC).
Fig. 6.
Fig. 6. Temporal evolution of the AOD at eight wavelengths over Athens for the period 27 March to 4 April 2009 according to the CIMEL sun-sky radiometer, MODIS at 550 nm (white squares) and the BSC-DREAM model at 550 nm (upper panel). Temporal evolution of the Ångström exponent (440/870 nm) for the same time period (lower panel).
Table 3.
Mean optical, microphysical and chemical properties of aerosols, as well as the relative humidity RH (%) at each layer, retrieved at four specific layers on 2 April 2009. | 2018-12-01T16:21:59.624Z | 2011-09-12T00:00:00.000 | {
"year": 2011,
"sha1": "64690adaac86d922ebb64c8ccb7c75eb10a9095f",
"oa_license": "CCBY",
"oa_url": "https://acp.copernicus.org/articles/12/4011/2012/acp-12-4011-2012.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "ba3cfb2bd7f47eb0ae70dce3f0d37c06baf7c3e8",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
261745335 | pes2o/s2orc | v3-fos-license | Protective effect of Melanogrammus aeglefinus skin oligopeptide in ultraviolet B-irradiated human keratinocytes
Protective effect of Melanogrammus aeglefinus skin oligopeptide in ultraviolet B-irradiated human keratinocytes
Introduction
Exposure to excessive sunlight is an important etiologic factor in the development of acute inflammation and is consequently linked to the progression of skin cancer (Hooda et al., 2023). Ultraviolet B (UVB) is the most damaging component of solar radiation that reaches the earth and is a major risk factor for the development of acute inflammation as well as nonmelanoma skin cancer in the epidermis (Lim et al., 2022). When the skin is exposed to UV light, a photooxidative reaction is initiated that impairs the antioxidant status and increases the cellular level of reactive oxygen species (ROS) (Kumar et al., 2022; Masaki et al., 2009). This process, commonly known as photoaging, impairs the self-protective capacity of the skin and consequently damages the cutaneous tissues (Berry et al., 2022).
Skin aging is a complex phenomenon that results from interactions between intrinsic and extrinsic factors. Intrinsic or chronological aging is an inevitable, genetically programmed process related to a natural decline in normal physiological processes, whereas extrinsic aging is caused by environmental factors (Löwenau et al., 2017; Uitto, 1997). Both intrinsic and extrinsic factors are superimposed in sun-exposed areas of skin, resulting in marked histological changes in the extracellular matrix (ECM); clinically, the changes are observed as photoaged skin with loss of rigidity and elasticity (Freitas-Rodríguez et al., 2017; Oh et al., 2020). Antioxidants have the capability to quench ROS and have been reported to repair skin damage, loss of elasticity, wrinkling and premature aging when supplemented in the diet (Fernandes et al., 2023; Liu et al., 2023). Thus, regular consumption of antioxidant-containing natural ingredients might be a useful strategy for protection against UV-mediated cutaneous damage.
Cod is among the most important commercial food species worldwide, and a large amount of byproduct is generated during its processing. Skin, which accounts for 8% of the byproduct, is a good source of collagen (Bruno Siewe et al., 2021; Chen et al., 2020). It has been reported that cod skin oligopeptide induces apoptosis and inhibits human gastric carcinoma cell growth (Wu et al., 2020), while a calcium-chelating peptide isolated from Pacific cod skin gelatin showed the ability to improve calcium absorption (Wu et al., 2017). In addition, active peptides derived from Pacific cod skin demonstrated antioxidant activity and angiotensin-I converting enzyme inhibitory ability (Ngo et al., 2011). Moreover, further research clarified that the underlying mechanism was mediated by regulation of the MAPK and transforming growth factor-β/Smad signaling pathways (Chen and Hou, 2016; Chen et al., 2016). However, there is little information about the protective effect of Melanogrammus aeglefinus skin oligopeptide (MSOP) in UVB-irradiated human keratinocytes, and the underlying mechanism remains unclear.
In this study, the method of preparing MSOP was optimized, the composition and abundance of peptides in MSOP were identified by MALDI-TOF/TOF, the function of the peptides was predicted by Discovery Studio 2019, the protective effect of MSOP was explored in UVB-irradiated human keratinocytes, and the underlying mechanism was investigated by proteomics. We propose that MSOP has potential applications against UVB-induced photoaging.
Optimizing the method used to prepare MSOP
Pepsin (Shanghai Yuanye Bioengineering Institute, Shanghai, China) was selected to hydrolyze Melanogrammus aeglefinus skin collagen (Shanghai Nitta Gelatin Co., Ltd, Shanghai, China) according to previous studies (Song et al., 2022). Single-factor experiments were performed to establish the ranges of the independent variables [temperature, enzyme/substrate (E/S) ratio and time]. Then a Box-Behnken response surface design was formulated to evaluate and optimize the effects of these three independent variables and their interactions on the degree of hydrolysis (DH). The DH was measured by the trichloroacetic acid soluble nitrogen method as previously described (Rutherfurd, 2010), and calculated by the following formula: DH (%) = [content of amino nitrogen in supernatant (mg/mL)/total content of nitrogen in sample (mg/mL)] × 100%.
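As a rough illustration of these two steps, the sketch below computes DH from the formula above and generates a coded three-factor Box-Behnken design decoded onto user-supplied factor levels. The level values and the single centre replicate are assumptions; the actual design settings come from the single-factor experiments.

```python
import itertools
import numpy as np

def degree_of_hydrolysis(amino_n_mg_ml, total_n_mg_ml):
    """DH (%) from the trichloroacetic-acid-soluble nitrogen method,
    following the formula given above."""
    return 100.0 * amino_n_mg_ml / total_n_mg_ml

def box_behnken_3(low, mid, high):
    """Coded 3-factor Box-Behnken design (12 edge midpoints plus one
    centre run) decoded onto low/centre/high levels of the three
    factors (temperature, E/S ratio, time). Published designs usually
    replicate the centre point several times; one replicate is shown."""
    runs = []
    for i, j in itertools.combinations(range(3), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            point = [0, 0, 0]
            point[i], point[j] = a, b
            runs.append(point)
    runs.append([0, 0, 0])                      # centre point
    coded = np.array(runs, dtype=float)
    low, mid, high = map(np.asarray, (low, mid, high))
    return mid + coded * (high - low) / 2.0     # decode to real levels
```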
Peptide sequence identification
The peptide composition of MSOP was measured by MALDI-TOF/TOF 5800 (AB Sciex, Framingham, MA, USA) at Jiyun Biotechnology Co., Ltd. (Shanghai, China) as previously described (Lu et al., 2021). MS spectra were collected over a molecular mass range of 500-3,000 Da, the peak detection signal-to-noise ratio was set to 15, and MS/MS spectra were analyzed to identify peptides using the Mascot Server (Matrix Science Inc., Boston, MA, USA) with the NCBI nr database.
Molecular docking with Discovery Studio 2019
The molecular structures of the peptides were built, and their energy was minimized, using Discovery Studio 2019. The X-ray structure of Keap1 (PDB ID: 5DAD) was retrieved from the Protein Data Bank (https://www.rcsb.org); the water molecules, native ligands and heteroatom molecules were removed in Discovery Studio 2019, and PyRx 0.9 software was used to minimize the energy of the protein. Then, the CDOCKER module in Discovery Studio 2019 was selected for molecular docking using the default settings, as previously described (Fan et al., 2022).
Cell culture and UVB irradiation
The HaCaT cell line was purchased from Beyotime Biotechnology (Shanghai, China) and cultured in Dulbecco's modified Eagle's medium (DMEM) (Hyclone, Logan, UT, USA) supplemented with 10% fetal bovine serum (FBS) (Hyclone) and 1% antibiotics (Hyclone) at 37 °C and 5% CO 2 in a humidified incubator. HaCaT cells were seeded and allowed to adhere for 24 h. The cells were treated with various concentrations of MSOP and subsequently exposed to UVB radiation at a dose of 50 mJ/cm 2 according to previous studies (Xiao et al., 2021). Cells without MSOP or UVB irradiation served as the control.
Cell viability assay
HaCaT cells (1 × 10 5 ) were seeded in 96-well culture plates and treated with various concentrations of MSOP for 24 h.Then, an MTT assay kit purchased from Nanjing Jiancheng Bioengineering Institute (Nanjing, Jiangsu, China) was used to measure the cell proliferation and cytotoxicity in response to MSOP treatments at different concentrations.
Antioxidant enzyme activities
MDA levels and SOD and GSH-Px activities were measured using colorimetric assay kits (Nanjing Jiancheng Bioengineering Institute) according to the manufacturer's protocols.
Apoptosis detection
Cells were washed with PBS, stained with an Annexin V-FITC/PI kit (Beyotime Biotechnology), and measured with a CytoFLEX flow cytometer (Beckman Coulter, Inc., Brea, CA, USA). In addition, the apoptotic rate was analyzed with ModFit LT 6.0 software (www.vsh.com).
Quantitative proteomic analysis
According to the previous method (Miles et al., 2023), the cell pellet was extracted with lysis buffer.The protein extract was centrifuged at 15,000 g at 4 °C for 20 min, the supernatant was treated with acetone to precipitate protein, and NH 4 HCO 3 solution (40 mM) was added to the pellets.The protein concentration was measured by a BCA protein assay kit (Nanjing Jiancheng Bioengineering Institute).Then, the proteins were reduced and alkylated before digestion by trypsin (Promega Corporation, Madison, WI, USA).The digestion was stopped by formic acid, and the samples were then vacuum-dried.
Digested peptides were separated by chromatography on an Easy-nLC1000 system (Thermo Fisher Scientific, Am Kalkberg, Osterode, Germany) as previously described (J. Guo et al., 2017). Separated peptides were detected in the Orbitrap Fusion mass spectrometer (Thermo Fisher Scientific) with a Michrom captive spray nanoelectrospray ionization (NSI) source. Spectra were scanned over the m/z range 300-4,000 at a resolution of 120,000.
The raw data were further examined by PEAKS software (version 7.5, Bioinformatics Solutions, Waterloo, Canada) as previously described, and a label-free approach was used for relative quantification with the Q module in PEAKS (Yang et al., 2023). Proteins were considered significant when p < 0.05 and the fold change was ≥2.
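A minimal pandas sketch of this significance filter is shown below; the column names are assumptions for illustration and do not correspond to actual PEAKS export fields.

```python
import numpy as np
import pandas as pd

def significant_proteins(df, p_col='p_value', fc_col='fold_change',
                         p_max=0.05, fc_min=2.0):
    """Apply the criterion used above (p < 0.05 and fold change >= 2,
    or <= 0.5 for downregulation) to a table with one row per protein."""
    sig_p = df[p_col] < p_max
    up = sig_p & (df[fc_col] >= fc_min)
    down = sig_p & (df[fc_col] <= 1.0 / fc_min)
    out = df.loc[up | down].copy()
    # Label the direction of regulation for each retained protein
    out['direction'] = np.where(up.loc[out.index], 'up', 'down')
    return out
```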
Bioinformatics analysis was performed according to a previous method (J. Liu et al., 2015). Functional annotation was performed with Blast2GO (version 6.0.1), and protein pathway analysis was performed with the Kyoto Encyclopedia of Genes and Genomes (http://www.genome.jp/kegg).
Quantitative real-time PCR (qRT-PCR) analysis
As previously described (Huang et al., 2021), total RNA was isolated and purified with the TransZol Up Plus RNA kit (Beijing TransGen Biotech Co., Ltd., Beijing, China), cDNA was reverse transcribed with TransScript® All-in-One First-Strand cDNA Synthesis SuperMix for qPCR (Beijing TransGen Biotech Co., Ltd.), and the primers used in this study are presented in Table S1. The mRNA level of the target gene was calculated by the 2 −ΔΔCT method and normalized to the β-actin level.
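For reference, the 2^−ΔΔCT calculation reduces to the short function below; the example Ct values are invented for illustration.

```python
def relative_expression(ct_target, ct_actin, ct_target_ctrl, ct_actin_ctrl):
    """Relative mRNA level by the 2**(-ddCT) method, normalized to
    beta-actin and to the control group, as described above."""
    d_ct_sample = ct_target - ct_actin              # normalize to beta-actin
    d_ct_control = ct_target_ctrl - ct_actin_ctrl
    dd_ct = d_ct_sample - d_ct_control              # normalize to control
    return 2.0 ** (-dd_ct)

# e.g. relative_expression(24.1, 18.0, 25.6, 18.1) -> ~2.6-fold increase
```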
Statistical analysis
All data are presented as the mean ± standard deviation (SD). Normally distributed data were assessed by ANOVA followed by Tukey's post hoc test (SPSS, version 19.0, Chicago, IL, USA), and data that did not meet the assumptions of ANOVA were analyzed by the Mann-Whitney test (MATLAB R2012a, Natick, MA, USA). Statistical significance was defined as p < 0.05.
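The test-selection logic described above can be sketched as follows. This is a simplified stand-in for the SPSS/MATLAB workflow: the Shapiro-Wilk normality screen and the Kruskal-Wallis fallback for more than two groups are our assumptions, and the Tukey post hoc step is omitted for brevity.

```python
from scipy import stats

def compare_groups(*groups, alpha=0.05):
    """Pick and run the overall test: one-way ANOVA when all groups
    pass a Shapiro-Wilk normality check (each group needs >= 3 values),
    otherwise Mann-Whitney for two groups or Kruskal-Wallis for more."""
    normal = all(stats.shapiro(g)[1] > alpha for g in groups)
    if normal:
        return 'ANOVA', stats.f_oneway(*groups)[1]
    if len(groups) == 2:
        return 'Mann-Whitney', stats.mannwhitneyu(*groups)[1]
    return 'Kruskal-Wallis', stats.kruskal(*groups)[1]
```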
Composition of MSOP
The composition of the peptides in MSOP was determined by MALDI-TOF/TOF. The results of the first-order mass spectrometry showed that the majority of the parent ions were below m/z 1,000 (Figure S2a). Three peptides were found to be dominant in the MSOP, with molecular weights of 506.2860 Da, 524.2478 Da and 524.2478 Da, respectively (Figure S2b-d). Further analysis by second-order mass spectrometry identified the amino acid sequences of these three peptides as VADML (Val-Ala-Asp-Met-Leu), IARF (Ile-Ala-Arg-Phe) and SSPSF (Ser-Ser-Pro-Ser-Phe).
Predicted interactions between three peptides and Keap1
In this study, a total of 44 target pharmacophores were predicted to interact with the three peptides. We compared the fit values of the peptides with those of the targets, and the targets with high fit values are listed in Table 1. The results showed that VADML, IARF and SSPSF interacted with Keap1 (4zy3 and 5dad), which plays a vital role in the Keap1/Nrf2-ARE pathway involved in oxidative stress regulation. Therefore, Keap1 was used as the receptor for subsequent molecular docking.
The interactions between the peptides and Keap1 were characterized by E CI , E VDW and the RMS gradient. As shown in Table 2, the −E CI values of VADML, IARF and SSPSF were 92.3, 73.6 and 63.0, respectively, while the corresponding −E VDW values were 10.1, 5.6 and 3.7. According to Figure 1a, seven amino acid residues of Keap1 interact with VADML. Among them, Arg B:233 formed attractive charges, Ser B:231 and Gln B:276 formed conventional hydrogen bonds, Trp B:232 , Ser B:239 and Pro B:235 formed carbon hydrogen bonds, and Tyr B:230 formed pi-alkyl interactions. Nine amino acid residues are involved in the interaction between IARF and Keap1 (Figure 1b). Among them, Asp B:278 and Asp B:279 formed attractive charges, Ser B:231 , Arg B:233 , Trp B:240 , Ser B:275 and Gln B:276 formed conventional hydrogen bonds, and Pro B:235 and Ser B:275 produced carbon hydrogen bonds. Seven amino acid residues are involved in the interaction between SSPSF and Keap1 (Figure 1c). Gln B:276 , Ser B:231 , Ser B:234 and Trp B:240 formed conventional hydrogen bonds, Trp B:232 formed carbon hydrogen bonds, and Trp B:230 and Phe B:282 formed pi-alkyl interactions.
Protective effects of MSOP against UVB-induced injury in HaCaT cells
We investigated the effect of MSOP on the proliferation of HaCaT cells after exposure to UVB. Cell viability was reduced to 57.37% by UVB irradiation in the absence of MSOP but was significantly restored with MSOP treatment, and the highest cellular survival rate was observed in the high-dosage MSOP-treated group (Figure 2a).
Antioxidant effects of MSOP in UVB-induced HaCaT cells
To investigate whether the radical scavenging activity of MSOP was mediated by antioxidant enzymes, the activities of antioxidant enzymes were examined in HaCaT cells after UVB exposure. SOD and GSH-Px activities were enhanced by MSOP treatment in a dose-dependent manner. Furthermore, in comparison with the control group, the level of MDA in the UVB-induced group was increased, but treatment with MSOP decreased the level of MDA (Figure 2b-d).
Protective effects of MSOP against UVB-induced apoptosis
The lower left quadrant represents viable and healthy cells, whose proportion was decreased by UVB treatment but restored after high-dosage MSOP treatment. The right quadrants represent the apoptotic cells, including early and late apoptotic cells, which decreased from 21.3% to 16.2% in MSOP-treated cells (Figure 3).
Analysis of differentially expressed proteins
Figure 4a shows the overlap of differentially expressed proteins (DEPs) (fold change ≥2 or ≤0.5) in the control, LD, MD and HD groups compared with the UVB group. The numbers of DEPs in the control, LD, MD and HD groups were 286, 204, 176 and 157, respectively. Among the 286 DEPs in the control comparison, 119 proteins were upregulated and 167 were downregulated. The numbers of DEPs upregulated in LD, MD and HD were 98, 104 and 79, respectively, while the corresponding numbers of downregulated DEPs were 106, 72 and 78.
In addition, 23 DEPs were identified in all pairwise comparisons (Figure 4b and Table S4).
Functional analysis of DEPs
The proteins with increased expression were categorized by biological process (BP), cellular component (CC) and molecular function (MF). In the comparison of UVB and HD, the proteins with increased expression in the BP category included posttranscriptional gene silencing by RNA, regulation of apoptosis involved in tissue homeostasis, and the reactive nitrogen species metabolic process. The proteins in the CC category were the RISC complex, dense body and peroxisomal matrix. The proteins in the MF category included RNA polymerase III regulatory region DNA binding, peroxynitrite reductase activity and transcription cofactor activity (Figure 4c). The proteins with decreased expression in the BP category included defense response, sensory perception of sound, exocytosis and the cellular response to osmotic stress. The proteins in the CC category were the proteasome activator complex and exocyst, while the proteins in the MF category included SNARE binding, protease binding and serine-type endopeptidase inhibitor activity (Figure 4d). In addition, the results from KEGG differential pathway analysis showed that the DEPs were involved in 9 metabolic pathways, mainly related to oxidative stress (Table S5).
Discussion
Biologically active peptides are inactive within the sequence of their parent protein and can be released by enzymatic hydrolysis either during gastrointestinal digestion or during food processing (Dhar et al., 2023; Ucak et al., 2021). In this study, MSOP was prepared via pepsin, the major components were identified to be VADML, IARF and SSPSF, and the potential antioxidant activity was predicted by Discovery Studio 2019. In addition, a UVB-induced HaCaT cell model was established to explore the anti-photoaging and antioxidant activity of MSOP in vitro; this cell model has been widely used in evaluating the anti-photoaging activity of Salvia plebeia R. Br ethanol extract, bioactive peptides from Skipjack Tuna cardiac arterial bulbs and flexible liposomes co-loaded with apigenin and doxycycline, as previously described (Guo et al., 2023; Kong et al., 2023; Liu et al., 2023). Discovery Studio is a powerful research platform for the life sciences that integrates a comprehensive suite of informatics, modeling and simulation solutions (Han et al., 2020). From the results of reverse docking, we found that all three peptides interacted with Keap1 (4zy3 and 5dad) (Table 1). Nuclear factor erythroid 2-like 2 (Nrf2) is a transcription factor that serves as a key regulator of the cellular response to oxidative stress. Under normal physiological conditions, Keap1 maintains low levels of Nrf2 by promoting its Cul3-Rbx1-mediated ubiquitination and subsequent proteasomal degradation. Under stressful conditions, ROS modify sensor cysteine residues within Keap1 and block the degradation of Nrf2. Newly synthesized Nrf2 then translocates to the nucleus, binds to antioxidant response elements, and increases the transcription of numerous genes that encode cellular defense and repair enzymes (Ong et al., 2023; Suzuki et al., 2023). This coordinated and rapid upregulation of multiple genes ensures a robust response to many types of cellular stress (Dai et al., 2023). The results of molecular docking showed that VADML, IARF and SSPSF interacted with Keap1 and showed potential Keap1 inhibitory activity (Figure 1). Thus, we propose the tentative inference that MSOP plays an antioxidative role by regulating the Keap1-Nrf2/ARE signaling pathway. Superoxide radicals are formed in all living cells exposed to oxygen by various biochemical systems in the cells. SOD catalyzes the dismutation of the superoxide anion (O 2 − ) into hydrogen peroxide, which protects organisms against the toxic effects of superoxide radicals (Khayatan et al., 2023). GSH-Px is located primarily in the cytosol and has a general specificity in detoxifying H 2 O 2 and converting lipid hydroperoxides to nontoxic alcohols (Brigelius-Flohé and Flohé, 2020). MDA can exacerbate the actions of superoxide ions by impairing endothelium-dependent relaxation and propagating lipid peroxidation by chain reactions in membranes (Del Rio et al., 2005). In our experiment, MSOP treatment dose-dependently increased the activities of SOD and GSH-Px and decreased the content of MDA (Figure 2), which activated the defensive mechanism against ROS attack in living HaCaT cells.
However, the underlying mechanism of MSOP activity in the UVB-irradiated HaCaT cellular model is extremely complex, and it remains unclear which proteins are involved in the beneficial effects of MSOP. Therefore, quantitative proteome technology was used in this study. Research has shown that the Keap1/Nrf2/ARE signaling pathway plays a significant role in protecting cells from oxidative stress (Hassanein et al., 2020; Shi et al., 2021; Yuan et al., 2020). Under quiescent conditions, the transcription factor Nrf2 interacts with the actin-anchored protein Keap1, which is largely localized in the cytoplasm. This quenching interaction maintains low basal expression of Nrf2-regulated genes. However, upon UVB-induced photodamage, Nrf2 is released from Keap1, escapes proteasomal degradation, translocates to the nucleus, and transactivates the expression of several dozen cytoprotective genes that enhance cell survival (Figure 6). The present study showed that MSOP caused significant upregulation of Nrf2 and downregulation of Keap1 in HaCaT cells, which may contribute to the beneficial effects of MSOP in the UVB-induced HaCaT cellular model.
In conclusion, we optimized the preparation method of MSOP, achieving an 80.24% degree of hydrolysis, and identified three peptides (VADML, IARF and SSPSF), which were highly abundant and predicted to show potential antioxidant activity due to their interactions with Keap1 via Discovery Studio 2019 simulation. In addition, the antioxidant activity of MSOP was investigated in vitro via a UVB-induced HaCaT cell injury model, and the underlying mechanism was explored via quantitative proteomics, especially regarding the Keap1/Nrf2/ARE pathway. MSOP might offer an alternative for protecting skin against UVB exposure.

Figure 3.

Figure 3. The anti-apoptosis effects of MSOP measured by flow cytometric analysis. (a) Control group. (b) UVB group. (d) High-dosage UVB group.
Figure 4.
Figure 4. Quantitative proteome analysis of HaCaT cells in response to UVB and MSOP treatments. (a) Venn diagram of differentially expressed proteins (DEPs) (fold change ≥2 or ≤0.5) in response to UVB and MSOP treatments. (b) Numbers of upregulated and downregulated proteins in HaCaT cells. Upregulated (c) and downregulated (d) DEPs categorized by biological process, molecular function, and cellular component.
Figure 6.
Figure 6. General scheme of the beneficial effects of MSOP through regulation of the Keap1/Nrf2/ARE signaling pathway.
Table 2. The molecular docking results of the three peptides with Keap1
| 2023-07-12T06:40:48.401Z | 2023-06-30T00:00:00.000 | {
"year": 2023,
"sha1": "b685ef661eea6457d19fff342d8a902718e84c16",
"oa_license": null,
"oa_url": "http://www.isnff-jfb.com/index.php/JFB/article/download/335/533",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "79d9cd82c857fd972c927b989ca5cd2e9fb62d3c",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": []
} |
67766416 | pes2o/s2orc | v3-fos-license | A singularly perturbed convection-diffusion problem posed on an annulus
A finite difference method is constructed for a singularly perturbed convection-diffusion problem posed on an annulus. The method combines polar coordinates, an upwind finite difference operator and a piecewise-uniform Shishkin mesh in the radial direction. Compatibility constraints are imposed on the data in the vicinity of certain characteristic points to ensure that interior layers do not form within the annulus. A theoretical parameter-uniform error bound is established, and numerical results are presented to illustrate the performance of the numerical method applied to two particular test problems.
Introduction
The construction of globally pointwise accurate numerical approximations to singularly perturbed elliptic problems (of the form Lu = f on Ω̄) posed on non-rectangular domains Ω̄ is a research area that requires development. We shall restrict our focus to problems involving inverse-monotone differential operators L. That is, for all functions z in the domain of the operator L, if Lz ≥ 0 at all points in the closed domain Ω̄ then z ≥ 0 at all points in Ω̄. This class of problems includes convection-diffusion problems of the form

−ε∆u + a · ∇u + bu = f(x, y), b(x, y) ≥ 0, (x, y) ∈ Ω; u = g, (x, y) ∈ ∂Ω.
In any discretization of singularly perturbed convection-diffusion problems, we seek to preserve this fundamental property of the differential operator. In other words, we require that the discretizations of both the domain and the differential operator combine so that the system matrix (denoted here by L N) is a monotone matrix. That is, for all mesh functions Z, if L N Z ≥ 0 at all mesh points then Z ≥ 0 at all mesh points. It is well established that classical finite element discretizations of singularly perturbed convection-diffusion problems lose inverse-monotonicity. There is an extensive literature on alternative finite element formulations [1] that attempt to minimize the adverse effects of losing this property of inverse-monotonicity in the discretization process. Given these stability difficulties with the finite element framework, we pursue our quest for discretizations that preserve inverse-monotonicity within a finite difference formulation.
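As a diagnostic aid of the kind implied here (not part of the paper's method), the following sketch tests the standard sufficient conditions under which a finite difference system matrix is an M-matrix, and hence the discrete operator is inverse-monotone.

```python
import numpy as np

def looks_like_m_matrix(A, tol=1e-12):
    """Sufficient (not necessary) check that a system matrix A is an
    M-matrix: positive diagonal, non-positive off-diagonal entries and
    diagonal dominance, strict in at least one row. For a complete
    guarantee of nonsingularity one also needs irreducibility, which is
    not tested here."""
    A = np.asarray(A, dtype=float)
    d = np.diag(A)
    off = A - np.diag(d)
    sign_pattern = np.all(d > 0) and np.all(off <= tol)
    row_sums = A.sum(axis=1)
    dominance = np.all(row_sums >= -tol) and np.any(row_sums > tol)
    return bool(sign_pattern and dominance)
```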
Rectangular domains are ideally suited to a computational approach, as a tensor product of one-dimensional uniform or non-uniform meshes is a simple and obvious discretization of the domain. For some non-rectangular domains, coordinate transformations exist so that the non-rectangular domain can be mapped onto a rectangular domain. However, in general, the Laplacian operator ∆u in one coordinate system is mapped to a more general elliptic operator (au xx + bu xy + cu yy ) in an alternative coordinate system, where the general elliptic operator contains a mixed second order derivative [4]. Due to the presence of different scales in the solutions of singularly perturbed problems, it is natural to use highly anisotropic meshes, where aspect ratios of the form h x /h y = O(ε p ), 1 ≥ p > 0, are unavoidable in some subregions of the domain. However, we know of no discretization of a mixed second order partial derivative that preserves inverse-monotonicity and does not place a restriction on the aspect ratio of the form C 1 ≤ h x /h y ≤ C 2 , with constants C 1 , C 2 independent of ε [14]. Due to this barrier to preserving stability, we look at particular non-rectangular domains for which a coordinate transformation (to a rectangular domain) exists that does not generate a mixed second order derivative term.
Parameter-uniform numerical methods [3] are numerical methods designed to be globally accurate in the maximum norm and to satisfy an asymptotic error bound on the numerical solutions (which are, in this paper, the bilinear interpolants Ū^N) of the form ‖Ū^N − u‖ ≤ C N^{−p}, where the error constant C and the order of convergence p > 0 are independent of the singular perturbation parameter ε and the discretization parameter N. Parameter-uniform numerical methods can normally be categorized as either a fitted operator method or a fitted mesh method. In the fitted operator (sometimes called exponential fitting) approach, a uniform or quasi-uniform mesh Ω̄^N is used and the emphasis is on the design of a non-classical approximation L^N_* to the differential operator L. These fitted finite difference operators can be generated by constructing a nodally exact difference operator L^N_F for a constant coefficient problem and extending it to the corresponding variable coefficient problems (e.g. Il'in's scheme [11]), or by enriching the solution space with non-polynomial basis functions (e.g. the Tailored Finite Point Method [5] or using correctors [10]). However, Shishkin [18] established that, for a class of singularly perturbed problems whose solutions contain a characteristic boundary layer, no fitted operator method exists on a quasi-uniform mesh. This result led many researchers to the construction of fitted mesh methods, where classical finite difference operators L^N (such as simple upwinding) are combined with specially constructed layer-adapted meshes (such as the Shishkin mesh [19,3] or the Bakhvalov mesh [2]). In general, we are interested in developing numerical methods which can be adapted to solving problems with characteristic boundary or interior layers. Hence, our focus will be on the construction of a suitable fitted mesh. In passing, we note that the option of combining a fitted operator (in the neighbourhood of a particular singularity) and a fitted mesh remains open to further investigation.
In [7,6] we examined the case of a convection-diffusion problem posed within a circular domain. In the current paper, motivated by the problem proposed in [9], we consider a problem posed on an annular domain. In the numerical experiments in [6] it was observed that the imposition of certain compatibility constraints on the data (which were required to establish a theoretical error bound in the associated numerical analysis [7]) appeared unnecessary in practice, as the numerical experiments indicated that the numerical method appeared to be parameter-uniform even when these compatibility constraints on the data were not imposed on particular test problems. However, in the case of an annular region, the character of the data at the interior characteristic points is crucial and intrinsic to the problem. In general, interior parabolic layers will emerge from the interior characteristic points, unless a sufficient level of compatibility constraints are placed on the data to prevent such layers occurring. Some preliminary numerical results illustrating parabolic interior layers appearing in the solution are given in [8]. In the current paper, we identify sufficient compatibility constraints on the data so that such interior layers do not appear in the solution and, in addition, so that a theoretical error bound can be established for a class of singularly perturbed problems posed on an annulus. The construction, and subsequent numerical analysis, of a parameter-uniform numerical method for a singularly perturbed convection-diffusion problem (posed on an annulus), where the solution exhibits an interior parabolic layer, remains an open problem.
In §2 we define the continuous problem and identify constraints on the data (2.9) to prevent interior layers appearing. The solution is decomposed into regular and boundary layer components. Pointwise bounds on the derivatives of these components of the solution are established. In §3 the discrete problem is specified and the associated numerical analysis is given. Some numerical results are presented in the final section.
Notation: Throughout this paper, C denotes a generic constant that is independent of the singular perturbation parameter ε and of all discretization parameters. Throughout the paper, we will always use the pointwise maximum norm, which we denote by ‖·‖; sometimes we attach a subscript, ‖·‖_D, when we wish to emphasize the domain D over which the maximum is being taken. Dependent variables specified in the computational domain Ω will be denoted simply by g, and their counterparts in the physical domain Ω̃ will be identified by g̃.
Continuous problem
Consider the singularly perturbed elliptic problem: find ũ satisfying (2.1). Assume that the data ã, f̃, g̃ are sufficiently smooth so that ũ ∈ C^{3,α}(Ω̃). The differential operator L̃ satisfies a minimum principle [17, pg. 61]. As the problem is linear, there is no loss of generality in assuming homogeneous boundary conditions on the outer circle. Compatibility constraints will be imposed below on the data in the vicinity of the characteristic points (0, ±R_1) and (0, ±R_2).
For problem (2.1), boundary layers will typically form in the vicinity of the inner outflow boundary Γ_1 := {(x, y) | −R_1 < x < 0, x^2 + y^2 = R_1^2} and in the vicinity of the outer outflow boundary Γ_2. Moreover, when f ≡ 0, if the inner boundary condition is such that g̃(0, ±R_1) ≠ 0, then an internal layer will appear in a neighbourhood of the wake region S defined in (2.2). We also define the inflow boundary (which is a disconnected set) as the union of two sets, Γ_3 and Γ_4. By using the stretched variables x/ε, y/ε and the minimum principle, we can deduce [13,16] that the solution ũ of problem (2.1) satisfies the crude pointwise bounds (2.3) on its derivatives. We next define the regular component, which is potentially discontinuous across the two half-lines defined in (2.2). Define the reduced operator (associated with the operator L̃) by formally setting ε = 0. The reduced solution ṽ_0 is characterized by two influences: the upwind data on the outer inflow boundary Γ_3 and the data on the inner inflow boundary Γ_4 in the wake of the inner circle. We begin with the definition of the upwind regular component ṽ^-, given by the expansion (2.5), where the subcomponents are the solutions of associated first and second order problems. Observe that the subcomponents ṽ_0, ṽ_1 are solutions of first order problems and, hence, the level of regularity of these components is determined by certain compatibility conditions being imposed at the points (0, ±R_2). As in [12], these compatibility conditions are of a standard form, with n sufficiently large so that ṽ_2^- ∈ C^3(Ω̃). Next we define the downwind regular component (2.6) over the wake region S, where its three subcomponents satisfy analogous problems. Excluding the region S, we define the regular component ṽ by (2.7). In general, the main component of ṽ, which is the reduced solution ṽ_0, will be discontinuous along S, as ṽ_0^- and ṽ_0^+ need not match there. Hence, in order to have a continuous reduced solution, we would need to impose a further compatibility condition on the data for ũ along S. The arguments in [12] could be applied to both ṽ_0^- and ṽ_0^+ so that they are both sufficiently regular and satisfy certain additional constraints (along the horizontal lines y = ±R_1) to ensure that ṽ_0 ∈ C^3(Ω̃). However, in order to establish pointwise bounds on the boundary layers present, we will also need to impose more severe constraints on the data in neighbourhoods of these characteristic points. To complete the numerical analysis in this paper, we assume the following compatibility constraints on the data.
Assumption. Assume that the data satisfy compatibility constraints of the form (2.9). This assumption prevents interior parabolic layers emerging downwind of the characteristic points (0, ±R_1) and also implies that the reduced solution ṽ_0 is smooth throughout the region. Moreover, as ṽ_0 and ṽ_1 both satisfy first order problems, they are both identically zero in the vicinity of the characteristic points. We associate critical angles θ_*, θ^* with assumption (2.9). Two boundary layer components w^- and w^+ are defined as the solutions of associated boundary value problems and, by virtue of assumption (2.9), the boundary layer component w̃, defined in terms of w^- and w^+, is well defined and is a sufficiently smooth function throughout the domain.
Polar coordinates are a natural coordinate system to employ for this problem, where x = r cos θ, y = r sin θ. In these polar coordinates, the continuous problem (2.1) is transformed into the problem: find u ∈ C^0(Ω̄) ∩ C^3(Ω), Ω := {(r, θ) | R_1 < r < R_2, 0 ≤ θ < 2π}, which is periodic in θ and satisfies the transformed differential equation. In our analysis of the behaviour of the layer component w, we will make use of smooth cut-off functions ψ_*(θ), ψ^*(θ), which are constructed in the Appendix.
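For reference, the standard polar-coordinate identities underlying this transformation can be summarized as follows (our own summary of textbook formulas, with a constant x-directed convection coefficient a):

```latex
% Laplacian and x-derivative in polar coordinates (standard identities):
\Delta u = u_{rr} + \tfrac{1}{r}\,u_r + \tfrac{1}{r^{2}}\,u_{\theta\theta},
\qquad
u_x = \cos\theta\, u_r - \tfrac{\sin\theta}{r}\, u_\theta .
% Hence -\varepsilon\Delta u + a\,u_x transforms with no mixed derivative
% u_{r\theta}, which is what keeps inverse-monotone upwind schemes available.
```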
The regular component v and the boundary layer component w satisfy associated pointwise bounds; for all i, j with 1 ≤ i + j ≤ 3, the derivatives of the boundary layer component w satisfy bounds of the form (2.11), where the constant C is independent of ε. Moreover, there exists some µ > 1 such that θ^* < µθ^* < π/2 and w(r, θ) ≡ 0 in fixed neighbourhoods of the characteristic points. Proof. The bounds on the regular component v are established using the decompositions in (2.5, 2.6, 2.7) and the argument in [7]. The bulk of the proof involves establishing the pointwise bounds on the boundary layer function w; that proof is available in the appendix.
Discrete problem and associated Error Analysis
We discretize this problem using simple upwinding on a piecewise-uniform Shishkin mesh [15,19] in the radial direction, with M mesh elements uniformly distributed in the angular direction and N mesh elements used in the radial direction, producing the mesh (3.1). The numerical method on the mesh (3.1) is of the following form: find a periodic mesh function U(r_i, θ_j) satisfying the discrete equation L^N_{r,θ} U = f at the internal mesh points, together with prescribed values at the boundary mesh points. This numerical method differs from the numerical method examined in [7], as a discretized version of the differential equation is used at all the internal mesh points. For the internal mesh points, where i = 1, 2, . . . , N − 1, j = 0, 1, 2, . . . , M − 1, the associated finite difference operator L^N_{r,θ} is defined for any mesh function Z using the standard difference operators; the finite difference operators D_r^+, D_r^-, D_r^± and δ_r^2 are defined in the usual way, and values are imposed at the boundary mesh points. For periodic mesh functions, with Z(r_i, θ_j) = Z(r_i, 2π + θ_j), R_1 ≤ r_i ≤ R_2, we have a discrete comparison principle. Proof. By checking the sign pattern of the elements in the system matrix, one sees that the system matrix is an M-matrix, which guarantees that the inverse matrix is a non-negative matrix.
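As an illustration of the radial mesh and the upwind differences just described, here is a minimal sketch (our own, not the authors' code; the transition-point constant is simplified relative to (3.1)):

```python
import numpy as np

def radial_shishkin_mesh(R1, R2, N, eps, sigma_const=2.0):
    """Piecewise-uniform Shishkin mesh on [R1, R2]: N/4 fine elements in a
    layer of width sigma at each of r = R1 and r = R2, and N/2 coarse
    elements in between (N divisible by 4 is assumed)."""
    sigma = min((R2 - R1) / 4.0, sigma_const * eps * np.log(N))
    fine_inner = np.linspace(R1, R1 + sigma, N // 4 + 1)
    coarse = np.linspace(R1 + sigma, R2 - sigma, N // 2 + 1)
    fine_outer = np.linspace(R2 - sigma, R2, N // 4 + 1)
    return np.unique(np.concatenate([fine_inner, coarse, fine_outer]))

def upwind_derivative(U, r, wind):
    """Simple upwinding: backward difference D_r^- where the convection
    coefficient `wind` is nonnegative, forward difference D_r^+ otherwise."""
    D = np.zeros_like(U)
    for i in range(1, len(r) - 1):
        if wind[i] >= 0:
            D[i] = (U[i] - U[i - 1]) / (r[i] - r[i - 1])
        else:
            D[i] = (U[i + 1] - U[i]) / (r[i + 1] - r[i])
    return D

r = radial_shishkin_mesh(R1=1.0, R2=2.0, N=64, eps=1e-4)  # 65 mesh points
```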
The discrete solution U can be decomposed along the same lines as the continuous solution, and the error in each component is then separately bounded. To this end, we define the discrete regular component V as the solution of a discrete analogue of the problem satisfied by v, and the two discrete layer components W^-, W^+ as the solutions of corresponding discrete problems, where h_i := r_i − r_{i−1}, γ_* < α cos(θ_*) and γ^* < α cos(θ^*). Moreover, there exists some µ^* > 1 such that µθ^* ≤ µ^*θ^* < π/2. Proof. (i) Let us first establish the bound on W^+. Consider a discrete barrier function built from the cut-off function ψ^* defined in (5.3). For any radial mesh, the associated difference inequalities hold and, from the definition (5.3) of the cut-off function ψ^*, we have that sin θ_j D_θ^± ψ^* < 0 and δ_θ^2 ψ^*(θ_j) = (ψ^*)''(θ_j) + C M^{−2}.
For all θ ∈ [2π − µθ^*, 2π) ∪ [0, µθ^*] and ε sufficiently small, using the strict inequality a > α, the required difference inequality holds. (ii) We next establish the bound on W^- within the region where cos θ < 0 and sin θ ≥ 0. As for the continuous boundary layer function w^-, consider a discrete barrier function involving a parameter γ^* to be specified later. The function Ψ^*(θ_j) is constructed as follows: let µ^* > µ, identify the angles corresponding to the mesh points, and assume that M is sufficiently large so that 8πM^{−1} < (µ^* − µ)θ^*. Note that L^N_{r,θ} Ψ^*(B_M) ≥ 0 and, hence, for any radial mesh the corresponding inequalities hold. For all θ ∈ (π − µ^*θ^*, π), assuming ε sufficiently small and using the strict inequality a > α, we obtain the desired bound. This leads to the main parameter-uniform error estimate, where u is the solution of the continuous problem and Ū is the bilinear interpolant of the discrete solution U, generated by the finite difference operator on the piecewise-uniform mesh.
Proof. Let E := U − u denote the pointwise error, and consider the truncation error at all the interior points. At the transition points r_i = R_1 + σ_*, R_2 − σ^* and for θ ∈ (µθ^*, π − µθ^*) ∪ (π + µθ^*, 2π − µθ^*), we have (αr_i cos θ_j − ε) < 0 if cos θ < 0, and (αr_i cos θ_j − ε) > 0 if cos θ > 0, for ε sufficiently small. Hence, at each interior mesh point (r_i, θ_j), we have standard truncation error bounds. We consider only the case where ε is sufficiently small relative to the mesh parameters, as the alternative case is easily dealt with by using a classical stability and consistency argument across the entire mesh.
For the regular component, observe that we have standard truncation error bounds. Note that, since D_θ^± cos θ_j = − sin θ_j up to a small remainder, we can use a suitable discrete barrier function to bound the error in the regular component.
Note that w^- = W^- ≡ 0 for cos θ ≥ 0, and so we consider the error w^- − W^- in approximating the layer component only in the region where cos θ < 0. For r_i ≥ R_1 + σ_*, we use the pointwise bounds (2.11a), (3.6), on the continuous and discrete layer functions, and the argument in [15, pg. 72], to deduce the required bound. Within the fine mesh, we have the corresponding truncation error bound and, hence, to complete the argument, we use a suitable barrier function. Finally, we consider the error W^+ − w^+. Away from the outer boundary layer, and where ε is sufficiently small, we observe that the factor e^{−R_2 α cos(θ^*)(1−sin(θ^*))/(2ε)} is negligibly small, and we then proceed as for the other boundary layer function. The global error bound follows as in [7, Theorem 4].
Numerical Results
In this final section, we examine the performance of the numerical method applied to two sample problems. In both cases, the exact solution is not known and we estimate both the errors and the rates of convergence using the double-mesh method [3]. We compute the maximum pointwise global two-mesh differences D^N_ε := max |Ū^N − Ū^{2N}| and, from these values, the parameter-uniform maximum global pointwise two-mesh differences D^N := max_ε D^N_ε, where Ū^N is the bilinear interpolant of U^N, the numerical solution computed on the mesh Ω^N. Approximations p^N_ε := log_2(D^N_ε / D^{2N}_ε) to the global order of convergence and, for any particular value of N, approximations p^N to the parameter-uniform order of global convergence are defined analogously. Example 1. In order to satisfy the main assumption (2.9), we introduce a piecewise quadratic function Q. Then we consider problem (2.1), where f(x, y) = (1 + x^2)(Q(y))^2, δ = 0.2. (4.1b) For this particular example, the Shishkin transition points are taken to be σ_* = min{0.75, 2R_1 ε ln N / (δ(2R_1 − δ))}, with σ^* defined analogously. A plot of a typical computed solution and the associated approximate error are given in Figure 1 and Figure 2. Boundary layers are visible at all parts of the outflow boundary. The global orders of convergence, given in Table 1, indicate that the method is parameter-uniform for this problem.
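A sketch of the double-mesh computation follows; the two-dimensional solver `solve` and the bilinear interpolation routine `interpolate` are assumed rather than reproduced:

```python
import numpy as np

def two_mesh_orders(solve, interpolate, N_values, eps):
    """Double-mesh estimates: D^N = max |Ubar^N - Ubar^{2N}| over common
    evaluation points, and order p^N = log2(D^N / D^{2N}).
    `solve(N, eps)` returns (mesh, U); `interpolate(mesh, U, pts)`
    evaluates the bilinear interpolant Ubar at the points `pts`."""
    D = {}
    for N in N_values:
        mesh_N, U_N = solve(N, eps)
        mesh_2N, U_2N = solve(2 * N, eps)
        pts = mesh_2N  # evaluation points; both interpolants are global
        diff = interpolate(mesh_N, U_N, pts) - interpolate(mesh_2N, U_2N, pts)
        D[N] = float(np.max(np.abs(diff)))
    return {N: np.log2(D[N] / D[2 * N]) for N in N_values if 2 * N in D}

# Hypothetical usage: orders = two_mesh_orders(solve, interpolate, [32, 64], eps=1e-6)
```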
The construction of a Shishkin mesh (3.1) is motivated by simplicity and by the objective of being parameter-uniform for a class of problems of the form (2.1). From the pointwise upper bound (2.11a) on the layer component, we see that the widths of the boundary layers vary with the angle θ; the layer is thinnest when θ = 0. The mesh (3.1) is designed to encompass all angles where the boundary layer is expected to be non-zero, and hence the mesh is linked to the widest angles θ_* and θ^* of the relevant boundary layers. In the next example we construct a test problem which only has an outer boundary layer, with the maximum amplitude occurring at θ = 0. Moreover, the fitted mesh located around the inner boundary is not required for such a problem. Nevertheless, we have not optimized the mesh for this particular problem, as we are interested in the performance of the numerical method for a class of problems.
A plot of the approximate error in Figure 4 demonstrates that the largest error occurs at the outflow. The global orders of convergence presented in both Tables 1 and 2 indicate that the method is parameter-uniform for these problems.

Appendix

We confine the discussion to the region where sin θ ≥ 0; by symmetry, an analogous analysis can be performed in the region where sin θ < 0. In addition to the above properties, the cut-off function will be constructed so that it rapidly changes from the value of 0 to 1 over the subinterval [π − µθ^*, π − θ^*]. To construct this function, we initially define an associated function v as the solution of a singularly perturbed boundary value problem, where C^* is a positive constant, which is constrained by an upper bound below. In passing, we note that this function v has a boundary layer to the left of θ = π − µ_1 θ^*. By using the maximum principle, we see that 0 < v ≤ 1 and v' > 0.
This construction is then used, via a suitable barrier function, to bound the boundary layer function w^+.
(iii) From the crude bounds (2.3) on the derivatives, we can establish the bounds |∂^{i+j} w / (∂r^i ∂θ^j)| ≤ C ε^{−(i+j)}.
As in [7], one can check the corresponding estimates. Repeating the argument in (i), we can then establish that |z^-| ≤ C B^-(r, θ), where B^-(r, θ) is defined in (5.2). Using the fact that |w^-(R_1, θ)| ≤ C|cos θ|, the improved bounds on the derivatives in the angular direction for w^- follow. An analogous argument is used for w^+.
Given the construction of the cut-off functions ψ_*(θ) and ψ^*(θ), one can check that w ≡ 0 in fixed neighbourhoods of the characteristic points. | 2019-06-07T22:23:48.268Z | 2019-11-01T00:00:00.000 | {
"year": 2019,
"sha1": "aeaa8569c6b3204241bb429649ac28dccdd7e4b5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c6a7b3972e056e6af02bc53ec5d09cfd71cd8e37",
"s2fieldsofstudy": [
"Mathematics",
"Engineering"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science",
"Physics"
]
} |
233560202 | pes2o/s2orc | v3-fos-license | Causality Between Higher Education With Economic Growth In Indonesia
The aim of this study is to investigate the causality between higher education and economic growth in Indonesia over 1980-2017. The econometric approach uses a Vector Auto-Regressive (VAR) model and the Granger causality test. The empirical results show evidence of unidirectional causality running from higher education to economic growth at lag 1 at the 10% significance level. These results indicate that education, as a form of investment in human resources, affects the economy. The study also shows that economic growth does not affect higher education. This unidirectional causality can be explained by the fact that the economic growth that occurs does not directly increase the overall income of the community, meaning that the positive impact of growth is enjoyed largely by a group of high-income people.
INTRODUCTION
The achievement of high economic growth is the goal of every country, and these efforts require resource support. One of the resources that affect economic growth is human resources (Apergis & Poufinas, 2020). The role of human capital in increasing economic growth basically comes from the assumptions of the Solow growth model, which incorporates technological advances to optimize labor. Growth theory in this tradition explains the role of accumulated knowledge and technological assets as sources of growth that come from domestic strength (endogenous factors) (Salman et al., 2019). The endogenous growth model and the Solow growth model enable each country to optimize internal (endogenous) strengths, in the form of the ability to master science combined with natural resources, savings rates, population growth, and technological advances, as sources of growth.
Human capital, with its level of knowledge and skills, is one of the sources of economic growth and of the competitiveness of a country. The concept of human capital is defined as the knowledge and skills, ideas, information, and health conditions possessed by individuals. The important role of human capital as one of the drivers of sustainable development can be seen from its long-term contribution to the productivity of individuals or workers (Truong et al., 2021). High worker productivity, in turn, supports high economic growth.
Analysis of the contribution of human capital to economic growth can be viewed from a micro and macroeconomic perspective. The microeconomic point of view sees the role of human capital as a part of production input in addition to other inputs such as physical capital, labor, and technology which are manifested in the form of quality Human Resources (Kruss et al., 2015). The production process, which uses quality labor input both in knowledge and skills, will have an impact on competence, mastery of labor technology, and the emergence of new innovations so that in the end it can encourage efficiency and productivity (Marginson, 2018).
Indonesia, as a developing country, still faces challenges in the quality of its human resources. Indonesia's Human Development Index (HDI) in 2017 reached 70.81; based on this achievement, Indonesia ranks 116th out of 189 countries in the world. Furthermore, based on the Global Competitiveness Index (GCI) 4.0, Indonesia's competitiveness is ranked 36th out of 137 countries. Indonesia's HDI and competitiveness have improved, but the country still scores low on innovation, technology adaptation, and labor market efficiency, which are closely related to achievement in the education sector.
One example is the research by Solaki (2013), which related the growth of real GDP per capita in Greece to changes in primary education, secondary education, higher education, and expenditures in the education sector. Pelinescu (2014) conducted research in European countries, and Wang et al. (2016) studied 55 countries from 5 continents during the period 1960-2009. The conclusion is that human capital, proxied by education and health indicators, has a significant positive effect on economic growth (GDP growth).
A case study of the relationship between human capital and economic growth across regions in Indonesia was carried out by Saepudin (2013). The analysis states that growth in human capital, proxied by the number of high school and university graduates, has a positive and significant effect on economic growth across regions in Indonesia, as does growth in government spending on education. The study also states that the difference in economic growth between developed and developing countries is 80% attributable to the influence of physical capital and human capital, while the remaining 20% is due to other factors.
Another study on the relationship between human capital and economic growth, in terms of patterns of mutual influence (cointegration) and causality between variables, was conducted by Mariana (2015), who analyzed the causal relationship between education and economic growth. The education variable was proxied by the number of students enrolled in higher education and by expenditure on education, while the economic growth variable was proxied by Gross Domestic Product (GDP) per capita in Romania for the 1980-2013 period. Mariana (2015) concluded that education has a positive effect on economic growth and that the two have a long-term (cointegrating) relationship.
Conditions in Indonesia related to the imbalance between the development process and efforts to improve the quality of human capital, especially the output of higher education, motivate research on the reciprocal relationship (causality) between economic growth and higher education as an aspect of human capital in Indonesia. The focus of this research is on higher education because higher education has a major role in providing educated personnel, accumulating technology-based knowledge capital, and generating innovation as a stimulus for economic growth and national development (as an agent of economic development) in an effort to increase the nation's competitiveness.
METHODOLOGY
This study aims to examine the long-term relationship and the causality between higher education and economic growth in Indonesia. The first step in this research is to perform a unit root test, which establishes whether the variables used are stationary. The second step is to perform a cointegration test: if the two variables are integrated of the same order, a long-term equilibrium between the observed variables may exist.
The procedure to determine whether the data are stationary is to compare the ADF test statistic with the critical value of the MacKinnon distribution. If the absolute value of the ADF statistic is greater than the critical value, the observed data are stationary; conversely, if the absolute value of the ADF statistic is smaller than the critical value, the data are not stationary.
In the ADF test, if the conclusion is that the data are not stationary, steps are needed to make them stationary through differencing. The stationarity test applied to differenced data is called the degree-of-integration test, and it again uses the Augmented Dickey-Fuller procedure. As with the unit root test at the level, the degree at which a series becomes stationary is determined by comparing the ADF statistic with the MacKinnon critical value. If the absolute value of the ADF statistic is greater than the critical value at the first difference, the data are said to be stationary at order one; if it is smaller, the test is continued at higher orders of differencing until stationarity is reached.
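The following sketch illustrates this procedure with Python's statsmodels (the input file and column names are placeholders, not the study's actual data):

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def adf_report(series, name):
    """Run the ADF test at the level and at the first difference; a series
    is treated as stationary when the MacKinnon p-value is below 0.05."""
    cases = [("level I(0)", series.dropna()),
             ("first difference I(1)", series.diff().dropna())]
    for label, s in cases:
        stat, pvalue, _, _, crit, _ = adfuller(s, autolag="AIC")
        verdict = "stationary" if pvalue < 0.05 else "not stationary"
        print(f"{name} at {label}: ADF = {stat:.3f}, p = {pvalue:.3f} -> {verdict}")

df = pd.read_csv("indonesia_1980_2017.csv")  # placeholder file: columns LnHE, GE
adf_report(df["LnHE"], "LnHE")
adf_report(df["GE"], "GE")
```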
This study uses a quantitative approach, which focuses on testing hypotheses through measured data and calculations, accompanied by an explanation of the results. It examines the causality between higher education and economic growth using time series Vector Auto-Regressive (VAR) analysis for the period 1980-2017 and the Granger causality test. The VAR model is built with minimal reliance on prior theory so as to capture economic phenomena directly; it is therefore an atheoretical time series model.
The most important issue in VAR estimation is determining the lag length of the VAR system. The optimal lag length is needed to capture the effect of each variable on the other variables in the system. It can be determined using several criteria, such as the Akaike Information Criterion (AIC), the Schwarz Information Criterion (SIC), the Hannan-Quinn criterion (HQ), the Likelihood Ratio (LR), and the Final Prediction Error (FPE). The optimal lag length is the one at which these criteria attain their smallest values.
The cointegration test is the next stage after the unit root and degree-of-integration tests. It tests for, and estimates, the long-term relationship between the economic variables studied. The cointegration test is done by first estimating a long-run equation to obtain the residuals, which are then tested at the level with the ADF test. If the residuals are stationary, there is a long-term equilibrium between the independent and dependent variables in the model. This method is often called the Augmented Engle-Granger (AEG) cointegration test.
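A minimal sketch of this two-step Augmented Engle-Granger procedure, under the same placeholder data assumptions as above:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller, coint

df = pd.read_csv("indonesia_1980_2017.csv")  # placeholder file: columns LnHE, GE

# Step 1: estimate the long-run equation; Step 2: ADF test on its residuals.
long_run = sm.OLS(df["GE"], sm.add_constant(df["LnHE"])).fit()
stat, pvalue = adfuller(long_run.resid)[:2]
print(f"ADF on long-run residuals: stat = {stat:.3f}, p = {pvalue:.3f}")

# statsmodels also offers the test directly, with the residual-based
# Engle-Granger critical values built in:
t_stat, p_value, crit = coint(df["GE"], df["LnHE"])
print(f"coint: t = {t_stat:.3f}, p = {p_value:.3f}")
```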
The Granger causality test is a method to determine whether the dependent variable can be influenced by other variables and vice versa, with the independent variable able to occupy the position of the dependent variable (Mazzarisi et al., 2020). This relationship is called a causal or reciprocal relationship. The Granger causality test produces four possible outcomes: one-way causality from x to y, one-way causality from y to x, two-way causality between x and y, and no causality between the variables (Vaduva et al., 2020).
The research variables used were the number of students and economic growth. Neither variable can be identified a priori as independent or dependent, so both are treated as dependent variables; their roles can be determined only after the Granger causality test is carried out. The model used to determine whether there is a causal relationship between higher education, proxied by the number of students, and economic growth in Indonesia is as follows:

LnHE_t = Σ_{j=1}^{k} a_j LnHE_{t-j} + Σ_{j=1}^{k} b_j GE_{t-j} + u_t
GE_t = Σ_{j=1}^{k} c_j GE_{t-j} + Σ_{j=1}^{k} d_j LnHE_{t-j} + v_t

where LnHE_t is the number of students in year t (natural logarithm), GE_t is economic growth in year t, LnHE_{t-j} and GE_{t-j} are the lagged values of the two variables, t is time, j is the time lag, a_j, b_j, c_j, d_j are regression coefficients, and u_t and v_t are error terms. Higher education is measured as the number of enrolled students (school enrollment) in persons. Economic growth is the growth of Indonesia's Gross Domestic Product (GDP) in percent.
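As a sketch of how the lag selection and the two Granger regressions can be run in practice (again with statsmodels and the placeholder dataset):

```python
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import grangercausalitytests

df = pd.read_csv("indonesia_1980_2017.csv")  # placeholder file: columns LnHE, GE
data = df[["LnHE", "GE"]].dropna()

# Optimal lag length: compare AIC, BIC/SIC, HQ and FPE over the VAR system
order = VAR(data).select_order(maxlags=4)
print(order.summary())

# In each call, the null hypothesis is that the SECOND column does NOT
# Granger-cause the FIRST column.
print("H0: LnHE does not Granger-cause GE")
grangercausalitytests(data[["GE", "LnHE"]], maxlag=2)
print("H0: GE does not Granger-cause LnHE")
grangercausalitytests(data[["LnHE", "GE"]], maxlag=2)
```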
The data used in this study are secondary time series data covering 1980-2017: the number of registered students, obtained from the Ministry of Research, Technology and Higher Education, and Indonesia's Gross Domestic Product (GDP).
RESULTS AND DISCUSSION
The dynamics of tertiary institutions in Indonesia show increasingly advanced development. Higher education has an important role in every activity carried out by the government or the state. Higher education in Indonesia at the beginning of its development was administered by the Directorate General of Higher Education, Ministry of Education and Culture. Government policies prioritize aspects of sustainable development that encourage a greater role for higher education as one of the determining elements of development success, in accordance with the policy outlined by the Ministry of Education and Culture in 1975. Higher education also acts as a provider of skilled and educated personnel to carry out economic development. In line with the policy that has been issued, efforts to increase the establishment of new colleges and study programs relevant to the needs of the community are also continuously carried out by the government, through both universities managed by the government, namely State Universities (PTN), and universities managed and organized by the community, namely private universities (PTS).

Economic growth is a condition of an increase in output that lasts in the long run and shows the success of a country in providing the various goods its citizens need. Efforts to create high economic growth are basically the efforts of a country in managing and utilizing its resources to produce output in the long term, in line with technological advances and the institutional conditions experienced.

Indonesia, as a country with natural resources and human resources as well as other potential development capital, has also experienced dynamics of economic growth across several periods, from the old-order government to the present. The ups and downs of Indonesia's economic growth can be divided into the pre-reform era, namely the period 1945-1998, and the reform era, from 1999 to the present.

Table 1 shows that the higher education variable (LnHE) is not stationary at I(0), but the economic growth variable (GE) is stationary at the level. Higher education is not stationary at the level because its ADF probability values exceed the 1%, 5%, and 10% levels. Furthermore, the first-difference, I(1), process shows that all variables are stationary, because the ADF probabilities of the two variables are less than 1%, 5%, and 10%. Both variables are stationary at I(1) and are therefore not affected by unit-root problems.
Table 1 Stationary Test
The estimation results in Table 2 show the lag used to analyze the causality test. The selected lag is lag 2 under the AIC approach, because the AIC gives the lowest score among the criteria. Based on the Granger causality test, the results show that there is unidirectional causality: higher education affects economic growth at the 10% significance level. These results indicate that education, as a form of investment in human resources, affects the economy; in this sense, economic growth depends on human capital. The estimates support the view that education drives economic growth. Human capital theory states that education has a positive influence on economic growth: people with higher levels of education and better skills obtain jobs with better wages than workers with less education. If wages reflect the level of productivity, then an increase in the number of highly educated workers means a higher contribution to productivity, which in turn encourages aggregate growth.
Education has a significant positive effect on growth, as evidenced in several empirical studies. Kesikoglu and Ozturk (2013) found bi-directional causality between education sector expenditure and economic growth in a group of 20 OECD countries during the 1999-2008 period; investments in human capital embodied in the education sector had a positive impact on growth in these countries. Pelinescu (2014) conducted research in European countries, and Wang et al. (2016) studied 55 countries from 5 continents during the period 1960-2009, concluding that human capital, proxied by education and health indicators, has a significant positive effect on economic growth (GDP growth).
Table 2 Lag Length Criteria Test
The estimation results of this study also indicate that economic growth does not affect higher education. This unidirectional causality can be explained by the fact that the economic growth that occurs does not directly increase the overall income of the community, meaning that the positive impact of growth is enjoyed mostly by a group of high-income people. The result of this economic condition is that the people's purchasing power for the education sector, namely continuing education to a higher level in tertiary institutions, is still very limited.
Current conditions in Indonesia indicate economic inequality in society, especially in the ability of parents to provide their children with education through to the tertiary level, which tends to be possible only for parents or people with above-average income who can invest in the education sector. In other words, economic growth in Indonesia does not affect school enrollment in higher education. These facts, based on the estimation results, deviate from the concept of the final goal of development emphasized by the United Nations Development Programme (UNDP) since the 1990s, in which there are four main keys in development, essentially efforts at complete human development aimed at creating productivity, equality, sustainability, and empowerment. The role of economic growth as the principal means to achieve the ultimate goal of development, and the focus on increasing the value of the Human Development Index (HDI) in Indonesia, has not yet been realized.
CONCLUSIONS
This study examines the reciprocal relationship, or causality, between education and economic growth in Indonesia during the period 1980-2017. The estimation of higher education data, proxied by the number of students in higher education, and of economic growth data leads to the conclusion that there is unidirectional causality from higher education to Indonesia's economic growth during the study period. Higher education positively affects growth at a significance level of α = 10% at lag 1, meaning that a change in higher education enrollment affects growth in the following year, but not vice versa. In other words, the estimation results also show that growth does not affect school enrollment in higher education during the study period.
The policies that can be drawn from the results of this study are as follows: (1) investments in education, especially higher education, should involve many stakeholders in the preparation of education sector development planning; (2) subsidy policies can be applied, especially at the higher education or vocational levels, to encourage increased productivity of the workforce and of educated human resources; (3) the government must maintain economic stability and encourage people to save, so that the resulting accelerator effect can increase people's purchasing power in the education sector; and (4) the 20% share of the state budget (APBN) for the education sector should be maintained and its use closely monitored, especially the budget for higher education, so as not to cause more complex problems in the allocation function of the state budget.
"year": 2021,
"sha1": "23ea6fa48d50af5f24bcee6515b36e4513812e4e",
"oa_license": "CCBY",
"oa_url": "https://journal.trunojoyo.ac.id/mediatrend/article/download/7780/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "4459e6efb1db44f1b26a74677aff7b3bb3d4dcb0",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
256044490 | pes2o/s2orc | v3-fos-license | STAT1 epigenetically regulates LCP2 and TNFAIP2 by recruiting EP300 to contribute to the pathogenesis of inflammatory bowel disease
The aetiology of inflammatory bowel disease (IBD) is related to genetics and epigenetics. Epigenetic regulation of the pathogenesis of IBD has not been well defined. Here, we investigated the role of H3K27ac events in the pathogenesis of IBD. Based on previous ChIP-seq and RNA-seq assays, we studied signal transducer and activator of transcription 1 (STAT1) as a transcription factor (TF) and investigated whether the STAT1–EP300–H3K27ac axis contributes to the development of IBD. We performed ChIP-PCR to investigate the interaction between STAT1 and H3K27ac, and co-IP assays were performed to investigate the crosstalk between STAT1 and EP300. Lymphocyte cytosolic protein 2 (LCP2) and TNF-α‐inducible protein 2 (TNFAIP2) are target genes of STAT1. p-STAT1 binds to the enhancer loci of the two genes where H3K27ac is enriched, and EP300 subsequently binds to regulate their expression. In mice with dextran sulfate sodium (DSS)-induced acute colitis, an EP300 inhibitor significantly inhibited colitis. p-STAT1 and EP300 promote TNFAIP2 and LCP2 expression through an increase in H3K27ac enrichment on their enhancers and contribute to the pathogenesis of chronic inflammation.
Introduction
Inflammatory bowel disease (IBD) is a nonspecific chronic inflammatory disorder that occurs in the intestinal tract and is caused by multiple factors. Two types of IBD have been identified: ulcerative colitis (UC) and Crohn's disease (CD). IBD is related to factors such as genetics, the environment, immunity, and intestinal microbiota [1]. Environmental factors have been shown to increase the risk of disease development by changing epigenetic patterns [2,3]. For example, smoking changes the epigenetic pattern of airway cells in humans and affects genes expression [4]. Particulate matter in air and airborne benzene induce inflammation and carcinogenesis by changing the pattern of epigenetic modifications [5,6]. In recent years, the role of epigenetic mechanisms in IBD has attracted increasing attention.
Epigenetic modifications are biochemical changes in chromatin that do not affect the nucleotide sequence of the genome. These modifications explain the shaping of the immune system throughout the lifetime, as well as the effects of external environmental factors, microbes, and nonmicrobial particles during the development of diseases. Epigenetic modifications include DNA methylation, histone modifications, and noncoding RNAs [7]. DNA methylation affects the duration and severity of IBD, the extent of inflammation, the hospitalization rate, and the possibility of canceration [8]. The DNA methylation pattern in the intestinal epithelial cells (IECs) of children with CD was significantly different from that in the normal group and was closely related to the prognosis of the CD [9]. Tahara et al. [10] showed that a large number of hypermethylated sites are located in the CpG islands in the rectal inflammatory mucosa of patients with UC, and methylation in these patients is closely related to the disease course. Noncoding RNAs regulate gene expression at both the transcriptional and posttranscriptional levels and participate in the onset and progression of IBD by modifying T-cell differentiation, IL-23/Th17 signaling pathways, and autophagy [11]. The role of histone modification in IBD is also important. Histone modification refers to the process of covalent modifications on histones, such as methylation, acetylation, phosphorylation, and adenylation. Inhibition of histone deacetylases (HDACs) proved that histone 3 acetylation is related to dextran sulfate sodium (DSS)-induced colitis in mice [12], and acetylation of histone 4 in the colonic mucosa was significantly increased in mice with TNBSinduced colitis [13]. Furthermore, the E3 ligase FBXW7 promotes the expression of Ccl2 and Ccl7 by suppressing H3K27me3 modification via degradation of the histone-lysine N-methyltransferase EZH2 in macrophages, thereby promoting the aggregation of proinflammatory mononuclear phagocytes in local colonic tissues [14]. Li et al. [15] proved that H3 acetylation was significantly reduced in the colonic epithelium of individuals with UC and negatively correlated with the disease severity. In addition, H3K27ac plays a role in specific genomic regions in mice with DSS-induced colitis [16]. ChIPseq showed increased H3K27ac levels at the promoters of iNOS, IL-6, TNF-α, and other inflammatory factors in the colon tissues of mice exposed to azoxymethane (AOM)/DSS [8]. EP300 is a transcriptional cofactor with acetyltransferase activity that promotes the acetylation of histone and nonhistone proteins [17]. EP300 acetylates H3K27 and contributes to inflammation, differentiation, lipid metabolism, and chromatin remodeling [18,19]. In the experimental Drosophila embryonic mesoderm and mouse models, H3K27ac enrichment is closely related to promoter and enhancer activity regulated by EP300 [19][20][21][22][23]. CBP/EP300 suppresses immunity by promoting the acetylation of H3K27 on enhancers and promoters of tumor-promoting target genes in myeloid-derived suppressor cells (MDSCs) to promote tumorigenesis [24]. Increased expression of EP300 induces H3K27 acetylation and leads to higher concentrations of lithocholic acid (LCA) and deoxycholic acid (DCA) to promote colon cancer [25]. Currently, the effect of EP300-H3K27ac on inflammation remains elusive.
In a previous study, by applying a high-throughput ChIP-seq assay for H3K27ac and an RNA-seq assay in a mouse model of DSS-induced chronic colitis, we revealed changes in the global genomic profile and investigated its role in the pathogenesis of IBD, indicating that H3K27ac levels were increased on enhancers in colon tissues from the DSS group and were involved in DSS-induced colitis [26]. In the current study, based on a previous ChIPseq assay, we studied the transcription factor (TF) signal transducer and activator of transcription 1 (STAT1) and investigated whether the STAT1-EP300-H3K27ac axis contributes to the development of IBD.
Increased levels of STAT1/p-STAT1 in a mouse model of DSS-induced chronic colitis and in patients with IBD are related to H3K27ac modification
Based on the H3K27ac ChIP-seq assay, we predicted 118 TFs that bind to the enhancers where H3K27ac was enriched in inflamed intestinal tissues from mice with DSS-induced chronic colitis, and the TFs were then overlapped with differentially expressed genes (DEGs) identified in our previous RNA-seq analysis [26]. We identified the three most significantly differentially expressed TFs, among which STAT1 was upregulated and HNF4A and IRF2 were downregulated (Fig. 1a). We further verified the expression of these TFs in mice with chronic colitis, and the result was consistent with our prediction (Fig. 1b). Because the H3K27ac modification is usually associated with increased gene expression, we selected a functionally upregulated TF, STAT1, for further study. Levels of the STAT1 and p-STAT1 proteins were increased after DSS treatment (Fig. 1c), and the level of the STAT1/p-STAT1 mRNA in the inflamed mucosa of patients with IBD was increased compared to that in the normal mucosa (Fig. 1d). However, the level of the p-STAT1 protein, but not total STAT1 protein, was increased (Fig. 1e). These results suggested the involvement of the TF STAT1 in IBD might be related to H3K27ac modification.
Identification of lymphocyte cytosolic protein 2 (LCP2) and TNF-α-inducible protein 2 (TNFAIP2) as STAT1 target genes
We sought to identify the target genes of STAT1 that were upregulated in mice with DSS-induced colitis and to further investigate the mechanism underlying the contribution of STAT1 to IBD through H3K27ac. Based on previous ChIP-seq and RNA-seq assays, we identified 181 upregulated genes whose enhancers had increased H3K27ac enrichment after the DSS treatment [26]. After overlapping the 181 genes with the STAT1 target genes in the Gene Transcription Regulation Database (GTRD, http:// gtrd. biouml. org), we obtained 142 genes as candidate target genes of STAT1 (Fig. 2a), and the top seven candidate genes with the most significant changes (TNFAIP2, LCP2, CREBBP, HNF4A, LRG1, HSD17, and USP18) were selected for verification. We built an inflammatory cell model by stimulating NCM460 cells with TNF-α and/or IFN-γ and found that STAT1 was activated only when IFN-γ was present, consistent with the results from previous reports [27][28][29]. Moreover, we confirmed that LCP2 and TNFAIP2 were upregulated, consistent with the sequencing results (Fig. 2b). In addition, both genes were upregulated in the mice with DSS-induced colitis and patients with IBD ( Fig. 2c-f ). We analyzed the data in the GEO (Gene Expression Omnibus) database and found that STAT1 expression was positively correlated with the expression of LCP2 and TNFAIP2 in patients with UC (GES107499-GPL15207) and CD (GSE20881-GPL1708-20418) (Fig. 2g). Furthermore, LCP2 and TNFAIP2 were stably downregulated in cells transfected with the STAT1 siRNA ( Fig. 2h-i) upon IFN-γ ( Fig. 2j-k) or TNF-α and IFN-γ cotreatment ( Fig. 2lm), indicating that LCP2 and TNFAIP2 are target genes of STAT1. Taken together, these results showed that LCP2 and TNFAIP2 were target genes of STAT1. , and IRF2 mRNA expression in DSS-induced mice chronic colitis (error bars: mean ± SEM; control group n = 7, DSS group n = 9). c STAT1/ p-STAT1 protein expression in DSS-induced mice chronic colitis. d-e STAT1/p-STAT1 mRNA and protein expression in IBD patients (error bars: mean ± SEM; NC group n = 8, UC group n = 8, CD group n = 10). *p < 0.05, **p < 0.01, ***p < 0.001, ns means no significance
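Computationally, the overlap step behind Fig. 2a is a plain set intersection; a minimal sketch follows (the file names are hypothetical placeholders, not the study's actual files):

```python
import pandas as pd

# Hypothetical inputs, one gene symbol per line: the upregulated genes with
# H3K27ac-gaining enhancers (181 genes) and the GTRD STAT1 target list.
upregulated = set(pd.read_csv("h3k27ac_up_genes.txt", header=None)[0])
stat1_targets = set(pd.read_csv("gtrd_stat1_targets.txt", header=None)[0])

candidates = sorted(upregulated & stat1_targets)  # candidate STAT1 targets
print(f"{len(candidates)} candidate target genes")  # 142 in the study
pd.Series(candidates).to_csv("stat1_candidate_targets.csv", index=False)
```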
STAT1 regulates its target genes through H3K27ac at their enhancers in mice with DSS-induced chronic colitis
We confirmed that LCP2 and TNFAIP2 were the target genes of STAT1 and that H3K27 acetylation was enriched at their enhancers according to previous ChIP-seq results [26] (Fig. 3a). We verified by ChIP-PCR that H3K27ac enrichment was increased at the enhancers of LCP2 and TNFAIP2 in mice with DSS-induced colitis (Fig. 3b). These results implied a potential role for H3K27ac enrichment at enhancers in promoting inflammation.
Next, we sought to investigate whether STAT1 modulates target gene expression through H3K27ac at enhancers. By performing ChIP-PCR experiments, we found that the recruitment of p-STAT1 to the enhancers in the LCP2 and TNFAIP2 promoters was significantly increased in tissues of mice with chronic colitis compared to mice in the control group (Fig. 3c). Therefore, p-STAT1 may promote the expression of LCP2 and TNFAIP2 by increasing H3K27ac enrichment at the enhancers of these two genes.
p-STAT1 may recruit EP300 to promote target gene expression in NCM460 cells
Since H3K27ac is catalyzed mainly by the epigenetic enzyme EP300, we speculate that EP300 may play a role in the regulatory effect of STAT1 on LCP2 and TNFAIP2 expression. No significant differences in the expression of EP300 in the inflamed tissues from either mice with chronic colitis (Fig. 4a) or patients with IBD (Fig. 4b) were observed. First, we verified that the EP300 inhibitor C646 [30] was effective in NCM460 cells (Fig. 4c). In NCM460 cells with or without TNF-α induction, the expression of the LCP2 and TNFAIP2 mRNA did not show a significant change when EP300 was inhibited ( Fig. 4d-e). Furthermore, we silenced EP300 with siRNAs and selected the most efficient sequence for further study ( Fig. 4f-g). Consistent with the results obtained with the C646 inhibitor, the expression of the LCP2 and TNFAIP2 mRNAs did not show a significant change after silencing EP300 in NCM460 cells stimulated with or without TNF-α ( Fig. 4h-i). When EP300 was silenced in NCM460 cells and STAT1 was activated by IFN-γ, the induction of LCP2 and TNFAIP2 expression was repressed (Fig. 4j). As shown in Fig. 2h-i, siSTAT1 decreased the expression of the two genes in the absence of EP300. However, after silencing STAT1 and EP300 simultaneously, the expression of the LCP2 and TNFAIP2 mRNAs did not change (Fig. 4k). These results indicated that EP300 is required for STAT1 to regulate TNFAIP2 and LCP2 expression. Moreover, the western blot analysis results showed that the levels of total STAT1 and p-STAT1 increased upon IFN-γ stimulation, and immunoprecipitation results indicated that p-STAT1 bound to EP300 upon IFN-γ treatment (Fig. 4l). Taken together, these results indicate that EP300 is required for STAT1 to induce LCP2 and TNFAIP2 expression.
Inhibition of EP300 relieves DSS-induced colitis in mice
We found that p-STAT1 binds to EP300 to regulate the expression of LCP2 and TNFAIP2 by promoting H3K27 acetylation at enhancers, and thus, we further investigated whether the increase in H3K27ac levels mediated by EP300 is involved in the development of colitis in mice. We established a mouse model of DSS-induced acute colitis as previously reported [31]. DMSO (control) or C646 was intraperitoneally (i.p.) injected at different time points during the course of colitis development (Fig. 5a). After the administration of C646, the weight loss of mice treated with DSS was significantly alleviated (Fig. 5b). The disease activity index (DAI), a score used to evaluate the clinical manifestations of colitis, was also decreased upon C646 treatment (Fig. 5c). In addition, the extent of colon shortening is considered a macroscopic indicator reflecting the degree of intestinal inflammation. We observed significantly longer colons after DSS exposure in C646-treated mice than in mice in the DMSOtreated group (Fig. 5d). The detailed histological analysis of colonic lesions in colitis mice showed severe pathology, such as extensive destruction of crypt structures and abundant inflammatory cell infiltration. Our histological analysis of the mouse colon showed that these pathological changes were significantly reduced after the administration of C646, as the mucosal structure was well preserved and the infiltration of inflammatory cells (See figure on next page.) Fig. 2 Identification of LCP2 and TNFAIP2 as STAT1 target genes. a Overlap of the target of STAT1 on the website of GTRD and the upregulated genes in the RNA-seq whose enhancers had increased H3K27ac enrichment after DSS treatment. b TNFAIP2, LCP2, CREBBP, HNF4A, LRG1, HSD17, USP18 mRNA expression in NCM460 cell inflammation model (error bars: mean ± SD). c-d LCP2 and TNFAIP2 mRNA and protein expression in DSS-induced mice chronic colitis (error bars: mean ± SEM; control group n = 7, DSS group n = 9). e-f LCP2 and TNFAIP2 mRNA and protein expression in IBD patients (error bars: mean ± SEM; NC group n = 8, UC group n = 8, CD group n = 10). g The correlation between STAT1 and LCP2, TNFAIP2 in UC and CD patients in GEO database. h-i STAT1 mRNA and protein expression in NCM460 after transfected with siSTAT1-1, 2, 3 or the negative control (error bars: mean ± SD). j-k LCP2 and TNFAIP2 mRNA and protein expression in NCM460 after transfected with siSTAT1-1, 2 or the negative control with stimulation by IFN-γ (error bars: mean ± SD). l-m LCP2 and TNFAIP2 mRNA and protein expression in NCM460 after transfected with siSTAT1-1, 2 or the negative control with stimulation by TNF-α and IFN-γ (error bars: mean ± SD). *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0. and the loss of crypts were significantly reduced (Fig. 5e). Proinflammatory factors are closely related to intestinal inflammation and clinical symptoms of IBD [32]. We next explored whether C646 treatment affected the production of proinflammatory factors. We detected the expression of proinflammatory factors, including TNF-α, IL-1β, IL-6, and IL-17A, in mouse colon tissues and found that C646 administration led to a significant decrease in the levels of TNF-α, IL-1β, and IL-17A, whereas the level of IL-6 remained unaffected (Fig. 5f ). Furthermore, the mRNA expression of LCP2 and TNFAIP2 was upregulated in mice with DSS-induced colitis, and these changes were reversed after C646 administration (Fig. 5g). 
Therefore, a decrease in H3K27 acetylation induced by inhibiting EP300 alleviates DSS-induced colitis in mice.
Discussion
Epigenetic modifications have been extensively studied in various human diseases, and their effect on autoimmune diseases has drawn increasing attention. As the major event in epigenetic modification, some studies have shown that H3K27 acetylation plays a role in various cancers, such as increased H3K27ac signals in colon cancer [33] and hepatocellular carcinoma [34]. However, to date, the contribution of epigenetic events to the pathogenesis of IBD remains elusive. The results of our previous ChIP-seq assay indicated that H3K27ac levels at enhancers tended to increase during colitis, suggesting that this modification might be involved in intestinal inflammation. Thus, H3K27ac enrichment at enhancers might contribute to the pathogenesis of IBD. The binding of TFs to enhancers is one of the critical steps in transcriptional activation. The deposition of H3K27ac unwinds chromatin and then promotes the regulation of target gene expression by TFs [35,36]. According to a previous ChIP-seq assay, we identified STAT1 as an important TF involved in this process. STAT1 activity is increased in some inflammatory diseases, such as asthma and rheumatoid arthritis [37,38]. When activated, the protein translocates into the nucleus and binds to specific promoter elements to regulate gene expression [39]. It directly binds to the sphingosine 1-phosphate receptor 1 (S1PR1) promoter at the region between bp -29 to bp -12 and promotes S1PR1 expression, which induces the development of cancers and inflammatory diseases [40]. The phosphorylation of STAT1 (Y701 and S727) is substantially increased upon DSS exposure, but a significant change in total STAT1 levels is not observed [41]. In patients with CD and UC, the levels and activity of STAT1 and p-STAT1 are also increased [39]. In our study, STAT1 and p-STAT1 levels were elevated in vivo and in vitro, consistent with previous reports. We also verified that the level of p-STAT1 but not STAT1 was increased in patients with IBD. Collectively, these data implicated STAT1 as a TF involved in regulating H3K27ac modification and contributing to the pathogenesis of IBD.
Then, we identified LCP2 and TNFAIP2 as the target genes of STAT1 by overlapping the target genes of STAT1 predicted by GTRD with our RNA-seq assay. The expression of the two genes was increased in activated NCM460 cells, in the inflamed tissues from mice with DSS-induced chronic colitis, and in patients with IBD. We analyzed the data in the GEO database and found that STAT1 expression was positively correlated with the expression of LCP2 and TNFAIP2 in patients with IBD. In addition, silencing STAT1 in intestinal epithelial cells induced downregulation of both genes. TNFAIP2 is a proinflammatory gene whose expression is regulated by multiple TFs and signaling pathways, including the NF-κB, KLF5, and retinoic acid pathways, which play essential roles in inflammation [42]. LCP2 (SLP-76) is a direct regulator of nuclear pore function in T cells, which are critical immune cells involved in the pathogenesis of IBD [43].
Furthermore, we verified that H3K27ac levels were increased at the enhancers of these two genes in mice with DSS-induced chronic colitis and confirmed the relationship of H3K27ac with the expression of TNFAIP2 and LCP2. Van der Kroef et al. reported that H3K27ac is enriched at the promoters of myxoma resistance protein 1 (MX1) and cytidine/uridine monophosphate kinase 2 (CMPK2) in SSc and that STAT1 strongly binds the hyperacetylated regions in SSc [44]. In the current study, the binding of p-STAT1 to the enhancers of TNFAIP2 and LCP2 was significantly increased in tissues from mice with chronic colitis, suggesting that p-STAT1 deposition and H3K27ac enrichment occur on the same enhancers of the TNFAIP2 and LCP2 genes.

(See figure on next page.) Fig. 4 p-STAT1 may recruit EP300 to promote target gene expression in NCM460 cells. a EP300 mRNA expression in DSS-induced chronic colitis in mice (error bars: mean ± SEM; control group n = 7, DSS group n = 9). b EP300 mRNA expression in IBD patients (error bars: mean ± SEM; NC group n = 8, UC group n = 8, CD group n = 10). c H3K27ac protein expression in NCM460 cells after treatment with the EP300 inhibitor C646 at different concentrations (0, 10, 20, 30, 40, 50 μM) for 24 h. d-e LCP2 and TNFAIP2 mRNA expression in NCM460 cells treated with C646 or DMSO, with or without stimulation by TNF-α (error bars: mean ± SD). f-g EP300 mRNA and protein expression in NCM460 cells after transfection with siEP300-1, 2, 3 or the negative control (error bars: mean ± SD). h-i LCP2 and TNFAIP2 mRNA expression in NCM460 cells after transfection with siEP300 or the negative control, with or without stimulation by TNF-α (error bars: mean ± SD). j LCP2 and TNFAIP2 mRNA expression in NCM460 cells after transfection with siEP300 or the negative control and stimulation with TNF-α + IFN-γ (error bars: mean ± SD). k LCP2 and TNFAIP2 mRNA expression in NCM460 cells after transfection with siSTAT1 + siEP300 or the negative control and stimulation with TNF-α + IFN-γ (error bars: mean ± SD). l co-IP experiment validating the interaction between p-STAT1 and EP300. *p < 0.05, **p < 0.01, ***p < 0.001, ns: not significant
EP300 is an acetyltransferase that regulates H3K27 acetylation. A previous study showed that STAT1 requires EP300 for effective transcriptional activity [45], and STAT1 also interacts with p300/CBP through its C-terminal transactivation domain (TAD) [46]. EP300 and STAT1/3 act cooperatively in the pathogenesis of light-induced retinopathy in zebrafish [47]. STAT1 activity is regulated by EP300-dependent acetylation, and the interaction between EP300 and STAT1 participates in oxidized low-density lipoprotein uptake and foam cell formation, which are responsible for the pathogenesis of atherosclerosis [48,49]. In the present study, STAT1 could not regulate target gene expression without EP300, and we verified that p-STAT1 interacted with EP300 to increase H3K27ac levels and upregulate TNFAIP2 and LCP2 expression. Moreover, we applied an inhibitor of EP300, C646, to suppress EP300 activity in vivo, and this treatment significantly alleviated colitis in mice. Therefore, STAT1 regulates TNFAIP2 and LCP2 by binding to EP300 to promote H3K27ac enrichment at their enhancers. Strategies targeting EP300 might thus be a promising treatment for IBD.
Conclusions
Based on previous ChIP-seq and RNA-seq assays, we verified in the current study that p-STAT1 interacts with EP300 to promote H3K27ac enrichment at the enhancers of LCP2 and TNFAIP2 and thereby contributes to the development of chronic inflammation (Fig. 6). H3K27 acetylation might therefore be a promising therapeutic target for IBD.
Specimen collection
A total of ten patients with clinically active Crohn's disease (CD) and eight patients with active ulcerative colitis (UC) were enrolled in the study at Zhongnan Hospital of Wuhan University (Wuhan, China) between August 2018 and February 2019. All patients were diagnosed on the basis of endoscopic features and histopathological examination, and patients with a history of autoimmune diseases were excluded. None of the patients had received steroids, immunosuppressives, or biologic agents. Normal controls (n = 8) were age- and sex-matched healthy volunteers. Inflamed tissues from UC and CD patients collected during endoscopic procedures or surgery were preserved in liquid nitrogen. Clinical data of the patients were also collected.
Cell culture and treatment
The NCM460 cell line was purchased from the China Center for Type Culture Collection (Wuhan, China); the cells had been authenticated by STR profiling and tested for mycoplasma by the vendor. Cells were cultured in RPMI 1640 medium (HyClone, USA) containing 10% fetal bovine serum (FBS, HyClone, USA), 100 U/ml penicillin, and 100 mg/ml streptomycin (Genom, China) at 37 °C with 5% CO2. Cells were treated with 20 ng/ml IFN-γ (PeproTech, USA) and TNF-α (PeproTech, USA) to construct the cell inflammation model. siSTAT1, siEP300, and FAM-labeled siNC were purchased from Guangzhou RiboBio (Guangzhou, China) and transfected into the cells using Lipofectamine 2000 (lipo2000, Invitrogen, USA). Transfection efficiency was preliminarily assessed with a fluorescence microscope (Olympus U-RFL-T, Japan) 24 h after transfection, and the silencing effect of each siRNA was further verified by qRT-PCR and western blot.
RNA extraction and qRT-PCR
Total RNA from tissues and from NCM460 cells (after treatment with TNF-α and IFN-γ for 24 h) was extracted with TRIzol reagent (Invitrogen, USA), and reverse transcription was performed using the TOYOBO ReverTra Ace kit (TOYOBO, Japan). mRNA expression was quantified by quantitative reverse transcription PCR (qRT-PCR) on a Bio-Rad CFX instrument (Bio-Rad, USA), with GAPDH as the housekeeping gene. The primers were designed and synthesized by TSINGKE Biological Technology (Wuhan, China). mRNA expression levels were calculated using the comparative CT method (2^−ΔΔCT), and all experiments were performed with three biological replicates.

(See figure on next page.) Fig. 5 (caption fragment) b Body weight and c disease activity index of mice that received regular drinking water alone (water group, n = 10), 3.0% DSS-containing water (DSS group, n = 10), or 3.0% DSS combined with C646 injection (DSS + C646 group, n = 15). For statistical comparisons, asterisks indicate DSS vs. DSS + C646. d Colon length and e representative hematoxylin and eosin (H&E) staining of distal colon sections at day 11 after DSS induction (water group and DSS group n = 10, DSS + C646 group n = 15). f Colonic inflammatory cytokine analysis of colonic tissue at day 11 after DSS induction (n = 10 per group). g LCP2 and TNFAIP2 mRNA expression in colonic tissue at day 11 after DSS induction (n = 10 per group). Error bars: mean ± SEM. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001, ns: not significant
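As a concrete illustration of the comparative CT (2^−ΔΔCT) quantification used in this section, the sketch below computes a relative expression value from raw CT readings. The function is generic; the CT values in the example are hypothetical and are not data from this study.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative mRNA expression by the comparative CT (2^-ddCT) method.

    dCT normalizes the target gene to the housekeeping gene (GAPDH here);
    ddCT then compares the treated sample with the control sample.
    """
    d_ct_sample = ct_target - ct_ref             # normalize treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # normalize control sample
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical CT values for one biological replicate (target vs. GAPDH):
fold_change = relative_expression(24.1, 18.0, 26.5, 18.2)
print(f"Fold change vs. control: {fold_change:.2f}")  # > 1 means upregulation
```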
Mice feeding and tissue collection
The chronic mouse colitis model was established as previously described [26]. In the acute colitis model, 8-week-old male mice were given regular drinking water for 4 days, followed by 3% DSS (MP Biomedicals) for one week. In the inhibitor group, C646 (MCE, USA) was given by intraperitoneal injection at a dose of 1 mg/kg every other day starting from the first day, and the non-inhibitor group received DMSO at the same dose on the same schedule. The control group received drinking water throughout; all three groups were sacrificed on day 11. DAI was assessed based on weight loss, stool consistency, and the degree of intestinal bleeding [50]. For all mice, 0.5 cm of the distal colon was used for hematoxylin-eosin (HE) staining.
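The exact DAI rubric is given in reference [50] and is not reproduced here; the sketch below only illustrates how such a composite score is typically computed from the three parameters named above. The cut-offs and 0-4 sub-scores are assumptions, not the criteria actually used in this study.

```python
def disease_activity_index(weight_loss_pct, stool_score, bleeding_score):
    """Composite DAI as the mean of three 0-4 sub-scores (illustrative).

    stool_score and bleeding_score are assumed to be graded 0-4 by the
    experimenter; weight loss (%) is binned into a 0-4 score below.
    """
    bins = [(1, 0), (5, 1), (10, 2), (20, 3)]  # (upper bound %, score)
    weight_score = 4
    for upper, score in bins:
        if weight_loss_pct < upper:
            weight_score = score
            break
    return (weight_score + stool_score + bleeding_score) / 3

print(disease_activity_index(12.0, 2, 1))  # 12% weight loss -> score 3; DAI = 2.0
```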
Protein preparation and coimmunoprecipitation
Protein extraction was performed as previously described. Two microliters of Protein A/Protein G magnetic beads (#19B002202, Beaver, China) were added to 200 μl of cell lysate and incubated for 45 min with gentle rotation at 4 °C. After centrifugation at 12,000 rpm for 1 min at 4 °C, the supernatant was split equally into two new 1.5-ml tubes. EP300 antibody (1:50, Cell Signaling Technology, 4771T) or IgG was added, and the tubes were gently mixed overnight at 4 °C. After overnight incubation, 5 μl of Protein A/Protein G magnetic beads were added to each tube and mixed gently at 4 °C for 3 h. After centrifugation at 12,000 rpm at 4 °C for 1 min, the precipitate was washed three times with 500 μl of 1× wash buffer. Loading buffer was then added to the precipitate, and 15-30 μl of each sample was loaded onto SDS-PAGE for western blotting.
ChIP-PCR assay
Tissue was ground with a tissue grinder (Shanghai Jingxin Industrial Development Co., Ltd., China), fixed with 1% formaldehyde, and incubated at room temperature for 10 min to cross-link DNA and protein. Glycine was then added to stop the cross-linking, with incubation at room temperature for 5 min. One milliliter of tissue lysis buffer containing protease inhibitors (MCE, USA) was added to resuspend the tissue, which was then sonicated using an EPI-SONIC sonicator (USA) to obtain chromatin fragments of 200-300 bp. Immunoprecipitation was performed with antibodies against STAT1 (1:1000, Cell Signaling Technology, 14994S), p-STAT1 (1:1000, Cell Signaling Technology, 9167S), and H3K27ac (1:50, ABclonal Technology, A7253). The chromatin DNA was extracted using a DNA purification kit (TIANGEN, China), and specific primers for the TNFAIP2 and LCP2 enhancers were used for PCR. The primer sequences were as follows: TNFAIP2-F 5′-GTG CCT TCC AGT CAG AGG AG-3′, R 5′-GCA TCA TAG GGA GGTC.
Statistical analysis
All experiments were performed three times, and data are presented as mean ± SEM or mean ± SD. When the variances of the two groups were similar, Student's t-test was used to compare them; otherwise, Welch's t-test was used. Statistical analysis was performed using SPSS 17.0 software (IBM, USA) and GraphPad Prism 7.0 software (GraphPad Software, USA). p < 0.05 was considered statistically significant.
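A minimal sketch of this test-selection logic in Python with SciPy. The paper does not state how variance similarity was judged; using Levene's test for that decision is our assumption, and the example data are hypothetical.

```python
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Student's t-test if variances look similar, otherwise Welch's t-test."""
    _, p_var = stats.levene(a, b)  # H0: the two groups have equal variances
    equal_var = p_var >= alpha
    t, p = stats.ttest_ind(a, b, equal_var=equal_var)
    return ("Student" if equal_var else "Welch"), t, p

control = [1.0, 1.2, 0.9, 1.1, 1.0]
dss = [2.1, 2.8, 3.0, 2.4, 2.6]
test, t, p = compare_groups(control, dss)
print(f"{test}'s t-test: t = {t:.2f}, p = {p:.4f}")
```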
Fig. 6
Model of how p-STAT1 interacts with EP300 to promote H3K27ac enrichment at the enhancers of LCP2 and TNFAIP2 and contributes to the development of chronic inflammation. When STAT1 is activated by IFN-γ, it migrates into the nucleus and binds to the enhancers of LCP2 and TNFAIP2. p-STAT1 interacts with EP300 to promote H3K27ac enrichment at the enhancers, which promotes the transcription of LCP2 and TNFAIP2 and thereby contributes to the development of chronic inflammation.
"year": 2021,
"sha1": "0b79ae8887a91ae34877ea49df0533cc5df7198b",
"oa_license": "CCBY",
"oa_url": "https://clinicalepigeneticsjournal.biomedcentral.com/counter/pdf/10.1186/s13148-021-01101-w",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "0b79ae8887a91ae34877ea49df0533cc5df7198b",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
Mesenchymal stromal cells: promising treatment for liver cirrhosis
Liver fibrosis is a wound-healing process that occurs in response to severe injuries and is hallmarked by the excessive accumulation of extracellular matrix or scar tissue within the liver. Liver fibrosis can be either acute or chronic and is induced by a variety of hepatotoxic causes, including lipid deposition, drugs, viruses, and autoimmune reactions. In advanced fibrosis, liver cirrhosis develops, a condition for which there is no successful therapy other than liver transplantation. Although liver transplantation is still a viable option, numerous constraints restrict its application, including a lack of donor organs, immune rejection, and postoperative complications. As a result, there is an immediate need for a different kind of therapeutic approach. Recent research has shown that the administration of mesenchymal stromal cells (MSCs) is an attractive treatment modality for repairing liver injury and enhancing liver regeneration. This is accomplished through cell migration into injured liver sites, immunoregulation, hepatogenic differentiation, and paracrine mechanisms. MSCs can also release a huge variety of molecules into the extracellular environment. These molecules, which include extracellular vesicles, lipids, free nucleic acids, and soluble proteins, exert crucial roles in repairing damaged tissue. In this review, we summarize the characteristics of MSCs, representative clinical study data, and the potential mechanisms of MSCs-based strategies for attenuating liver cirrhosis. Additionally, we examine the processes involved in the MSCs-dependent modulation of the immune milieu in liver cirrhosis. Our findings lend credence to the concept of developing a cell therapy for liver cirrhosis that is premised on MSCs. MSCs could be used as a candidate therapeutic agent to lengthen the survival of patients with liver cirrhosis or possibly reverse the condition in the near future.
Background
Liver disease encompasses damage to the liver induced by viruses (such as hepatitis B and C), autoimmune hepatitis, alcoholic steatohepatitis (ASH), non-alcoholic steatohepatitis (NASH), and progressive metabolic diseases, resulting in liver failure, cirrhosis, and liver cancer. It has become an increasingly serious cause of death worldwide, accounting for 3.5% of all annual mortality globally [1]. Long-term liver injury gradually results in the loss of liver function and the accumulation of extracellular matrix (ECM), leading to liver fibrosis. The end stage of liver fibrosis is cirrhosis, and patients with decompensated cirrhosis develop multiple complications; the most common clinical manifestations are ascites and gastroesophageal variceal bleeding. Late complications include jaundice, coagulopathy, hepatic encephalopathy, acute kidney injury, and hepatorenal syndrome (HRS); complications recur with increasing frequency after the initial presentation, and most patients die within a median of approximately 2 years [2]. The only curative option is liver transplantation, but its clinical use is restricted by donor scarcity and immune rejection. Therefore, there is an urgent need for effective treatment strategies to reverse cirrhosis.
Recently, mesenchymal stromal cells (MSCs) have received much attention in many different areas of health and medical research. MSCs are thought to be potentially useful and appropriate candidates for treating acute liver failure and cirrhosis owing to their ability to differentiate into hepatocyte-like cells (HLCs) and their immunomodulatory properties [3][4][5][6]. Clinical experiments have shown that treatments based on MSCs are both safe and feasible for a wide variety of disorders, including autoimmune diseases, cancer, Crohn's disease, respiratory disorders, liver cirrhosis, multiple sclerosis, spinal cord injury, diabetes and its complications, bone and cartilage injuries, osteoarthritis, heart diseases, and graft-vs-host disease [7]. MSCs are defined as multipotent stromal cells with self-renewal capacity that can be readily extracted from a wide range of tissues (e.g., amniotic fluid, umbilical cord, adipose tissue, bone marrow, and menstrual blood) and amplified in vitro [8]. MSCs do not express major histocompatibility complex (MHC) class II antigens and contain low levels of MHC class I molecules. MSCs also lack the co-stimulatory molecules essential for immune detection, including CD40, CD80, and CD86. Therefore, MSCs generally have low immunogenicity and can avoid immune rejection by the recipient, which serves as the foundation for their allogeneic application [9]. The therapeutic advantages of MSCs include self-renewal, homing to the site of injury, immunomodulation, multidirectional differentiation, and secretion of trophic factors that promote repair and regeneration of damaged tissues, and the use of MSCs is free of ethical concerns [10]. Since the first isolation of MSCs from the bone marrow of a mouse in 1976, several fundamental and clinical research trials have revealed that MSCs may enhance liver function and treat liver cirrhosis in a way that is both safe and effective [9]. As of December 2021, there have been a total of 60 registered clinical studies involving MSCs in the treatment of liver illness, with 45 of those trials focusing specifically on liver cirrhosis (www.clinicaltrials.gov) (Table 1).
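The registry counts quoted above come from a manual search of www.clinicaltrials.gov; a search along these lines could also be run programmatically. The sketch below is a hedged example assuming the ClinicalTrials.gov v2 REST API; the endpoint and parameter names (query.cond, query.intr, pageSize) are our assumption and should be verified against the official API documentation before use.

```python
import requests

# Assumed v2 endpoint; verify against the ClinicalTrials.gov API docs.
URL = "https://clinicaltrials.gov/api/v2/studies"
params = {
    "query.cond": "liver cirrhosis",             # condition searched
    "query.intr": "mesenchymal stromal cells",   # intervention searched
    "pageSize": 100,
}
resp = requests.get(URL, params=params, timeout=30)
resp.raise_for_status()
studies = resp.json().get("studies", [])
print(f"Registered studies matching the query: {len(studies)}")
for s in studies[:5]:
    ident = s.get("protocolSection", {}).get("identificationModule", {})
    print(ident.get("nctId"), "-", ident.get("briefTitle"))
```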
Patients with liver cirrhosis may experience an improvement in liver function following MSCs treatment, which appears to have a good safety profile and is generally well tolerated. A four-case clinical report illustrated that transplantation of autologous bone marrow mesenchymal stem cells (BM-MSCs) via a peripheral vein is safe, and the Model for End-Stage Liver Disease (MELD) scores of two patients with decompensated liver cirrhosis improved [11]. The transplantation of umbilical cord mesenchymal stem cells (UC-MSCs) was evaluated in a phase I-II clinical study of patients with hepatitis B cirrhosis. Intravenous infusion of UC-MSCs resulted in a considerable reduction in the volume of ascites, improvements in liver function (such as serum bilirubin and serum albumin levels), and a reduction in the MELD-Na score, without any serious adverse reactions or complications [12]. In a phase I-II clinical trial by Pedram et al. [13], autologous MSCs were expanded in vitro, differentiated into hepatocytes, and then injected via the portal or peripheral vein into patients with end-stage liver disease and a MELD score ≥ 10. Liver function improved after MSCs transplantation, as evidenced by the lowered MELD score, and all of the patients tolerated the therapy. Peng et al. [14] reported that patients who suffered from liver failure due to hepatitis B and underwent autologous bone marrow mesenchymal stem cell transplantation showed satisfactory short-term effectiveness (from the fourth to the thirty-sixth week postoperatively), but the treatment did not significantly enhance their long-term prognosis. In contrast, other clinical investigations have demonstrated no enhancement in liver function following the transplantation of MSCs [15,16]. More extensive trials are needed to establish that the transplantation of MSCs into patients with liver cirrhosis is both safe and effective.
Pathogenesis of liver cirrhosis
Liver cirrhosis refers to the advanced stage of liver fibrosis, which may be induced by several diseases and disorders that affect the liver, such as chronic alcoholism and hepatitis. It is the consequence of an excessive accumulation of extracellular matrix (ECM) and collagen I in response to chronic damage [17]. The process by which liver fibrosis develops can vary considerably depending on the cause, such as hepatitis virus, alcohol, or bile acids. In most cases, the initial stage involves damage to liver cells, which results in the generation of oxygen free radicals and inflammatory substances. Subsequently, Kupffer cells become activated and other inflammatory cytokines are recruited into the process. The next step is the activation of hepatic stellate cells (HSCs) [18]. This is the general process underlying liver fibrosis. HSCs, which are found in the space of Disse, perform an integral function in the onset and progression of liver fibrosis.
During chronic liver injury, Kupffer cells release various inflammatory cytokines (e.g., platelet-derived growth factor (PDGF) and transforming growth factor-β (TGF-β)) that activate HSCs, causing them to trans-differentiate into myofibroblasts [19]. Kupffer cells also stimulate the influx of bone marrow-derived immune cells into the liver by releasing CCL2 and CCL5; these cells develop into Ly-6C+ macrophages that promote inflammation, angiogenesis, and fibrogenesis, driving the progression of fibrosis [20]. Inflammatory cytokines induce the trans-differentiation of HSCs from a quiescent to a proliferative, migratory, and fibrotic phenotype (myofibroblast). This leads to the production of abundant extracellular matrix (ECM) and the expression of α-smooth muscle actin (α-SMA) [21]. TGF-β is the main cytokine stimulating HSC trans-differentiation and a signal for epithelial-to-mesenchymal transition (EMT). TGF-β1 activation enhances ECM synthesis and suppresses ECM degradation, thereby speeding up the advancement of liver fibrosis [22]. Myofibroblasts produce tissue inhibitors of matrix metalloproteinases (TIMP) to prevent the degradation of ECM by matrix metalloproteinases (MMP) and maintain the integrity of ECM [23] (Fig. 1).

Fig. 1 Pathogenesis of liver cirrhosis. Liver fibrosis is initiated by hepatic injury and the subsequent imbalance of ECM synthesis and degradation mediated by activated HSCs. Cirrhosis is the most advanced stage of liver fibrosis. ECM, extracellular matrix; HSCs, hepatic stellate cells; TIMP, tissue inhibitors of metalloproteinase; PDGF, platelet-derived growth factor; TGF-β, transforming growth factor-β
Characteristics of MSCs
MSCs are multipotent fibroblast-like cells that are often extracted from the umbilical cord, dental pulp, adipose tissue, menstrual blood, and bone marrow [24,25]. The International Society for Cell Therapy (ISCT) established a set of minimal criteria to characterize MSCs in 2006 [26]. Owing to their plasticity, several studies suggest that MSCs may also develop into cells of endodermal (hepatocytes) or neuro-ectodermal (oligodendrocytes, astrocytes, neurons) origin [27]. Besides their ability to differentiate, MSCs possess at least two other properties that contribute to their therapeutic value in the treatment of immune-mediated disorders: homing to the site of tissue damage and modulation of the immune response [28]. Inflammatory cytokines, including interferon gamma (IFN-γ), interleukin (IL)-1, and tumor necrosis factor alpha (TNF-α), produced following tissue injury and during inflammation, stimulate cell surface expression of adhesion molecules that facilitate the rolling and migration of MSCs into the ECM. The soluble products of MSCs, which include extracellular vesicles (EVs), cytokines, trophic factors, and chemokines, appear to be responsible for their major therapeutic impacts, which include angiogenic, antioxidant, anti-fibrotic, and anti-inflammatory properties. MSCs also regulate innate and adaptive immune responses through intercellular contact (binding of programmed death 1 (PD-1) to its ligands PD-L1 and PD-L2) or paracrine mechanisms [29]. For example, hepatocyte growth factor (HGF) and IL-6 produced by MSCs impede the differentiation of monocytes into dendritic cells, lowering their propensity to cause inflammation, decreasing the secretion of the pro-inflammatory cytokines IL-12 and IFN-γ, and increasing the production of the anti-inflammatory cytokine IL-10, sequentially weakening the activation of T cells [30]. MSCs inhibit Kupffer cell activity, which attenuates the production of the pro-inflammatory cytokine TNF. In addition, MSCs transform pro-inflammatory M1 macrophages into anti-inflammatory M2 macrophages by secreting prostaglandin E2 (PGE2) [31]. MSCs can also modulate the immunological response by triggering the Notch 1 signaling pathway, generating HLA-G5, PGE2, and TGF-β1, and promoting the activation and expansion of CD4+CD25+FoxP3+ regulatory T cells (Tregs). By producing indoleamine 2,3-dioxygenase (IDO) and heme oxygenase 1, MSCs suppress the proliferative ability of CD8+ T lymphocytes and enhance the conversion of CD4+ T lymphocytes from T-helper 1 to T-helper 2 phenotypes [32].
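For reference, the ISCT minimal criteria referred to above are commonly summarized as in the data structure below. This reflects the widely cited 2006 position statement rather than a list given in this review itself:

```python
# ISCT (2006) minimal criteria for defining human MSCs, summarized:
ISCT_MSC_MINIMAL_CRITERIA = {
    "adherence": "plastic-adherent under standard culture conditions",
    "positive_markers": ["CD105", "CD73", "CD90"],        # expressed by >= 95% of cells
    "negative_markers": ["CD45", "CD34", "CD14 or CD11b",
                         "CD79alpha or CD19", "HLA-DR"],  # expressed by <= 2% of cells
    "trilineage_differentiation": ["osteoblasts", "adipocytes", "chondroblasts"],
}

for criterion, detail in ISCT_MSC_MINIMAL_CRITERIA.items():
    print(f"{criterion}: {detail}")
```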
The reduction of inflammation is by far the most common application of MSCs [33]. However, MSCs are not inherently immunosuppressive; rather, they require a "licensing" step driven by acute-phase inflammatory molecules, such as TNF-α and IFN-γ, or toll-like receptor (TLR) ligands [34]. Recent research has shown that MSCs only become immunosuppressive when they are exposed to sufficiently high levels of pro-inflammatory cytokines [29,35,36]. MSCs can exhibit a pro-inflammatory phenotype when they are exposed to low levels of IFN-γ and TNF-α: they produce chemokines (such as CXCL9 and CXCL10) that recruit lymphocytes to the regions of inflammation, thereby enhancing the T-cell immune response [29]. In contrast, when MSCs are stimulated with high levels of IFN-γ and TNF-α, the production of inhibitory soluble substances increases, and MSCs acquire an anti-inflammatory phenotype and inhibit the activation and effector properties of inflammatory dendritic cells (DCs), T lymphocytes, macrophages, natural killer (NK) cells, and NKT cells [37]. This suggests that MSCs suppress or promote inflammation depending on the pathological conditions to which they are exposed and that the levels and concentrations of inflammatory cytokines are fundamental factors influencing the immunomodulatory capacity of MSCs [38,39]. Li et al. [29] confirmed that MSCs may stimulate immunological responses when exposed to low levels of pro-inflammatory cytokines or when iNOS activity is absent. It has been suggested that iNOS, which is present in mouse cells, and IDO, which is present in human cells, might act as a molecular switch between the immunosuppressive and immune-enhancing properties of MSCs [40]. TLR priming can critically impact the multilineage potential, phenotype, and immunomodulatory ability of MSCs [41]. The transition from a pro-inflammatory to an anti-inflammatory phenotype may also depend on the degree to which MSCs are stimulated through the TLRs expressed on their surface. Lipopolysaccharide-dependent activation of TLR4 may drive polarization toward a pro-inflammatory phenotype, which is crucial for early damage responses, while stimulation of TLR3 by double-stranded RNA (dsRNA) may trigger polarization toward an anti-inflammatory phenotype [42]. The maintenance of a stable equilibrium between these antagonistic pathways might, on the one hand, help to stimulate the host's immune response and, on the other hand, generate a feedback loop that inhibits extensive tissue damage and stimulates regeneration.
It has been demonstrated that MSCs, like immune cells, can "remember" a stimulation even after being exposed to other settings [40]. IFN-γ activates the transcription and synthesis of IDO, HGF, and TGF-β in MSCs through the JAK/STAT pathway [43]. According to Saldana and colleagues [44], the pro-inflammatory cytokine TNF-α acts as a priming factor for MSCs, and priming MSCs with a medium conditioned by either pro- or anti-inflammatory macrophages enhanced their immunoregulatory capacity by increasing the amount of PGE2 they secreted. As a result, MSCs can be primed in vitro to induce a "short-term memory" effect by imitating the stimuli present in the microenvironment; it is therefore not necessary to activate the MSCs in vivo to achieve the desired therapeutic activities [45].
Mechanisms of MSCs-based therapy in liver cirrhosis
The mechanisms of MSCs in the treatment of cirrhosis have been investigated from multiple perspectives in basic research and are broadly divided into three types: (1) when introduced into damaged liver tissue, MSCs have the potential either to differentiate into hepatocytes or to fuse with existing hepatocytes, making them a useful resource for the regeneration and repair of hepatic tissue; (2) MSCs are capable of producing a wide range of cytokines and growth factors and may exert a paracrine impact, which helps to stimulate the repopulation of endogenous cells in injured tissues; and (3) MSCs have a suppressive influence on many other types of cells, such as natural killer (NK) cells, B lymphocytes, and T lymphocytes, which allows them to exert an immunoregulatory impact in liver illnesses [46,47] (Fig. 2).
Trans-differentiation versus cell fusion
The majority of investigations using HLCs generated from MSCs have reported encouraging findings, indicating that these cells both improved serum parameters and restored liver function in vivo [48]. To date, the trans-differentiation of MSCs into HLCs has mostly been induced by four primary methods: the addition of cytokines and growth factors, alterations to physical parameters, modification of the microenvironment, and genetic alteration [49]. In 1996, Abe et al. [50] revealed that mouse ESCs could differentiate into endodermal cells. In a later study, Hamazaki et al. [51] discovered that certain growth factors may be used to direct mouse embryonic stem cells to differentiate into HLCs. The cytokine combination approach is one of the available induction methods and has been extensively studied. The origin of the MSCs has a significant impact on their ability to differentiate, as well as on the kinds of growth factors and cytokines needed to induce differentiation [49]. There is no universally accepted standard for the growth factor cocktail; its composition varies according to the origin of the MSCs and the characteristics of each study. Lee et al. [5] designed a protocol to differentiate human UC-MSCs and BM-MSCs into HLCs: MSCs were treated for 7 days with a differentiation medium (comprising nicotinamide, bFGF, and HGF) before treatment with a maturation medium (comprising insulin, transferrin and selenium (ITS), dexamethasone, and oncostatin M (OSM)) to induce differentiation. It is important to note that some researchers favor a stepwise, sequential differentiation approach. This involves the use of a variety of biochemicals, cytokines, and growth factors that each play a role in the various stages of development and regeneration. In most cases, sequential administration of varying levels of EGF, OSM, and HGF, together with insulin and glucocorticoids, in a culture medium optimized for hepatocytes is employed [52]. FGF and EGF are the two growth factors responsible for inducing MSCs into endodermal cells during the primary induction stage [53]. OSM and dexamethasone are required to induce further maturation, together with the addition of FGF, ITS, and HGF [54].
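The two-step protocol of Lee et al. [5] described above can be restated as a simple, machine-readable schedule. This sketch only reorganizes what the text says; concentrations are deliberately omitted because they are not given here, and the duration of the maturation step is marked as unspecified.

```python
# Two-step hepatic differentiation schedule (after Lee et al. [5]),
# restated from the text; concentrations intentionally omitted.
HEPATIC_DIFFERENTIATION_SCHEDULE = [
    {
        "step": "differentiation",
        "duration_days": 7,
        "supplements": ["nicotinamide", "bFGF", "HGF"],
    },
    {
        "step": "maturation",
        "duration_days": None,  # not specified in the source text
        "supplements": ["insulin-transferrin-selenium (ITS)",
                        "dexamethasone", "oncostatin M (OSM)"],
    },
]

for phase in HEPATIC_DIFFERENTIATION_SCHEDULE:
    days = phase["duration_days"] or "unspecified"
    print(f"{phase['step']}: {days} days, medium + {', '.join(phase['supplements'])}")
```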
After culturing hUC-MSCs for 16 days in a mixture containing sodium selenite, insulin, dexamethasone, bFGF, and HGF, Zhao et al. transferred the cells into a medium supplemented with OSM. The hUC-MSCs generated in this way displayed a strong capacity for hepatic differentiation as well as hepatocyte-specific activities [55]. Raufi and colleagues differentiated umbilical cord vein MSCs into HLCs by employing OSM and HGF in a two-step strategy, followed by a 4-week induction. Immunological examination revealed that the generated HLCs produced liver-specific protein biomarkers and displayed certain properties typical of hepatocytes [56]. Si-Tayeb et al. [57] illustrated that human iPSCs may be differentiated into functional hepatocytes by following a 4-step differentiation strategy and maintaining a low oxygen level; the iPSCs were generated via lentiviral transduction of LIN28, NANOG, SOX2, and OCT3/4. They demonstrated the proliferation of these cells in the fetal liver of mice for seven days following transplantation. Research by Ang et al. [58] indicated that iPSC-derived HLCs exhibited various hepatocyte activities in vitro at day 18 of differentiation, possessed CYP3A4 enzymatic activity, stained positive for periodic acid-Schiff in addition to the liver-specific surface marker ASGR1, and expressed hepatocyte markers, including albumin, AAT, and CPS1. Most importantly, in the FRG murine model of hereditary tyrosinemia, transplantation of iPSC-derived HLCs resulted in an elevation in short-term survival. Zhou et al. [48] proved that by transfecting MSCs with a combination of five miRNAs, MSCs can be quickly and effectively transformed into functional HLCs. Additionally, after intravenous transplantation, the HLCs were shown to produce urea, store glycogen, take up LDL, and improve the status of CCl4-induced fulminant liver failure and acute liver injury mouse models. Mun et al. [59] developed novel human iPSC-derived hepatocyte-like liver organoids that are anatomically identical to, and functionally comparable with, liver organoids derived from adult tissues while preserving their mature hepatic features during long-term culture, thus generating a strong hepatic model, in a reproducible and reliable way, for toxicity prediction, drug screening, regenerative and inflammatory responses, and modeling of disorders such as hepatic steatosis. Collectively, numerous parameters, such as the timing, dosage, and particular type of growth factors, have been proven to alter hepatic induction. Furthermore, diverse sources of MSCs may behave differently in the trans-differentiation process into HLCs. Unfortunately, the data now available cannot provide insights into which source or technique is superior for producing functioning HLCs to meet the need for a viable source for transplantation.
Hepatocytes in hepatic cords reside in three-dimensional (3D) formations within the original liver matrix. These hepatocytes are linked to each other by tight and gap junctions, and they are located proximal to non-parenchymal cells [60]. When compared with traditional two-dimensional (2D) cultures, 3D arrangements, such as organoids or spheroids, have many advantages, including an improved capacity of stem cells to differentiate and the preservation of metabolic activity [61]. Some research teams created artificial 3D scaffolds to model the 3D milieu and morphology of the liver and then studied whether the artificial scaffolds were able to promote MSCs differentiation more effectively than 2D culture. Gieseck and his colleagues developed a culture approach for the maturation of iPSC-derived HLCs that makes use of 3D collagen matrices compatible with high-throughput screening. In comparison with traditional 2D cultures, this approach considerably accelerates the functional development of HLCs into a mature adult phenotype [62]. Saito et al. [63] used human adipose-derived MSCs to generate both 2D- and 3D-cultured HLCs. They discovered that 3D-cultured HLCs were more comparable to primary hepatocytes and demonstrated significantly superior hepatocellular functions when contrasted with 2D-cultured HLCs. Since liver regeneration in vivo is linked to portal pressure, fluid shear stress, the fluid friction force created by a continuously flowing fluid, can considerably alter the hepatic differentiation of MSCs [64]. Yen et al. [65] developed an innovative microfluidic system combined with a hepatic differentiation protocol, which may generate HLCs from MSCs more effectively and with quicker functional maturation in comparison with a typical static culture method.
However, according to the findings of certain research, there is currently no differentiation method that can generate HLCs capable of exerting most hepatic activities at a level comparable to that of an adult liver. In the scientific literature, it is frequently noted that the HLC phenotype is comparable to that of neonatal or fetal hepatocytes. When compared with primary hepatocytes, the expression of several hepatocyte-enriched genes is much lower in HLCs [60,66]. Orge et al. [67] found that the gene expression profiles of MSCs-derived HLCs were distinct from those of primary hepatocytes, even though these HLCs demonstrated certain levels of hepatocyte activity, showing a combination of characteristics associated with immature progenitors and mature hepatocytes. Notably, these cells were unable to mature in vivo in the AhCre Mdm2fl/fl murine model established for the regeneration of hepatocytes. After isolating UC-MSCs and differentiating them into HLCs, Campard et al. [68] compared the results with undifferentiated UC-MSCs and also with freshly extracted liver cells. They demonstrated that the HLCs had developed some of the functional characteristics of hepatocytes, including the ability to store glycogen and generate urea, as well as active G6P and CYP3A4 enzymes. However, certain hepatic biomarkers, including hepatocyte nuclear factor 4 (HNF-4) and HepPar1, could not be identified, demonstrating that the differentiation had not progressed to the point of mature hepatic cells. In the field of regenerative medicine, one of the biggest current challenges is the generation of HLCs from stem cell sources. More effort is needed to develop a protocol for differentiating MSCs into HLCs with the same function as adult mature hepatocytes.
There has been a great deal of discussion regarding the ways in which stem cells trans-differentiate into new mature hepatocytes. Based on data from recent experiments, it is plausible to conclude that trans-differentiation is a very uncommon and unphysiological process, taking place either over a long duration or only in the context of well-controlled experiments [69]. A growing body of research has shown that the transplantation of cells taken from bone marrow may lead to the production of hepatocytes that are fully functional. The mechanism of this process has been identified as cellular fusion between bone marrow-derived cells and host hepatocytes [70,71]. Wang and his colleagues discovered that in the fumarylacetoacetate hydrolase (FAH)-deficient murine liver, bone marrow cells adopt hepatocyte morphologies through cell fusion [72]. A cytogenetic investigation of BM-derived hepatocytes extracted from host livers, based on research on the transplantation of bone marrow from female Fah+/+ mice into lethally irradiated Fah−/− males, revealed an abundance of (80, XXXY) and (120, XXXXYY) hepatocyte karyotypes, corroborating the hypothesis that cell fusion is a frequent event in the process of producing hepatocytes that express FAH [73]. The restoration of hepatic functionality by the fusion of donor cells obtained from the bone marrow or another location with local hepatocytes offers exciting insights into the possible use of these cells in the development of future regenerative liver therapies.
Paracrine effects
Although early interest in MSC therapy revolved around their ability to differentiate in the liver, a growing body of research provides significant support for the assumption that the activities of MSCs are primarily mediated via paracrine processes rather than trans-differentiation [74]. According to the findings of Parekkadan and his colleagues, differentiation of engrafted MSCs into hepatocytes occurs rarely, although the factors released by transplanted MSCs have a significant and favorable influence on hepatocytes. These researchers were able to effectively repair mouse models of acute liver damage by using molecules derived from MSCs [75]. To prevent HSCs from being activated and causing liver fibrosis, MSCs release a vast variety of anti-apoptotic growth factors, including VEGF, HGF, and IGF-1 [76]. Haldar and his colleagues encapsulated human BM-MSCs in an alginate-polyethylene glycol hybrid hydrogel that is permeable to soluble factors (cytokines, glucose, and oxygen) but impermeable to antibodies and does not allow direct cellular interactions. They observed that injecting microencapsulated MSCs into mouse models of chronic liver damage alleviated inflammation and liver fibrosis, suggesting that the benefits may be completely attributable to substances produced by MSCs [23].
In vitro experiments have demonstrated that the coculture of MSCs with HSCs inhibits the proliferation and activation of HSCs, reduces α-SMA expression through the secretion of IL-10 and TGF-β, and induces HSCs apoptosis via the release of nerve growth factor (NGF) and HGF [7,77]. It has been found that the conditioned medium (CM) of MSCs considerably suppresses hepatocyte apoptosis and enhances hepatocyte proliferation in various mouse models of acute liver damage [78]. Meier et al. [79] found that MSCs-CM decreased the expression of MMP-2, α-SMA, and type I collagen in primary HSCs, indicating that factors secreted by MSCs can prevent HSCs activation. Although researchers have identified evidence supporting paracrine impacts following MSCs transplantation, and MSCs synthesize a broad range of cytokines and growth factors, the particular mechanisms and the corresponding molecular pathways still need additional exploration.
Immunomodulatory effects
The immunoregulatory properties of MSCs have been the focus of many research reports in both in vitro and in vivo settings. Several studies provide significant evidence that the therapeutic benefits of MSCs in chronic and acute liver disorders are systemic and that these impacts depend on the secretion of trophic and immunoregulatory substances [80]. MSCs exert their immunotherapeutic impact by regulating both innate and adaptive responses [81]. MSCs are able to inhibit the growth of T cells by producing soluble substances, notably TGF-β, PGE2, IDO, NO, and HGF. Through the production of TGF-β, MSCs have both the capacity and the propensity to stimulate the differentiation of CD4+ T cells into CD25+Foxp3+ regulatory T cells (induced Tregs) [82]. According to the findings of Zhang et al. [83], MSCs greatly reduced the number of CD4+ T cells infiltrating the liver, the proportion of activated CD4+ T lymphocytes, and the overall concentration of Th1 cells, followed by the induction of regulatory DCs and Tregs in the liver, thereby ameliorating liver damage. MSCs drastically alleviated CCl4-mediated liver fibrosis by lowering the proportion of Th17 cells and elevating the levels of CD4+IL-10+ T cells, as well as the levels of immunosuppressive factors including kynurenine, IDO, and IL-10 [84]. In addition, MSCs enhanced liver function and ameliorated the clinical symptoms of patients with hepatitis B virus-mediated decompensated liver cirrhosis by remarkably downregulating the expression of IL-6 and TNF-α while simultaneously upregulating the expression of IL-10 [85]. Successful immunoregulation, as well as tissue regeneration, depends on interactions between MSCs and macrophages, particularly paracrine modulation and direct cell interaction [86]. Inflammatory cytokines produced by M1 macrophages or activated T lymphocytes can stimulate MSCs and cause the production of cytokines that shift monocyte differentiation toward an anti-inflammatory phenotype and, eventually, toward M2 macrophages [87]. MSCs have previously been shown to stimulate liver infiltration by host monocytes and neutrophils, leading to the alleviation of fibrosis through MMP synthesis [88]. Luo et al. [89] revealed that the transplantation of BM-MSCs stimulated M2 macrophages, which express MMP13, while simultaneously inhibiting M1 macrophages, which subsequently inhibited the activation of HSCs, exhibiting a synergistic effect in reducing the severity of CCl4-induced liver fibrosis. The invasion of immune cells is a necessary stage on the path to liver damage. MSCs provide an immunotolerant milieu in liver tissue by inhibiting the engagement of pro-inflammatory immune cells while enhancing the recruitment of anti-inflammatory immune cells, thereby mitigating acute or chronic liver damage [90].
MSCs are known to release a variety of anti-inflammatory substances in conjunction with a variety of pro-inflammatory cytokines such as IL-1β, IL-6, IL-8, and IL-9, which mediate their immunomodulatory effect [91]. As mentioned above, MSCs suppress or promote inflammation depending on the pathological conditions to which they are exposed. Waterman et al. [42] proved that the activation of T lymphocytes was enhanced when MSCs were transiently exposed to 10 ng/ml LPS for less than one hour in a coculture test. In comparison, MSCs that had been subjected to poly(I:C) at a concentration of 1 μg/ml were able to inhibit the activation of T lymphocytes while simultaneously boosting the expression of IDO and PGE2. The study by Lin et al. [36] indicated that a protective anti-inflammatory response was triggered in macrophages by MSCs subjected to LPS at concentrations ranging from 1 to 20 μg/ml. IFN-γ is primarily generated by activated T-helper 1 (Th1) cells and serves as an essential modulator of both the innate and the adaptive immune responses. According to the findings of Ren and his colleagues, the presence of IFN-γ in combination with one of the pro-inflammatory cytokines (IL-1β, IL-1α, or TNF-α) is necessary for the MSC-mediated inhibition of T-lymphocyte activation [92]. Thus, the ultimate immunomodulatory impact may be determined by the balance between anti-inflammatory and pro-inflammatory cytokines in the milieu in which MSCs reside.
In the past decades, MSCs have received a great deal of attention and investigation, both in the laboratory and in clinical settings, owing to their numerous benefits. Although MSC-related clinical studies in patients diagnosed with liver cirrhosis have been shown to be safe and effective in the short term, the favorable benefits of these trials have been shown to diminish over time or have not been assessed at all. According to the findings of Lotfinia and colleagues, MSC-CM can improve the histological and biochemical characteristics of livers, but it does not significantly improve the survival of mice with TAA-induced liver failure [93]. In patients with decompensated cirrhosis, autologous BM-MSCs transplantation may not exert a therapeutic impact, according to the findings of one randomized controlled trial [16]. Additionally, many challenges still need to be resolved, such as the ideal timing, optimal delivery route, and adequate cell count for MSCs transplants. Therefore, additional research on improving the engraftment and survival rate of MSCs in the liver is required to enhance the effectiveness of MSCs treatment. These studies should also identify methods to enhance the long-term implantation of mesenchymal stromal cells in the host liver. Moreover, the clinical application of MSCs in the treatment of liver disease is still in its infancy, and large-scale randomized, controlled clinical trials with longer follow-up periods are needed to enhance the reliability of the clinical safety and effectiveness of MSCs for human liver disorders.
Pretreatments enhance the therapeutic effects of MSCs in liver cirrhosis
After being isolated and cultured in vitro, MSCs may fail to maintain their capacity for subsequent applications because they are deprived of nutrients and oxygen [94]. External growth factors alone are also unable to preserve the capabilities of MSCs. While MSCs are capable of differentiating into a wide variety of somatic cells under controlled circumstances in vitro, it is very uncommon for them to transform into target cells after transplantation. Additionally, in response to a hostile milieu, transplanted MSCs undergo senescence or apoptosis [95]. Owing to the hostile microenvironment generated by injured organs or tissues, only a finite number of functioning stromal cells are available after MSC-based therapy, which presents a challenge for treatment [96]. The acute inflammatory response that occurs in vivo serves as an efficient stimulant for the recruitment of progenitor cells. On the other hand, persistent inflammation poses a considerable barrier to the recruitment and survival of both naturally present progenitor cells and transplanted MSCs. As a result, paracrine and anti-inflammatory processes are the primary contributing factors in the repair of liver tissue injury and in enhancing survival in animal models of liver damage [10].
Studies have shown that gene modification, pharmacologic agents, hypoxia, and the inflammatory milieu may all be used to shield MSCs from the damage caused by a hostile milieu, hence enhancing the homing ability of MSCs, their survival rate, their paracrine impacts in vivo and in vitro, and the therapeutic effectiveness of these cells in the setting of liver cirrhosis [97]. In conventional cell culture, the oxygen level is typically approximately 21% O2; however, this is not equivalent to the levels present in the in vivo milieu. The normal concentration of oxygen in tissues ranges from 1% in bone marrow up to 12% in peripheral blood [98]. Hypoxic preconditioning has been shown to drastically enhance the survival rate of MSCs in the hostile conditions of injury sites upon transplantation and to positively affect MSC production of cytoprotective molecules, proliferation, and multipotency [99]. Mortezaee et al. [100] treated BM-MSCs with melatonin for 24 h before injection into a rat model of liver fibrosis and found that MSCs pretreated with melatonin exhibited greater homing into the damaged liver regions. Additionally, there was a considerable improvement in the proportion of glycogen storage, and the accumulation of collagen and lipids in fibrotic liver tissue was greatly reduced. Watanabe et al. [88] examined the differences and interactions between bone marrow-derived macrophages (id-BMMs) and MSCs in a mouse model of cirrhosis and found that combined treatment with MSCs and id-BMMs exhibited a synergistic effect, with greater amelioration of liver cirrhosis and facilitation of hepatocyte regeneration compared with treatment with either cell type alone. Ye et al. [101] demonstrated that in the CCl4 liver cirrhosis model, the therapeutic impact of BM-MSCs transfected with hepatocyte nuclear factor 4 alpha (HNF-4α) was superior to that of untreated BM-MSCs. The transplantation of HNF-4α-BM-MSCs alleviated liver damage, as evidenced by an elevation in the levels of cytokeratin-18 (CK-18) and albumin, reductions in the levels of alanine transaminase (ALT), total bilirubin, and aspartate aminotransferase (AST), and a decrease in inflammation associated with reduced IL-6, IFN-γ, and TNF-α and the suppression of Kupffer cells (Fig. 3).
MSCs-derived exosomes for treating liver cirrhosis
Research shows that the EVs generated by MSCs, such as exosomes (40-100 nm in diameter) and microvesicles (MVs, 0.1-1 μm in diameter), can make a significant contribution to the therapeutic potential of MSCs by facilitating cell-cell interactions and delivering paracrine factors during angiogenesis, tissue repair, and immunomodulation [102,103]. Exosomes are nanoscale EVs derived from multivesicular bodies (MVBs). Exosomes are secreted into the extracellular microenvironment as a result of the fusion of MVBs with the plasma membrane. These exosomes may either be taken up by target cells located in the milieu or transported to distant areas through biofluids [104]. Exosomes contain a broad spectrum of cytoplasmic and membrane proteins, encompassing ECM proteins, lipids, transcription factors, receptors, and nucleic acids (miRNA, mRNA, dsDNA, ssDNA, and mtDNA) [105]. Exosomes perform an instrumental function in a wide range of cell-cell interaction pathways, which are connected with a diverse range of pathophysiologic activities [106].
Several different animal models of liver illness, such as drug-induced acute liver damage and liver fibrosis, have been observed to benefit from the injection of mesenchymal stromal cell-derived exosomes (MSCs-Ex) [107,108]. Injection of MSCs-Ex into the livers of mice exposed to CCl4 resulted in a reduction in the severity of the fibrosis by inhibiting the production of collagen and TGF-β1 [109]. Jiang et al. [110] demonstrated that human umbilical cord MSCs-Ex reduced CCl4-mediated acute liver damage, as well as liver fibrosis, by attenuating hepatocyte apoptosis and oxidative stress. Additionally, these MSCs-Ex enhanced the hepatoprotective and antioxidant properties of bifendate. MSCs-Ex also decrease the deposition of collagen and ameliorate CCl4-mediated liver fibrosis by suppressing EMT, alleviating liver inflammation, and eliminating hepatocyte apoptosis [107]. Moreover, MSCs are capable of releasing immunologically potent exosomes, which enables them to have immunoregulatory impacts on the differentiation, activation, and functionality of various subsets of lymphocytes [111]. Tamura et al. [112] revealed that MSCs-derived exosomes enhance the production of anti-inflammatory cytokines and the levels of T-regulatory cells in mice with concanavalin A-elicited liver damage, providing evidence for their immunosuppressive properties. The potential for MSCs-Ex to serve as drug-delivery vehicles is another area in which they might be used therapeutically. Recent research has shown that MSCs are capable of packaging and delivering active agents via their exosomes, opening the door to the use of MSCs in the research and development of novel pharmaceuticals that are more effective and have greater homing potential [113].
Cell-free treatment methods eliminate the need for cell injection and the possible tumorigenesis, unwanted differentiation, emboli formation, and infection transmission associated with MSCs transplantation; additionally, these methods are safer, less expensive, and more practicable [114]. The absence of fabrication methods that are both reproducible and efficient continues to be a significant barrier, even though MSCs-Ex holds significant promise in the treatment of liver illnesses. As a result, more effective techniques for the extraction, characterization, purification, and preservation of exosomes applicable in therapeutic settings are needed. Meanwhile, further research is required to understand the role that MSCs-Ex play in liver regeneration. Research conducted on large animal models is required before the results can be implemented in clinical settings.
Challenges and future directions
During the past two decades, numerous cell therapy technologies have emerged to treat liver cirrhosis. These techniques have resulted in the accumulation of knowledge regarding the enhancement of in vitro cell manipulation as well as the processes through which transplanted cells can counteract liver fibrosis and enhance liver repair. This information is now facilitating the development of novel techniques targeting the production of in vitro systems that could result in the creation of liver organoids or perhaps fully bioengineered livers for transplant, with the help of established cutting-edge technologies such as bioreactors, microfluidics, and 3D (bio)printing [115]. The transplantation of MSCs has been evaluated in multiple clinical studies, and the findings have been encouraging, pointing to MSCs transplantation as one of the potential alternative methods for treating patients who suffer from liver disorders. However, there have been reports of chromosomal abnormalities occurring in cultured cells, although research has shown the transplantation of MSCs to be safe in both preclinical settings and clinical trials [116]. Because of this, questions about the safety of MSC-based therapies are still being discussed, particularly in the context of long-term follow-up. The key worry is unwanted differentiation of the transplanted MSCs and their propensity to impair the anti-tumor immune reaction and to promote new blood vessels that could facilitate tumor development and spread [27]. Furthermore, the most effective method of administering MSCs has not yet been established, and their use in clinical trials has not yet been standardized. When reviewing the findings from clinical studies, one of the practical issues that must be addressed is the optimal dosage, as well as the number of injections. In addition, advanced tools for monitoring engrafted MSCs are not yet available. In summary, the quality of the clinical investigations published so far is insufficient to arrive at a definitive conclusion.
Conclusions
Therapy based on mesenchymal stem cells (MSCs) is a promising approach among the medical procedures now in use for treating liver disease, and it could be tailored to give the best match for each patient in an attempt to prevent the frequently fatal consequences of requiring liver transplantation. To optimize the therapeutic value of MSCs, however, mechanistic and clinical investigations will require closer cooperation between academic and industrial researchers. | 2022-07-16T13:45:48.924Z | 2022-07-15T00:00:00.000 | {
"year": 2022,
"sha1": "43fb55528ea1ce82fb01af701407413b713ecbee",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "c5ffae599b0b7a0ef9250b2f319b2f61d191ab63",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238967961 | pes2o/s2orc | v3-fos-license | Comparison of Solid- and Solvent-Based Syntheses, Characterization and Antioxidant Property of Metal Complexes of Sodium Diclofenac
Mechanosynthesis and solvent-based syntheses of transition metal (Ni, Co and Zn) complexes of sodium diclofenac were carried out using a 1:1 molar ratio of ligand to metal salt. Formation of the metal complexes was confirmed by melting point determination, FT-IR (Fourier transform infrared) and UV-visible spectroscopies. The ligand showed bidentate coordination to the metal ions through the carboxylate moiety, and octahedral geometry was proposed for all the metal complexes. The antioxidant property of the metal complexes was determined using the DPPH (1,1-diphenyl-2-picrylhydrazyl) assay with ascorbic acid as control. The results revealed that the synthesized metal complexes are promising antioxidant agents.
Introduction
Mechanochemistry can be seen as an interface between chemistry and mechanical engineering. It is radically different from traditional solvent-based, thermal or photochemical approaches because it eliminates the need for many solvents, although its mechanisms of transformation are often complex (Hickenboth et al., 2007; Carlier et al., 2013). It involves the synthesis of chemical products using only mechanical action. For example, grinding and ball milling are widely used mechanochemical processes in which mechanical force drives chemical processing and transformations (Carlier et al., 2011). In recent times, reactions conducted through grinding or milling, rather than solvent mediation, have come to hold great promise for synthetic chemistry. Mechanochemical synthesis is one of the so-called dry technologies, promising to become economically profitable (owing to a reduced number of stages) and ecologically clean compared with traditional processes. It is now emerging as an alternative to solvent-based synthesis since it is environmentally friendly with substantially good yields, and the reactions are rapid, often reaching substantial completion within minutes compared with hours in organic solvents (Adams & Jonathan, 2009; Tella et al., 2011; Arise et al., 2017). This has been widely demonstrated in organic chemistry by Hernández & Friščić (2015), and in inorganic and coordination chemistry (Arise et al., 2017; Jobbágy et al., 2014), especially in the area of metal-organic frameworks (MOFs) (Bisht et al., 2015). For example, mechanochemical processing has been used as an environmentally preferable way to synthesize pharmaceutically attractive phenol hydrazones (Oliveira et al., 2014).
In the present study, sodium diclofenac (Figure 1) transition metal (Co, Ni, Zn) complexes were synthesized and monitored using melting point determination, Fourier transform infrared (FTIR) and electronic spectroscopy. The antioxidant property of the synthesized metal complexes was also investigated.
Materials and Methods

Materials
The ligand, sodium diclofenac (NaD), DPPH and ascorbic acid were obtained commercially and used without further purification. The hydrated metal salts used for complexation (NiCl2·6H2O, CoCl2·6H2O and Zn(NO3)2·6H2O) were obtained from British Drug House Chemicals Ltd., Poole, England.
Instrumentation
Melting points were determined using an MPA100 OptiMelt automated melting point system. The FT-IR spectra of the ligand and the metal complexes were recorded on a Thermo Scientific Nicolet i5 spectrophotometer in the range 4000-500 cm-1 using KBr. Solution electronic absorption spectra of the ligand and complexes were recorded in the ranges 180-400 nm and 180-1100 nm, respectively, on a Jenway 6405 UV/Vis spectrophotometer.
Solvent-based synthesis of NaD-metal complexes
A solution of sodium diclofenac (1 mmol, 0.3181 g) in methanol (10 mL) was added dropwise to separate solutions of NiCl2·6H2O (1 mmol, 0.2377 g), Zn(NO3)2·6H2O (1 mmol, 0.2975 g) and CoCl2·6H2O (1 mmol, 0.2379 g) in methanol (10 mL). The resulting solutions were refluxed for 1 hour at 70 °C, during which colour changes were observed. Precipitates formed after the solutions were gently evaporated and the concentrates were allowed to cool for 30 minutes at room temperature. The precipitates were filtered, washed three times with methanol and dried over silica gel in a desiccator.
Evaluation of antioxidant activities of Metal (II) Complexes

DPPH Antioxidant Assay
The free radical scavenging activity of the metal complexes was determined using the DPPH (1,1-diphenyl-2-picrylhydrazyl) assay. An appropriate dilution of each metal complex (1 ml) was mixed with 3 ml of a 60 µM DPPH radical solution (prepared in acetone, acetic anhydride and DMSO); the mixture was left in the dark for 30 min before the absorbance was measured at 517 nm. The decrease in absorbance of DPPH· on addition of the test samples relative to the control (ascorbic acid) was used to calculate the percentage inhibition according to the equation: % Inhibition = [(Abs_control − Abs_sample) / Abs_control] × 100.
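As an illustration of this calculation, the following is a minimal Python sketch of the percent-inhibition formula above; the absorbance values shown are hypothetical placeholders, not data from this study.

```python
def percent_inhibition(abs_control: float, abs_sample: float) -> float:
    """% Inhibition = [(Abs_control - Abs_sample) / Abs_control] * 100."""
    if abs_control == 0:
        raise ValueError("control absorbance must be non-zero")
    return (abs_control - abs_sample) / abs_control * 100.0

# Hypothetical absorbance readings at 517 nm:
print(round(percent_inhibition(0.850, 0.312), 2))  # 63.29
```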
Results and Discussion
The mechanochemical method (Scheme 1) was carried out by manual grinding of sodium diclofenac and the hydrated metal salts of cobalt, nickel and zinc using a mortar and pestle. The reaction was completed within 20 minutes at room temperature and gave a greater yield than the solvent-based reaction (Scheme 2), which required 120 min in methanol at 70 °C and 3 days for the products to form. The melting points of the metal complexes differ from that of NaD (288-290 °C), giving an insight into the formation of the coordination compounds. The sharp melting points of the complexes also point to their high purity.
The colours of the cobalt (deep purple) and nickel (pale green) complexes are distinctly different from that of the ligand (white). It can thus be inferred that the colours displayed by the metal complexes are determined by the metal ions, a further indication of the formation of coordination compounds.
The metal complexes were completely soluble in polar solvents such as water and methanol, as well as in DMF and DMSO, but practically insoluble in non-polar solvents, indicating that the complexes are polar in nature and should conduct electricity in solution.
FTIR: The characteristic absorption bands in the spectra of the ligand and metal complexes are shown in Table 1. The FT-IR spectra of the complexes synthesized via the two methods are identical but differ from that of the free ligand (sodium diclofenac, NaD), suggesting that both methods produced the same products.
The FTIR spectrum of sodium diclofenac (Figure 2) showed a broad absorption band at 3411 cm-1, assignable to the stretching frequency of the OH/ONa of the carboxylic acid group. This band underwent a shift in the spectra of the metal complexes (Figures 3-5), a possible indication that sodium diclofenac coordinated to the metal ions through the oxygen atom of the carboxylic acid hydroxyl group, with the sodium atom replaced by the metal ions. This is further supported by the C-O stretching frequency, which was observed at 1295 cm-1 in the spectrum of sodium diclofenac and shifted bathochromically in the spectra of the metal complexes, confirming the participation of oxygen in the C-O-M bond (Sharma et al., 2011; Osunniran et al., 2018).
The bands observed at 1604 cm-1 and 1471 cm-1 in the spectrum of sodium diclofenac, assigned to the asymmetric and symmetric stretching frequencies of C=O, shifted in all the spectra of the metal complexes, further supporting coordination of sodium diclofenac to the metal ions through the carboxylate moiety (Obaleye et al., 2014). The absorption bands between 3751 cm-1 and 3550 cm-1 were assigned to the stretching frequency of H2O molecules coordinated to the metal ions. The strong bands in the range 1480-1590 cm-1 were assigned to aromatic benzene ring stretching. The bands between 450 cm-1 and 532 cm-1 in the fingerprint region of the spectra of the metal complexes are attributed to M-N, M-O and M-Cl stretching frequencies, revealing metal-ligand coordination (Obaleye et al., 2014; Osunniran et al., 2018).

Electronic spectra: The electronic spectral data of the ligand and metal complexes are contained in Table 2. The electronic spectrum of the free ligand in DMSO showed absorption bands in the UV region at 196 nm and 246 nm, assigned to intra-ligand n-π* transitions of the non-bonding electrons on the oxygen of the C=O group. The slight shift and disappearance of these bands in the spectra of the metal complexes relative to the ligand is attributable to coordination (Obaleye et al., 1997). The absorption spectra of the Co(II) and Ni(II) complexes of NaD showed three bands in the visible region corresponding to the electronic transitions of d7 and d8 ions, respectively, in an octahedral environment (Najeeb, 2018; Osunniran et al., 2018). Zn(II) compounds show no additional bands in the visible region since, with a d10 configuration, there is no d-d transition. However, hypsochromic shifts of the first and second bands point to complex formation (Ali & Jabali, 2016; Osunniran et al., 2018).
Antioxidant Activity
The IC50, the concentration required for 50% scavenging activity, was calculated from the dose-inhibition linear regression equation of each complex. The antioxidant property of the metal complexes was assessed by comparing their IC50 values with that of ascorbic acid. The antioxidant evaluation of the synthesized metal complexes (Figure 6 (a-d)) revealed that the IC50 values of Co(NaD)Cl2(H2O)2, Ni(NaD)2Cl2(H2O)2 and Zn(NaD)(NO3)2(H2O)2 are lower than that of ascorbic acid; as such, they possess very good antioxidant properties.
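To make the IC50 calculation concrete, below is a minimal Python sketch of fitting a dose-inhibition linear regression and solving it for 50% inhibition; the concentrations, inhibition values and units are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hypothetical dose-response data: concentration (µg/ml) vs % inhibition.
conc = np.array([12.5, 25.0, 50.0, 100.0, 200.0])
inhibition = np.array([21.0, 33.5, 48.2, 62.8, 79.1])

# Dose-inhibition linear regression: inhibition = slope * conc + intercept.
slope, intercept = np.polyfit(conc, inhibition, 1)

# IC50 is the concentration at which the fitted line reaches 50% inhibition.
ic50 = (50.0 - intercept) / slope
print(f"IC50 ~ {ic50:.1f} µg/ml")
```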
Conclusion
Mechanochemical synthesis of sodium diclofenac-metal complexes represents a comparatively better method of synthesis: it causes no environmental pollution, is economically profitable, gives appreciably good yields and is rapid, whereas solvent-based synthesis leads to environmental pollution and takes a longer time. Coordination compounds of cobalt, nickel and zinc containing sodium diclofenac as ligand have been synthesized by both solvent-free and solvent-based methods and characterized by FT-IR and UV-visible techniques and melting point determination. The ligand behaves as a bidentate ligand coordinated to the metal ions through the carboxylate moiety, and octahedral geometry was proposed for all the metal complexes on the basis of the spectroscopic results. Antioxidant evaluation revealed that the synthesized metal complexes are promising antioxidant agents. | 2021-09-08T18:33:29.061Z | 2020-08-31T00:00:00.000 | {
"year": 2020,
"sha1": "0f74ca4164a37895b5aaacc455141f6c5d38ca88",
"oa_license": "CCBY",
"oa_url": "https://fountainjournals.com/index.php/FUJNAS/article/download/291/166",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "d086283b1d3131dc863b554ae5e7c30dfe9ec00c",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
198934124 | pes2o/s2orc | v3-fos-license | Psychosocial Factors Associated with Quality of Life in Young Men Who Have Sex with Men Living with HIV/AIDS in Zhejiang, China
Objectives: To explore the quality of life (QOL) status and related factors in young human immunodeficiency virus (HIV)-infected men who have sex with men (MSM) aged 16 to 24 years in Zhejiang province. Methods: A cross-sectional study was conducted in 22 counties of Zhejiang province, and 395 subjects took part. t-tests, one-way analysis of variance (ANOVA) and multivariate stepwise linear regression analysis were used to investigate the factors associated with QOL in young HIV-infected MSM. Results: The total QOL score was 86.86 ± 14.01. The multivariate stepwise linear regression analysis revealed that self-efficacy and discrimination were associated with all domains of the QOL assessment, that monthly income was associated with QOL in all domains except spirituality, and that consistent condom use during oral sex with men in the past three months was associated with QOL in all domains except the relationship domain. Young HIV-infected MSM with higher self-efficacy, a higher monthly income, greater social support, safer sexual behaviors, a higher level of education, and a higher cluster of differentiation 4 (CD4) count have a better QOL. Conclusions: These findings suggest that to improve the QOL of this population, greater emphasis should be placed on improving social support, self-efficacy, and antiviral therapy adherence, and on reducing discrimination, disease progression, and high-risk behaviors.
Introduction
Men who have sex with men (MSM) have become the group at highest risk of human immunodeficiency virus (HIV) infection and the group with the fastest-growing HIV epidemic in China [1,2]. The proportion of infections among MSM aged 15 to 24 years has increased progressively in recent years [3]. In 2017, more than 60 percent of newly HIV-infected MSM in Zhejiang province were aged 15 to 24 years. Multiple sexual partners, frequent unprotected anal intercourse (UAI), low condom use, and low HIV testing increase the risk of HIV infection in young MSM [4,5], and such high-risk behaviors put them at particular risk. The problem of HIV infection in young MSM therefore cannot be ignored in MSM HIV interventions.
Due to the application of highly active antiretroviral therapy (HAART), mortality from HIV/AIDS has decreased significantly and the life expectancy of HIV patients has increased, so that HIV/AIDS is now considered a chronic, controllable disease [6]. However, many problems remain among HIV/AIDS patients in the physical, psychological, cognitive, and social domains, especially among MSM with HIV/AIDS [7], who face not only the stress of having the disease but also the stigma and discrimination attached to their sexual orientation.
Study Design and Data Collection
A cross-sectional study was conducted in Zhejiang. The inclusion criteria were as follows: (1) positive for HIV antibody; (2) infected through homosexual behavior; and (3) aged 16 to 24 years. First, we identified the counties that had more than 20 eligible patients in November 2016, yielding 1583 eligible MSM patients; 400 participants were then randomly selected from these 1583 patients. Patients who declined to participate were excluded, and a replacement was chosen using the same inclusion criteria in the same county; thirteen replacements were ultimately recruited. Of the 400 subjects, 5 were excluded for failing to provide key information, leaving a total of 395 young MSM participants.
The sociodemographic and antiviral therapy information of the participants was collected in November 2016 from the case-reporting database of the HIV/AIDS comprehensive response information management system in Zhejiang, including age, level of education, marital status, employment, census register, infection status, route of transmission, and antiviral therapy status. We also designed a questionnaire to collect information on monthly income, self-identified sexual orientation, route of dating, high-risk sexual behavior, and psychosocial and behavioral conditions such as self-efficacy, social support, and quality of life.
Self-Efficacy
We used the HIV self-efficacy questionnaire (HIV-SE) to measure self-efficacy in HIV disease management [28]. The questionnaire was revised by Chinese researchers based on feedback and cultural appropriateness. It contained 34 items divided among 6 domains: emotional management (9 items), medication management (7 items), symptom management (5 items), communication with health care providers (4 items), receipt of support and assistance (5 items), and fatigue management (4 items). Participants responded to each item by indicating the confidence they felt in their ability to perform each specific goal or behavior on a 10-point Likert scale ranging from 1 = "not at all sure" to 10 = "completely sure." The instrument exhibited good reliability and construct validity, with a Cronbach's alpha coefficient of 0.94. Higher scores indicate better self-efficacy. Following Chen's study [29], we divided self-efficacy scores into three levels: scores ≥80% are high, 41-79% are medium, and ≤40% are low.
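For illustration, here is a minimal Python sketch of this three-level classification; it assumes the percentage is taken relative to the maximum possible total (34 items × 10 points = 340), which is our reading of the scoring rule rather than a detail stated in the text, and the respondent shown is hypothetical.

```python
def hiv_se_level(item_scores):
    """Classify HIV-SE self-efficacy: >=80% high, 41-79% medium, <=40% low."""
    assert len(item_scores) == 34, "HIV-SE has 34 items scored 1-10"
    pct = sum(item_scores) / (34 * 10) * 100  # percent of the maximum score (340)
    if pct >= 80:
        return "high"
    elif pct > 40:
        return "medium"
    return "low"

# Hypothetical respondent answering 7 on every item: 70% -> "medium".
print(hiv_se_level([7] * 34))
```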
Social Support
Social support was measured using the social support rating scale (SSRS) developed by the Chinese researcher Xiao [30]. The questionnaire included ten items across three dimensions: objective support (three items), subjective support (four items), and support utilization (three items). Eight items had four possible responses scored from 1 to 4; the remaining two items each comprised nine options, participants could select more than one option, and each selected option scored one point. The total score was the sum of the ten items, with higher scores indicating greater social support. In this study, social support was divided into low, medium, and high groups based on the 25th and 75th percentiles of the total score. The Cronbach's alpha coefficient of the SSRS was 0.89.
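A minimal Python/pandas sketch of this quartile-based grouping is shown below; the scores are hypothetical, and the boundary handling (values equal to a cut-point falling in the lower group) is our assumption, since the text does not specify it.

```python
import pandas as pd

# Hypothetical SSRS total scores for a small cohort.
totals = pd.Series([28, 35, 41, 30, 38, 44, 26, 33])

q25, q75 = totals.quantile([0.25, 0.75])
support_level = pd.cut(
    totals,
    bins=[-float("inf"), q25, q75, float("inf")],
    labels=["low", "medium", "high"],
)
print(support_level.value_counts())
```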
Quality of Life
QOL was measured using the Chinese WHOQOL-HIV-BREF scale [31], which was translated from the WHOQOL-HIV-BREF for use in a Chinese population. This scale contained 31 items covering six dimensions and two additional general items (overall HRQOL and general health). The six domains are physical, psychological, independence, relationship, environment, and spirituality. The respondents were asked to rate certain experiences they had encountered over the past two weeks on a 5-point Likert scale ranging from 1 = very poor/dissatisfied to 5 = very good/satisfied. Higher scores indicated a better QOL. This tool has been determined to be valid and reliable for analysis of the MSM population. The internal consistency reliability ranged from 0.7 to 0.9 [16].
Statistical Analysis
Data were entered into EpiData version 3.1 (EpiData Association, Odense, Denmark) and processed using SPSS Statistics 19.0 (IBM Corp., Armonk, NY, USA). QOL was a continuous variable and is presented as the mean ± standard deviation (SD). QOL results were also compared with those from other studies using t-tests. t-tests and ANOVA were performed to investigate the relationships among demographic data, sexual behaviors, HIV infection, antiviral therapy, psychosocial factors, and QOL. Multivariate stepwise linear regression was used to analyze the factors influencing QOL: the total QOL and domain scores were the dependent variables, and variables with p < 0.2 in univariate analyses were entered as candidate independent variables. The stepwise procedure used an entry criterion of p < 0.05 and a removal criterion of p > 0.10; 95% CIs and p-values were reported, and results with p < 0.05 were considered statistically significant.
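To clarify the selection procedure, the following is a minimal Python sketch of stepwise linear regression with the entry/removal criteria quoted above; it assumes a pandas DataFrame with hypothetical column names and approximates, rather than reproduces, SPSS's exact algorithm.

```python
import statsmodels.api as sm

def stepwise_ols(df, outcome, candidates, p_enter=0.05, p_remove=0.10):
    """Forward entry at p < p_enter, backward removal at p > p_remove."""
    selected, changed = [], True
    while changed:
        changed = False
        # Forward step: add the best remaining candidate if it enters.
        remaining = [c for c in candidates if c not in selected]
        if remaining:
            entry_p = {c: sm.OLS(df[outcome],
                                 sm.add_constant(df[selected + [c]])
                                 ).fit().pvalues[c]
                       for c in remaining}
            best = min(entry_p, key=entry_p.get)
            if entry_p[best] < p_enter:
                selected.append(best)
                changed = True
        # Backward step: drop the worst selected predictor if it fails removal.
        if selected:
            fit = sm.OLS(df[outcome], sm.add_constant(df[selected])).fit()
            pvals = fit.pvalues.drop("const")
            worst = pvals.idxmax()
            if pvals[worst] > p_remove:
                selected.remove(worst)
                changed = True
    return selected

# Hypothetical usage:
# stepwise_ols(data, "total_qol", ["income", "self_efficacy", "social_support"])
```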
Ethics Approval and Consent to Participate
This study was approved by the Ethics Committee of the Zhejiang Provincial Center for Disease Control and Prevention (Project Identification Code 2013-001), and informed consent was obtained before the survey was administered. We assured respondents that the survey was confidential and would not have any negative impact on them.
QOL Score
The total QOL score was 86.86 ± 14.01, and QOL scores were calculated for each of the six domains: physical, psychological, independence, relationship, environment, and spirituality.

As shown in Table 3, 27.8% (110/395) were infected with a sexually transmitted disease (STD) and 83.0% (328/395) had more than two male sexual partners in the past three months; only 20.5% (81/395) used condoms consistently during anal sex in the past three months, while 5.1% (20/395) used condoms consistently during oral sex in the past three months.
Univariate Analysis of Factors and QOL
The results of the ANOVA and t-tests are presented in Tables 2-5. Among the 395 respondents, MSM who had a higher level of education, a monthly income ≥3000 RMB, or who dated online had a relatively higher total QOL score. Furthermore, monthly income, sexual orientation, and dating route were associated with QOL in the physical domain; level of education, monthly income, employment status, and dating route were associated with QOL in the psychological, independence, and relationship domains; level of education, monthly income, sexual orientation, employment status, and dating route were associated with QOL in the environment domain; and monthly income was associated with QOL in the spirituality domain (Table 2).

Table 3 indicates that consistent condom use during anal sex with men in the past three months was associated with the total QOL score and with the scores for the independence and spirituality domains. Consistent condom use during oral sex with men in the past three months was associated with all QOL domains except the relationship domain. Finally, young MSM who had numerous male sexual partners within the past three months had lower scores in the relationship domain.
With respect to disease, MSM who had AIDS exhibited lower scores in all QOL domains. MSM who had been infected for one or more years received higher QOL scores in the physical and spirituality domains, whereas those with CD4 counts ≥350 had higher total QOL scores as well as higher scores in the physical, psychological, and independence domains. Finally, taking drugs on time was associated with a higher QOL score in the independence domain (Table 4).
The relationships between QOL and psychosocial factors indicate that an absence of discrimination, higher self-efficacy, and greater social support are positively correlated with QOL. Young MSM who had higher self-efficacy and greater social support and who were not subjected to discrimination reported higher QOL scores in all domains (Table 5).
In the multivariate analysis (Table 6), the total QOL score was the dependent variable. The independent variables included level of education, monthly income, sexual orientation, employment status, dating route, consistent condom use during anal and oral sex with men in the past three months, disease stage, infection year, CD4 count, missed drugs, sexual aid use, discrimination, self-efficacy, and social support. The results revealed that self-efficacy (β = 8.071, p < 0.001), discrimination (β = −8.338, p < 0.001), monthly income (β = 5.398, p < 0.001), employment status (β = −1.344, p = 0.022), condom use during oral sex with men in the past three months (β = 2.131, p = 0.001), social support (β = 1.692, p = 0.016), and disease progression (β = −2.491, p = 0.032) were associated with QOL. More specifically, higher self-efficacy, monthly income ≥3000 RMB, student status, condom use during oral sex with men in the past three months, and greater social support were associated with better QOL, while discrimination and AIDS were negatively related to QOL; for example, young MSM who experienced discrimination or who had progressed to AIDS demonstrated a lower QOL. With respect to psychosocial factors, self-efficacy and discrimination were associated with QOL in all domains; social support was associated with the relationship and spirituality domains, while smoking was related to the physical domain. Regarding sociodemographic factors, monthly income was associated with QOL in all domains except spirituality; employment status was associated with the psychological, independence, and relationship domains; sexual orientation was associated with the physical and environment domains; and level of education and dating route were associated with the environment domain. With respect to sexual factors, the number of male sexual partners within the past three months was associated with the relationship domain, and condom use during oral sex with men in the past three months was associated with QOL in all domains except the relationship domain. With respect to antiviral therapy factors, the CD4 count was associated with the physical and independence domains, disease progression with the psychological domain, and missed drugs with the independence domain.
Discussion
The aim of this study was to assess the QOL of young HIV-infected MSM and the relationship between QOL and various influencing factors, such as social support and self-efficacy, in Zhejiang, an eastern coastal province of China. As a measure of physical, psychological, and social adaptation, QOL reflects the health status of people living with HIV/AIDS (PLWHA) [33]. In this study, the QOL scores for the psychological, environment, and spirituality domains and the total score were significantly higher than in the general population in China, and significantly higher than in other populations of PLWHA [34,35]. Ma et al.'s study (2015) [36] of PLWHA in Zhejiang also found that younger PLWHA demonstrated higher QOL scores. These findings imply that the QOL of young HIV-infected MSM in Zhejiang is better than that of other populations of PLWHA.
In our study, the results indicated that self-efficacy and discrimination were significantly correlated with all QOL domains; specifically, greater self-efficacy resulted in higher QOL scores among young MSM. The study of Cramm et al. (2013) [37] demonstrated that general self-efficacy is an important factor influencing social QOL in youths with chronic conditions, and Chen et al. (2013) [9] concluded that health care providers can improve QOL among PLWHA by enhancing their self-efficacy and self-esteem. Our results indicate that self-efficacy is a significant predictive factor of QOL in the population of young HIV-infected MSM. Our findings further revealed that stigma and discrimination were negatively associated with QOL in all domains. Although anti-discrimination campaigns for MSM have been promoted in China, stigma and discrimination related to sexual orientation still exist, and MSM are thus inclined to hide their homosexual identities. Moreover, such shame and discrimination are major barriers that make MSM, including young HIV-infected MSM, hesitate to seek health services [38,39]. In addition, HIV-related stigma and fear evoke negative emotions among most HIV-infected MSM [40]. Both factors contribute to reducing the QOL of this population.
Our study showed that social support was positively associated with the total QOL scores and with the scores for the relationship and spirituality domains. Social support helps young MSM cope with stressful life events and adapt to life. Peter et al.'s study from southern India [41] revealed that health-related QOL was positively associated with social support from family and friends. Lan et al.'s study (2015) [33] concluded that subjective support and the use of social support were positively correlated with QOL. These findings indicate that there should be a focus on improving the social support provided to young HIV-infected MSM because increased social support can help individuals more effectively cope with their difficult situation [42]. Moreover, social support plays an important role in reducing risky sexual behaviors [43], whereas a lack of social support may result in increased risky sexual behaviors; accordingly, reducing risky sexual behaviors by improving social support can result in an improved QOL.
In our study, young MSM reported a high number of sexual partners; specifically, the prevalence of young HIV-infected MSM who had two or more male sexual partners within the past three months was 83%. Condom use during oral sex in the past three months was positively associated with the total QOL score and with all domains except the relationship domain; young MSM who had more male sexual partners had lower QOL scores in the relationship domain; and young MSM who smoked had impaired scores in the physical domain of the QOL. The young MSM population is known to engage in high-risk sexual behaviors [44-46], and there is a relationship between high-risk behavior and QOL: for example, high-risk behaviors are associated with negative emotions such as depression and anxiety, and higher depression and anxiety scores are associated with lower QOL scores in the physical, psychological, and relationship domains [41,47].
Among the sociodemographic factors, monthly income was found to be related to QOL, consistent with previous studies [14,48]; higher income was correlated with better QOL in all domains except spirituality. In the present study, level of education and dating route influenced the environment domain: those with a higher level of education and those who made friends at school exhibited higher QOL scores. Similarly, young MSM who accepted their homosexuality reported better QOL scores in the physical and environment domains. The higher QOL scores in these groups may reflect the fact that young MSM, especially those with higher education, are more open-minded and more accepting of their homosexuality, and both groups demonstrated more enlightened attitudes toward HIV.
Although no correlation was found between antiviral therapy and QOL in our study, a higher CD4 count was correlated with better scores in the physical and independence domains. Nachega et al.'s study (2011) [49] indicated that interrupted ART limits CD4 recovery and increases the risk of opportunistic complications and death. Missed doses were also an important marker of poor medication adherence and were associated with lower scores in the independence domain. Medication adherence is positively correlated with QOL, such that people with a high QOL show higher adherence to treatment, while low adherence to ART may have adverse effects, leaving people with poorer health, more physical symptoms, and a lower QOL [50].
We also found a negative association between being in the AIDS stage and the total QOL score, particularly the score for the psychological domain. This may be because AIDS patients experience more symptoms, complications, and psychosocial problems, which together result in poorer QOL.
This study has several limitations. First, as it was based on a cross-sectional survey, it is difficult to describe trends over time in self-efficacy, social support, and QOL, and causal relationships between QOL and related factors cannot be determined; a future investigation should therefore use a cohort design. Second, to facilitate the investigation, samples were randomly selected only from counties with more than 20 cases of young HIV-infected MSM; this is not a truly random sample, and the sampling method should be improved in future studies. Third, stigma and discrimination are acknowledged as important factors in the QOL of PLWHA, who are subjected to more negative stigma and greater discrimination than other groups, yet we used only one question to measure them. Future studies should use validated scales to measure stigma and discrimination and to better characterize their association with QOL.
Conclusions
Our study revealed that there is a higher QOL among young HIV-infected MSM than among other HIV/AIDS patients. Factors such as social support, self-efficacy, income level, and high CD4 count have a protective effect on the QOL in young HIV-infected MSM, whereas discrimination, AIDS stage, and high-risk behaviors have a negative effect on QOL. The results imply that we should focus on and enhance psychosocial factors, such as social support, self-efficacy, and anti-discrimination, and strengthen the strategies and techniques to diagnose HIV and discover and treat young HIV-infected individuals earlier. Such changes will help to control the disease in its HIV stage through treatment, reduce the complications associated with the disease, and improve the QOL of infected individuals. Finally, agencies and involved individuals must advocate for safe sex behaviors to reduce the possibilities of coinfection.
Author Contributions: T.J., L.C., Q.M., and X.P. designed and conducted the study, X.Z. and H.W. performed the survey, T.J. and M.L. analyzed the data, and T.J. and Q.M. wrote the paper. All authors read and approved the final manuscript. | 2019-07-28T13:03:21.950Z | 2019-07-25T00:00:00.000 | {
"year": 2019,
"sha1": "cfbef41569c2569f5dca27ce7c1fea603fed18f5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/16/15/2667/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1c1d35cfa2089cc5f2bb3c97a43432b4ae0520d6",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236352102 | pes2o/s2orc | v3-fos-license | Sensitization of the UPR by loss of PPP1R15A promotes fibrosis and senescence in IPF
The unfolded protein response (UPR) is a direct consequence of cellular endoplasmic reticulum (ER) stress and a key disease-driving mechanism in IPF. The resolution of the UPR is directed by PPP1R15A (GADD34) and leads to the restoration of normal ribosomal activity. While the role of PPP1R15A has been explored in lung epithelial cells, the role of this UPR-resolving factor has yet to be explored in lung mesenchymal cells. The objective of the current study was to determine the expression and role of PPP1R15A in IPF fibroblasts and in a bleomycin-induced lung fibrosis model. A survey of IPF lung tissue revealed that PPP1R15A expression was markedly reduced. Targeting PPP1R15A in primary fibroblasts modulated TGF-β-induced fibroblast to myofibroblast differentiation and exacerbated pulmonary fibrosis in bleomycin-challenged mice. Interestingly, the loss of PPP1R15A appeared to promote lung fibroblast senescence. Taken together, our findings demonstrate a major role for PPP1R15A in the regulation of lung mesenchymal cells, and regulation of PPP1R15A may represent a novel therapeutic strategy in IPF.
Idiopathic pulmonary fibrosis (IPF) is an unrelenting, chronic, and progressive lung disease. The rate of decline in lung function varies between patients, some having more rapidly progressive disease than others 1,2, yet all patients ultimately succumb to respiratory failure. At the cellular level, normal alveolar lung tissue is replaced by excess extracellular matrix as a consequence of aberrant mesenchymal cell proliferation and activation 2, mechanisms often linked to an impaired wound-healing response 3. There are currently two approved drugs for IPF, nintedanib and pirfenidone. Both offer some benefit, although patient responsiveness is neither predictable nor consistent; furthermore, both drugs show minimal disease modification and have challenging side-effect profiles, often limiting their long-term use 4,5. Therefore, new therapeutic approaches, targeting the underlying cycle of continued injury and impaired repair in IPF, are needed.
One process that may perpetuate these aberrant repair processes in IPF is the unfolded protein response (UPR) (see Table S1, Fig. S1 for overview) 6. The UPR comprises three endoplasmic reticulum (ER) stress signalling pathways that are required for the maintenance of cellular and ER homeostasis and regulated by the ER-resident chaperone HSPA5 (BiP/GRP78) 7. The UPR is activated in response to ER stress when the capacity for folding secreted and transmembrane proteins is overwhelmed by cellular demand. ER stress has been linked to IPF, specifically as a process promoting alveolar type 2 epithelial cell apoptosis 8, and also during fibroblast to myofibroblast differentiation 9. During periods of ER stress, the UPR restores proteostasis via several mechanisms, most notably the attenuation of protein synthesis by PERK phosphorylation of eIF2α (EIF2A). Upon resolution of the ER stress, the block on general protein translation is reversed by dephosphorylation of eIF2α, which is directed by PPP1R15A/GADD34 (Table S1, Fig. S1) and leads to the restoration of normal ribosomal activity 10. PPP1R15A is constitutively expressed at low levels, but its transcription is rapidly upregulated by ATF4 via DDIT3 (CHOP, Fig. S1) following cell stress and may enhance ER stress 10,11. Recently, bleomycin treatment leading to lung fibrosis in mice has been shown to increase ER stress, notably GRP78/BiP, ATF4 and DDIT3 levels 12. Conversely, inhibition of ER stress signalling by heterozygous Grp78 gene deletion 13 or treatment with TUDCA ameliorated bleomycin-induced lung fibrosis; in the latter case there was a correlated inhibition of PI3K/mTOR pathway activation 12,13. Interestingly, Grp78−/− mice completely lacking this factor are more susceptible to bleomycin-induced fibrosis, with the enhanced susceptibility linked to the loss of AT2 epithelial cells 14. Cellular senescence of several cell types, including epithelial cells and fibroblasts, in the lungs of IPF patients appears to contribute to protein-misfolding stress and to stimulation of the UPR 15. Indeed, we have shown that the loss or inhibition of DNA damage repair pathways is linked to cellular senescence in IPF and contributes to the progressive nature of the disease 16. Growing evidence suggests that targeting senescent cells via senolytic strategies might provide therapeutic benefit in IPF, possibly through attenuation of the UPR 17.
In this study we focused on the UPR in mouse and human primary lung fibroblasts. We observed that PPP1R15A was reduced in lung fibroblasts from IPF patients and that TGFβ-induced fibroblast activation further decreased PPP1R15A. PPP1R15A deficiency or its pharmacological inhibition exacerbated experimental lung fibrosis. Together, this study highlights that PPP1R15A deficiency promotes lung fibroblast senescence, constituting a pathological feed-forward mechanism that represents an attractive potential target for therapeutic intervention.
Patients. Ethical approval for the use of samples and data collection was obtained (Royal Papworth Hospital
Research Tissue Bank (Research Ethics Committee), 08/H0304/56). All methods were carried out in accordance with relevant guidelines and regulations. All patients provided informed consent using the Royal Papworth Hospital Research Tissue Bank consent form. Patients were diagnosed with either sporadic (n = 11) or familial (n = 5) IPF in accordance with the ATS/ERS/JRS/ALAT clinical practice guideline 18. Most patients were male (9/16), aged 67.2 ± 7.5 years (mean ± SD, n = 16), and 9 were never-smokers. At the time of lung biopsy, forced vital capacity (FVC) was 85.9 ± 11.7% predicted (mean ± SD, n = 13) and gas transfer (TLco) was 53.1 ± 17.7% predicted (mean ± SD, n = 13).

IPF immunohistochemistry. Formalin-fixed, paraffin-embedded (FFPE) human lung tissue sections were obtained from patients undergoing diagnostic lung biopsy at Royal Papworth Hospital, Cambridge, UK. 5 μm-thick sections of FFPE human lung tissue were subjected to haematoxylin and eosin (H&E) staining as previously described 19. For immunohistochemistry, FFPE sections were baked in an oven at 65 °C for 1 h. PT Link tanks (Dako, Glostrup, Denmark) were used to perform deparaffinisation and heat-induced epitope retrieval (EnVision FLEX Target Retrieval Solution High pH; Dako). All slides were incubated for 20 min at 97 °C and left in buffer (EnVision FLEX wash buffer; Dako) at room temperature for a minimum of 5 min to cool. Staining was performed using an automated immunostainer (AutostainerLink48; Dako) as follows: slides were incubated for 5 min in an endogenous block (EnVision FLEX peroxidase-blocking reagent; Dako) and then incubated with rabbit polyclonal anti-GADD34 1:100 (10449-1-AP; Proteintech, Chicago, IL, USA) or rabbit monoclonal anti-BiP 1:200 (C50B12; Cell Signaling Technology, Boston, MA, USA) for 30 min. All sections were then incubated for 30 min in labelled polymer (EnVision FLEX/HRP; Dako). Each individual stage was followed by buffer rinses (EnVision FLEX wash buffer; Dako). Staining was visualised using the chromogen 3,3′-diaminobenzidine for 10 min, counterstained with haematoxylin (EnVision FLEX; Dako) for 5 min, and manually cover-slipped (Surgipath) with DePeX mounting medium (VWR International, Poole, UK). Semi-quantitative analyses were performed using light microscopy (200× magnification): six high-power fields were randomly selected from each lung biopsy section and scored for epithelial staining of GADD34/PPP1R15A and BiP by two independent, blinded investigators (HP and DR). Similarly, the H&E-stained sections were assessed for inflammation and fibrosis. Data were analysed by linear regression analysis using Prism software.
Bleomycin-induced lung fibrosis models. All in vivo mouse experiments were approved by the institution's Animal Welfare and Ethical Review Body (AWERB) committee. All in vivo work was conducted under the authority of a UK Home Office Project Licence and in accordance with the Animals (Scientific Procedures) Act 1986. For the bleomycin time-course analyses, female C57Bl/6 mice received an oropharyngeal bleomycin challenge (Blenoxane, Sigma, St. Louis, MO) or sterile saline control as previously described 20. Mice were sacrificed at specific timepoints following bleomycin or saline delivery, and lungs were collected for analysis. Ppp1r15a-null mice (Ppp1r15a−/−) were generated using B6.129P2-Ppp1r15a/Mmnc sperm (http://www.mmrrc.org) on a C57Bl/6jax background, and the colony was maintained in the Biological Services Group facility. For the knockout bleomycin studies, age- and sex-matched knockout mice and wildtype littermate controls received an oropharyngeal bleomycin or saline challenge on Day 0 and were sacrificed on Day 5 or Day 21. For the Sephin1 bleomycin model, mice were challenged as above and treated daily by oral gavage on Days 14-20 with either vehicle control (saline, 10 ml/kg) or Sephin1 (5 mg/kg). Reporting in this manuscript follows the recommendations of the ARRIVE guidelines.
Gene expression and protein analyses in mouse fibrosis model. Expression levels of genes of interest measured by quantitative RT-PCR were normalized to housekeeping gene mRNA. For protein analysis, collagen levels were determined in lung homogenates using an established hydroxyproline assay as previously described 21. TNFα protein levels were measured by MSD analysis (Meso Scale Discovery) according to the manufacturer's protocols.
Mouse histology. Formalin-fixed, paraffin-embedded lung sections were stained with hematoxylin and eosin to assess gross morphology or with Masson's trichrome stain to visualize collagen deposition.
In vitro fibroblast assays (IPF fibroblasts). Immunocytochemistry for αSMA. Cells were permeabilized with 0.2% Tween in PBS for 15 min (100 µl/well). Cells were then blocked with 5% (w/v) BSA in PBS for 1 h. Primary antibody (Dako, cat. no. M0851; 1:250) was added and incubated for 1 h, followed by washing 3 times with PBS. Next, secondary antibody (Invitrogen, cat. no. A11001; 1:500) plus Hoechst (Invitrogen, cat. no. H3570; 1:10,000) were added and incubated for 1 h in the dark, followed by washing 3 times with PBS. Plates were sealed and image analysis was performed using ImageXpress.
In vitro senescence assay. IPF fibroblasts were seeded at 5000 cells per well in a 96-well plate. After 24 h incubation at 37 °C and 5% (v/v) CO2, the medium was replaced with fresh medium and the cells were treated with the compounds. For the NGS experiment, the mTOR inhibitor AZD8055 was added at concentrations of 1 µM, 0.01 µM, and 0 µM. For the cell imaging endpoint, AZD8055 or nintedanib was added at ten different concentrations (0-10 µM). All samples were normalised to the vehicle control (0.1% (v/v) DMSO). After 1 h incubation with compound, a final concentration of 3 µM etoposide or the vehicle control (0.006% (v/v) DMSO) was added to the cells, which were then incubated for 0, 6, and 24 h for the NGS experiment and 72 h for the imaging experiment. The cells for the NGS experiment were harvested by removing the medium and adding 63 µl of freshly prepared lysis buffer (60 µl lysis buffer and 3 µl proteinase K) per well. The plates were sealed, incubated at room temperature for 30 min, and stored at −80 °C. The plates for the imaging endpoint were fixed by addition of 50 µl of 4% (v/v) formaldehyde (VWR, cat. no. 9713.1000) per well. After 15 min incubation, the cells were washed three times with PBS, then sealed and stored at 4 °C.
Senescence assay p21 imaging endpoint. Cells were permeabilised by the addition of 100 µl of 0.1% (v/v) Triton X-100 in PBS per well for 20 min. Thereafter, the cells were blocked by incubation for 1 h with 50 µl blocking buffer (5% (w/v) BSA in PBS) followed by incubation overnight at 4 °C with 50 µl of anti-p21 antibody (Abcam, cat. no. Ab109520) diluted 1:1000 in blocking buffer. The cells were then washed three times for 5 min and incubated with 50 µl of an Alexa Fluor 594-conjugated donkey anti-rabbit secondary antibody (Invitrogen, cat. no. A21207) diluted 1:500 in blocking buffer for 1 h at room temperature in the dark, after which the cells were washed for 5 min with PBS and stained for 15 min with 50 µl Hoechst (Invitrogen, cat. no. H3570) and HCS CellMask™ Deep Red (Life technologies, cat. no. H32721) diluted 1:10,000 and 1:5000 in PBS, respectively. Finally, the cells were washed twice with PBS and imaged using a Yokogawa CV7000 confocal microscope and the images were analysed using the Columbus Image Analysis software (PerkinElmer).
RNA sequencing.
Total RNA was isolated from samples containing a maximum of 50,000 cells/sample using the RNAdvance Cell v2 protocol (Beckman Coulter) in a 96-well plate format on a Beckman Biomek i7 liquid handler (Beckman). The quantity and quality of RNA samples were assessed using the high-sensitivity RNA fragment analysis kit on a 96-well Fragment Analyzer (Agilent Technologies). All samples had an RNA integrity number > 9.9 and were deemed of sufficient quantity and quality for RNA-seq analysis. Samples were diluted to a final quantity of 50 ng/well of total RNA using a Tecan Fluent liquid-handling automation system (Tecan). The KAPA mRNA HyperPrep kit (Roche) was used for reverse transcription, generation of double-stranded cDNA, and subsequent library preparation and indexing to facilitate multiplexing (IDT), all performed through automation on the Tecan Fluent (the manual protocol was adapted in-house for automation). All libraries were quantified with the Fragment Analyzer using the standard-sensitivity NGS kit (Agilent Technologies), pooled in equimolar concentrations, and quantified with a Qubit Fluorometer (ThermoFisher Scientific) with the DNA HS kit (ThermoFisher Scientific); the library pool was further diluted to 1.9 nM and sequenced at > 20 M paired-end reads/sample using the S2 reagent kit to 100 cycles on an Illumina NovaSeq 6000.
RNA sequencing data processing. Fastq files were processed via the bcbio-nextgen v1.2.0-b pipeline with HISAT2 (v2.2.0) as the aligner and Salmon (v0.14.2) as the pseudo-aligner. The reference genome was hg38, Ensembl annotation release 94. For downstream analysis, the TPM values generated by Salmon were used as the expression values; for heatmaps and expression plots, log2(TPM) values were used unless otherwise stated. Differential expression analysis was performed using DESeq2 (v1.26.0) to generate multiple-testing-adjusted P-values (represented on plots). Heatmaps were generated using Morpheus (https://software.broadinstitute.org/morpheus) with one minus Pearson correlation as the clustering metric and complete linkage. Pathway and expression regulation annotations for UPR genes were extracted from Ingenuity Pathway Analysis (IPA; Qiagen) relationships.
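As an illustration of the clustering settings described above (one minus Pearson correlation with complete linkage), here is a minimal Python sketch using SciPy; the expression matrix is random placeholder data rather than values from this study, and Morpheus's exact implementation may differ in detail.

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist

# Placeholder genes-by-samples matrix standing in for log2(TPM) values.
rng = np.random.default_rng(0)
expr = rng.normal(size=(50, 12))

# SciPy's "correlation" metric is exactly one minus Pearson correlation.
dist = pdist(expr, metric="correlation")

# Complete-linkage hierarchical clustering, as used for the heatmaps.
Z = linkage(dist, method="complete")
row_order = dendrogram(Z, no_plot=True)["leaves"]  # gene order for plotting
```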
Re-processing of GSE32537 and GSE47460. Both datasets were reprocessed from raw data obtained from GEO. For GSE32537 22, the CEL files were processed using the RMA method with the BrainArray (http://brainarray.mbni.med.umich.edu/Brainarray/Database/CustomCDF/genomic_curated_CDF.asp) Ensembl gene ID version 24 CDF to collapse the probes to a single gene, generating normalised expression data on the log2 scale with probes collapsed to the gene level. Subsequent QC indicated that one sample did not correspond to the assigned gender in the metadata, and it was removed from subsequent tests. Differentially expressed genes were generated using limma to compare the IPF samples to the controls, controlling for both age and gender because the controls were generally younger than the IPF patients. For GSE47460 23, only the samples on the SurePrint G3 Human GE 8×60K Microarray (GPL14450) were used, as these constituted the majority of the samples and covered more genes. The text files obtained from GEO were processed using the limma package, including background correction with the 'normexp' method offsetting for internal background measurements, quantile normalization between arrays with log2 transformation, and probe filtering based on the PA matrix. Again, one sample was excluded due to a gender mismatch between gene expression and metadata, and additional samples were excluded for failing QC metrics during processing. Limma was again used to generate differentially expressed genes comparing the IPF samples to controls, controlling for gender and age. To collapse the probes, the probe with the lowest false discovery rate (FDR) for each gene was selected to generate the final set of differentially expressed genes. The log2 fold changes and FDR for each of the two datasets were plotted in R to generate the bubble plot.
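The probe-collapsing rule described above (keep, for each gene, the probe with the lowest FDR) is easy to express in pandas; the sketch below uses hypothetical column names and toy values rather than the actual limma output.

```python
import pandas as pd

# Hypothetical limma output: one row per probe.
limma_results = pd.DataFrame({
    "probe": ["p1", "p2", "p3", "p4"],
    "gene":  ["A",  "A",  "B",  "B"],
    "logFC": [1.2,  0.4, -0.8, -1.1],
    "adj_p": [0.001, 0.20, 0.30, 0.02],  # FDR-adjusted P-values
})

# Collapse to gene level: keep the probe with the lowest FDR per gene.
collapsed = (limma_results
             .sort_values("adj_p")
             .drop_duplicates(subset="gene", keep="first"))
print(collapsed[["gene", "probe", "logFC", "adj_p"]])
```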
Processing of single-cell RNA sequencing data to generate differentially expressed genes. GSE12296. Data were downloaded from GEO, with additional metadata (cell type annotation) provided by the authors (A. Misharin) on request. Data were analysed using Seurat 24, and only the IPF samples (n = 4) and control samples (n = 10) were analysed (54,544 cells; 41,170 control and 13,374 IPF). Differential analysis was performed for each of the annotated cell types provided by the authors, comparing IPF to control using the negative binomial method from Seurat (FindMarkers).
GSE135983. Data were downloaded from GEO as a filtered and processed Seurat object containing data that had been through QC, normalization, and annotation as described by the authors. The object was subset to contain only the samples from 10 control and 12 IPF subjects (89,326 cells; 31,644 control, 57,682 IPF). Differential analysis was performed for each of the annotated cell types provided by the authors, comparing IPF to control using the negative binomial method from Seurat (FindMarkers). HAS1-high fibroblasts had no differentially expressed genes and were therefore removed from the list containing DGE information.
GSE136831. Data were downloaded from GEO as 78 single datasets (28 normal lung, 32 IPF lung, 18 COPD lung). The metadata describing the cell cluster identities defined by the authors was included, along with additional required metadata. The object was subset to contain only the samples from 28 control and 32 IPF subjects (243,472 cells). QC and normalization were performed in Seurat. Differential analysis was performed for each of the 37 annotated cell types provided by the authors, comparing IPF to control using the negative binomial method from Seurat (FindMarkers).
GSE128033. Data were downloaded from GEO as 18 single datasets (8 normal lung, 8 IPF lung, and 2 bronchoalveolar lavage cell extracts, one fresh and one frozen). The authors were contacted for the required metadata describing the cell cluster identities they defined (based on Fig. 2b).

Genotyping analysis by Sanger sequencing. PCR of the genomic region flanking the CRISPR target site (~500 bp) and Sanger sequencing were carried out to assess editing efficiencies at the cell pool level. Three days after electroporation, cells were collected and resuspended in lysis buffer (DirectPCR lysis buffer (Viagen Biotech); 0.4 mg/ml proteinase K (Qiagen)) at a ratio of 10,000 cells per 10 μl lysis buffer. Cells were incubated for 2 h at 55 °C and 30 min at 85 °C to generate cell lysates. 2 μl of cell lysate was used in a 20 μl PCR reaction with 2× Flash Phusion PCR Master Mix (Thermo Fisher) and 0.5 μM each of custom forward and reverse primers (Sigma). The PCR products were amplified using a 63 °C annealing temperature. Amplified products were sequenced by Eurofins, and sequencing reads were analysed using both TIDE analysis (internal AZ v3, adapted from version 1.1.2 created by the Bas van Steensel lab) and the online ICE analysis tool (Synthego; https://ice.synthego.com/).
Statistics.
For in vivo studies, data are expressed as means ± SEM and were assessed for significance by Student's t-test or ANOVA as appropriate. Patient demographics were compared using Student's t-test or Mann-Whitney analysis, and categorical variables were compared using Fisher's exact test. P-values for multiple comparisons were determined using the Bonferroni correction. Values of *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.005, and ****P ≤ 0.001 were considered significant.
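For reference, a minimal Python sketch of the Bonferroni correction mentioned above is shown below; the p-values are hypothetical, and the simple multiply-and-cap form is one standard way of implementing the correction.

```python
def bonferroni(p_values, alpha=0.05):
    """Multiply each raw p-value by the number of comparisons (capped at 1.0)
    and flag significance at the chosen alpha."""
    m = len(p_values)
    adjusted = [min(p * m, 1.0) for p in p_values]
    return [(p_adj, p_adj <= alpha) for p_adj in adjusted]

# Hypothetical raw p-values from three pairwise comparisons:
print(bonferroni([0.012, 0.030, 0.20]))
# approximately [(0.036, True), (0.09, False), (0.6, False)]
```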
Results
Enhanced UPR in IPF lung samples and primary lung fibroblasts. To confirm the presence of the UPR in IPF, we profiled the expression of genes from the PERK, IRE1 and ATF6 branches of the UPR pathway (Fig. S1, Table S1) in two large-cohort IPF genomic studies (Fig. 1a). This analysis highlighted that most genes in this pathway were differentially expressed in IPF compared with healthy controls or individuals with chronic obstructive pulmonary disease (COPD). For example, many genes were up-regulated, including the unfolded protein sensor BiP (HSPA5) and PERK (EIF2AK3, the mediator of the translation arm of the UPR), along with many components of the endoplasmic-reticulum-associated protein degradation (ERAD) complex that lies downstream of both the IRE1/ERN1 and ATF6 UPR pathways. Of the genes down-regulated in IPF, the majority were those transcriptionally regulated by ATF4, including PPP1R15A, DDIT3, and ATF4 itself. PPP1R15A was the UPR gene with the largest decrease in IPF (over twofold in GSE47460, as well as being decreased in GSE32537) (Fig. 1b,d). DDIT3, an upstream regulator of PPP1R15A and downstream of ATF4, was also reduced in IPF in both GSE47460 (Fig. 1c) and GSE32537 (Fig. 1e).
The recent publication of multiple lung single-cell IPF studies [25-28] has allowed us to determine further which cell types may contribute to the IPF gene expression changes described above. We reprocessed the count data from all four IPF scRNA-seq studies and generated differential expression data for each of the cell types defined by the authors of each study (Fig. 1f and Table S2). PPP1R15A was found to be decreased in IPF fibroblasts as well as in many other cell types (Fig. 1f). DDIT3 was not significantly altered in fibroblasts but was decreased in IPF epithelial cell types, while ATF4 was also decreased in IPF epithelial cells and other immune cells but increased in IPF fibroblasts (Table S2).
Assessment of protein-level expression of PPP1R15A and BiP by immunohistochemistry of IPF lung sections identified immunoreactive staining for both PPP1R15A (Fig. 2a-c) and BiP (Fig. 2e,f) within areas of fibrosis, localised to reactive type II pneumocytes and columnar epithelium. There was minimal staining of either protein in the endothelium and alveolar macrophages. No PPP1R15A was evident in the mesenchymal cells (Fig. 2b). There were significant associations between epithelial PPP1R15A expression and fibrosis (Fig. 2d, r 2 = 0.289, P < 0.001) and between epithelial BiP and fibrosis (Fig. 2g, r 2 = 0.408, P < 0.001). There was no correlation of PPP1R15A or BiP with areas of inflammation (data not shown).
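The reported r 2 values correspond to squared correlation coefficients from a simple linear regression of the image-analysis scores; a brief sketch with hypothetical data:

```python
# Sketch (hypothetical data) of the reported r^2 associations: regress the
# image-analysis fibrosis score on the epithelial staining score.
import numpy as np
from scipy.stats import linregress

staining = np.array([0.2, 0.5, 0.7, 1.1, 1.4, 1.8])  # hypothetical PPP1R15A scores
fibrosis = np.array([0.3, 0.6, 0.9, 1.0, 1.5, 1.9])  # hypothetical fibrosis scores

fit = linregress(staining, fibrosis)
print(fit.rvalue ** 2, fit.pvalue)  # analogous to the r^2 and P values in Fig. 2d,g
```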
Because of the major pathologic role of fibroblasts in IPF, we exposed primary IPF fibroblasts to TGFβ1 (n = 5 donors) and determined transcriptomic changes in UPR gene expression at both 6 h and 24 h after addition of the cytokine (Fig. 3a). We observed that the UPR-related transcripts could be organized into 3 main groups: (1) those that show a transient, early (6 h) increase in response to TGFβ1, comprising genes mainly in the ERAD complex (Fig. 3a, group 2); (2) transcripts induced by TGFβ1 at 24 h (and in some cases also at 6 h), which included DDIT3, ATF4 and other genes whose expression is regulated by either of these (Fig. 3a, group 1); and (3) genes whose expression was decreased by TGFβ1 at 24 h (and in some cases also at 6 h), which includes PPP1R15A (Fig. 3b), as well as BCL2 and NRF2 (Fig. 3a, group 3). TGFβ1 inhibited PPP1R15A expression, with a trend observed at 6 h and significant inhibition observed at 24 h (Fig. 3b). PPP1R15A protein was measured by western blot in human lung fibroblasts stimulated with TGFβ1 in order to confirm that the changes in gene expression can also be detected at the protein level. PPP1R15A protein is undetectable by western blot in unstimulated fibroblast cell lysates, and there is a clear increase upon stimulation with the ER stress inducer tunicamycin (Fig. S2). In contrast, TGFβ1 stimulation does not induce an increase in PPP1R15A, which remains undetectable in these cells (Fig. S2).

[Figure 1 caption fragment: the x-axis is log2 fold change (LFC); the size of the bubble indicates the −log10 FDR-adjusted P-value (padj). The genes shown are grouped by pathway (shown to the right, also in Table S1). While the majority of UPR genes are up-regulated in IPF lung compared to control, most of the genes in the ATF4 pathway, including PPP1R15A, are down-regulated in IPF. Genes shown but with no bubble displayed have data present but are not significant at FDR-adjusted P-values.]

Additional protein analyses indicated that CHOP was undetectable in unstimulated or TGFβ-stimulated cells but upregulated by tunicamycin (Fig. S2), and [...] increased by tunicamycin but not modulated by TGFβ (Fig. S2). Furthermore, p-eIF2α was only slightly increased by TGFβ (Fig. S2).
Both nintedanib and AZD8055 inhibit the effect of TGFβ1 on IPF fibroblasts. TGFβ1 significantly promoted fibroblast-to-myofibroblast transition in IPF cells at 24 h, as shown by increased α-smooth muscle actin (αSMA) (Fig. 3c), and pre-treatment with nintedanib attenuated this effect in a dose- and donor-dependent manner. Nintedanib has previously been shown to inhibit TGFβ responses 29 . Additionally, as previously reported 30 , preincubation with the mTOR inhibitor AZD8055 also attenuated the TGFβ1 response (Fig. 3d). Again, however, this modulation was donor-dependent.

[Figure 3 caption fragment: see Table S1 for IPF fibroblasts (5 donors, 3-5 replicates) treated with TGFβ1 (0.123 ng/ml) for 6 h or 24 h. The genes are further colour-coded into pathway (P) and expression regulators (E) according to Table S1. The genes fall into three groups based on their expression: those genes whose expression is repressed by TGFβ1 at either or both times, including PPP1R15A (bottom panel); those genes whose expression is only transiently up-regulated by TGFβ at 6 h (middle panel); and those genes, including DDIT3 and ATF4, whose expression is up-regulated by TGFβ at 24 h (top panel).]

Transcriptomic analysis of these same cells indicated that both nintedanib and AZD8055 were able to suppress many of the genes downstream of PERK/eIF2α and IRE1/ERN1 induced by TGFβ1 in primary lung fibroblasts (Fig. S3). Nintedanib (at 10 µM) enhanced the expression of PPP1R15A and reversed TGFβ1-mediated suppression of this factor (Fig. 3e). In contrast, AZD8055 (at 0.1 µM) restored the expression of PPP1R15A to baseline levels and further induced gene expression at 10 µM (Fig. 3e). We next assessed the role of PPP1R15A in TGFβ-mediated responses in fibroblasts using CRISPR-based gene knockdown in the same assay. In these experiments, we observed that CRISPR-mediated PPP1R15A knockdown had no impact on TGFβ-induced fibroblast activation, nor did it modulate mTOR-mediated inhibition of TGFβ-induced ACTA2 induction (Fig. 3f, Fig. S4). Together, these data suggest that both nintedanib and AZD8055 enhanced PPP1R15A expression, but that the effects of these drugs on TGFβ-induced fibroblast activation were likely due to PPP1R15A-independent mechanisms.
Bleomycin-induced lung fibrosis correlates with senescence and ER stress.
Intrapulmonary bleomycin delivery to mice results in a classic acute inflammatory response followed by increased fibrosis in the lung. In this experiment, the fibrotic response, as measured by whole-lung hydroxyproline levels (Fig. 4a) and matrix-associated gene expression (Fig. 4b), was present at days 14, 21 and 28 after bleomycin administration. In addition, we observed an increase in cellular senescence markers, including P16 and P21 gene expression, at these times after bleomycin challenge (Fig. 4c). Interestingly, we also observed alterations in Ppp1r15a, Bip and Ddit3 gene expression over time following bleomycin administration (Fig. 4d) in the lungs of these mice, with Ppp1r15a and Ddit3 showing differences in expression when comparing day 14 and day 21 after bleomycin.
Genetic deletion of Ppp1r15a exacerbates bleomycin-induced lung fibrosis and inflammation.
To gain mechanistic insights into the exacerbated fibrotic response observed in the absence of Ppp1r15a, we next profiled the acute response to bleomycin in Ppp1r15a −/− mice and wildtype (wt) littermate controls at day 5 after intrapulmonary bleomycin. In assessing the expression of epithelial-related genes, including the surfactant proteins and e-cadherin, after bleomycin, we observed significantly increased surfactant protein D (Sftpd) but comparable modulation of the other epithelial-related genes assessed in both wt and Ppp1r15a −/− mice (Fig. 5a). Ppp1r15a −/− mice also had increased Tgfb1, Tgfbr1 and the senescence-associated gene Serpine2 in response to bleomycin in comparison to wt controls (Fig. 5b). We also observed an increase in bleomycin-induced fibrosis-related genes, including Col1a2, Col3a1 and the collagen chaperone Hsp47 (Fig. 5c). There was no detectable P16 or P21 in the lung at day 5 after bleomycin (data not shown), hence we focused on Serpine2 expression in our assessment of changes in cellular senescence. Interestingly, bleomycin-challenged Ppp1r15a −/− mice exhibited altered expression of a number of ER stress/UPR-related genes at day 5 compared to similarly challenged wt mice (Fig. 5d). We confirmed that the augmented response to bleomycin was not due to increased bleomycin delivery to the Ppp1r15a −/− mice, as both groups of mice had a comparable induction of Mmp12 in the lungs at both days 5 and 21 after bleomycin (Fig. S5a). There was, however, an increase in lung TNFα (Fig. S6a) and a trend towards an increase in lung IL1B (Figs. S5, S6) in bleomycin-challenged Ppp1r15a −/− mice.

To determine whether the exacerbated responses observed in the Ppp1r15a −/− mice in response to bleomycin impacted fibrosis, we next challenged Ppp1r15a −/− mice with low-dose bleomycin, since we hypothesized that the Ppp1r15a −/− mice would be more sensitive to bleomycin challenge. We initially compared their responses to those of bleomycin-challenged wt mice at day 21. There was no detectable Ppp1r15a in the lungs of Ppp1r15a −/− mice at any time point assessed (data not shown). As hypothesized, Ppp1r15a −/− mice had significantly enhanced lung hydroxyproline content (Fig. 5e) and Col3a1 gene expression (Fig. 5f). Furthermore, image analysis of Masson's Trichrome-stained lung sections revealed an exacerbated response to bleomycin (Fig. 5g,h). There was also an increase in whole-lung tissue TNFα protein levels, as measured by MSD analysis (Fig. S5b).
To further confirm the role of PPP1R15A in the bleomycin model, we targeted PPP1R15A with the small-molecule inhibitor Sephin1 31 . C57Bl/6 mice (i.e. wt) were administered a low dose of bleomycin and therapeutically dosed with Sephin1 daily from days 14 to 20 after bleomycin, thus avoiding the acute inflammatory features of the model. Like bleomycin-challenged Ppp1r15a −/− mice, wt mice treated with Sephin1 had significantly increased lung fibrosis at day 21 compared with the vehicle-treated control group, as measured by lung tissue hydroxyproline (Fig. 6a) and whole-lung gene expression of Col1a2 (Fig. 6b). Interestingly, it was only the Sephin1-treated, bleomycin-challenged mice that exhibited elevated Col1a1 (Fig. 6c) and Hsp47 (Fig. 6d) compared to Sephin1-treated control mice, with bleomycin not eliciting a significant increase in Col1a1 or Hsp47 in vehicle-treated animals.
Loss of PPP1R15A accelerates fibroblast senescence. PPP1R15A was reduced in proliferating normal human lung fibroblasts maintained in 10% FBS for 24 or 72 h (Fig. 7a). Treatment of IPF fibroblasts with etoposide increased many UPR genes, including both PPP1R15A and XBP1 (top section of heatmap, Fig. 7b). A smaller group of mostly ERAD genes was repressed by etoposide (bottom part of Fig. 7b). The remaining genes showed variable responses. We next assessed the effects of nintedanib and AZD8055 on IPF fibroblast senescence. Nintedanib had no effect on etoposide-induced cellular senescence (Fig. S7). However, inhibition of mTOR has previously been reported to impact cellular senescence 32 , and in this study AZD8055 attenuated the effect of etoposide on p21 expression (Fig. 7c) and on the other senescence endpoints of cell and nuclear area (Fig. S8). Interestingly, unlike in the TGFβ1 assay, all IPF donor cell lines responded similarly to AZD8055. Etoposide upregulated PPP1R15A; moreover, AZD8055 induced PPP1R15A in both unstimulated and etoposide-stimulated cells (Fig. 7d). CRISPR-mediated PPP1R15A knockdown induced cellular senescence directly, as observed by increased p21 + fibroblast numbers, and further promoted etoposide-induced cellular senescence (Fig. 7e). However, the mTOR inhibition-mediated reduction of senescence was independent of PPP1R15A (Fig. 7e).
Discussion
Herein, we demonstrate that IPF is associated with altered expression of the UPR pathway in lung mesenchymal cells. We show that the UPR/ER stress pathway is altered in two large transcriptomics studies employing IPF lung samples 22,23 , and in both studies the PERK/EIF2AK3 pathway and the IRE1 pathway were activated in IPF compared with normal lung (Fig. 1). The key roles of these pathways in controlling the UPR are to maintain tissue homeostasis via attenuating protein translation and facilitating cytoprotection 33 . Recently, the ATF4 pathway has also been implicated in mitochondrial dysfunction, which then drives ER stress and subsequent lung pathology in preclinical lung damage models 34 . The loss of PPP1R15A promotes continual phosphorylation of eIF2α, turning off protein synthesis, which in IPF could prevent normal alveolar repair. Through a detailed gene-based analysis of whole-lung and single-cell RNAseq, we demonstrate that a key component of the UPR pathway, PPP1R15A, is reduced in IPF fibroblasts compared with normal lung fibroblasts. Furthermore, using both genetic deletion and pharmacological inhibition of PPP1R15A, we demonstrate that reduced PPP1R15A activity exacerbates bleomycin-induced lung fibrosis. Mechanistically, we linked ER stress to senescence in lung fibroblasts. Specifically, PPP1R15A is involved in cellular senescence, since its deletion in fibroblasts resulted in increased p21 and exacerbated etoposide-induced senescence of these mesenchymal cells. Together, these data demonstrate that reduced PPP1R15A enhances lung fibroblast-to-myofibroblast differentiation and accelerates the senescence of these cells in lung fibrosis. Modulation of PPP1R15A might provide therapeutic benefit in IPF via direct effects on lung fibroblasts. It has been hypothesized that IPF is a consequence of chronic and unabated epithelial injury, which then maintains the aberrant wound-healing response of excess fibroblast-mediated matrix deposition. Consequently, most studies linking ER stress and IPF have predominantly focused on the epithelium 8 . One such link is via surfactant proteins, which are processed via the endoplasmic reticulum 35,36 . Familial IPF has been linked with mutations in the surfactant protein genes SFTPC 37 and SFTPA2 38 . These mutations are thought to result in ER stress and UPR pathway activation in AT2 cells [38][39][40][41] . Other potential inducers of ER stress that are relevant to IPF include viral infections, including herpesviruses such as Epstein-Barr virus (EBV) and cytomegalovirus (CMV), which are commonly detected in IPF lung but not in healthy lung tissue 8,41 , and have been linked to fibrosis development 42 .
Additional key drivers of ER stress in IPF include oxidative stress and environmental pollution 2,43 , and oxidative stress-mediated ER stress has been linked to fibroblast differentiation induced by TGFβ 9 .
Our studies point to a distinct role for PPP1R15A in lung fibroblasts that is apparent in both the proliferative and the senescent state of these cells. In the proliferative state, lung fibroblasts exhibit a relative loss of PPP1R15A, as revealed by immunohistochemical analysis of advanced fibrotic areas such as fibroblastic foci. Moreover, the relative loss was greater when whole-lung samples were analysed in the large-cohort transcriptomic studies. The absence of PPP1R15A in lung fibroblasts enhanced the response of these cells to TGFβ, and enhanced expression of PPP1R15A was associated with a diminished response to this pro-fibrotic cytokine. Another downstream response following oxidative stress is cellular senescence. Elevated epithelial and fibroblast senescence has been observed in the lungs of IPF patients 15,16 , and this process has been more broadly linked to premature ageing 44 and to promoting lung fibrosis 45,46 . Importantly, targeting cellular senescence has resulted in therapeutic benefit in preclinical models of lung fibrosis 47,48 . Etoposide is a commonly used agent for inducing fibroblast senescence by eliciting DNA damage 49 . Previously, we have reported an impairment in the DNA damage response in IPF fibroblasts 16 . In the present study, we used etoposide as an in vitro model to assess the role of PPP1R15A. A recent paper has highlighted the significance of the mTOR pathway in regulating fibrosis via senescence, as well as in inhibiting TGFβ-induced collagen production 30 . Here, we observed that mTOR pathway inhibition with AZD8055 inhibited etoposide-induced fibroblast senescence. PPP1R15A knockdown resulted in an increase in senescence in resting fibroblasts and a further increase in etoposide-induced p21 expression. However, the mTOR-mediated effect on the senescence pathway was independent of PPP1R15A.
The potential dual role of PPP1R15A is also apparent depending on the extent of injury to the lung. In settings of mild injury, such as following acrolein challenge, PPP1R15A deficiency will enable misfolded proteins to be cleared and a coordinated repair response to occur 50 . However, if the injury is overwhelming, as demonstrated in our study following bleomycin administration, PPP1R15A deficiency resulted in exacerbated lung damage and fibrosis. Although the bleomycin model spontaneously resolves in healthy, young mice 16 , lung fibrosis was augmented in both the gene-deficient and the Sephin1-treated mice challenged with bleomycin. The mechanism through which PPP1R15A-deficient mice exhibit exacerbated fibrosis might be a reduction in this resolution process, as both Col1a2 and Hsp47, a collagen chaperone, are involved in stabilizing mature collagen 51 . Another mechanism by which PPP1R15A deficiency might promote overall lung fibrosis is via TGFβ receptor phosphorylation. A previous report indicated that PPP1R15A dephosphorylates the TGFβ type 1 receptor 52 . Herein, we observed an increase in bleomycin-induced Tgfb1 and the type I receptor at day 5 after bleomycin in the Ppp1r15a gene-deficient animals; however, we did not observe an exacerbation of TGFβ-induced effects in the fibroblast CRISPR model, suggesting that any TGFβ receptor-associated mechanisms are fibroblast-independent in lung fibrosis. Future studies examining extended timepoints, and confirming at the protein level the specific genes that are modulated, may indicate that although the acute inflammation and subsequent collagen deposition are exacerbated, the resolution processes may also be accelerated.
Overall, our study highlights a potential protective role for PPP1R15A in both the initiation and maintenance of lung fibrosis. The protective role appears to encompass both proliferating and senescent lung fibroblasts. In light of our novel findings, we speculate that the restoration of PPP1R15A activity in lung fibroblasts might represent an attractive therapeutic approach that interrupts the relentless deposition of extracellular matrix in IPF. | 2021-07-27T00:04:43.589Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "70a93539074ff3c8257a39760cd201e658ad11f5",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-00769-7.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "efcff53b0c28ccab0846e0dee7b58a8f8682f5ba",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248085252 | pes2o/s2orc | v3-fos-license | Singular limits of a coupled elasto-plastic damage system as viscosity and hardening vanish
The paper carries out the asymptotic analysis of a model coupling elastoplasticity and damage, depending on three parameters -- governing viscosity, plastic hardening, and the rate of convergence of plastic strain and displacement to equilibrium -- as they vanish in different orders. The notion of limit evolution obtained is proven to coincide in every case with a notion introduced by Crismale and Rossi in 2019; moreover, such solutions are closely related to those obtained in the vanishing-viscosity limit by Crismale and Lazzaroni in 2016 for the analogous model where only the viscosity parameter was present.
Introduction
Rate-independent processes model evolutionary phenomena where the external loading is much slower than the internal oscillations of materials, while viscosities may be neglected. Despite a wide literature on the subject (see [MR15] and references therein), a further understanding is needed of the relations between the different notions of solution that have been proposed. In particular, a problem of interest in applications is to determine which kind of solution captures the limiting behavior of dynamical systems for small viscosity or inertia. For this reason, in this paper we compare different notions of solution, obtained with different approximation methods as viscosities tend to zero at different rates (but with no inertia).
We focus on a rate-independent system modeling damage in an elasto-plastic body occupying a bounded Lipschitz domain Ω ⊂ R n , n ≥ 2. The model was advanced and first studied in [AMV14,AMV15], while the existence of globally minimizing quasistatic evolutions (or, equivalently, Energetic solutions) was first proved in [Cri16]. In [CL16,CR19], the vanishing-viscosity approach was instead adopted to find the so-called Balanced Viscosity solutions, obtaining the rate-independent system as the limit of a viscously perturbed system. We refer to the pioneering [EM06], and the subsequent [MRS12a,MRS16a], for the definition and properties of such solutions in the context of an 'abstract' rate-independent system. The vanishing-viscosity technique has also been adopted in various concrete applications, ranging from plasticity (cf., e.g., [DMDMM08,DMDS11,BFM12,FS13,Sol14]), to damage, fracture, and fatigue (see for instance [KMZ08, LT11, KRZ13, Alm17, CL17, ACO19]).
In this paper we aim to gain further insight into the different ways of constructing Balanced Viscosity solutions to the model for damage and plasticity from [AMV15], which were explored in [CL16] and [CR19]. We show that these notions of solutions essentially coincide if the hardening vanishes together with viscosities, while they retain different features if the hardening parameter is positive. In particular, it turns out that perfect plasticity coupled with damage may equivalently be approximated by means of processes where viscosity is confined to the flow rule for damage, or with viscosity also in the momentum equation and in the plastic flow rule.
Where z(t, x) = 1 (z(t, x) = 0, resp.) the material is in the undamaged (fully damaged, resp.) state, at the time t ∈ (0, T ) and 'locally' around the point x ∈ Ω. In fact, the related PDE system consists of the momentum balance

−div σ = f in Ω × (0, T ), σn = g on Γ Neu × (0, T ), (1.1a)

with f, g some external forces, n the outer unit normal vector to Ω, and σ the stress tensor; here Γ Dir is the Dirichlet part of the boundary ∂Ω and w a time-dependent Dirichlet loading, while Γ Neu is the Neumann part of ∂Ω and g an assigned traction. Alternative models for damage and plasticity have been analyzed in, e.g., [RV16,RV17,DRS19], albeit from a different perspective. In fact, those papers address the rate-independent evolution of the damage and plastic processes coupled with a rate-dependent momentum balance, featuring viscosity and even inertial terms. Therefore, the resulting system has a mixed rate-dependent/independent character and is formulated in terms of a weak, energetic-type notion of solution. Instead, both in [CL16] and [CR19], (two distinct) viscous regularization procedures, described below, were advanced to construct Balanced Viscosity solutions to the fully rate-independent system (1.1).
Balanced Viscosity solutions. In [CL16] the vanishing-viscosity approximation of system (1.1) was carried out by perturbing the damage flow rule by a viscous term, which led to a viscously regularized system in Ω × (0, T ) (cf. (1.2a)), supplemented by the boundary conditions

u = w on Γ Dir × (0, T ), σn = g on Γ Neu × (0, T ), ∂ n z = 0 on ∂Ω × (0, T ). (1.2d)

Passing to the limit in a reparameterized version of (1.2) led to a first construction of BV solutions to system (1.1). We shall illustrate the notion of parameterized Balanced Viscosity solution thus obtained in the forthcoming Section 3.2. In what follows, for quicker reference we will call the BV solutions from [CL16] BV 0 solutions to system (1.1), where the subscript 0 indicates that the solutions are obtained in the limit as ε ↓ 0.
In [CR19], a different construction of BV solutions to system (1.1) was proposed, based on a viscous regularization of the momentum balance and of the plastic flow rule, in addition to that of the damage flow rule. This alternative approach was first proposed in [MRS16b] in a finite-dimensional context, and extended to infinite-dimensional systems in the recent [MR21]. Both papers address the vanishing-viscosity analysis of an abstract evolutionary system, which can be thought of as prototypical of rate-dependent systems in solid mechanics, governing the evolution of an elastic variable u and of an internal variable z. In those papers, z has relaxation time ε, while u has a viscous damping with relaxation time ε^α, α > 0. To emphasize the occurrence of these three time scales (the time scale ε^0 = 1 of the external loading, the relaxation time ε of z, and the (possibly different) relaxation time ε^α of u), the term 'multi-rate system' was used in [MRS16b]. Therein, as well as in [MR21], it was shown that, in the three cases α ∈ (0, 1), α = 1 and α > 1, the vanishing-viscosity analysis leads to different notions of Balanced Viscosity solutions, including in particular different descriptions of the jump behavior developing in the limit ε ↓ 0.
We have referred to ν as a rate parameter, since it sets the mutual rate at which, on the one hand, the displacement and the plastic strain converge to equilibrium and rate-independent evolution and, on the other hand, the damage parameter converges to rate-independent evolution. More precisely, if ν > 0 stays fixed, then u and p converge at the same rate as z, while their convergence occurs at a faster rate if ν ↓ 0. This is clear if one chooses e.g. ν = µ = ε, so that the viscous terms E(u̇) and ṗ in (1.3a) and (1.3c) are modulated by the coefficient ε², as opposed to the coefficient ε in the damage flow rule. Observe that, upon taking the vanishing-hardening limit µ ↓ 0, the constraint ν ≤ µ forces the joint vanishing-viscosity and vanishing-hardening limit to occur at a faster rate for u and p than for z.
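To visualize the time-scale separation, the following display is a schematic sketch (assuming, as described above, viscous coefficients εν in the u- and p-equations and ε in the damage flow rule; this is our rendering, not a quotation of (1.3)):

```latex
% with \nu = \mu = \varepsilon, the viscous terms scale as
\varepsilon\nu\,\mathbb{D}\,\mathrm{E}(\dot u) \;=\; \varepsilon^{2}\,\mathbb{D}\,\mathrm{E}(\dot u),
\qquad
\varepsilon\nu\,\dot p \;=\; \varepsilon^{2}\,\dot p,
\qquad\text{versus}\qquad
\varepsilon\,\dot z \ \text{ in the damage flow rule.}
```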
We will refer to the vanishing-viscosity analyses of system (1.3) as full, as opposed to the partial vanishing-viscosity approximation provided by system (1.2), where only the damage flow rule is regularized.
Indeed, in [CR19] three full vanishing-viscosity analyses have been carried out for system (1.3), leading to three different notions of solution for system (1.1), possibly regularized by a hardening term. Let us briefly illustrate them.
(1) BV µ,ν 0 solutions: The limit passage in a (reparameterized) version of (1.3) as ε ↓ 0, while the positive parameters µ and ν stayed fixed, has led to BV solutions for a variant of system (1.1), where the plastic flow rule was regularized by the hardening term µp, i.e. for the rate-independent system with hardening (1.4), coupled with the boundary conditions (1.4d). The BV solutions to system (1.4) thus constructed reflect their origin from a system viscously regularized in all of the three variables u, z, p. In fact, in the jump regime the system may switch to viscous behavior in the three variables u, z, and p. Since the convergence of u, z, and p to elastic equilibrium and rate-independent evolution has occurred at the same rate (as ν > 0 stayed fixed), viscous behavior in u, z, and p may equally intervene in the jump regime. We shall refer to such solutions as BV µ,ν 0 solutions to system (1.4). The subscript 0 indicates that they have been obtained in the vanishing-viscosity limit ε ↓ 0, while the occurrence of the parameter µ keeps track of the presence of hardening. The parameter ν also appears in the notation, since it still features in the limiting evolution as a coefficient of the viscous terms in the displacement equation and in the plastic flow rule, which may be active in the jump regime.
(2) BV µ,0 0 solutions: The limit passage in a (reparameterized) version of (1.3) as ε ↓ 0 simultaneously with ν ↓ 0, while µ > 0 stayed fixed, has again led to BV solutions for the rate-independent elastoplastic damage system with hardening (1.4). These solutions still have the feature that, in the jump regime, the system may switch to viscous behavior in u, z, and p. However, the BV solutions thus obtained reflect the fact that the convergence of u and p to elastic equilibrium and rate-independent evolution has occurred at a faster rate (as ν ↓ 0) than that for z. To emphasize this, such solutions were termed BV solutions to the multi-rate system for damage with hardening. We will refer to them as BV µ,0 0 solutions to system (1.4). In this notation, the double occurrence of 0 relates to the fact that such solutions were obtained in the limit ε, ν ↓ 0, as opposed to the BV 0 solutions from [CL16] (arising in the limit of system (1.3) as ε ↓ 0 with µ = ν = 0).
(3) BV 0,0 0 solutions: The limit passage in a (reparameterized) version of (1.3) as ε, µ, ν ↓ 0 jointly led to BV solutions to the multi-rate system for damage and perfect plasticity (1.1), again reflecting the fact that the convergence of u and p to elastic equilibrium and rate-independent, perfectly plastic evolution happened at a rate faster than that for z. The vanishing-viscosity solutions arising from this joint limit will receive specific attention in this paper. In what follows, we will refer to them as BV 0,0 0 solutions to system (1.1). Here, the triple occurrence of 0 relates to the fact that such solutions were obtained in the limit ε, µ, ν ↓ 0 and thus immediately suggests the comparison with the BV 0 solutions from [CL16].
Our results. The aim of this paper is twofold: (i) We propose to gain further insight into BV 0,0 0 solutions to system (1.1) (cf. item (3) in the above list). More precisely, first of all we shall provide a differential characterization of such solutions, cf. Proposition 3.11 ahead. Relying on that, in Theorem 4.1 we will subsequently prove that, after an initial phase in which z is constant while u and p, evolving by viscosity, relax to elastic equilibrium and to rate-independent evolution, respectively, u never leaves equilibrium, nor p the rate-independent regime. Afterwards, the evolution of system (1.1) is captured by the notion of BV 0 solution as obtained in [CL16] by taking the vanishing-viscosity limit ε ↓ 0 of system (1.2). In other words, viscosity in u and p may intervene only in an initial phase in the reparameterized time scale, corresponding to a time discontinuity in the time scale of the loading. After this initial phase, the BV 0,0 0 solutions to the perfectly plastic system for damage (1.1) arising from the full vanishing-viscosity approach of [CR19] comply with the same notion of solution as in [CL16], where viscosity for u and p was neglected.
We point out that an analogous characterization can be proved for the BV µ,0 0 solutions to the multi-rate system with fixed hardening parameter µ > 0, obtained in the limit passage (2) of the above list; see Remark 5.2 ahead. (ii) We aim to 'close the circle' in the analysis of the singular limits of system (1.3), by showing that, for two given vanishing sequences (µ k ) k and (ν k ) k with ν k ≤ µ k : (1) BV µ k ,ν k 0 solutions to the single-rate system with hardening converge as k → ∞ to a BV 0,0 0 solution of the perfectly plastic damage system, as shown in Theorem 5.6 ahead; (2) BV µ k ,0 0 solutions to the multi-rate system with hardening converge as k → ∞ to a BV 0,0 0 solution, cf. Theorem 5.3. In particular, we will prove that the diagram in Figure 1 commutes; a schematic reconstruction is sketched right after this paragraph.
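The diagram can be sketched in LaTeX as follows; this is an approximate reconstruction based only on the caption of Figure 1 reproduced below, with the arrow labels placed as described there:

```latex
\[
\begin{array}{ccc}
V^{\mu,\nu}_{\varepsilon} & \xrightarrow{\ \varepsilon\,\downarrow\,0\ } & BV^{\mu,\nu}_{0}\\[4pt]
\big\downarrow{\scriptstyle\,\varepsilon,\nu\,\downarrow\,0} &
\searrow{\scriptstyle\ \varepsilon,\mu,\nu\,\downarrow\,0} &
\big\downarrow{\scriptstyle\,\mu,\nu\,\downarrow\,0\ \text{(Thm. 5.6)}}\\[4pt]
BV^{\mu,0}_{0} & \xrightarrow[\ \mu\,\downarrow\,0\ \text{(Thm. 5.3)}\ ]{} & BV^{0,0}_{0}
\end{array}
\]
```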
We emphasize that these results establish asymptotic relations between BV solutions already obtained as vanishing-viscosity limits. These convergence analyses show that BV 0,0 0 solutions are robust enough to capture the asymptotic behavior of a wide class of BV solutions depending on different parameters. However, BV 0,0 0 solutions reduce to BV 0 solutions after an initial phase in which u and p converge to elastic equilibrium and stability, respectively; in particular, if the initial conditions are at equilibrium, then the two notions of solution coincide. This feature may be traced back to the convex character of perfect plasticity and to the multi-rate character inherent to the system, since with ν ↓ 0 we have forced faster convergence to equilibrium in u and stability in p.

[Figure 1 caption: Dashed lines represent convergences proved in the present paper, the corresponding theorems being referred to in the diagram. Starting from V µ,ν ε one may either pass to the limit as ε ↓ 0 and then as µ, ν ↓ 0; or pass to the limit as ε, ν ↓ 0 and then as µ ↓ 0. Since there is no uniqueness, it is not guaranteed that one gets the very same solution found in the joint limit ε, µ, ν ↓ 0. However, we prove that through the three different procedures one finds evolutions satisfying the same notion of solution. In this sense we may say that the diagram commutes.]

Plan of the paper. In Section 2 we detail the setup of the problem, list our assumptions, and provide some preliminary results. In Section 3 we illustrate the notion of BV 0 solution to system (1.1) arising from the partial vanishing-viscosity approach of [CL16], and that of BV 0,0 0 solution via the full vanishing-viscosity analysis in [CR19]. In Section 4 we establish Theorem 4.1, providing a complete characterization of BV 0,0 0 solutions. Section 5 is devoted to the vanishing-hardening analysis of BV µ,0 0 and BV µ,ν 0 solutions to the system with hardening. The proofs of Theorems 5.3 and 5.6 rely on some technical results collected in the Appendix.
Setup and preliminaries
Throughout the paper we will use the following Notation 2.1 (General notation and preliminaries). Let X be a Banach space. By ⟨·, ·⟩ X we denote the duality between X * and X or between (X n ) * and X n (whenever X is a Hilbert space, ⟨·, ·⟩ X will be the inner product), while ‖·‖ X stands for the norm in X or in X n . The inner Euclidean product in R n , n ≥ 1, is denoted by ⟨·, ·⟩ and the Euclidean norm in R n by | · |. The symbol B r (0) stands for the open ball in R n with radius r and center 0.
We write ‖·‖ L p for the L p -norm on the space L p (O; R d ), with O a measurable subset of R n and 1 ≤ p < +∞, and similarly ‖·‖ H m for the norm of the Sobolev-Slobodeckij space H m (O), for 0 ≤ m ∈ R. Given a function v : Ω × (0, T ) → R, differentiable w.r.t. time a.e. on Ω × (0, T ), its (almost everywhere defined) partial time derivative is indicated by v̇ : Ω × (0, T ) → R. A different notation will be employed when considering v as a (Bochner) function from (0, T ) with values in a Lebesgue or Sobolev space X (with the Radon-Nikodým property): if v ∈ AC([0, T ]; X), then its (almost everywhere defined) time derivative is indicated by v ′ : (0, T ) → X.
The symbols c, c ′ , C, C ′ will denote positive constants whose precise value may vary from line to line (or within the same line). We will sometimes employ the symbols I i , i = 0, 1, ..., as place-holders for terms appearing in inequalities: also in this case, such symbols may appear in different proofs with different meaning.
Functions of bounded deformation. The state space for the displacement variable for the systems with hardening will be H 1 Dir (Ω; R n ) := {u ∈ H 1 (Ω; R n ) : u = 0 on Γ Dir } (recall that Γ Dir is the Dirichlet part of ∂Ω, cf. (2.Ω) ahead).
For the perfectly plastic damage system, displacements will belong to the space of functions of bounded deformation, defined by BD(Ω) := {u ∈ L 1 (Ω; R n ) : E(u) ∈ M b (Ω; M n×n sym )}. Indeed, BD(Ω) is the dual of a normed space, cf. [TS80], and such duality provides a weak* convergence on BD(Ω). It holds BD(Ω) ⊂ L n/(n−1) (Ω; R n ). Up to subsequences, every bounded sequence in BD(Ω) converges weakly*, weakly in L n/(n−1) (Ω; R n ), and strongly in L p (Ω; R n ) for any 1 ≤ p < n/(n−1). Finally, we recall that the trace u| ∂Ω of a function u ∈ BD(Ω) is well defined and is an element of L 1 (∂Ω; R n ).
For sufficiently regular σ, the distribution [σn] of (2.1) satisfies [σn] = σn, where the right-hand side is the standard pointwise product of the matrix σ and the normal vector n on ∂Ω.
For the treatment of the perfectly plastic system for damage it will be crucial to work with the space Σ(Ω) of (2.2). Furthermore, our choice of external loadings (see (2.8a)) will ensure that the stress fields σ that we consider, at equilibrium, have the additional property that [σn] ∈ L ∞ (Ω; R n ) and σ ∈ Σ(Ω) (cf. Lemma 3.1). Therefore, any such field induces a functional −Div σ ∈ BD(Ω) * via (2.3). With slight abuse of notation, we shall denote by −Div σ also the restriction of the above functional to H 1 (Ω; R n ).
The inner product ⟨z 1 , z 2 ⟩ H m (Ω) := ∫ Ω z 1 z 2 dx + a m (z 1 , z 2 ) makes H m (Ω) a Hilbert space. Throughout the paper we shall assume m > n/2 and rely on the compact embedding H m (Ω) ⋐ C 0 (Ω).
2.1. Assumptions and preliminary results. This section and Sec. 2.2 collect all our assumptions on the constitutive functions of the model and on the problem data. We will omit to invoke them explicitly in the statements of the various results.
The reference configuration. In what follows we will assume that Ω ⊂ R n , n ∈ {2, 3}, is a bounded Lipschitz domain satisfying the so-called Kohn-Temam condition: Γ Dir and Γ Neu relatively open in ∂Ω, with ∂Γ Dir = ∂Γ Neu = Σ their relative boundary in ∂Ω, Σ of class C 2 and H n−1 (Σ) = 0, and ∂Ω Lipschitz and of class C 2 in a neighborhood of Σ. (2.Ω)

The elasticity and viscosity tensors. We assume that the elastic tensor C : [0, +∞) → Lin(M n×n sym ; M n×n sym ) fulfills suitable boundedness, continuity, and coercivity conditions, and that the viscosity tensor D is symmetric and positive definite. Thus, D induces an equivalent (by a Korn-Poincaré-type inequality) Hilbert norm on H 1 Dir (Ω; R n ). The related 'dual norm' is defined in (2.5) for all η ∈ H 1 Dir (Ω; R n ) * with η = Div ξ for some ξ ∈ Σ(Ω).

The potential energy for the damage variable. In addition to the regularizing, nonlocal gradient contribution featuring the bilinear form a m , the z-dependent part of the mechanical energy functional shall feature a further term with density W. Indeed, the energy contribution involving W forces z to be strictly positive; consequently, the material never reaches the most damaged state at any point.
The plastic and the damage dissipation densities. The plastic dissipation potential shall reflect the requirement that the admissible stresses belong to given constraint sets which, in turn, depend on the damage variable z. More precisely, as in [CL16] we ask that the constraint sets (K(z)) z∈[0,+∞) fulfill suitable nonemptiness, convexity, and continuity conditions, with d H the Hausdorff distance between two subsets of M n×n D , defined by d H (K 1 , K 2 ) := max{ sup σ 1 ∈K 1 dist(σ 1 , K 2 ), sup σ 2 ∈K 2 dist(σ 2 , K 1 ) }. The associated support function H from (2.6) will act as density function for the plastic dissipation potential, cf. (2.14) later on. We choose as damage dissipation density the positively 1-homogeneous function R : R → [0, +∞] from (2.7), with κ > 0 a constant related to the toughness of the material.
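As an aside (standard convex analysis, assuming only that K(z) is closed and convex; this is our remark, not a quotation of the paper's displays), recall that the support function determines the constraint set through its subdifferential at the origin:

```latex
\mathrm{H}(z,\pi)=\sup_{\sigma\in K(z)}\sigma:\pi
\quad\Longrightarrow\quad
\partial_\pi \mathrm{H}(z,0)=K(z),
\qquad
\partial_\pi \mathrm{H}(z,\pi)=\{\sigma\in K(z):\ \sigma:\pi=\mathrm{H}(z,\pi)\}.
```

This identity explains why ∂ π H(z, 0) acts as the 'stable set' of admissible stresses in the sequel.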
Body and surface forces, Dirichlet loading, and initial data. We assume that the volume force f and the assigned traction g fulfill the regularity conditions (2.8a). The induced total load is the function collecting the contributions of f and g. Furthermore, as customary for perfect plasticity, we shall impose a uniform safe load condition. As for the time-dependent Dirichlet boundary condition w, we assume suitable Sobolev regularity in space and time. Finally, we shall consider initial data q 0 = (u 0 , z 0 , p 0 ) fulfilling (2.8f).

The stress-strain duality. For the treatment of the perfectly plastic damage system it is essential to resort to a suitable notion of stress-strain duality that we borrow from [KT83,DMDM06], also relying on the more recent extension to Lipschitz boundaries from [FG12], to which we refer for the properties mentioned below. Following [DMDM06] we introduce the class A(w) of admissible displacements and strains associated with a function w ∈ H 1 (R n ; R n ), that is A(w) := {(u, e, p) : u ∈ BD(Ω), e ∈ L 2 (Ω; M n×n sym ), p ∈ Π(Ω), E(u) = e + p in Ω, p = (w − u) ⊙ n H n−1 on Γ Dir }, where n denotes the normal vector to ∂Ω and ⊙ the symmetrized tensorial product. The space of admissible plastic strains Π(Ω) consists of the bounded Radon measures on Ω ∪ Γ Dir with values in M n×n D . Given σ ∈ Σ(Ω) (cf. (2.2)), p ∈ Π(Ω), and u, e such that (u, e, p) ∈ A(w), we define the measure [σ D : p] via its action on test functions ϕ ∈ C ∞ c (R n ); in fact, this definition is independent of u and e. Under these assumptions, σ ∈ L r (Ω; M n×n sym ) for every r < ∞, and [σ D : p] is a bounded Radon measure with variation measure |[σ D : p]| ≤ ‖σ D ‖ L ∞ |p| in R n . Restricting such measure to Ω ∪ Γ Dir , we define the stress-strain duality as in (2.10). By (2.Ω) and (2.9), since u ∈ BD(Ω) ⊂ L n/(n−1) (Ω; R n ), we get an integration by parts formula, valid if the distribution [σn] defined in (2.1) belongs to L ∞ (Γ Neu ; R n ), for every σ ∈ Σ(Ω) and (u, e, p) ∈ A(w).
Energetics.
A key ingredient for the construction of BV solutions to the rate-independent systems (1.1) (damage with perfect plasticity) and (1.4) (damage and plasticity with hardening) is the observation that their rate-dependent regularizations (1.2) and (1.3) have a gradient-system structure. Namely, they can be reformulated in terms of a generalized gradient flow for suitable choices of
- the state space Q for the triple q = (u, z, p);
- the driving energy functional E : [0, T ] × Q → (−∞, +∞];
- the dissipation potential Ψ, with convex analysis subdifferential ∂Ψ : Q ⇒ Q *,
as rigorously proved in [CR19]. Observe that also the rate-independent systems (1.1) and (1.4) have a gradient structure that is, however, only formal, due to the fact that the functions u, z, and p may have jumps as functions of time. Nonetheless, for our analysis it is crucial to detail the energetics underlying both the rate-dependent and the rate-independent systems.
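Schematically (a sketch, with the concrete choices of Q, E, and Ψ as just listed), the generalized gradient flow takes the form of the abstract subdifferential inclusion

```latex
\partial\Psi\bigl(q'(t)\bigr)+\mathrm{D}_q\mathcal{E}\bigl(t,q(t)\bigr)\ni 0
\quad\text{in }\mathcal{Q}^{*},\qquad\text{for a.a. } t\in(0,T),\qquad q=(u,z,p).
```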
The state spaces. The state space for the rate-dependent and rate-independent damage systems with hardening is Q H := H 1 Dir (Ω; R n ) × H m (Ω) × L 2 (Ω; M n×n D ).
For the rate-independent damage system with perfect plasticity, the displacements are just functions of bounded deformation and the plastic strains are only bounded Radon measures on Ω ∪ Γ Dir , so that the associated state space Q PP is given in (2.11). Observe that in the definition of Q PP it is in fact required that (u, e, p) ∈ A(0): indeed, the condition u ⊙ n H n−1 + p = 0 on Γ Dir is a relaxation of the homogeneous Dirichlet condition u = 0 on Γ Dir .
The energy functionals. The energy functional governing the rate-dependent and rate-independent systems with hardening (1.3) and (1.4), respectively, consists of (1) a contribution featuring the elastic energy Q(z, e) := ∫ Ω ½ C(z)e : e dx; (2) the potential energy for the damage variable and for the hardening term; (3) the time-dependent volume and surface forces.
Namely, for µ > 0 given, the energy E µ collects all these contributions. In what follows, we will use the short-hand notation e(t) := E(u+w(t))−p for the elastic part of the strain tensor. The energy functional E 0 for the rate-independent perfectly plastic damage system (1.1) is defined in (2.12).

The dissipation potentials. The dissipation density R from (2.7) clearly induces an integral functional R on L 1 (Ω). However, since the damage flow rule will be posed in H m (Ω) * (cf. (2.20) ahead), we will restrict R to the space H m (Ω), denoting the restriction by the same symbol. Hence, we will work with this functional and with its convex analysis subdifferential ∂R : H m (Ω) ⇒ H m (Ω) *. We will often use the characterization of ∂R that follows from the 1-homogeneity of the potential R.

For the systems with hardening (i.e. (1.3) and (1.4)), the plastic dissipation potential H is the integral functional with density H from (2.6), where π is a place-holder for the plastic rate ṗ; its convex analysis subdifferential is characterized in (2.17). A characterization analogous to (2.17) holds for the subdifferential ∂ π H(z, ·). In order to handle the perfectly plastic system for damage, the plastic dissipation potential H(z, ·) has to be extended to the space Π(Ω) of bounded Radon measures, by setting H PP (z, p) := ∫ Ω∪Γ Dir H(z(x), (dp/dµ)(x)) dµ(x), where µ is a positive measure such that p ≪ µ and dp/dµ is the Radon-Nikodým derivative of p with respect to µ; by one-homogeneity of H(z(x), ·), the definition of H PP does not depend on µ. For the theory of convex functions of measures we refer to [GS64]. By [AFP05, Proposition 2.37], for every z ∈ C 0 (Ω; [0, 1]) the functional p → H PP (z, p) is convex and positively one-homogeneous. Finally, from [FG12, Proposition 3.9] a duality estimate follows for every σ ∈ K z (Ω).

The (partially) viscously regularized system (1.2) also features a 2-homogeneous dissipation potential, while the (fully) viscously regularized system (1.3) additionally involves quadratic potentials.

The gradient structure for system (1.3). It was proved in [CR19, Lemma 3.3] that for all q ∈ Q H the function t → E µ (t, q) belongs to H 1 (0, T ). Relying on this, it was shown that system (1.3) reformulates as a generalized gradient system involving an overall dissipation potential and its rescaled version.
The partial versus the full vanishing-viscosity approach
In this section we aim to gain further insight into the notion of BV 0,0 0 solution to system (1.1) arising from the full vanishing-viscosity approach of [CR19], and to compare it with the BV 0 concept from [CL16]. In order to properly introduce both notions, it is useful to recall the reparameterized energy-dissipation balance in which one passes to the limit to obtain Balanced Viscosity solutions.
At this heuristic stage, we will treat the partial vanishing-viscosity approximation of [CL16] and the full approximation of [CR19] in a unified way, although at the level of the viscous approximation in [CL16] there is the significant difference that the plastic strain evolves rate-independently. Still, we will disregard this and instead focus on the similarities between the rate-dependent systems (1.3) (wherefrom the BV 0,0 0 solutions of [CR19] stem) and (1.2) (wherefrom the BV 0 solutions of [CL16] stem). In fact, (1.2) can be formally obtained from (1.3) by setting ν = µ = 0. That is why, in what follows, to fix the main ideas we will illustrate the limit passage in the energy-dissipation balance associated with system (1.3).
Throughout this section and the remainder of the paper, we will suppose that the assumptions of Sec. 2 are in force without explicitly invoking them.
3.1. Preliminary definitions. The vanishing-viscosity potentials at the core of the definitions of BV 0 solutions in [CL16] and of BV 0,0 0 solutions in [CR19] have a structure akin to that of the functional M µ,ν ε from (3.3), but they are tailored to the driving energy E 0 (2.12) for the perfectly plastic system. Recalling the definition of M µ,ν ε , it is thus clear that, in order to define such vanishing-viscosity potentials, one needs suitable surrogates of the (D, H 1 ) * -norm of D u E 0 (t, q), and of the L 2 -distance of D p E 0 (t, q) from the stable set ∂ π H(z, 0), cf. (3.4). Indeed, such quantities are no longer well defined for the functional E 0 at every (t, q) ∈ [0, T ] × Q PP ; instead, note that the L 2 -distance, in the sense of (3.5), of D z E 0 (t, q) from the stable set ∂R(0) still makes sense. Following [CR19, Sec. 7], we will define, at every (t, q) ∈ dom(E 0 ) = [0, T ] × D (cf. (2.13)), the surrogate quantities of (3.6a) and (3.6b), with σ(t) = C(z)e(t) and e(t) = E(u + w(t)) − p. Observe that the expressions in (3.6a) and (3.6b) are well defined since e(t) and, a fortiori, σ(t) are elements of L 2 (Ω; M n×n sym ) whenever (u, z, p) ∈ Q PP . Formulae (3.6a) and (3.6b) have been inspired by the identity proved in [CR19, Lemma 3.6]. In particular, we note that both quantities are finite for every q ∈ D. For later use, we also record here the following result.
Proof. The implication ⇒ was proved in [CR19, Lemma 7.4]. A close perusal of that proof, also mimicking the convexity arguments from [DMDM06, Prop. 3.5], yields the converse implication.
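To fix ideas, the surrogate distances just discussed can be sketched as follows; this is a schematic rendering consistent with the surrounding text (in particular, the displayed form of W p E 0 is our assumption, not a quotation of (3.6b)):

```latex
d_{L^2}\bigl(-\mathrm{D}_z\mathcal{E}_0(t,q),\,\partial R(0)\bigr)
  \;=\;\min_{\omega\in\partial R(0)}\bigl\| -\mathrm{D}_z\mathcal{E}_0(t,q)-\omega \bigr\|_{L^2(\Omega)},
\qquad
\mathcal{W}_p\mathcal{E}_0(t,q)
  \;=\; d_{L^2}\bigl(\sigma_{\mathrm{D}}(t),\,\partial_\pi \mathrm{H}(z,0)\bigr),
\quad \sigma(t)=\mathbb{C}(z)e(t).
```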
We then define the quantity D * of (3.9). We are now in a position to define the vanishing-viscosity contact potentials involved in the definitions of BV solutions in [CL16] and [CR19]. We will use the notation I {0} for the indicator function of the singleton {0}, namely I {0} (ξ) := 0 if ξ = 0 and I {0} (ξ) := +∞ otherwise, for all ξ ∈ R.
3.2. BV 0 solutions via the partial vanishing-viscosity approach. The vanishing-viscosity contact potential for the BV 0 solutions from [CL16] is the functional M 0 of (3.10a), where for q = (u, z, p) and q ′ = (u ′ , z ′ , p ′ ) the reduced potential M 0,red is given by (3.10b)-(3.10c).
Observe that the definition of M 0 (t, q, 0, q ′ ) again reflects, in the jump regime, the tendency of the system to evolve viscously and to relax towards equilibrium and rate-independent evolution. However, here viscous dissipation only affects the damage variable. Likewise, rate-independent behavior is solely encompassed by the relation d L 2 (−D z E 0 (t, q), ∂R(0)) = 0.
Let us now detail the properties of the parameterized curves providing BV 0 solutions; with these at hand, we can give the definition of Balanced Viscosity solution in the sense of [CL16].
The existence of BV 0 solutions was proved in [CL16, Thm. 5.4] under the condition that the initial data q 0 = (u 0 , z 0 , p 0 ) for the perfectly plastic damage system (1.1) fulfill (2.8f) and the additional condition (3.13).

3.3. A differential characterization for BV 0 solutions. We now aim to provide a differential characterization for the notion of BV solution from Definition 3.2, in terms of a suitable system of subdifferential inclusions for the displacement variable (which in fact shall satisfy the elastic equilibrium equation), the damage variable, and the plastic strain. In order to properly formulate the flow rule governing the latter, we need the following result; the proof of one implication can be found in [CL16], in turn based on arguments from [DMDM06].
Prior to carrying out the proof of Proposition 3.5, we record the following key chain-rule inequality, whose proof may be immediately inferred from that of [CR19, Lemma 7.6] (cf. also Lemma 3.13 ahead).
Lemma 3.7. Along any admissible parameterized curve, the map s → E 0 (t(s), q(s)) is absolutely continuous on [0, S] and a chain-rule inequality holds for a.a. s ∈ (0, S).

We are now in a position to carry out the Proof of Proposition 3.5. By Corollary 3.8, BV 0 solutions in the sense of Def. 3.2 can be characterized in terms of (3.20), whence we deduce that M 0 (t(s), q(s), t ′ (s), q ′ (s)) < ∞ for almost all s ∈ (0, S). Therefore, D * (t(s), q(s)) = 0 for a.a. s ∈ (0, S). (3.21) In turn, (3.21) is equivalent to the validity of (3.18a). Observe that (3.18a) extends to every s ∈ [0, S] by the continuity of the functions s → σ(s), s → f (t(s)), s → g(t(s)), and by the continuity w.r.t. the Hausdorff distance of s → K z(s) (Ω), thanks to (2.K 3 ). In view of (3.21), and recalling the definition (3.10) of M 0 , (3.20) can then be rewritten for a.a. s ∈ (0, S). Now, on the one hand, the right-hand side of the resulting equality is positive thanks to (2.19).
3.4. BV solutions via the full vanishing-viscosity approach. The vanishing-viscosity contact potential for the BV solutions from [CR19] is the functional M 0,0 0 of (3.23), where for q = (u, z, p) and q ′ = (u ′ , z ′ , p ′ ) we use the quantities S u E 0 , W p E 0 , and D * from (3.9) (cf. (3.23d)). As previously remarked, for every (t, q) ∈ [0, T ] × Q PP we have that S u E 0 (t, q) < +∞ and W p E 0 (t, q) < +∞, hence D * (t, q) < +∞ and the product D(u ′ , p ′ ) D * (t, q) is well defined as soon as u ′ ∈ H 1 (Ω; R n ) and p ′ ∈ L 2 (Ω; M n×n D ); otherwise, we set D(u ′ , p ′ ) D * (t, q) = +∞. The (reduced) vanishing-viscosity contact potentials M 0,red from (3.10b) and M 0,0 0,red differ from each other, both in their definition for t ′ > 0 and for t ′ = 0. For t ′ > 0, M 0,0 0,red has to additionally enforce elastic equilibrium (i.e. S u E 0 (t, q) = 0) and the stability constraint σ D ∈ ∂ π H(z, 0) (i.e. W p E 0 (t, q) = 0) since, for the fully rate-dependent viscous systems, these constraints are no longer automatically fulfilled. Accordingly, viscous behavior in u and p may intervene in the jump regime of the rate-independent limit system. This is encoded in the new term D(u ′ , p ′ ) D * (t, q) featuring in (3.23c), which appears in the energy-dissipation balance at jumps (i.e. for t ′ = 0), when z ′ = 0.
The following definition specifies the properties of the parameterized curves that are BV 0,0 0 solutions and is to be compared with Definition 3.2 of admissible parameterized curves in the sense of [CL16]. According to it, enhanced admissible curves enjoy better spatial regularity, with u(·) ∈ H 1 (Ω; R n ) and p(·) ∈ L 2 (Ω; M n×n D ), on the set where either S u E 0 (t(·), q(·)) > 0 or W p E 0 (t(·), q(·)) > 0. With that definition at hand, we are now in a position to give the definition of BV solution in the sense of [CR19].
3.5. A differential characterization for BV 0,0 0 solutions. In this section we provide a differential characterization for BV 0,0 0 solutions. Preliminarily, we need to make precise in which sense we are going to understand the subdifferential inclusions governing the evolution of the reparameterized displacement and of the plastic variables. Indeed, by formally writing the displacement equation with a coefficient λ : [0, S] → [0, 1], a measurable function (below we will have λ = λ u,p ), we shall mean the weak formulation (3.26), where Div σ(s) denotes the restriction of the functional from (2.3) to H 1 (Ω; R n ). In particular, let us emphasize that, when λ > 0, the displacement variable enjoys additional spatial regularity, and the quasistatic momentum balance (3.26b) allows for test functions in H 1 (Ω; R n ). Likewise, the plastic flow rule is to be understood in the sense of (3.27). With these conventions, the curves (t, q) satisfy for a.a. s ∈ (0, S) the system of subdifferential inclusions (3.29), where (3.29a) and (3.29c) need to be interpreted as (3.26) and (3.27), respectively.
Remark 3.12. In comparison to the differential characterization of BV 0 solutions provided by Proposition 3.5, system (3.29) features two parameters instead of one. Both λ z and λ u,p have the role of activating the viscous contributions, to the damage flow rule and to the displacement equation/plastic flow rule respectively, in the jump regime (i.e. when t ′ = 0). In fact, the viscous terms in (3.29a) and (3.29c) are modulated by the same parameter, which reflects the fact that viscous behavior intervenes for the variables u and p equally (or, in other terms, that u and p relax to elastic equilibrium and rate-independent evolution at the same rate, faster than z).
As in the case of Prop. 3.5, the proof of Prop. 3.11 will rely on a suitable chain-rule inequality, which we recall below.
With this, we conclude the proof.

The main result of this section is the following theorem, whose proof adapts that of [MRS16b, Prop. 5.5].
As shown in Step 1, the evolution on the intervals (s 1 , S) and (s 2 , s 3 ) is characterized by (4.3). Recall that system (4.3b)-(4.3d) admits a unique solution. Now, since S u E 0 (t(s j ), q(s j )) = W p E 0 (t(s j ), q(s j )) = 0 for j = 1, 2, it is immediate to check that the constant functions u(s) ≡ u(s j ) and p(s) ≡ p(s j ) provide the unique solution to (4.3b)-(4.3d). Thus, we conclude that a BV 0,0 0 solution (t, q) on an interval of the type (s 1 , S] or (s 2 , s 3 ) must be constant, which contradicts (4.3a).
Vanishing-hardening limit of BV solutions
In this section we carry out the asymptotic analysis, as the hardening parameter µ tends to 0, of the BV solutions to system (1.4), both in the single-rate case (i.e., for BV µ,ν 0 solutions, with 0 < ν ≤ µ) and in the multi-rate case (i.e., for BV µ,0 0 solutions). We first address the latter case in Section 5.1, while the former will be sketched in Sec. 5.2. For both analyses, we will resort to some technical results collected in the Appendix.
5.1. Vanishing-hardening analysis for multi-rate solutions. As recalled in the Introduction, BV solutions to the multi-rate system with hardening were constructed in [CR19] (cf. Theorem 6.13 therein) by passing to the limit in the (reparameterized) version of (1.3) as the viscosity parameter ε tends to 0 simultaneously with the rate parameter ν ↓ 0, while the hardening parameter µ > 0 stayed fixed. The solutions thus obtained, hereafter referred to as BV µ,0 0 solutions, account for multiple rates in the system with hardening. In particular, as for BV 0,0 0 solutions, the way in which viscous behavior in u, z, and p manifests itself in the jump regime reflects the fact that the convergence of u and p to elastic equilibrium and rate-independent evolution has occurred at a faster rate (as ν ↓ 0) than that of z, cf. Remark 5.2 ahead.
In order to recall the definition of BV µ,0 0 solutions for fixed µ > 0, we need to introduce the related vanishing-viscosity contact potential M µ,0 0 of (5.1), where we employ the notation of (5.1c). We are now in a position to recall the notion of BV µ,0 0 solution. We say that (t, q) is non-degenerate if it fulfills (4.1).
Remark 5.2. In [CR19, Prop. 6.11] a characterization was provided for BV µ,0 0 solutions in terms of a subdifferential system that features two positive parameters λ u,p and λ z activating viscous terms in the displacement equation and plastic flow rule, and in the damage flow rule, respectively. We refrain from recalling that system because it is completely analogous to (3.29) (with the same switching conditions (3.28)).
Accordingly, repeating the very same arguments as in the proof of Theorem 4.1, it is possible to provide an additional characterization of BV µ,0 0 solutions completely analogous to that of the latter result. In particular, if a BV µ,0 0 solution (t, q) originates from an unstable datum q 0 with D * ,µ (0, q 0 ) > 0, then, during an initial interval, the damage variable z is frozen and the pair (u, p) evolves, possibly in a viscous way. If (t, q) reaches, at some time s * , the state in which elastic equilibrium (−D u E µ (t, q) = 0) and stability (−D p E µ (t, q) ∈ ∂ π H(z, 0)) are fulfilled, then it never leaves that state afterwards, and subsequently behaves as a Balanced Viscosity solution with viscous behavior in the variable z only (namely, the counterpart, for the system with hardening, of the BV 0 concept).
We mention in advance that the analogue of Theorem 4.1 does not hold, instead, for $\mathrm{BV}^{\mu,\nu}_0$ solutions.
We now consider a vanishing sequence $(\mu_k)_k$ and set $\mu = \mu_k$. By [CR19, Theorem 6.13], under the assumptions of Section 2 and (3.13) (cf. also Remark 5.4 below), for any fixed $k$ there exists a parameterized Balanced Viscosity solution $(t_k, q_k)$ in the sense of the previous definition. Moreover, for the sequence $(t_k, q_k)_k$ we may assume the validity of the a priori estimates (5.3).
Theorem 5.3. Let $(\mu_k)_k$ be a vanishing sequence and $(t_k, q_k)_k$ be a sequence of $\mathrm{BV}^{\mu_k,0}_0$ solutions to system (1.4) such that estimate (5.3) holds.
Remark 5.4. The validity of Theorem 5.3 extends to sequences $(t_k, q_k)_k$ originating from suitable initial data. In particular, the last condition yields $\mu_k p^k_0 \to 0$ in $L^2(\Omega; \mathbb{M}^{n\times n}_D)$, as needed for (5.10) below.
Proof. The proof is divided into two steps.
Step 1: Compactness. For later use, we observe that, due to estimate (5.3) and by the assumptions on the initial data and external loading, we have $\sup_{s\in[0,S]} \mathcal{E}_{\mu_k}(t_k(s), q_k(s)) \leq C$ for a constant independent of $k$; in particular, this implies the corresponding convergences as $k \to \infty$. In view of (5.3), we find a (not relabeled) subsequence and a Lipschitz curve $(t, q) \in W^{1,\infty}([0, S]; [0, T] \times \mathcal{Q}_{\mathrm{PP}})$ such that the stated convergences hold as $k \to \infty$, where $e = \mathrm{E}(u + w(t)) - p$. Furthermore, an argument based on the Ascoli-Arzelà theorem (cf. [AGS08]) applies on the balls of radius $R$ that contain $(u_k)_k$ and $(p_k)_k$, respectively (cf. (5.6); here we use that $\mathrm{BD}(\Omega)$ is the dual of a separable space). The second and third convergences have an analogous meaning. Hence, (5.4) follows.
Step 2: Energy-dissipation upper estimate. By Corollary 3.14, it is sufficient to show that the pair $(t, q)$ complies with the energy-dissipation inequality for every $s \in [0, S]$. We start from (5.2) for the solution $(t_k, q_k)$ with $\mu = \mu_k$. Passing to the limit in the remaining terms of (5.2) is straightforward; it remains to show that
$\int_0^s \mathcal{M}^{0,0}_0(t(r), q(r), t'(r), q'(r))\,\mathrm{d}r \leq \liminf_{k\to+\infty} \int_0^s \mathcal{M}^{\mu_k,0}_0(t_k(r), q_k(r), t_k'(r), q_k'(r))\,\mathrm{d}r.$ (5.11)
In fact, it will be sufficient to obtain the above estimate only for the reduced functionals $\mathcal{M}^{0,0}_{0,\mathrm{red}}$ and $\mathcal{M}^{\mu_k,0}_{0,\mathrm{red}}$. In view of (3.23) we distinguish two cases. Let $A := \{s \in [0, S] : t'(s) > 0\}$.
All in all, (5.11) follows. This finishes the proof.
5.2.
Vanishing-hardening analysis for single-rate solutions. BV solutions to the single-rate system with hardening have been obtained in [CR19, Section 6.1] by performing the asymptotic analysis of the reparameterized energy-dissipation balance (3.2) as the viscosity parameter $\varepsilon$ tends to $0$, while keeping the hardening and rate parameters $\mu$ and $\nu$ fixed. That is why we refer to them as $\mathrm{BV}^{\mu,\nu}_0$ solutions. If $t' = 0$,
$\mathcal{M}^{\mu,\nu}_{0,\mathrm{red}}(t, q, 0, q') := \mathcal{D}_\nu(q')\,\mathcal{D}^{*,\mu}_\nu(t, q).$ (5.13b)
For better readability, we also recall the related notation. It is worthwhile to remark that the reduced functional $\mathcal{M}^{\mu,\nu}_{0,\mathrm{red}}$, at $t' = 0$, simultaneously encompasses viscosity for the three variables $u$, $z$, and $p$. Instead, its counterpart $\mathcal{M}^{\mu,0}_{0,\mathrm{red}}$ for $\mathrm{BV}^{\mu,0}_0$ solutions features, in the jump regime $t' = 0$, a viscous contribution in the variables $(u, p)$ when $z' = 0$, and viscosity in $z$ when $\mathcal{D}^{*,\mu}(t, q) = 0$, i.e., when $u$ is at elastic equilibrium and $p$ is locally stable.
Let us now address the asymptotic analysis of the above solutions for a vanishing sequence $(\mu_k)_k$. As mentioned in the Introduction, in the construction of $\mathrm{BV}^{\mu,\nu}_0$ solutions the rate parameter is always supposed smaller than the hardening parameter, which forces us to also consider a sequence $(\nu_k)_k$ such that $\nu_k \leq \mu_k$ for all $k \in \mathbb{N}$, so that $\nu_k \to 0$ as well. In fact, the technical condition $\nu_k \leq \mu_k$ comes into play in the proof of [CR19, Prop. 4.4]. The latter result and [CR19, Theorem 6.8] ensure the existence of $\mathrm{BV}^{\mu_k,\nu_k}_0$ solutions $(t_k, q_k)_k$ enjoying the following a priori estimates: there exists $C > 0$ such that for all $k \in \mathbb{N}$ and for a.a. $s \in (0, S)$,
$t_k'(s) + \|u_k'(s)\|_{W^{1,1}(\Omega)} + \|z_k'(s)\|_{H^m(\Omega)} + \|p_k'(s)\|_{L^1(\Omega)} + \sqrt{\mu_k}\,\|p_k'(s)\|_{L^2(\Omega)} + \|e_k'(s)\|_{L^2(\Omega)} + \mathcal{D}_{\nu_k}(u_k'(s), p_k'(s))\,\mathcal{D}^{*,\mu_k}_{\nu_k}(t_k(s), q_k(s)) \leq C$ (5.15)
(and, up to a reparametrization, the non-degeneracy condition).
In Theorem 5.6 below we are going to show that, as the hardening and rate parameters $\mu_k$ and $\nu_k$ vanish, (up to a subsequence) $\mathrm{BV}^{\mu_k,\nu_k}_0$ solutions converge to a $\mathrm{BV}^{0,0}_0$ solution of system (1.1).
Theorem 5.6. Let $(\mu_k)_k$, $(\nu_k)_k$ be two vanishing sequences, and let $(t_k, q_k)_k$ be a sequence of $\mathrm{BV}^{\mu_k,\nu_k}_0$ solutions to system (1.4) such that estimate (5.15) holds.
Proof. The argument is split into the same steps as the proof of Theorem 5.3.
Step 2b: Energy-dissipation upper estimate when $t' = 0$. We will now show that
$\int_{(0,s)\setminus A} \mathcal{M}^{0,0}_{0,\mathrm{red}}(t(r), q(r), 0, q'(r))\,\mathrm{d}r \leq \liminf_{k\to+\infty} \int_{(0,s)\setminus A} \mathcal{M}^{\mu_k,\nu_k}_{0,\mathrm{red}}(t_k(r), q_k(r), t_k'(r), q_k'(r))\,\mathrm{d}r.$ (5.16)
As in the proof of Thm. 5.3, we will apply Lemma A.3 below in the context of the space $\mathcal{Q} := \mathcal{Q}_{\mathrm{PP}}$, with $\mathcal{S}$ the ball of radius $R$ from (5.6), to the functionals $\mathcal{M}_k := \mathcal{M}^{\mu_k,\nu_k}_{0,\mathrm{red}}$ (extended to $\mathbb{R} \times (\mathcal{Q}_{\mathrm{H}}\setminus\mathcal{Q}_{\mathrm{PP}}) \times \mathbb{R} \times (\mathcal{Q}_{\mathrm{H}}\setminus\mathcal{Q}_{\mathrm{PP}})$ as described for $\mathcal{M}^{\mu_k,0}_{0,\mathrm{red}}$ in the proof of Thm. 5.3), and $\mathcal{M}_0 := \mathcal{M}^{0,0}_{0,\mathrm{red}}$. With this aim, we only need to check that the $\Gamma$-liminf estimate in (A.2) holds in our context. Clearly, it is sufficient to check that for any sequence $(t_k, q_k, t_k', q_k')_k$ with $(t_k, q_k, t_k', q_k') \stackrel{*}{\rightharpoonup} (t, q, 0, q')$ in $\mathbb{R} \times \mathcal{S} \times \mathbb{R} \times \mathcal{Q}_{\mathrm{PP}}$ as $k \to \infty$ there holds
$\mathcal{M}^{0,0}_{0,\mathrm{red}}(t, q, 0, q') \leq \liminf_{k\to+\infty} \mathcal{M}^{\mu_k,\nu_k}_{0,\mathrm{red}}(t_k, q_k, t_k', q_k').$ (5.17)
The above estimate easily follows from Lemma A.2 in the case in which $(t_k)_k$ admits a strictly positive subsequence. Instead, if there exists $\bar{k} \in \mathbb{N}$ such that $t_k' \equiv 0$ for $k \geq \bar{k}$, then $\mathcal{M}^{\mu_k,\nu_k}_{0,\mathrm{red}}(t_k, q_k, t_k', q_k') = \mathcal{D}_{\nu_k}(q_k')\,\mathcal{D}^{*,\mu_k}_{\nu_k}(t_k, q_k)$ for all $k \geq \bar{k}$ and we may argue in the following way. When $z' = 0$ and $\mathcal{D}^*(t, q) = 0$ we use a lower bound in terms of $\mathrm{dist}(\cdot\,, \partial\mathcal{R}(0))$, which follows by neglecting some terms in the expression for $\mathcal{M}^{\mu_k,\nu_k}_{0,\mathrm{red}}$. Then, as in Thm. 5.3, we use Lemma A.2 to pass to the limit in the two terms on the right-hand side of the above inequality in the cases $z' = 0$ and $\mathcal{D}^{*,\mu}(t, q) = 0$, respectively. In the remaining case $\|z'\|_{L^2}\,\mathcal{D}^*(t, q) > 0$, it holds $\lim_{k\to\infty} \mathcal{M}^{\mu_k,\nu_k}_{0,\mathrm{red}}(t_k, q_k, t_k', q_k') = +\infty$: indeed, by Lemma A.2 and since $z_k' \rightharpoonup z'$ in $L^2(\Omega)$, we have that $\|z_k'\|_{L^2}\,\mathcal{D}^{*,\mu_k}(t_k, q_k) > c$ for a suitable $c > 0$. We again conclude estimate (5.17) as $(\mu_k)_k$ and $(\nu_k)_k$ vanish. Thus, by Lemma A.3, we have proven (5.16). This finishes the proof. | 2022-04-12T01:16:05.776Z | 2022-04-08T00:00:00.000 | {
"year": 2022,
"sha1": "b76c8e106f4a69905a631c66c863467189c5a5fc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b76c8e106f4a69905a631c66c863467189c5a5fc",
"s2fieldsofstudy": [
"Mathematics",
"Engineering"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
238755337 | pes2o/s2orc | v3-fos-license | Short-term Load Forecasting for Distribution Substations Based on Residual Neural Networks and Long Short-Term Memory Neural Networks with Attention Mechanism
Electric load at the distribution substation level has strong volatility due to varied customer characteristics, making short-term load forecasting challenging. This paper proposes a novel short-term load forecasting method for distribution substations based on combining residual neural networks (ResNet) and Long Short-Term Memory (LSTM) neural networks with an attention mechanism (ResNet-LSTM-Attention), bringing together the advantages of ResNet and LSTM. A 34-layer ResNet is built to extract the latent features of the data. A two-layer LSTM is then added to learn the time-series characteristics of the extracted features, followed by an attention mechanism that selectively attends to the hidden-layer states. K-fold cross-validation is further adopted to make full use of the data and improve the generalization ability of the model. Finally, the dataset from North Carolina and the smart-meter energy-consumption data from the Low Carbon London project are employed to verify the validity of the method. Compared with a traditional LSTM model, the proposed ResNet-LSTM-Attention method shows a smaller mean absolute percentage error and better load-forecasting performance.
Introduction
Short-term load forecasting refers to forecasting the load over the next few hours or days. Accurate load forecasts can be used to arrange day-ahead scheduling and equipment maintenance, monitor system operation status, and prevent accidents. They are of great significance for improving resource utilization and economic benefits, and for ensuring the normal production of society and people's daily life.
At present, the methods applied to short-term load forecasting can be divided into three categories: classic methods, traditional methods, and intelligent methods. Among the classic methods, regression analysis has a simple structure, fast calculation speed, and good extrapolation performance; however, it expresses complex problems with linear equations and cannot accurately capture the impact of various factors on the results [1]. The time-series method requires little data and can reflect the continuity of the load over a short period, but gives insufficient consideration to uncertain factors with a large impact on the load, such as holidays [2]. Among the traditional methods, the Kalman filter performs well: the load is divided into a random component and a deterministic component, where the random component is represented by state variables and the deterministic component is described by a first-order linear model [3]. A state-space model is established to achieve prediction; the best estimate of the current state, combined with the future state of the system, makes the prediction more accurate, but in practical scenarios it is difficult to obtain the statistical characteristics of the noise. With the development of data collection and storage technology, historical load data have grown exponentially [4], and various intelligent methods have been applied. Among them, neural networks can deal with massive data [5] and, thanks to fast convergence and strong adaptive ability, perform best.
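To make the Kalman-filter idea above concrete, the following is a minimal scalar sketch in Python. It is not the method of [3]: the transition coefficient a and the noise variances q and r are illustrative assumptions, and a real load model would use a richer state.

```python
import numpy as np

def kalman_1d(z, q=1e-3, r=0.5):
    """Scalar Kalman filter: the deterministic part follows a first-order
    linear model x_k = a * x_{k-1}; q and r are process/measurement noise."""
    a = 1.0                      # first-order transition coefficient (assumed)
    x, p = z[0], 1.0             # initial state estimate and its variance
    out = []
    for zk in z:
        x, p = a * x, a * p * a + q            # predict step
        k = p / (p + r)                        # Kalman gain
        x, p = x + k * (zk - x), (1 - k) * p   # update with measurement zk
        out.append(x)
    return np.array(out)

load = 100 + 10 * np.sin(np.linspace(0, 6, 48)) + np.random.randn(48)
print(kalman_1d(load)[-5:])      # filtered one-step load estimates
```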
Nowadays, two types of deep learning models, convolutional neural networks (CNN) and recurrent neural networks (RNN), are widely used in load forecasting. A CNN is a machine learning method that can learn relationships in the input data; once trained on existing data, it can express the relationship between input and output. CNNs have been widely used in pattern recognition tasks such as speech recognition, image recognition, and behaviour recognition, and have shown good performance. The special feature of an RNN is that it exploits the relationship between the current output and previous information, selectively using information from the sequence to process the current input; it is therefore better suited to problems with correlation between samples [6]. As variants of CNN and RNN, ResNet and LSTM have further advantages: deep residual networks can effectively mitigate overfitting [7], while LSTM solves the vanishing- and exploding-gradient problems of RNN training [8].
In this paper, we propose a novel short-term load forecasting method for distribution substations based on ResNet and LSTM with an attention mechanism (ResNet-LSTM-Attention), in which the advantages of ResNet and LSTM are combined. The attention mechanism is further added to account for the different contributions of historical data at different times.
The rest of this paper is organized as follows. The proposed load forecasting methodology via ResNet-LSTM-Attention is described in Section II. In Section III, simulation results are demonstrated. Finally, conclusions are drawn in Section IV.
ResNet model
Theoretically, a deeper CNN is more expressive, and its predictions should be better. In practice, however, the learning ability of the network begins to degrade as depth grows, even if convergence can be achieved through regularization, because as the number of layers increases some gradient values become infinitely close to zero, i.e., the gradient vanishes. The residual structure breaks through the inherent assumption that, in forward propagation, the output of the current layer can only feed the immediately following layer: with a residual (shortcut) connection, the output of the $i$-th layer can skip the adjacent $j$ layers and serve directly as an input to the $(i+j)$-th layer. The network then no longer degrades due to factors such as vanishing gradients, and can reach hundreds of layers as needed.
By superimposing the identity map $y = x$ on a shallow convolutional neural network, it can be ensured that the prediction of the deeper network will not degrade compared with the shallow one [9]. This $y = x$ branch is customarily called an identity connection, as shown in Figure 1. Let the first weight layer be $w_1$ and the second weight layer be $w_2$; the final result is then $y = F(x) + x$ with $F(x) = w_2\,\sigma(w_1 x)$, where $\sigma$ denotes the activation. When the stack of convolutional layers is deep, even if some parameters in $F(x)$ tend to zero, the existence of the $y = x$ branch still guarantees that the learning ability will not degrade.
In the residual basic learning unit, an identity connection layer passes the input $x$ directly to the output, so the output is $H(x) = F(x) + x$. When the number of layers increases and $F(x)$ is approximately $0$, the output $H(x)$ of the residual unit is approximately $x$: this is the identity connection. If the gradient of $F(x)$ vanishes during back-propagation, the signal can still pass through the identity connection. The network then no longer learns the mapping $H(x)$ directly but learns the difference between $H(x)$ and $x$, the so-called residual $F(x) = H(x) - x$; therefore, even if $F(x)$ is learned poorly, the input is passed through directly, and the accuracy of the whole network does not decrease as the depth grows.
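A minimal sketch of the residual basic unit described above, written in PyTorch. The paper specifies only the two weight layers and the identity connection; the framework, channel count, and the choice of 1-D convolutions (suitable for load sequences) are our assumptions.

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Residual basic unit: y = F(x) + x, with F built from two weight layers."""
    def __init__(self, channels):
        super().__init__()
        # First weight layer w1 and second weight layer w2 from the text.
        self.w1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.w2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        f = self.w2(self.relu(self.w1(x)))  # residual branch F(x)
        return self.relu(f + x)             # identity connection y = F(x) + x

x = torch.randn(8, 16, 24)   # (batch, channels, sequence length)
y = BasicBlock(16)(x)
print(y.shape)               # torch.Size([8, 16, 24])
```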
Forget gate. The forget gate determines how much information from the previous moment in the memory unit is carried over to the current moment for learning. It is realized by the gate value $f_t$, whose range is $(0,1)$, using the sigmoid function to control the proportion of retained information:
$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$ (2)
where $h_{t-1}$ is the hidden-layer output of the previous step and $x_t$ the input data; $W_f$ and $b_f$ are the weight and bias of the gate operation; $\sigma$ is the nonlinear sigmoid activation.
Input gate. The input gate determines how much new information is added to the cell. To achieve this, two functions are needed:
$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$, $\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$.
Output gate. The output gate determines the proportion of the memory stored in the memory cell that is output:
$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$, $h_t = o_t \odot \tanh(C_t)$.
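The gate equations above can be spelled out in a few lines of code. This is a generic LSTM step, not the paper's implementation; the stacked-weight layout (one matrix W holding all four gates) is a common convention we assume here.

```python
import torch

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step. W has shape (H + D, 4H), b has shape (4H,),
    stacking the forget, input, candidate and output gate weights."""
    z = torch.cat([h_prev, x_t], dim=-1) @ W + b  # shared affine map
    f, i, g, o = z.chunk(4, dim=-1)
    f = torch.sigmoid(f)          # forget gate f_t
    i = torch.sigmoid(i)          # input gate i_t
    g = torch.tanh(g)             # candidate state ~C_t
    o = torch.sigmoid(o)          # output gate o_t
    c_t = f * c_prev + i * g      # memory cell update
    h_t = o * torch.tanh(c_t)     # hidden state output
    return h_t, c_t

H, D = 32, 16                     # hidden and input sizes (illustrative)
W = torch.randn(H + D, 4 * H) * 0.1
b = torch.zeros(4 * H)
h, c = torch.zeros(1, H), torch.zeros(1, H)
h, c = lstm_step(torch.randn(1, D), h, c, W, b)
```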
Attention mechanism
The attention mechanism is essentially a resource-allocation mechanism that highlights the impact of important information; here it is added on top of the LSTM [10].
Figure 3 Structure of the attention mechanism
The structure of the attention mechanism is shown in Figure 3, where $x_t$ is the input of the LSTM network, $h_t$ is the state of the hidden layer at time $t$, $\alpha_t$ is the attention weight assigned to the hidden layer at time $t$, and $y$ is the output of the LSTM with the attention mechanism. The attention weights are computed as
$e_t = u \cdot \tanh(w h_t + b)$, $\alpha_t = \exp(e_t) / \textstyle\sum_j \exp(e_j)$, $s_t = \textstyle\sum_t \alpha_t h_t$,
where $e_t$ is the attention probability distribution value determined by the LSTM output $h_t$ at time $t$; $u$ and $w$ are weight coefficients and $b$ is a bias; $s_t$ is the output value of the attention layer at time $t$.
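A sketch of this additive attention layer over the LSTM hidden states, in PyTorch. The equations follow the text; realising u, w, b as linear layers is our implementation choice.

```python
import torch
import torch.nn as nn

class Attention(nn.Module):
    """e_t = u . tanh(w h_t + b); alpha_t = softmax(e_t); s = sum alpha_t h_t."""
    def __init__(self, hidden):
        super().__init__()
        self.w = nn.Linear(hidden, hidden)          # weight w and bias b
        self.u = nn.Linear(hidden, 1, bias=False)   # weight u

    def forward(self, h):                 # h: (batch, T, hidden)
        e = self.u(torch.tanh(self.w(h))) # attention scores, (batch, T, 1)
        alpha = torch.softmax(e, dim=1)   # attention weights over time
        return (alpha * h).sum(dim=1)     # context vector s, (batch, hidden)
```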
Output
The input of the output layer is the output of the attention layer, which is fed to a fully connected layer with the ReLU activation function: $y = \mathrm{ReLU}(W_y s + b_y)$.
ResNet-LSTM network structure
Several residual basic blocks are combined with the LSTM to build the ResNet-LSTM hybrid network model, whose composition is shown in Figure 4. The ResNet part consists of 34 convolutional layers. To prevent the model from overfitting, a portion of the neurons in the two-layer LSTM is randomly discarded using dropout.
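A sketch of the hybrid assembly, reusing the BasicBlock and Attention classes from the sketches above. Only the two LSTM layers and the use of dropout come from the text; the block count, channel sizes, hidden size, and dropout rate here are placeholders.

```python
import torch
import torch.nn as nn

class ResNetLSTMAttention(nn.Module):
    """ResNet feature extractor -> 2-layer LSTM with dropout -> attention
    -> fully connected output with ReLU (cf. Figure 4)."""
    def __init__(self, in_ch=16, hidden=64, n_blocks=4):
        super().__init__()
        self.resnet = nn.Sequential(*[BasicBlock(in_ch) for _ in range(n_blocks)])
        self.lstm = nn.LSTM(in_ch, hidden, num_layers=2,
                            batch_first=True, dropout=0.2)  # dropout between layers
        self.attn = Attention(hidden)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, channels, T)
        z = self.resnet(x)                   # latent features
        h, _ = self.lstm(z.transpose(1, 2))  # time-series features, (batch, T, hidden)
        return torch.relu(self.fc(self.attn(h)))  # ReLU-activated output
```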
Simulation results
In this section, the 2017 Global Energy Forecasting competition (GEF2017) dataset from Charlotte, North Carolina [11] is used, together with the smart-meter data from the Low Carbon London (LCL) project, which records the energy consumption (kWh) of end customers every half hour. We convert the kWh data to kW and randomly aggregate the 2013 data of a subset of customers to simulate the total load curve of a distribution substation. The performance of the proposed model is compared with the LSTM commonly used in short-term load forecasting.
Data preprocessing
First, the original data are loaded, and the hour, month, day of the week, weekend indicator, and holiday indicator are encoded in one-hot format. Second, features that the neural network model cannot directly use are discarded. Third, interaction features between temperature, humidity, and month are added to enrich the feature dimension of the data. Each run of 24 consecutive time steps is then taken as one sequence; to facilitate computation, the feature dimensions are split so that length and width are approximately equal. The data are normalized with min-max normalization to eliminate the adverse effect of singular samples: $x' = (x - x_{\min}) / (x_{\max} - x_{\min})$.
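A preprocessing sketch for the steps above, in Python with pandas. The column names and the toy frame are hypothetical, not from the datasets; only the one-hot encoding, the min-max formula, and the 24-step slicing follow the text.

```python
import numpy as np
import pandas as pd

# Hypothetical half-built frame; column names are illustrative only.
df = pd.DataFrame({
    "load_kw": np.random.rand(168) * 500,
    "temp": np.random.rand(168) * 30,
    "hour": np.tile(np.arange(24), 7),
    "weekday": np.repeat(np.arange(7), 24),
})
df["is_weekend"] = (df["weekday"] >= 5).astype(int)

# One-hot encode the calendar fields, as described in the text.
df = pd.get_dummies(df, columns=["hour", "weekday"])

# Min-max normalisation: x' = (x - x_min) / (x_max - x_min).
for col in ["load_kw", "temp"]:
    lo, hi = df[col].min(), df[col].max()
    df[col] = (df[col] - lo) / (hi - lo)

# Slice into sequences of 24 consecutive time steps.
vals = df.astype("float32").values
X = np.stack([vals[i:i + 24] for i in range(len(vals) - 24)])
print(X.shape)   # (samples, 24, features)
```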
Evaluation Index
The mean absolute percentage error (MAPE) is used as the loss function for each iteration:
$\mathrm{MAPE} = \frac{1}{N}\sum_{t=1}^{N} \left| \frac{x_{\mathrm{true}}(t) - x_{\mathrm{pred}}(t)}{x_{\mathrm{true}}(t)} \right| \times 100\%$,
where $x_{\mathrm{true}}(t)$ and $x_{\mathrm{pred}}(t)$ are the true and predicted load at time $t$. Adaptive moment estimation (Adam) is used as the optimization algorithm [13]; it can be regarded as a combination of the momentum method and the RMSProp algorithm, using momentum as the parameter-update direction while adaptively adjusting the learning rate.
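The loss and optimiser can be set up as follows, reusing the ResNetLSTMAttention sketch above. The epsilon guard and learning rate are our assumptions; the paper only names MAPE and Adam.

```python
import torch

def mape_loss(pred, true, eps=1e-6):
    """MAPE = mean(|x_true - x_pred| / x_true) * 100; eps guards division by 0."""
    return (torch.abs(true - pred) / (true + eps)).mean() * 100.0

model = ResNetLSTMAttention()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam optimiser [13]
```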
K-fold cross-validation method
K-fold cross-validation is used to make better use of the dataset and obtain stronger randomness. First, the dataset is randomly shuffled and divided evenly into K sub-datasets. For each run, K-1 sub-datasets are randomly selected as the training set of the model, and the remaining sub-dataset serves as the test set. This process is repeated K times, and the K results are averaged as the final output [14]. In this way, every data point participates in both forward and backward propagation. K-fold cross-validation not only prevents the model from overfitting but also avoids under-utilizing the data.
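A sketch of the procedure with scikit-learn. The paper does not state K, so K = 5 is an assumption; the mean-predictor baseline stands in for the real training loop, which is omitted.

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.random.rand(365, 24, 8)   # placeholder feature sequences
y = np.random.rand(365) + 0.5    # placeholder load targets

kf = KFold(n_splits=5, shuffle=True, random_state=0)  # K = 5 assumed
fold_scores = []
for train_idx, test_idx in kf.split(X):
    baseline = y[train_idx].mean()   # stand-in for model training on K-1 folds
    mape = np.mean(np.abs(y[test_idx] - baseline) / y[test_idx]) * 100
    fold_scores.append(mape)         # evaluate on the held-out fold
print(np.mean(fold_scores))          # average the K results as the final output
```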
Simulation results
Training loss curves of the LSTM model and the proposed model on the GEF2017 dataset are shown in Figure 5. Compared with the traditional LSTM model, the proposed ResNet-LSTM-Attention method shows a smaller MAPE on the training set. Figure 7 compares the 168-hour (one-week) load forecasting results on the GEF2017 and LCL datasets. A similar conclusion can be drawn: the proposed model gives a lower MAPE and better forecasting performance than the LSTM model. The results show that the method can also be used to predict the load a week ahead.
Conclusions
This paper proposes a novel load forecasting method for distribution substations based on combining residual neural networks and long short-term memory neural networks with an attention mechanism. A 34-layer ResNet is built to extract the latent features of the data; a two-layer LSTM is then added to learn the time-series characteristics of the extracted features, followed by an attention mechanism that selectively attends to the hidden-layer states. K-fold cross-validation is adopted to make full use of the data and improve the generalization ability of the model. The dataset from North Carolina and the smart-meter energy-consumption data from the Low Carbon London project are employed to verify the validity of the method. Compared with the traditional LSTM model, the proposed ResNet-LSTM-Attention method shows a smaller mean absolute percentage error and better load-forecasting performance. The method can therefore help grid companies adjust dispatching plans and work arrangements, anticipate load changes, and take targeted measures to ensure the stable and efficient operation of the power grid, providing better power supply services.
Future work will focus on integrating multiple models and improving their generalization capabilities. | 2021-10-14T20:11:36.384Z | 2021-09-01T00:00:00.000 | {
"year": 2021,
"sha1": "5729697a32b72c02d7aacc9b2effdf6086b9614f",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/2030/1/012087",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "5729697a32b72c02d7aacc9b2effdf6086b9614f",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
214194019 | pes2o/s2orc | v3-fos-license | Transformation of Social and Labor Relations: Theory and Methodology
The article is devoted to the theoretical and methodological foundations of the transformation of the social-labor relations system. The author proposes organizing this transformation on the basis of a combined system-axiological approach, which enables the analysis and modeling of the transformation processes of the social-labor relations system and the identification of its bifurcation point. Transformation features and parameters such as institutional space, labor processes, human resources, and the interaction process are considered. Keywords: social-labor transformation, labor processes, classification of types of transformation of social-labor relations, institutional space, value space, values.
I. INTRODUCTION
The development of the modern economy trends toward high technologies aimed at resource and labor saving, the automation of labor and working processes, and the wide application of information technologies in the social-labor sphere. The role of the human factor as a production factor increases (labor activity becomes individualized), and the content and structure of labor activity and labor processes change (the share and importance of intellectual labor grow). Attitudes toward labor (labor values) and toward personnel (as a resource and company asset) are being modified. It is worth noting that the transformation of the value space in social-labor relations results from the turbulence of socio-economic development, which defines the content and direction of the changes. Problems of workforce quality, attitudes toward the working process, work performance, and distortions of labor behavior lead to the transformation of the value complex of the person, the company, and society alike. For objective reasons, the circle of subjects (participants) of social-labor relations is widening (a shift from the subjects of social-labor relations to the stakeholders of social-labor relations), which calls for greater regulation (institutionalization) of stakeholder interaction. Accordingly, there arises a need to research the transformation of the relationship between labor and capital within the new institutional and value-based space.
Research and monitoring of social-labor relations in contemporary conditions, conducted by the authors, have shown great variety, heterogeneity, and incoherence in present-day social-labor relationships. The ongoing value transformation of social-labor relations is not aligned with the existing institutions; contradictions therefore arise between them, negatively affecting the labor motivation and behavior of the participants and unbalancing their interests, which hinders organizational change and, as a result, decreases the efficiency of the economic system. This actualizes the theoretical and methodological need for a coherent and consistent approach to studying the transformation of social-labor relations based on the balance of the participants' interests.
The authors of this article suggest that the success of any changes, and the overcoming of recessionary trends, requires developing the directions of transformation of the social-labor relations system, building motivation for change, and organizing the labor relations system on the basis of combined system and axiological approaches.
II. LITERATURE REVIEW
Nowadays there is a notable diversity of theories of the transformation of socio-economic systems (neocapitalist theory, the theory of stages of economic development, the theories of industrial, post-industrial, and information society, convergence theory, the theory of the technetronic age, new institutionalism, the theory of quality of life, and others). Scientists from different countries have dealt with the formation and development of theories of social-labor relations transformation, notably U. A. Nazarova [1], V. S. Polovinko [2], Yu. O. Lanchenko [3], C. B. Frey and M. A. Osborne [4], and J. Howe [5]. Thus, G. Jackson has studied the application of management instruments and the development of corporate social responsibility in the transformation of business labor relations [6]; J. S. B. Cavalcanti and M. I. Bendini have studied globalization and the transformation of labor relations in regions of Argentina and Brazil [7]; Z. Kilhoffer has studied the transformation of present-day labor relations institutions [8]; and A. Sorgner has considered the impact of automation on changes in labor relations [9].
The great variety of studies of social-labor relations transformation can be explained by the multidimensionality and interdisciplinarity of the relevant academic fields (labor economics, law, political science, sociology, psychology, etc.): their different objects of analysis are reflected in different approaches to the nature of the transformation process. Thus, a number of Russian sociologists (T. I. Zaslavskaya [10]) single out the term "social transformation". The modern concept of T. A. Medvedeva [11] on the processes of social-labor relations transformation is of particular interest: "values (meanings, ideas, dogmata) → form (rules, regulations, institutions) → process (social learning)". According to T. A. Medvedeva, the socio-cultural values of the subjects of social-labor relations largely define their economic choices and serve as a basis for decision-making, which is reflected in the formation of the corresponding rules, regulations, and values of social-labor life; the system of social-labor relations can thus be understood as a "reflective system" whose participants are capable of "re-thinking the social-labor relations system" [11].
There is likewise a great variety of studies in the field of labor values (V. S. Magun [12], N. W. Rodionova [13], S. Dolan and S. Garcia [14], J. Isaac [15], J. Cushen [16], F. J. Chahestani, A. Fazel, and S. A. Mirjafari [17], and others). The system-based approach to research in the social-labor field, widely represented in the studies of a number of Russian and foreign scientists, is also relevant. Thus, K. Dooley [18] regards the system of social-labor relations as a nonlinear network of interacting agents, and F. Capra's system-based approach to understanding biological and social phenomena [19] has found a receptive audience in this line of research.
Despite the large number of scientific studies of the transformation processes of socio-economic systems, many theoretical and methodological aspects of the transformation of the social-labor relations system remain open, namely the drivers (reasons, factors), the aims and objects of transformation, and the possibilities and conditions of particular transformation pathways of the socio-economic system. This is because most theories and concepts of transformation merely demonstrate the necessity of changes in social-labor relations. Moreover, scientific views on the nature of social-labor relations are incoherent and heterogeneous, owing to the variety of conceptions of their nature, sources, and formation mechanisms (related to the different research fields: labor economics, sociology, political science, etc.), which in turn is reflected in the variety of conceptual approaches to social-labor relations transformation.
The aim of this article is to form the conceptual (theoretical and methodological) basis of social-labor relations transformation directed toward long-term development, grounded in the balance of the participants' interests and a combination of scientific approaches.
III. METHODS AND MATERIALS
The research methodology is based on the system-axiological approach, using analysis and synthesis of the existing scientific approaches as well as the co-evolution concept, which presupposes the development of the interlocking elements of a unified system while maintaining the system's integrity (inter alia as the joint result of environmental factors and institutional effects). The presentation of the material follows this logic: the concept (nature) of social-labor system transformation; the special aspects and characteristics of social-labor system transformation; and a classification of the types of social-labor system transformation.
IV. RESULTS AND DISCUSSION
Considering social-labor relations as a network system of heterogeneous relations, with many participants and a certain coordination of the relations between them, the authors define social-labor relations transformation as the process of building a new social-labor relations network based on the formation (change) of corporate value networks, which preconditions the value transformation of the subjects' value systems and the modification of social-labor relations, with changes in goals, characteristics, and parameters, building up the necessary motivation and responsibility in labor processes.
We will consider the transformational processes in the social-labor relations network system along three directions (Fig. 1). The Z axis shows technical-economic changes, including the transformation of labor and labor processes based on technological determinism and the law of scarcity. Thus, scientific-technical progress changes the content of labor activity, shifting the emphasis toward mental labor that demands new knowledge, skills, and abilities; the labor process is updated, becoming limited in time and changing in space. (Figure 1. The process of social-labor relations transformation along the X, Y and Z axes: configuration "A" → configuration "B" → configuration "C".) The X axis reflects institutional changes, i.e., the transformation of the interaction mechanisms between the participants of social-labor relations; it preconditions the reasons for transformation, mainly through the adaptation of the institutional space to economic changes (economic reform, transition to new economic models, change of political regime, etc.) (J. Howe [5]); new institutions are created or imported, and structural changes of the institutional space occur, introducing institutions with new characteristics.
The Y axis shows axiological changes, including the modification (radical change) of the value systems of all participants of social-labor relations, down to the mental level, leading to the transformation of the motivational field. In other words, one of the key questions of social-labor relations is the axiological aspect, which brings a system-forming and semantic factor into social-labor processes and can define workers' behavior at their workplaces in an organization (I. Elers) [20]. In this respect we consider the subjects of social-labor relations as involved in the value-creation process [15], [16], including during the transformation of social-labor relations. For this purpose we use the term "corporate value networks" (CVN), understood as the value exchange in the inter-organizational networks of the social-labor relations system, which contributes to the formation of the necessary motivation and responsibility in labor processes, including during periods of organizational change and reform.
The transformation of values at the institutional level of the social-labor relations system (M. E. Porter, M. R. Kramer [21]) presupposes the generation (modification) of corporate value networks that widen the field of economic and social values, which, in our view, helps to overcome collective opposition and builds the necessary motivation of personnel, creating a balance of the participants' interests based on collaboration and interaction.
All the considered directions of transformation of the social-labor system are independent on the one hand, yet co-evolutionarily conditioned and mutually dependent on the other.
The special characteristics and parameters of social-labor system transformation are defined by the reasons for transformation. According to Émile Durkheim, the reason for transformation is "the development of the specialization of labor (as the differentiation of people and social groups), whereas the specialization of labor can be explained by the growth of the moral and physical density of the population" [22]. Some scientists see the reasons for the transformation of social-labor systems in the devaluation of labor values; others [2] point to the accumulation of contradictions in labor relations and conflicts in employee-employer interactions.
According to the system-axiological methodology suggested by the author, the reason for the transformation of the social-labor relations system is the system reaching a bifurcation point (moment), which can be brought about by a complex of radical changes in three subsystems: "technical-economic changes ↔ axiological changes", "technical-economic changes ↔ institutional changes", and "institutional changes ↔ axiological changes".
The emphasis on the transformation of labor and of social-labor relations is made in the works of E. W. Nekhoda [23], where, on the basis of comparative analysis, four parameters of this transformation are identified: economic, technical, value-based, and cultural. The author suggests specifying the parameters of social-labor relations transformation as economic, technical, and socio-cultural, since "value-based" refers not to a transformation parameter but to a component of the social-labor system. Let us consider social-labor relations transformation along these parameters.
The transformation of the "institutional space" regulating relations in the social-labor system presupposes changes along the following parameters. Along the economic parameter: changes in the institutional aspects (rules, regulations, mechanisms) regulating labor activity, namely changes in the institutional conditions of production; in the economic structure and the employment structure; in property rights to the economic resources of labor activity; in the structural mechanism of the value-creation network (from the labor process to the final product); and in the mechanisms of rewarding labor results (changes of rules and regulations, of remuneration forms, e.g., electronic money, and new formal and informal forms of remuneration). Along the technical parameter: changes in the institutional aspects defining the content of labor activity; thus, automation and technological determinism modify the institutional basis of interaction in labor processes and change the system of statutory, regulatory, engineering, business, and other documentation. Along the socio-cultural parameter: the transformation of institutional aspects conditioning inclusivity and tolerance of social, ethnic, confessional, and cultural differences in labor activity; in Russia today, most regulatory documents already contain inclusivity and tolerance requirements as a necessary condition of social-labor relations.
Labor processes, which set the parameters of work (labor) in the social-labor relations system, are being transformed under scientific-technical progress from the domination of physical labor toward the domination of mental labor, bringing observation, management, and service functions to the fore. In this respect, transformation along the economic parameter presupposes changes in labor efficiency and production volumes, changes in the equity structure of labor processes, and the transformation of the production-engineering cycle (shorter production times due to automation, higher quality of labor operations, etc.). Along the socio-cultural parameter, local cultures in labor processes are modernized toward global multicultural activity, namely the transformation of interaction and collaboration among the different stakeholders in labor processes.
The transformation of "human resources" presupposes changes in the structure of human assets (the economic parameter) and changes in the structure of knowledge and the intellectualization of labor activity (the technical parameter), followed by changes in the motivational structure of the socio-cultural networks (the socio-cultural parameter).
The author supposes that the transformation of social-labor relations is based on the transformation of interests and value systems, which enables the passage to a new quality of growth and changes the notions of the efficiency of labor activity and of labor efficiency within social-labor relations. In this regard, along the economic parameter there occurs a change of structure and of consumer values. Along the technical parameter, the attitude toward labor and labor behavior in technical processes changes. It is worth noting that in this component the transformation along the socio-cultural parameter carries the greatest weight: the complex of social and cultural values changes, building up the motivational field of stakeholder interaction. According to E. W. Nekhoda, whose opinion the author shares, qualitative changes are currently taking place in the social-labor relations system, and a new level of relations, "society - nature - human", is appearing. Values shape the perception of living conditions, labor activity, and the model of labor behavior; the value space shapes the interaction of value-based regulation and its result.
Changes in relations as an interaction process (value exchange) presuppose building mechanisms of interaction, communication, and value exchange between the subjects of the social-labor relations system. Such changes precondition the need to regulate interactions within the value-exchange process and to form new rules, procedures, and institutions.
The transformation of the internal (personal) values of a person participating in labor processes inter-conditionally explains the model of motivational behavior in labor processes, based on changes in professional characteristics, levels of competence, and knowledge, and the building of new value networks. The transformation of the "value space" (value network) includes changes in organizational (corporate) culture, to which its own value network corresponds. Value orientations shape the perception of living conditions, labor activity, and the behavioral model; the value space shapes the value-based regulation of interaction and its result [5]. A relation appears as a "mutual connection binding the parties to each other", followed by a synergetic effect in which the attributes of the interacting parties appear and are deflected. Moreover, "relation" in philosophy can be interpreted as a form of positedness and representation of one thing through another; according to classical dialectics, a relation is the process and result of the transfer of the internal moment of the qualitative definiteness of a thing into an external instance.
Changes in relations (interactions) precondition the need to regulate interactions within the value-exchange process and to form new rules, procedures, and institutions. According to a number of scientists, institutions can be transformed faster than values: by their research data, at least two generations are needed for values to change [23]. A situation can therefore arise in which institutions have changed but values have not, so that the transformation of the social-labor system remains incomplete and the system stays unbalanced and ineffective.
All the elements of the social-labor system are closely interconnected; a change (transformation) in one leads to a chain reaction and further transformation. Thus, changes in the knowledge level of human assets lead to new technologies, which in turn transform labor and labor processes, change the price level of resources (including labor), and modify the value system and the institutional space.
A review of social-labor relations shows that they comprise a wide circle of socio-economic aspects, processes, and connections, and consequently include different types of transformation. The author's analysis of classification approaches has consolidated the key bases upon which types of social-labor relations transformation can be classified (Table 1). By source: internal (endogenous) transformations are caused by processes inside the social-labor relations system; external (exogenous) transformations are induced by reasons and factors outside it, e.g., reform processes of the economic system, and they precondition internal changes in the social-labor system.
By procedure: fundamental transformations are connected with the systematic, purposeful transformation of the components of the social-labor relations system; bifurcational (bursting) transformations are qualitative, uneven changes of the system's components that shift the system into one of several possible qualitative states, depending on the initial conditions and on the internal and external factors acting on the social-labor relations system.
Diffusive transformation of the social-labor system is the gradual, reciprocal spread of changes and their adaptation within the system's components. By innovative potential: fundamental transformations are conceptual, structural-functional changes leading to the full dismantling of the old social-labor relations system and the building of a new one with entirely new features, characteristics, and components; revolutionary transformations are dramatic changes in the system's components that give rise to new features and characteristics of the system; incremental (gradual) transformations are minor changes of a qualitative character, for example a gradual change of the relations between stakeholders.
Partial transformations are dramatic changes in a number of components of the social-labor relations system that do not lead to a full change of the system. By dynamics: evolutionary, revolutionary, continuous, cumulative, discrete, cyclic.
By transformation results. Depending on the targets, goals, and reasons, the following outcomes of social-labor relations system transformation can be distinguished: the emergence of (transition to) a completely new relations system based on innovative, fundamental, and conceptual premises, presupposing the building of a new system; the formation of a new configuration of the social-labor relations system (including a possible reform (modernization) of the old system); the modification of the old system without transition to a new one, i.e., in part of the system only; a return to the old system (failures and disturbances of the transformation process); or the start of a new transformation process before the current transformation period has finished.
From this classification it is evident that types of social-labor transformation differ by various criteria. Each type has its own transformational states and transformation period, which set the limits and sequence of transformation. One of the most difficult aspects of the transformation period is the coexistence of old (being transformed) and new forms of relations, institutions, and values, which gives rise to contradictions, conflicts, and opposition within the social-labor relations system.
One of the main aspects of transformation is time (the transformation period), which sets certain limits and consequences of the transformation process. The author defines the transformation process as a transient state of the social-labor relations system reflecting a dramatic transfer from one configuration of the system to another. Analysis of the dynamics of reforms and organizational changes has shown that the more complicated the axiological field and the higher the level of institutionalization of the social-labor relations system, the longer the transformation period. The duration of the transformation period is also influenced by the level of the motivational field and the specifics (volume) of the corporate value network.
Two approaches exist in transformation theory: "shock" and evolutionary. The shock approach presupposes a maximally short transformation period and requires rather large investments, yet is unlikely to achieve a full transformation of the axiological field, whereas the evolutionary approach aims at a gradual change of the parameters of the social-labor relations system and, from our point of view, is more efficient.
As special features of the transformation of the social-labor relations system, one can note the cumulative character of changes at different levels and the impossibility of leaps, which is conditioned by the internal mechanism of attaining consistency and balance of the system only upon the transformation (change) of all its parameters. Partial changes of any parameter inevitably destroy the configuration of social-labor relations, leading to new structural changes and a new transformation period.
Considering the problem of social-labor relations system transformation from the standpoint of system dynamics, it is worth noting that a system can respond to external influence in several ways: amplifying the influence (under a reinforcing feedback) or dampening it (under a compensating feedback). It is therefore necessary to study closed reaction circuits (feedback loops) in order either to speed up transformation processes in the required direction (shortening the transformation period) or to allow the system to self-regulate and reach the required development (self-transformation).
V. CONCLUSION
From this perspective, firstly, the conducted research into the theoretical and methodological bases of social-labor relations transformation allows us to suggest a new conceptual view of the transformation of the social-labor relations system as the network process of building a new, value-based social-labor relations system. Secondly, the author has examined the transformation of social-labor relations through its parameters, which contributes to building the theoretical basis of social-labor relations transformation.
Thirdly, the author has suggested a classification of the types of social-labor relations transformation, defining the different conditions of the transformation processes as the result of complex changes in three directions: "technical-economic changes", "institutional changes", and "axiological changes".
Fourthly, the methodology of the axiological-system approach provides an opportunity to control transformation processes and to exert resonant influence on the parameters of the social-labor relations system, steering the system toward the supportive (required) configurations and enabling mechanisms of "self-managed development" of the social-labor relations system. | 2020-02-20T09:14:49.310Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "d59a06fa03b411e1c53db876cef1f74cb16069e5",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.2991/fred-19.2020.101",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "a1c25f209af63f5b208715fa6a2421791ddf265e",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Economics"
]
} |
256020080 | pes2o/s2orc | v3-fos-license | rhErythropoietin-b as a tissue protective agent in kidney transplantation: a pilot randomized controlled trial
Extended criteria donor (ECD) and donation after circulatory death (DCD) kidneys are at increased risk of delayed graft function (DGF). Experimental evidence suggests that erythropoietin (EPO) attenuates renal damage in acute kidney injury. This study piloted the administration of high-dose recombinant human EPO-beta at implantation of ECD and DCD kidneys, and evaluated biomarkers of kidney injury post-transplant. Forty patients were randomly assigned to receive either rhEPO-b (100,000 iu) (n = 19 in the intervention group, as 1 patient was un-transplantable post randomisation) or placebo (n = 20) in this double-blind, placebo-controlled trial at Manchester Royal Infirmary from August 2007 to June 2009. Participants received either an ECD (n = 17) or DCD (n = 22) kidney. Adverse events, renal function, haematopoietic markers, and rejections were recorded out to 90 days post-transplant. Biomarkers of kidney injury (neutrophil gelatinase-associated lipocalin, Kidney Injury Molecule-1 and IL-18) were measured in blood and urine during the first post-operative week. The incidence of DGF (53% vs 55%) (RR = 1.0; CI = 0.5-1.6; p = 0.93) and of slow graft function (SGF) (32% vs 25%) (RR = 1.1; CI = 0.5-1.9; p = 0.73), as well as serum creatinine, eGFR, haemoglobin and haematocrit, blood pressure, and acute rejection, were similar in the 2 study arms. High-dose rhEPO-b had little effect on the temporal profiles of the biomarkers. High-dose rhEPO-b appears to be safe and well tolerated in the early post-transplant period in this study, but has little effect on delayed or slow graft function in recipients of kidneys from DCD and ECD donors. Comparing the profiles of biomarkers of kidney injury (NGAL, IL-18 and KIM-1) showed little difference between the rhEPO-b treated and placebo groups. A meta-analysis of five trials yielded an overall estimate of the RR for DGF of 0.89 (CI = 0.73-1.07), a modest effect favouring EPO but not a significant difference. A definitive trial based on this estimate would require 1000-2500 patients per arm for populations with base DGF rates of 50-30% and 90% power. Such a trial is clearly unfeasible. EudraCT Number 2006-005373-22; ISRCTN85447324, registered 19/08/09.
Background
In 2011-2012, 34% of UK deceased donors were over the age of 60 years, while 40% were donation after circulatory death (DCD) donors [1]. Both expanded criteria donor (ECD) and DCD kidneys are more likely to develop delayed graft function (DGF) in the early post-transplant period [2,3], with its ensuing clinical and financial implications. Since its introduction, recombinant human erythropoietin has been a major advance in the management of renal anaemia, enhancing patient cognitive function, physical activity and quality of life [4,5]. In addition, there is now a large body of evidence that rhEPO has pleiotropic effects on the body beyond the erythroid compartment. Animal studies examining acute kidney injury, including ischaemia-reperfusion injury (IRI), have shown functional improvements [6,7] and anti-inflammatory effects [8] after EPO administration, given either before [9,10], during [11,12], or, very importantly, after the injury has taken place [6,7,12]. In 2002, Ehrenreich et al. [13] reported a human study in acute ischaemic stroke, where high-dose rhEPO was well tolerated and associated with an improvement in outcome at 1 month, as assessed by clinical endpoints of stroke and outcome scales.
Based on results from clinical trials including 1725 patients, approximately 8% of patients treated with NeoRecormon® are expected to experience adverse reactions. Undesirable effects are observed predominantly in patients with chronic renal failure or underlying malignancies, most commonly an increase in blood pressure or aggravation of existing hypertension, and headache. The therapeutic margin of NeoRecormon® is very wide; even at very high serum levels no symptoms of poisoning have been observed [14].
Against this background of evidence supporting the theory that administration of rhEPO after injury may be beneficial, we performed a pilot study to assess the safety of administering high-dose rhEPO-beta to ECD and DCD kidney recipients intra- and peri-operatively, to obtain preliminary data on efficacy, and to evaluate changes in three extensively reported biomarkers of kidney injury in blood and urine: NGAL [15,16], IL-18 [17], and KIM-1 [18,19], during the first post-operative week.
Patients
Eighty-two recipients were screened (Figure 1), providing 63 eligible candidates. Thirty-four donors contributed 39 kidneys, with 5 DCD donors each contributing one kidney to the rhEPO-b group and one to the placebo group. Thirty-nine patients were transplanted: 19 in the rhEPO-b group (one patient in the intervention arm was deemed untransplantable following randomisation) and 20 in the placebo group. No differences were seen in donor age, sex, ethnicity or BMI between the groups. Donor cause of death was predominantly an intra-cerebral event (>75% of cases). The rhEPO-b group received more DCD kidneys (63% vs 50%, p = 0.41), particularly DCD kidneys meeting extended criteria (33% vs 20%, p = 0.65). One patient withdrew from the trial following the initial rhEPO-b dose, owing to an event attributed to an arterial intimal flap and unrelated to the study drug, but consented to continued collection of samples and follow-up data; the patient was included in the final analysis on an intention-to-treat basis. Demographics for donors and recipients are shown in Table 1. One patient in the rhEPO-b group received ATG induction because of a weakly positive flow cross-match, in accordance with transplant unit policy.
Pre-transplant dialysis
In the rhEPO-b group, 5/19 patients received 2 hours of haemodialysis and one patient received rapid-cycling peritoneal dialysis with varying degrees of ultrafiltration immediately prior to surgery; of these, four patients subsequently developed DGF. In the placebo group, 4/20 received 2 hours of haemodialysis prior to surgery and all developed DGF. Of interest, 17/21 patients on maintenance haemodialysis prior to surgery developed DGF, as opposed to 3/15 patients on maintenance peritoneal dialysis (RR 4.05; CI 1.44-11.38; p = 0.0005).
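The quoted risk ratio and interval can be checked directly. The paper does not state its CI method; the standard log-RR normal approximation below reproduces the published figures to rounding, so it is a plausible reconstruction rather than a confirmed one.

```python
import math

# DGF by pre-transplant dialysis modality, from the text above.
a, n1 = 17, 21      # DGF events / n, maintenance haemodialysis
c, n2 = 3, 15       # DGF events / n, maintenance peritoneal dialysis

rr = (a / n1) / (c / n2)                          # risk ratio
se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)           # SE of log(RR)
lo, hi = (math.exp(math.log(rr) + s * 1.96 * se) for s in (-1, 1))
print(f"RR = {rr:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")  # RR = 4.05, CI ~ 1.44-11.37
```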
Adverse events
There were two deaths due to sepsis in the placebo group. Hypertension occurred in 6 rhEPO-b recipients and 5 placebo patients (RR = 0.79; CI = 0.29-2.17; p = 0.73). One patient in the rhEPO-b group developed a generalised tonic-clonic seizure due to severe hypertension, attributed to hypervolaemia and the peri-operative cessation of anti-hypertensive medication (unit policy). In the opinion of the respective attending consultants and the independent trial data monitor, none of the observed adverse events was attributable to rhEPO-b treatment.
Graft function early post-transplant
Numbers and duration of dialysis episodes were similar in both groups (Table 2). DCD kidney recipients were more likely to develop DGF (15/22).
Graft function out to 90 days post-transplant
Sequential serum creatinine and 4-variable MDRD eGFR are shown in Figure 2A and B respectively. Kidney function (eGFR) was not analysed prior to day 7, apart from baseline pre-transplant, due to the high rate of DGF and the impact of dialysis, which prevented meaningful analysis. Acute rejection rates were similar in the rhEPO-b treated and placebo groups during the first 3 months post-transplant (5 vs 3; RR = 0.57; CI = 0.16-2.1; p = 0.45).
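For reference, the 4-variable MDRD equation behind the eGFR profiles takes the following general form. This is a hedged sketch: the constant is 186 in the original equation and 175 is commonly used with IDMS-standardised creatinine, and the paper does not state which calibration its laboratory applied.

```python
def egfr_mdrd4(creat_mg_dl, age_years, female, black, k=186.0):
    """4-variable MDRD eGFR in ml/min/1.73 m^2.

    k = 186 (original equation); 175 is often used with IDMS-standardised
    creatinine. Which calibration this trial used is an assumption left
    open here.
    """
    egfr = k * (creat_mg_dl ** -1.154) * (age_years ** -0.203)
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Example: 55-year-old white female with serum creatinine 1.8 mg/dl
print(round(egfr_mdrd4(1.8, 55, female=True, black=False), 1))  # ~31
```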
Haematological parameters
Haemoglobin levels were similar in the rhEPO-b and placebo groups on entry into the study (11.5 ± 0.3 g/dl vs 11.3 ± 0.4 g/dl, respectively; p = 0.78). The number of blood transfusions required during the in-patient stays did not differ significantly between groups (p = 0.20) and the groups had similar levels of maintenance rhEPO-b usage post-transplant (Table 1). There was no effect on platelet levels at any time point (data not shown). Haemoglobin and haematocrit profiles are shown in Figure 2C and D respectively, showing no significant differences between the groups.
Biomarkers
The temporal profiles in blood and urine of the biomarkers of renal injury, neutrophil gelatinase-associated lipocalin (NGAL), kidney injury molecule-1 (KIM-1) and IL-18, are shown in Figure 3A-E. There were small differences between the rhEPO-b and placebo treated groups, none of which reached statistical significance.
Meta-analysis of 5 trials of rhEPO in transplantation
A meta-analysis (Figure 4) including data from this trial and from those described by Sureshkumar et al. [20], Hafer et al. [21], Martinez et al. [22] and Aydin et al. [23] yielded an overall estimate of the RR for DGF of 0.89 (CI = 0.73-1.07), a modest effect favouring rhEPO but not demonstrating a significant difference between rhEPO and placebo treatments.
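The pooled estimate was derived with the Mantel-Haenszel estimator (meta package in R). A minimal Python re-implementation of the fixed-effect Mantel-Haenszel risk ratio is sketched below; the per-trial DGF counts are hypothetical placeholders, since the arm-level counts of the other trials are not reproduced in this text.

```python
def mantel_haenszel_rr(trials):
    """Fixed-effect Mantel-Haenszel pooled risk ratio.

    trials: list of (events_treated, n_treated, events_control, n_control).
    RR_MH = sum(a_i * n_ctrl_i / N_i) / sum(c_i * n_trt_i / N_i)
    """
    num = den = 0.0
    for a, n_trt, c, n_ctrl in trials:
        N = n_trt + n_ctrl
        num += a * n_ctrl / N
        den += c * n_trt / N
    return num / den

# Hypothetical per-trial counts (events, n) per arm, for illustration only
trials = [(10, 19, 12, 20), (13, 36, 15, 36), (10, 45, 14, 45),
          (16, 52, 14, 49), (7, 20, 8, 20)]
print(round(mantel_haenszel_rr(trials), 2))  # ~0.88 with these made-up counts
```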
Discussion
This pilot study supported the view that the intra- and peri-operative intravenous administration of high dose rhEPO-b (a total infusion of 100,000 iu of rhEPO-beta) appeared to be safe in the early post-transplant period. The study reports only small effect sizes of rhEPO-b on adverse events, renal function, haematopoietic factors, acute rejection episodes, and the profiles of biomarkers of kidney injury post-transplant.
The dose regimen in our study was adapted from Ehrenreich et al. [13], where high levels of EPO were required to cross the blood-brain barrier to provide a high concentration in the cerebrospinal fluid. In renal patients, the maximum recommended dose is 720 iu/kg/week [24], which, in the average 70 kg patient, is half the dose administered in this study. Furthermore, mathematical modelling in healthy volunteers has demonstrated that a similar dose (1000 U/kg) resulted in >98% occupation of EPO receptors, which persisted for 2 days following the dose [25]. The majority of the recipients in this study were receiving maintenance rhEPO for anaemia associated with end-stage renal failure, though administration of rhEPO had stopped more than 2 days prior to transplant in all but 7 recipients. It is conceivable that tissue protection was afforded by low levels of rhEPO to both groups, thus confounding the outcome, with a dose ceiling effect for EPO. However, the retrospective review from Mohiuddin et al. [26] of patients receiving anaemia maintenance doses of rhEPO at the time of transplantation found no difference in the DGF rate or haemoglobin levels compared to those patients not receiving rhEPO, out to three months post-transplant.
Figure 1. Screening and randomisation. (1) The exclusion limit for haemoglobin was raised to 15 g/dl as a substantial amendment to the protocol with ethical and regulatory authority approval. (2) The study staff trained in the blinded sequence of delivering the intervention were not always available for out-of-hours transplants.
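The dose comparison in the preceding paragraph is simple arithmetic, sketched below for transparency; the 70 kg "average patient" is the text's own assumption.

```python
max_weekly_dose_iu_per_kg = 720          # maximum recommended dose [24]
patient_weight_kg = 70                   # 'average' patient assumed in the text
study_total_dose_iu = 3 * 33_000         # three infusions of 33,000 iu

weekly_max_iu = max_weekly_dose_iu_per_kg * patient_weight_kg  # 50,400 iu
print(weekly_max_iu, study_total_dose_iu, study_total_dose_iu / weekly_max_iu)
# 50400 99000 1.96... -> the study dose is roughly twice the weekly maximum,
# i.e. the recommended weekly maximum is about half the dose administered here.
```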
Song et al. [27] administered 300 iu/kg of erythropoietin-beta IV to 36 adults undergoing coronary artery bypass grafting at induction of anaesthesia and noted a reduction in the incidence of acute kidney injury (EPO 8% vs placebo 29%, p = 0.03), defined as a 50% increase in serum creatinine over baseline in the first 5 post-operative days. However, Poulsen et al. [28] examined the effect of high dose EPO (500 iu/kg IV 12-18 hours pre-operatively and at induction) in patients undergoing coronary artery bypass, with no apparent difference between treatment and placebo groups with regard to serum creatinine. Martinez et al. reported a French multicentre placebo-controlled trial that examined the effect of 40,000 U EPO IV, administered before kidney transplantation and at 12 hours, 7 days and 14 days post-operatively, on the incidence of DGF [22]. There was no difference in DGF rates between the groups (EPO 32% vs placebo 29%), with similar eGFRs at 1 month post-implantation. A retrospective study in France compared recipients on 250 U/kg/week of EPO at the time of transplantation to recipients not receiving EPO and found no difference in graft function or haemoglobin level at 1 month [29]. Furthermore, the German multi-centre EPO Stroke Trial, a Phase II/III trial designed to reproduce the promising results of the earlier EPO Stroke Study (improved clinical recovery in EPO treated patients with ischaemic stroke), was a negative trial that also raised safety concerns [30]. Aydin et al. reported that EPO administered within 4 hours of percutaneous coronary intervention did not reduce infarct size [31]. Two other randomised controlled trials investigating high dose EPO in renal transplantation have been published recently. Sureshkumar et al. [20] conducted a randomised double blind clinical trial in which the primary end point was the level of graft function in the early post-transplant period. Thirty-six patients in each group were included in the final analysis and their data gave similar event rates to the study reported here, with a RR for DGF of 0.88 (0.53-1.48). There were no clinically demonstrable beneficial effects of high dose EPO-alpha given intra-arterially during the early reperfusion phase in deceased donor kidney recipients, in terms of reducing the incidence of DGF or improving short-term allograft function. The authors also measured two biomarkers (NGAL and IL-18) post-transplantation and found similar levels in the EPO treated group versus the control group. They did not report an increase in adverse events in the EPO treated group.
In the second study, Hafer et al. [21] evaluated high dose EPO-alpha administered intra- and post-operatively to recipients of deceased donor kidneys, with a primary study end point of allograft function 6 weeks post-transplant. The study recruited 45 patients to each arm. There was no significant effect of EPO-alpha on either long-term graft function (eGFR at 12 months) or histology (protocol biopsies at 6 weeks and 6 months), with a RR for DGF again similar to our study at 0.71 (0.36-1.43), but with a lower overall event rate of 27%.
The doses of rhEPO used by Sureshkumar et al. (40,000 iu rhEPO-alpha injected into the iliac artery) and Hafer et al. (3 doses of 40,000 iu rhEPO-alpha) were similar to our study (3 doses of 33,000 iu rhEPO-beta). It is conceivable that the lack of efficacy is a consequence of the timing of EPO administration. Injury prior to reperfusion is multi-factorial and due to recurrent insults to the kidney rather than a single ischaemia reperfusion event. Typically, injury begins with peri-mortem events including severe hypertension or hypotension and nephrotoxic agents. Retrieval of the kidneys is associated with warm ischaemia, cold ischaemia and the anastomotic ischaemic phase. The lack of efficacy in these studies may be due to the delay in rhEPO administration until reperfusion. However, therapeutic intervention in the donor, prior to organ retrieval and storage, would impact on all organs donated and prevent any potential recipient from declining involvement in the study without also declining organ transplantation. It was not possible to intervene during cold storage, firstly because we did not have access to machine perfusion technology at the time, and secondly because the potential recipient would not have been able to decline involvement in the study. We decided to intervene immediately prior to reperfusion, in the knowledge that significant injury to the kidney might already have occurred but that, based on the experimental evidence, intervention could still be beneficial to the graft. The participants in our study received kidneys from DCD and ECD donors. In the study from Sureshkumar et al., the kidneys were from ECD donors in only 46% of cases. The donor details were not presented in the study from Hafer et al., though their exclusion criteria included immunological loss of a previous graft and a cold ischaemic time of longer than 24 hours.
Conclusions
Taking the five studies together, a range of quality of donated kidneys has been examined and the evidence appears convincing that high dose rhEPO in deceased donor kidney transplantation, although well tolerated, has little effect on DGF, early or later allograft function, graft histology, or levels of biomarkers of kidney injury, including NGAL, IL-18 and KIM-1. Additionally, concerns were raised over the increased number of thrombotic events in the EPO treated groups in some of the trials. A meta-analysis of the five trials yielded an overall estimate of the RR for DGF of 0.89 (CI = 0.73-1.07), a modest effect favouring EPO but not demonstrating a significant difference. At a time when there is an obvious need to maximise the lifespan of donated organs, it is disappointing that the promising experimental evidence of rhEPO tissue protection does not readily translate into the clinical setting of transplantation.
Ethical approval
The study was conducted in accordance with the ethical principles of the Declaration of Helsinki [32].
Patients
Patients were eligible if aged 18 years or over, able to give consent, and in receipt of a kidney from a Maastricht category III donor (awaiting cardio-circulatory death after withdrawal of treatment), a Maastricht category IV donor (cardio-circulatory death in a brain-dead donor), an extended criteria donor (defined as age ≥60 years, or 50-59 years with combinations of cerebrovascular accident, hypertension or serum creatinine greater than 133 μmol/l), or a kidney with a cold ischaemic time greater than 24 hours. Exclusion criteria included inability to consent, pregnancy, breastfeeding, acute infection, previous intolerance of the trial drug, a diastolic blood pressure >100 mmHg pre-transplantation, or, initially, a haemoglobin level ≥13 g/dl. However, it became clear that a haemoglobin cut-off of 13 g/dl would exclude approximately one third of otherwise eligible participants. A review of the pre-operative haemoglobin levels of adults transplanted at this centre in 2007 reported a mean pre-operative haemoglobin of 12.2 g/dl (range 7.4-17.7 g/dl) and a mean decrease of 2.4 g/dl within 24 hours of surgery. As a result, the haemoglobin exclusion criterion was reset to ≥15 g/dl, as a substantial amendment to the protocol with ethical and regulatory authority approval. All patients received immunosuppression as per unit protocol. Induction immunosuppression consisted of basiliximab 20 mg intravenously on day 0 and day 4, plus a single dose of methylprednisolone 1 g given intra-operatively. Maintenance immunosuppression consisted of tacrolimus (Prograf®), prednisolone and mycophenolate mofetil (Cellcept®). DGF was defined as the need for haemodialysis or peritoneal dialysis within the first seven days post-transplantation, and slow graft function was defined as a creatinine reduction ratio at day 7 of <70% [33].
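These outcome definitions translate directly into a classification rule; a minimal sketch follows (function and field names are our own, not taken from the trial database).

```python
def graft_function_category(dialysed_within_7_days, creat_day0, creat_day7):
    """Classify early graft function per the trial definitions.

    DGF: any haemodialysis or peritoneal dialysis within 7 days post-transplant.
    Slow graft function: day-7 creatinine reduction ratio (CRR7) < 70% [33].
    """
    if dialysed_within_7_days:
        return "DGF"
    crr7 = 100.0 * (creat_day0 - creat_day7) / creat_day0
    return "slow graft function" if crr7 < 70.0 else "immediate graft function"

# Example: no dialysis, creatinine falls from 620 to 250 umol/l by day 7
print(graft_function_category(False, creat_day0=620, creat_day7=250))
# CRR7 = 59.7% -> "slow graft function"
```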
Randomisation and blinding
Eligible patients were randomly assigned by the trial pharmacy, using a computer-generated list, to receive either rhEPO-b (NeoRecormon®, Roche) or 0.9% saline. All study participants and the clinical team were blinded to the trial drug for the duration of the study. Pharmacovigilance was undertaken by PDMS Roche Products Limited.
Sample collection and processing
The first dose of rhEPO-b (33,000 iu) (NeoRecormon®, Roche) or 0.9% saline was administered intra-operatively in a 100 ml intravenous infusion over 15 minutes, as clamps were released to allow graft reperfusion. The second and third infusions (each 33,000 iu) were given at 24-hour intervals post-operatively, making a total rhEPO-b dose of approximately 100,000 iu over approximately 48 hours. Prior to reperfusion, a 20 ml systemic blood sample was collected via the central line (the pre-reperfusion sample). The post-reperfusion 20 ml blood sample was collected approximately 15 minutes following reperfusion. Blood samples of 10 ml were collected at 2, 8 and 24 hours, and daily for the first 7 days post-operatively, coinciding with venepuncture for routine care where possible. Blood samples were centrifuged at 2000 rpm for 10 minutes to separate the plasma, which was aliquoted and stored at -80°C. Urine samples were collected at similar time points where possible, centrifuged at 2000 rpm for 5 minutes, aliquoted and stored at -80°C. Samples were batched for analysis of biomarkers.
Measurement of biomarkers
Plasma NGAL (pNGAL) and urine NGAL (uNGAL) were measured by immunoassay using the DuoSet DY1757, and urine KIM-1 (uKIM-1) using the DuoSet DY1750 (R&D Systems, Oxon, UK), following the manufacturer's guidelines. Mature IL-18 in plasma and urine was measured using an in-house immunoassay (capture antibody: clone 125-2H; detection antibody: clone 159-12B; standard: rHuIL-18; MBL, Medical and Biological Labs Co. Ltd, Nagoya, Japan). Sample concentrations in all assays were calculated from a 4-parameter standard curve (SOFTmax PRO v4 software, Molecular Devices, CA 94089). Biomarkers in urine were corrected for creatinine concentration in the sample. An internal standard was included on each assay plate, confirming an inter-assay coefficient of variation (CV) of <20% and an intra-assay CV of <10%.
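Sample concentrations were read off a 4-parameter standard curve. The sketch below shows the usual 4-parameter logistic (4PL) fit with SciPy, analogous to what SOFTmax PRO computes; the standards, optical densities and starting parameters are illustrative placeholders, not the assay's actual data.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4-parameter logistic: a = lower asymptote, d = upper asymptote,
    c = inflection point (EC50-like), b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Illustrative standard curve: concentrations (pg/ml) vs optical density
conc = np.array([7.8, 15.6, 31.2, 62.5, 125, 250, 500, 1000])
od = np.array([0.05, 0.09, 0.17, 0.32, 0.58, 1.01, 1.60, 2.20])

params, _ = curve_fit(four_pl, conc, od, p0=[0.02, 1.0, 200.0, 2.5],
                      maxfev=10000)

def conc_from_od(y, a, b, c, d):
    """Invert the 4PL to interpolate a sample concentration from its OD."""
    return c * (((a - d) / (y - d)) - 1.0) ** (1.0 / b)

# Interpolate a hypothetical sample with OD 0.80 from the fitted curve
print(conc_from_od(0.80, *params))
```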
Sample size and statistical analysis
As a pilot, the study had no formal power calculation, but was designed to test the safety of delivering the intervention, to describe effect sizes on the measured variables (adverse events, renal function, haematopoietic markers, and acute rejection episodes), and to evaluate the profiles of three biomarkers of kidney injury. A sample size of 20 per arm was chosen on the basis of feasibility and to gather sufficient data to design and power a definitive trial [34].
Data analysis was performed using GraphPad Prism 5. Results were presented as median ± interquartile range, or as percentages, as appropriate. Mann-Whitney U tests were used to compare continuous variables between groups. Categorical data were analysed using Fisher's exact test to generate a risk ratio (RR) and confidence interval (CI). Clinical variables and biomarkers assessed at multiple times were compared using mixed-effect ANOVA models to allow for correlations between repeat measures, using the lmer package in the R statistical environment [35]. A meta-analysis including this trial and those described by Sureshkumar et al. [20], Hafer et al. [21], Martinez et al. [22] and Aydin et al. [23] was conducted to derive a pooled RR estimate using the Mantel-Haenszel estimator and a random effects estimator, with the meta package in R. A two-tailed significance level of 0.05 was used throughout.
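For the repeated-measures comparisons, the authors used mixed-effect models via lmer in R. An equivalent sketch in Python with statsmodels is shown below; the file name and column names are hypothetical stand-ins.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per sample, with columns
# patient_id, group ('rhEPO-b'/'placebo'), day (post-operative day),
# and biomarker (e.g. a log-transformed uNGAL value)
df = pd.read_csv("biomarkers_long.csv")

# A random intercept per patient accounts for correlation between repeat
# measures; the group x day interaction tests whether trajectories differ.
model = smf.mixedlm("biomarker ~ group * day", data=df,
                    groups=df["patient_id"])
result = model.fit()
print(result.summary())
```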
"year": 2015,
"sha1": "2bc395a914ebbebe039db6a65a3779634252c6dc",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13104-014-0964-0",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "2bc395a914ebbebe039db6a65a3779634252c6dc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Students' Perceptions of Collaborative Summary Writing
Writing is one of the four main language skills that are given emphasis in second language learning. Summary writing is often viewed as a difficult and challenging skill in learning a second language, which may result in negative attitudes forming, both toward summary writing and toward writing in general. The main purpose of this study is to investigate students' perceptions of, and problems with, collaborative summary writing at a university in Saudi Arabia. The study involved, as a case study, five undergraduate Saudi female EFL students who were enrolled in a writing course. The students were given different collaborative writing tasks during the semester and completed one summary writing task collaboratively for the purpose of this study. Their views about the task were then recorded via semi-structured interviews. The findings suggest that most of the participants express positive attitudes toward collaborative writing and consider it beneficial for improving different aspects of writing skills, second language proficiency, and confidence. Several problems occurred during the process of collaboration, and these are also identified and discussed.
I. INTRODUCTION
Writing is considered to be one of the most important academic skills, especially in settings like schools and universities. Among the different genres of writing, summary writing is found to be challenging and difficult to master; hence, collaborative writing has been suggested as a potential solution (Lin & Maarof, 2013). Collaborative writing is defined as the joint production of a text by two or more writers (Storch, 2011). In recent years, there has been a significant growth of research on collaborative writing in the L2 classroom (see, for example, Shehadeh, 2011; Storch, 2005; Lin & Maarof, 2013). Collaborative writing emphasizes the significance of interactions and cooperation to solve problems in creating a text. Both cognitive and sociocultural theories of L2 learning encourage the use of collaborative writing tasks. From the cognitive perspective, Long's (1996) interaction hypothesis claims that language negotiation for meaning and form facilitates L2 learning. On the other hand, the sociocultural perspective is based on the Vygotskian notion that learning is a socially mediated process (Vygotsky, 1978). Specifically, students engage in scaffolding, where less advanced students learn from more advanced learners through their interactions, generating ideas, and languaging (Vygotsky, 1978).
Statement of the problem
Summary writing is considered a useful albeit sophisticated skill to master. Norisma, Sapiyan, and Rukaini (2007) described summarizing as a significant skill involving multiple cognitive activities that occur while understanding a text and generating a shortened version of it. Writing a summary requires students to read a text, find the main points, delete the redundant details, combine similar ideas, and write a text using their own words (Casazza, 1993). Norisma et al. (2007) found that most students failed to summarize texts effectively and concluded that they were weak at summarization. In addition, a study conducted by Chen and Su (2011) reveals that, in writing summaries, students tend to copy rather than paraphrase, which can be considered plagiarism.
Due to the difficulties involved in writing a summary, the problem addressed in this study is to determine students' perceptions of collaborative summary writing as a learning approach. Seen in this light, collaborative writing, which has a beneficial effect on writing skills and encourages students to actively take part and produce a text, provides an alternative for teachers to foster their students' summary writing skills due to its collaborative focus. However, to date, few studies have been conducted on collaborative summary writing. Subsequently, this investigation aims to explore and summarize students' perceptions of collaborative summary writing. This study concerns those interested in the educational process, especially in the field of writing, because it sheds light on the collaborative summary writing approach. It is hoped this approach may overcome the shortcomings of traditional summary-writing practices.
The main purpose of this study is to answer the following question: what are students' perceptions of collaborative summary writing, and what problems do they encounter during the process of collaboration?
II. LITERATURE REVIEW
Language plays an essential role in our lives because it enables us to express our desires, feelings, and ideas. Creating a learning environment where language learning is an efficient experience has been the focus of several previous studies (Sajedi, 2014). Second language acquisition (SLA) involves learning another language besides one's mother tongue, either in a classroom setting or outside the classroom (Ellis, 1997). Second language learners are recommended to seek great exposure to the language being learned (Krashen, 1981). Similar to first language development, exposure to the second language enables learners to develop different aspects of their new language. Learning a second language is difficult and requires a lot of practice to avoid interference from learners' first language. In addition, many mental and cognitive processes, like perception, memorizing, and thinking, occur in the learner's mind during the acquisition of a second language (Ellis, 2008).
Second language writing
Language consists of receptive skills (reading and listening) and productive skills (speaking and writing). However, the productive skills, particularly writing, are considered more difficult for second language learners (Gökçe, 2011). Many learners find writing boring and tough because it requires many skills such as good organization, creativity, imagination, and good language knowledge (Gökçe, 2011). Indeed, some students have valuable ideas, but they find composing meaningful thoughts very hard, especially in a second language. Also, some students have negative attitudes toward writing in their second language. Using different pedagogical techniques and strategies in teaching writing may foster students' abilities and raise their level of interest. Powell (1984) conducted a study comparing students' attitudes with their success in writing and found that learners do not like or enjoy writing either in a first or second language because writing tends to be accomplished in only one way most of the time: traditional essay writing. Powell (1984) also stated that writing teachers generally use rigidly structured approaches and generally do not attempt to change their approach to teaching writing. For instance, teachers mostly ask their students to write in different styles (e.g., narrative or argumentative), but they always teach them using the same methods, without changing their style of teaching. In another study, Fareed, Ashraf, and Bilal (2016) investigated the problems that occur in ESL learners' writing. The study revealed that the major problems in ESL learners' writing are related to a lack of linguistic proficiency, writing anxiety, lack of ideas, reliance on their L1, and weak structural organization. Fareed et al. (2016) added that many factors contribute to learners' problems in writing, for example, untrained teachers, ineffective teaching methods and examination systems, a lack of reading and writing practice, large classrooms, low levels of motivation, and lack of ideas. The study also suggested a number of remedies, such as training teachers and making changes to the examination system, to encourage learners to read and write more extensively.
L2 collaborative learning
Unlike traditional, individual-focused learning methods, collaborative learning offers great benefits to language learning. However, collaborative learning is defined differently by different scholars; Slavin (1980) defines collaborative learning as when a group of students work together and are given rewards and recognition based on the whole group's performance. On the other hand, Artz and Newman (1990) describe collaborative learning as when a group of learners work as a team to solve a problem, complete a task or achieve a common goal. Indeed, over the last few years, group work has become very common in L2 classrooms. Several studies have emphasized the significance of collaborative work for teaching and learning a second language (see, for example, Shehadeh, 2011; Dobao, 2012; Lin & Maarof, 2013; Wigglesworth & Storch, 2012).
The use of pair or group work in the classroom is mainly based on the sociocultural theory, which states that human development is related to social interaction (Vygotsky, 1978). In first language acquisition, children's linguistic and cognitive development (as novices) arises from their social interactions with more knowledgeable others (experts). Vygotsky (1978) claimed that the assistance provided to children (in the form of scaffolding) enables them to improve their cognitive and linguistic abilities and therefore allows them to reach their potential level of development. Research has also shown that scaffolding can be applied to second language contexts (Shehadeh, 2011). A number of studies have investigated and analyzed group talk (metatalk or language-related episodes) and concluded that it has a positive effect on second language learning (Shehadeh, 2011). In a comparative study, Pica and Doughty (1985) examined the difference between the effects of teacher-centered classes and cooperative classes; they found that the greater opportunities for language practice afforded to learners in small groups produce more scaffolding, enabling learners to stretch both their cognitive and linguistic development beyond their current level towards their potential level of development. Based on previous studies, learners should be encouraged to engage in group work to enhance and facilitate their learning (Shehadeh, 2011).
L2 collaborative writing
For students, writing skills are crucial to master because their writing skills are constantly evaluated as a measure of their academic success (Ismail & Maasum, 2009). L2 teachers adopt different methods of teaching writing. One of the recommended methods is the incorporation of collaborative writing, in which students have a shared responsibility and work together to produce a written text (Storch, 2005). The last few years have witnessed several studies on collaborative writing. Many of these studies have been conducted to examine the benefits of collaborative work on L2 writing (Shehadeh, 2011). The process of collaborative writing allows participants to explore, discuss, cooperate and improve their learning capabilities (Dobao, 2012). Vygotsky (1978) argued that social interaction precedes development. Collaborative writing is built on the Vygotskian notion of having to interact with others, cooperate, and exchange ideas in order to allow development to take place (Heidar, 2016). Many studies have attempted to measure the effect of collaborative writing on overall writing performance. Shehadeh (2011) investigated two groups of students' perceptions of collaborative writing; the majority of students in the experimental group reported that the experience was interesting because the other members enabled them to write to higher standards and develop their content, organization and vocabulary skills. Other studies found reinforcement of students' writing in terms of increased grammatical accuracy (Altai, 2015; Storch, 2005; Wigglesworth & Storch, 2009). In general, the students perceived collaborative writing positively. Positive reactions included opportunities to compare ideas (Storch, 2005) and to negotiate meaning (Altai, 2015). Also, Scotland (2014) investigated students' perceptions of assessed group work and found that they had positive perceptions towards it. In another study (Dobao & Blum, 2013), students were given one collaborative writing task and their reactions were generally positive, although a third of them did not see a positive effect on developing grammatical accuracy or vocabulary knowledge. Therefore, more research needs to be conducted on students' perceptions of collaborative writing in order to gain a better understanding of learners' observed behaviour and language learning outcomes.
Collaborative summary writing
Summary writing is a significant academic skill for L2 learners and is used most frequently in universities. It requires students to read and understand a passage, then paraphrase it and write a summary. Summary writing is considered an essential exercise to enhance students' comprehension skills (Choy & Lee, 2012). Most students, especially those with limited vocabularies, find summary writing a challenging task. Students encounter several problems when writing a summary; one of these is their inability to paraphrase passages. In addition, many students find summary writing a cognitively demanding task which requires instruction and practice to do properly; otherwise, students may simply copy rather than accurately paraphrase (Nambiar, 2007). In addition, Perin, Keselman and Monopoli (2003) conducted a study on summary writing which found that students faced difficulties in finding the main ideas in a text. Furthermore, they found that students who had prior knowledge about the topic were able to summarize the passage more effectively than those who did not. Therefore, collaborative summary writing is introduced as an approach to be used in second language classes in order to improve students' reading and writing skills. Lin and Maarof (2013) investigated both students' perceptions of collaborative summary writing and the problems they encountered, and found that students had positive perceptions in terms of motivation, grammar, vocabulary and co-construction of knowledge. In the same study, students reported several problems such as limited second language proficiency, unwillingness to offer their opinions, and an inability to finish the task in the allocated time. Shehadeh (2011) found that when students collaborate to compose a summary, it not only reduces the cognitive burden of this complex task but also develops different aspects of their writing. Sajedi (2014) has shown that collaborative writing is useful for L2 improvement because it encourages students to become more engaged in the task, enhancing their confidence and increasing their responsibility.
III. METHODOLOGY
The participants in this research study were five Saudi female students studying at level two and majoring in English. Students at this level were chosen particularly because they have mastered paragraph writing but struggle with summarizing, which is a skill included in their course. The students were enrolled in a writing class for three hours a week and had completed several collaborative writing tasks. The study employed interviews as the method of data collection. A collaborative summary writing activity was given to the students, followed by a semi-structured interview with the five participants.
Instrument
The instrument used was an interview protocol comprised of semi-structured questions to elicit students' perceptions of the experience of collaboration after completing one summary writing task in the classroom (see appendix 1 for the interview questions). The questions were based on the results of previous studies (see, for example, Shehadeh, 2011; Sajedi, 2014). Although the students were used to different collaborative writing experiences in writing essays or paragraphs, this was their first time using a collaborative approach to write a summary.
Procedures
The study was carried out in the 12th week of a 15-week semester. Five students were selected randomly as a group. They were given a passage of an appropriate level and length, asked to read it carefully, find the main ideas, delete the irrelevant points and write the summary collaboratively using their own words. The students worked together such that each of them contributed one or two sentences to the summary while the other group members checked for grammatical accuracy, use of vocabulary, and clarity of ideas. Then, the participants were interviewed by the researcher. The interviews were informal and semi-structured and were conducted in the students' first language to provide the participants with the opportunity to express their ideas clearly. Furthermore, the researcher interviewed each of the participants individually to ensure privacy.
IV. FINDINGS
Based on the participants' answers during the interviews, all of them agreed that collaborative summary writing is beneficial in different ways, especially in terms of improving their grammar, expanding their vocabulary, and improving their paraphrasing skills. Participant A, for example, pointed out that: "I prefer collaborative writing, particularly when every one of the group provides her sentence structure and we as group members decide on the best." So, participant A relates collaborative writing to the ability to build the best sentence structure. Another feature of collaborative writing is that students might benefit from the feedback given by the other group members in order to improve their writing. In this regard, participant B said that: "I think group interactions are useful because they help to correct errors and remind us of things we might forget when working individually." Participant C also found group interactions important in improving her writing skills and claimed: "…by interacting, we search, discuss and think of the best words and grammatical structures that fit the context." Participant C, therefore, claimed that group interactions are effective in improving both vocabulary and grammar. Further, participant D said that: "My level of English is not proficient, so I need someone else to polish my language and group discussions are often valuable." Overall, the respondents relate collaborative writing to the development of L2 writing in different areas (e.g., language proficiency, grammar improvement, expansion of vocabulary and development of paraphrasing skills).
Four of the participants revealed that writing collaboratively positively affected their confidence and motivation for writing. For example, participant E reported that: "If I make a suggestion and they accept it, my confidence increases." In contrast, participant C claimed that writing collaboratively did not affect either her confidence or motivation, revealing that collaborative writing might even destroy someone's self-confidence: "If a student is put with a group that has members who are beyond her level, this might make her less confident." In sum, the majority of the participants (80%) claimed that collaborative writing has a positive effect on their motivation and confidence.
Students were also interviewed about the evaluation criteria. Two participants thought receiving a group grade is fair, two others reported that it depends on the members of the group, and the fifth thought it is unfair to have a group grade. For example, participant D claimed: "If the group members are cooperative, then I think the marking is fair." Therefore, participant D relates the fairness of evaluation to the collaboration of the group members. On the other hand, participant C had a different point of view: "…in all kinds of collaborative learning, I think it is unjust that all members take the same grade while there is a disparity in their contributions." Participant C thus sees it from a different angle: grading should be related to each member's own contribution. Overall, the participants had different views about accepting group evaluation.
Most of the participants claimed that the success of collaborative writing depends largely on the group members in the team. Participant B, for example, claimed that: "Collaborative writing, in general, is a very useful tool especially if you have group members who are similar to your level or beyond." Therefore, participant B finds the usefulness of collaborative writing relies on the members of the group. Participant C also agrees: "I believe the success of collaborative writing depends largely on who are you going to join? Is the relationship between the group members strong? Is the work divided in a way that is fair for all the members?" So, participant C thinks writing collaboratively depends on the group members, their relationships, and how the work is divided among them (i.e., whether fairly or not). Similarly, participant A said: "I prefer it when the professor gives us the opportunity to decide who to be within the group, instead of dividing us randomly." Overall, the participants claimed that group members play a crucial role in the success of collaborative summary writing.
Students' problems with collaborative summary writing
When the participants were asked about the problems they encountered in the process of collaboration, the majority of them reported that the main problem with collaborative writing was working with uncooperative group members. Participant E, for example, mentioned that: "Sometimes, the group members rely mostly on one or two members, while the rest are doing nothing." For participant E, some group members are more responsible than others. Participant B agreed with participant E and claimed: "The members' different levels mean that most of the work falls on those who are more proficient."
So, participant B views the problem differently and thinks that some members do not collaborate because their level of proficiency is not sufficient. Participant C also sees the issue from a different angle, claiming: "Sometimes, all the group members will be thanked by the doctor for the great work while, in reality, only one or two members have participated. This might affect the relationship among group members." For participant C, some group members may not contribute at all to the group discussions but will still be praised by their tutor for great work, which may lead to negative attitudes toward such members. Overall, all the participants found the lack of collaboration to be the major problem in writing as a group.
Another problem, reported by only two participants, was the limited time given for writing. For instance, participant B claimed that: "Sometimes, collaborative writing is a kind of waste of time as some members are dependent on the others." For participant B, some members wait for others to do the task without providing any contribution, so working alone might be better than wasting time asking for other members' contributions and opinions.
Overall, the participants perceived collaborative summary writing positively and wished to do it again. They found that it positively affected their grammar, vocabulary, paraphrasing skills and motivation. However, they encountered some problems during the process of collaboration, such as working with uncooperative group members and limited time.
V. DISCUSSION
From the interview responses, all the participants revealed positive perceptions of collaborative writing in terms of several different aspects. In terms of language development, the participants claimed that collaborative writing was helpful in enriching their writing skills, which is in harmony with previous studies. For example, Shehadeh (2011) stated that the collaborative experience had a positive impact on students' L2 writing development. In terms of grammar development, the findings of the current study echo several prior studies' findings. For instance, Lin and Maarof (2013) and Storch (2005) found that collaboration affected the participants' grammatical accuracy positively. In terms of vocabulary development, Gökçe (2011) and Shehadeh (2011) indicated an improvement in vocabulary and quality of writing after writing collaboratively. Therefore, students prefer writing collaboratively because a member can be proficient in some aspects, like grammar, but not others, like vocabulary. As a result, group members complement each other and collaborate to produce a better summary.
Another feature that was mentioned by the participants is sharing ideas and learning from each other, which is frequently cited in past studies (Shehadeh, 2011;Storch, 2005;Wigglesworth & Storch, 2012). By writing together, students exchange ideas, make suggestions, paraphrase sentences, and negotiate meaning. These interactions enrich their vocabulary, foster their grammatical accuracy, and enhance their overall language proficiency. When someone makes a mistake, others will provide corrective feedback and in this way, the final product will be improved. Such interactions open students' eyes to aspects of language they may not have considered before. Discussing these issues with each other helps them think and search deeply until they reach a solution to a problem they encounter. Therefore, their overall language will be developed and they will gain benefits in different aspects of L2 writing.
Most of the participants indicated that collaborative writing supported their motivation and confidence in writing. This finding confirms past studies (Shehadeh, 2011; Gökçe, 2011; Yong, 2006) whose participants claimed that collaborative writing fostered their self-confidence. In addition, Lin and Maarof (2013) conducted a study on Malaysian students which revealed that collaborative summary writing increased students' motivation toward writing. When a member receives positive feedback for her contribution, or when the other members accept her suggestions, her confidence and motivation toward writing increase. Therefore, the students will be more engaged and eager to write. However, contrary to the majority of studies, participant C claimed that collaborative writing might affect motivation negatively. For her, being part of a group with more competent or dominant students might mean her contributions and suggestions are not accepted. As a result, she may become less confident and unmotivated to participate. Therefore, teachers should be mindful of this in class and always remind their students of best practice for polite social interaction. Although having students of different levels in the same group has positive effects, it can also lead to negative ones. Scotland's (2014) study, conducted with Qatari students, showed that they accepted a group grade, which contradicts the findings of the current study. Four of the participants in this study pointed out that a group evaluation would be acceptable only if all the group members are cooperative, and one participant did not accept it at all, although they did not actually receive a mark for their collaborative summary writing. The reason for this might be that some members work harder than others and so deserve better marks. Normally, in group writing, a member's mark is tied to the performance of the whole group. Plastow, Spiliotopoulou, and Prior (2010) compared individual marks and group marks for 230 students. Surprisingly, the results indicated that no statistical correlation was found between individual and group marks. This is an indicator that group evaluation is not valid compared to individual evaluation. Maybe that is why the participants in the present study do not find that group evaluation reflects the real ability of each member. For example, higher-ability members might positively affect the mark of lower-ability members and vice versa.
All the participants agreed that the kind of members in a group has a great influence on collaborative writing. This is because collaborative writing encourages students to take responsibility not only for their own writing but also to share the responsibility of helping other members of the group accomplish their goals. This is in line with Talib and Cheung's (2017) findings, who claimed that the success of collaborative writing is largely related to teamwork. A number of participants preferred to choose group members by themselves, so perhaps having members who are close to each other will lead to better interaction and therefore better writing. Some students may find it difficult to work with people they are unfamiliar with. Also, some students may prefer to choose team members in order to make sure they will be beneficial to the whole group. Some members are passive and unwilling to participate, which reduces the enjoyment and cooperation of the whole group. In this way, the other members will not benefit and might become demotivated to participate, which will be reflected in the overall quality of their writing. The participants also agreed that not everybody provided an equal amount of effort, which corresponds with past studies (Lin & Maarof, 2013; Sajedi, 2014). In addition, Altai (2015) pointed out that some of the participants in her study complained about uncooperative group members. The reason for this might be that the group members change from one task to another. In some cases, the group members contribute equally, while in others they do not. Further, some members might be irresponsible and unwilling to provide their ideas and participate, simply because they expect other members to do so. In other words, some members (passengers) depend on others, particularly those who are more proficient in L2 writing skills. Passengers, or free riders, are those who do not provide any contributions to the group but still receive benefits (Scotland, 2014). The problem of passengers in group writing is very difficult to solve. Teachers need to continually encourage their students to participate and make contributions, as this will be reflected in their level of development.
Another difficulty that two participants encountered was the limited time available to write collaboratively. This finding is consistent with some previous studies that found that working in groups takes longer than working individually (Storch, 2005; Lin & Maarof, 2013), but contradicts Altai's (2015) findings, where the participants declared that collaborative writing decreased the amount of time they spent on writing texts. Dobao (2012) argued that, when assigned the same amount of time, students who write individually produce longer texts than those who write in pairs or as part of a group. In the current study, the participants, being at level two, may not have been proficient enough in English. This means that writing collaboratively will require more of their time to suggest, explore, discuss, pool ideas, talk about the best ways to use language and finally decide on the best choices. Therefore, the time required is not only for writing itself but also for what happens before the writing process as part of the collaboration.
VI. CONCLUSION
Research and empirical studies on collaborative writing have suggested its significance for teaching a second language. This study was conducted to investigate students' perceptions of collaborative summary writing. The general principle behind collaborative writing is that students work together as a team to achieve a common goal, namely that each student learns from the others. Hence, considering the perceptions of students toward collaborative writing and the problems that occur in the process of writing, it can be concluded that students have a generally positive attitude toward collaborative summary writing, although they encountered some problems in the process of collaboration. The findings of this study echo a vast number of past studies and suggest many advantages of using collaborative writing in enhancing different aspects of language proficiency and paraphrasing skills, as well as building motivation toward L2 writing. Further, some problems occurred: a lack of group collaboration and limited time. Therefore, L2 teachers should encourage their students to reflect on their perceptions of collaborative writing because this provides them with a great opportunity to discuss and learn from these discussions.
Limitations and further studies
This study has a number of limitations that should be mentioned. First, the present study is based solely on five participants' reports gathered by semi-structured interviews. Thus, a general claim cannot be made about students' perceptions of collaborative summary writing, as the researcher worked with only a limited number of participants. However, the participants' responses did reflect a number of issues highlighted in previous research. Second, all the participants in this study were studying at the same level (level two); having more participants from different levels could affect the results. In addition, the limited time meant that the participants only collaborated on one summary writing task. With more time, the participants could have completed more collaborative summary writing tasks, and the findings would be more reliable. Consequently, further studies need to be conducted in order to eliminate or mitigate these limitations.
Appendix 1. Interview questions (excerpt)
6/ What are the effects of collaborative writing on personality factors like motivation and confidence?
7/ Based on your own experience, tell me the advantages and disadvantages of collaborative writing.
8/ What are the problems you face in collaborative summary writing?
9/ Is it fair that all group members take the same grade?
"year": 2020,
"sha1": "4830d6a546810965f9e406a11760ba4e36eaddc1",
"oa_license": null,
"oa_url": "https://doi.org/10.17507/tpls.1006.11",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "c2ee2c80f56e53c13a03a6f9c73b6ffbeb60fd74",
"s2fieldsofstudy": [
"Education",
"Linguistics"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Anaemia is independently associated with mortality in patients with hepatocellular carcinoma
Background: Anaemia is frequent in patients with cancer and/or liver cirrhosis and is associated with impaired quality of life. Here, we investigated the impact of anaemia on overall survival (OS) and clinical characteristics in patients with hepatocellular carcinoma (HCC).
Materials and methods: HCC patients treated between 1992 and 2018 at the Medical University of Vienna were retrospectively analysed. Anaemia was defined as a haemoglobin level <13 g/dl in men and <12 g/dl in women.
Results: Of 1262 assessable patients, 555 (44.0%) had anaemia. The main aetiologies of HCC were alcohol-related liver disease (n = 502; 39.8%) and chronic hepatitis C (n = 375; 29.7%). Anaemia was significantly associated with impaired liver function, portal hypertension, more advanced Barcelona Clinic Liver Cancer stage and elevated C-reactive protein (CRP). In univariable analysis, anaemia was significantly associated with shorter median OS [9.5 months, 95% confidence interval (95% CI) 7.3-11.6 months] versus patients without anaemia (21.5 months, 95% CI 18.3-24.7 months) (P < 0.001). In multivariable analysis adjusted for age, Model for End-stage Liver Disease, number of tumour nodules, size of the largest nodule, macrovascular invasion, extrahepatic spread, first treatment line, alpha-fetoprotein and CRP, anaemia remained an independent predictor of mortality (adjusted hazard ratio 1.23, 95% CI 1.06-1.43, P = 0.006).
Conclusions: Anaemia was significantly associated with mortality in HCC patients, independent of established liver- and tumour-related prognostic factors. Whether adequate management of anaemia can improve the outcome of HCC patients needs further evaluation.
INTRODUCTION
Liver cancer, of which hepatocellular carcinoma (HCC) represents the majority, is the sixth most frequent cancer and the third most common malignant cause of death worldwide [1]. Mostly, it develops in a cirrhotic liver due to hepatitis B virus (HBV), hepatitis C virus (HCV), alcohol-related liver disease (ALD) or metabolic dysfunction-associated steatotic liver disease (MASLD) [2]. Several prognostic serum biomarkers have been proposed for patients suffering from HCC. While alpha-fetoprotein (AFP) and C-reactive protein (CRP) are amongst the most studied ones [3-5], the prognostic role of anaemia, a condition often found in cancer patients [6], is less clear. Anaemia is defined by the World Health Organization (WHO) as a haemoglobin level <13 g/dl in men and <12 g/dl in women [7]. It can be stratified by mean corpuscular volume (MCV) into microcytic, normocytic and macrocytic anaemia, which is suggestive of specific causes [8]. In cancer patients, anaemia is a common comorbidity that is associated with worse outcome and impaired quality of life [9,10]. One important reason for anaemia in malignant diseases is inflammation, leading to anaemia of chronic disease, mainly caused by a dysfunctional iron metabolism [11]. Further mechanisms, e.g. chronic bleeding, haemolysis, malnutrition or bone marrow infiltration, may also play a role in certain types of cancer [9]. In patients with liver cirrhosis and portal hypertension, anaemia is also frequently present due to various risk factors [12-14]. Although HCC patients suffer from two conditions commonly accompanied by anaemia (i.e. liver cirrhosis and cancer), direct evidence about the impact of anaemia on their survival is sparse. Two studies found a correlation between low haemoglobin levels and overall survival (OS) in small cohorts of HCC patients [15,16]. Low haemoglobin levels were also associated with diminished survival in cohorts treated with programmed cell death protein 1 (PD-1) inhibitors [17] and Yttrium-90 radioembolisation [18]. By using computational intelligence methods, a recent study identified haemoglobin levels as one of the most predictive factors for survival of HCC patients, although it remains unclear which treatment the patients received [19]. Here, we conducted a retrospective analysis of the impact of anaemia on OS and clinical characteristics in a large, well-characterised cohort of HCC patients.
Patients
All adult patients who were diagnosed with HCC at the Medical University of Vienna from 01 August 1992 to 30 April 2018 were included [21,22]. Patients with missing haemoglobin levels and/or insufficient follow-up were excluded from this analysis.
Data collection and definition of variables
All data were collected retrospectively from the patients' electronic health records. If not stated otherwise, the values at the time of HCC diagnosis were collected. All laboratory parameters recorded for this study were measured in the ISO-certified laboratory of the Medical University of Vienna. Child-Pugh score and Model for End-stage Liver Disease (MELD) score were used to describe liver function. Tumour staging was defined according to the Barcelona Clinic Liver Cancer (BCLC) classification [23]. The presence of portal hypertension was assumed in all patients with oesophageal/gastric varices, portal-hypertensive gastropathy, ascites, a hepatic venous pressure gradient of ≥10 mmHg or a liver stiffness measurement value measured by vibration-controlled transient elastography of ≥25 kPa [24]. The Milan criteria as a summative parameter for tumour extent were defined according to Mazzaferro et al. [25] (one tumour nodule ≤5 cm or up to 3 nodules each ≤3 cm, no macrovascular invasion, no extrahepatic manifestation).
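The Milan criteria as defined above map onto a simple rule; a minimal sketch follows (the function and field names are illustrative, not from the study database).

```python
def within_milan(nodule_sizes_cm, macrovascular_invasion, extrahepatic_spread):
    """Milan criteria per Mazzaferro et al. [25]: one nodule <= 5 cm, or up to
    three nodules each <= 3 cm, with no macrovascular invasion and no
    extrahepatic manifestation."""
    if macrovascular_invasion or extrahepatic_spread:
        return False
    n = len(nodule_sizes_cm)
    if n == 1:
        return nodule_sizes_cm[0] <= 5.0
    return n <= 3 and all(size <= 3.0 for size in nodule_sizes_cm)

print(within_milan([4.2], False, False))       # True: single nodule <= 5 cm
print(within_milan([2.5, 3.1], False, False))  # False: one nodule > 3 cm
```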
Anaemia was defined according to the WHO definition [7], i.e. a haemoglobin level below 13 g/dl in men and below 12 g/dl in women. The severity of anaemia was stratified into four grades according to the Common Terminology Criteria for Adverse Events (CTCAE) version 5.0 [26]. Anaemia type was stratified by MCV into microcytic (<80 fl), normocytic (80-96 fl) and macrocytic anaemia (>96 fl).
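The sex-specific WHO threshold and the MCV typing together form a small decision rule, sketched below (the CTCAE severity grades are omitted here, since their haemoglobin cut-offs are not reproduced in this text).

```python
def classify_anaemia(hb_g_dl, mcv_fl, sex):
    """WHO anaemia definition [7] with MCV-based typing.

    Anaemia: haemoglobin < 13 g/dl (men) or < 12 g/dl (women).
    Type: microcytic (MCV < 80 fl), normocytic (80-96 fl), macrocytic (> 96 fl).
    """
    threshold = 13.0 if sex == "male" else 12.0
    if hb_g_dl >= threshold:
        return "no anaemia"
    if mcv_fl < 80.0:
        return "microcytic anaemia"
    if mcv_fl <= 96.0:
        return "normocytic anaemia"
    return "macrocytic anaemia"

print(classify_anaemia(11.1, 88.0, "female"))  # "normocytic anaemia"
```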
The definition of systemic treatment in the context of this manuscript includes sorafenib, lenvatinib, regorafenib, cabozantinib, ramucirumab and the immune checkpoint inhibitors nivolumab and pembrolizumab, as well as the combination of atezolizumab and bevacizumab. All other experimental systemic treatments (e.g. thalidomide, octreotide, etc.) were included in the 'other treatment' group. For the purpose of statistical analysis, liver transplantation, surgical resection and local ablation were defined as curative treatments, whereas transarterial chemoembolisation (TACE), systemic therapy, best supportive care (BSC) and all other treatments were defined as palliative treatments.
Definition of endpoints and statistical analysis
Baseline patient characteristics of the overall study population and the subgroups anaemia versus no anaemia are presented using descriptive statistics.The association of categorical variables was tested by chi-square test.Means of continuous variables were compared by t-test.
OS was defined as the time from the initiation of the first treatment line until the date of death. If a patient did not receive any treatment, the date of diagnosis was defined as the starting point of OS. Patients who were still alive at 31 October 2021 (end of follow-up) were censored at the time of last contact. Median follow-up was calculated by the reverse Kaplan-Meier method.
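The reverse Kaplan-Meier method simply swaps the roles of death and censoring so that the median of the resulting curve estimates the cohort's potential follow-up. A sketch using the lifelines package is shown below (the times and indicators are placeholder data, not the study cohort).

```python
from lifelines import KaplanMeierFitter

# Placeholder data: follow-up time in months and death indicator (1 = died)
time_months = [9.5, 21.5, 4.0, 36.0, 14.2, 60.0]
died = [1, 1, 1, 0, 1, 0]

kmf = KaplanMeierFitter()

# Reverse Kaplan-Meier: treat censoring as the 'event' (deaths are censored),
# yielding the median potential follow-up of the cohort.
kmf.fit(time_months, event_observed=[1 - d for d in died])
print(kmf.median_survival_time_)
```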
Survival plots were calculated by the Kaplan-Meier method and compared by log-rank test (univariable analysis). Multivariable survival analysis was done by Cox regression. Variables were included in the multivariable model if they were statistically significant predictors of OS in univariable analysis. If two or more variables showed a strong correlation (e.g. Child-Pugh class and MELD score), only one of these variables was inserted into the multivariable model in order to avoid collinearity.
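The workflow above maps onto a few calls in R's standard survival package. The sketch below is a minimal illustration, not the authors' code; the data frame df and its columns (os_months, death, anaemia, meld, bclc) are hypothetical.

# Minimal sketch of the survival workflow described above
# (R 'survival' package); 'df' and its columns are hypothetical.
library(survival)

# Kaplan-Meier curves by anaemia status, compared by log-rank test
km <- survfit(Surv(os_months, death) ~ anaemia, data = df)
survdiff(Surv(os_months, death) ~ anaemia, data = df)

# Multivariable Cox regression; only one of two collinear variables
# (e.g. MELD rather than Child-Pugh) enters the model, as in the text
coxph(Surv(os_months, death) ~ anaemia + meld + bclc, data = df)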
In general, a P value < 0.05 was considered statistically significant. Wherever appropriate, the level of statistical significance was corrected by using the Bonferroni method; in this case, the corrected level of statistical significance is given in the footnote of the respective table. Statistical analyses were carried out using SPSS version 29.0 (Chicago, IL) and R 4.3.1 (R Core Team, R Foundation for Statistical Computing, Vienna, Austria). Receiver operating characteristic (ROC) curves for the outcomes of interest were calculated using the 'pROC' package of R.
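For illustration, both steps take one line each in R. The vector pvals and the columns death and hb below are hypothetical placeholders, not the study's data.

# Sketch: Bonferroni correction and a ROC curve with the pROC package.
# 'pvals', 'df$death' and 'df$hb' are hypothetical.
p.adjust(pvals, method = "bonferroni")

library(pROC)
roc_hb <- roc(response = df$death, predictor = df$hb)
auc(roc_hb)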
Ethical considerations
This retrospective analysis was approved by the local ethics committee of the Medical University of Vienna (reference number 1759/2015). As this study is a retrospective analysis of anonymised patient data, the obligation to obtain written informed consent was waived by the ethics committee.
Patient characteristics
Of 1415 HCC patients, 152 were a priori excluded from analysis due to missing haemoglobin levels and one because of insufficient follow-up. Thus, 1262 patients were analysed (Supplementary Figure S1, available at https://doi.org/10.1016/j.esmoop.2024.103593).
Association between anaemia and other patient characteristics
At diagnosis, 555 patients (44.0%) had anaemia. Mean haemoglobin level was 12.9 g/dl (SD ± 2.0) in the total cohort, 11.1 g/dl (SD ± 1.3) in patients with anaemia and 14.3 g/dl (SD ± 1.1) in patients without anaemia (Table 2).
DISCUSSION
In this large retrospective study, we identified anaemia as a significant comorbidity in HCC patients that is associated with tumour stage, more advanced liver function impairment and inflammation, and negatively impacts patients' outcome. A key strength of our study is the large number of well-characterised patients with a long follow-up, allowing for meaningful and robust survival analysis. Our results are in line with other, albeit small, studies that have also reported a link between haemoglobin levels and impaired survival in HCC patients treated with PD-1 inhibitors,17 Yttrium-90 radioembolisation18 and various first-line treatments.15,16

Figure: Overall survival (OS) according to the presence or absence of anaemia.
Anaemia is usually stratified by MCV into micro-, normo- and macrocytic anaemia, which is closely linked to possible causes of anaemia.8,9 In our cohort, most anaemic patients suffered from normocytic anaemia, suggesting that inflammation-driven anaemia, so-called 'anaemia of chronic disease' (more recently termed 'anaemia of inflammation'), was the most common cause. This is a comorbidity that is frequently found in cancer patients.11 Besides tumour stage (i.e. BCLC stage), Child-Pugh class, MELD score and portal hypertension were significantly associated with anaemia, underlining that impaired liver function and splenomegaly/hypersplenism due to portal hypertension also contribute to the development of anaemia in HCC patients. Accordingly, in a cohort of patients with advanced chronic liver disease and portal hypertension, most anaemic patients had normocytic anaemia, and the presence of anaemia and lower haemoglobin levels were associated with deteriorating liver function.12 However, given that anaemia and haemoglobin level as a continuous variable were both significantly associated with mortality independent of MELD score in the multivariable model, we conclude that anaemia per se should be considered an independent risk factor for mortality and not only a surrogate marker for hepatic dysfunction and more advanced tumour stages.
CRP is a well-known marker for inflammation as well as more aggressive tumour biology, and it is highly predictive of survival and other outcome parameters in HCC patients.3,4,27,28 Its significant association with anaemia in our cohort highlights the well-known and important role of inflammation in the development of anaemia.11

Possible limitations of our study include the retrospective design with all its potential biases and the long period of inclusion, possibly introducing heterogeneity in the diagnostic approach and standard of care. Moreover, some variables required to determine the cause of anaemia (e.g. folic acid, vitamin B12) were not routinely measured. Additionally, we assumed that all patients with ascites had portal hypertension; however, ascites in HCC patients may have other causes (e.g. hypoalbuminaemia, malignant ascites) and thus a small number of patients may have been misclassified as having portal hypertension.
It is noteworthy that cut-offs for haemoglobin levels for clinical decision making (e.g. for clinical trial inclusion, transfusion, erythropoietin administration)29-31 are generally lower than the cut-off defining anaemia according to the WHO. In our cohort, we observed no OS difference in patients with mild (CTCAE grade 1) versus moderate/severe (grade 2-4) anaemia, and both groups had a markedly reduced OS compared to non-anaemic HCC patients. Whether targeting anaemia at earlier stages may be beneficial in terms of quality of life and survival of HCC patients requires prospective evaluation.
In conclusion, anaemia was significantly associated with worse OS in HCC patients, independent of various well-known prognostic factors including severity of liver function impairment. Furthermore, our data suggest a role of more advanced tumour stage, more severe liver function impairment and systemic inflammation in the multifactorial genesis of anaemia in patients with HCC. Prospective studies are needed to investigate potential benefits of measures to ameliorate anaemia in patients with HCC.
FUNDING
None declared.
DISCLOSURE
TM received speaker honoraria from AstraZeneca, Chiesi and Janssen-Cilag; consulting fees from CSL Behring; and travel support from AstraZeneca, BeiGene, CSL Behring, Chiesi, Jazz Pharmaceuticals, Janssen-Cilag, Novartis and Teva ratiopharm. LBa received speaker honoraria from Chiesi. MM served as a speaker and/or consultant and/or advisory board member for AbbVie, Collective Acumen, Gilead and W.
Table 1. Patient characteristics | 2024-06-09T06:17:28.344Z | 2024-06-01T00:00:00.000 | {
"year": 2024,
"sha1": "9c9ba7853ed90810fa45d00007898948604e25ab",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "9055117d895d3dfddf007f9c0d35bb90cd217b6c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265554295 | pes2o/s2orc | v3-fos-license | Diagnostic test accuracy of machine learning algorithms for the detection of intracranial hemorrhage: a systematic review and meta-analysis study
Background This systematic review and meta-analysis were conducted to objectively evaluate the evidence for machine learning (ML) in the diagnosis of Intracranial Hemorrhage (ICH) on computed tomography (CT) scans. Methods Until May 2023, systematic searches were conducted in ISI Web of Science, PubMed, Scopus, Cochrane Library, IEEE Xplore Digital Library, CINAHL, Science Direct, PROSPERO, and EMBASE for studies that evaluated the diagnostic precision of ML model-assisted ICH detection. Patients with and without ICH as the target condition who underwent CT scanning were eligible for the research, which used ML algorithms with radiologists' reports as the gold reference standard. For meta-analysis, pooled sensitivities, specificities, and a summary receiver operating characteristics curve (SROC) were used. Results After screening of title, abstract, and full paper, twenty-six retrospective, three prospective, and two retrospective/prospective studies were included. The overall diagnostic test accuracy (DTA) of retrospective studies showed a pooled sensitivity of 0.917 (95% CI 0.88-0.943, I2 = 99%). The pooled specificity was 0.945 (95% CI 0.918-0.964, I2 = 100%). The pooled diagnostic odds ratio (DOR) was 219.47 (95% CI 104.78-459.66, I2 = 100%). These results were significant for the specificity of the different network architecture models (p-value = 0.0289). However, the results for sensitivity (p-value = 0.6417) and DOR (p-value = 0.2187) were not significant. The ResNet algorithm has a higher pooled specificity than other algorithms, at 0.935 (95% CI 0.854-0.973, I2 = 93%). Conclusion This meta-analysis of the DTA of ML algorithms for detecting ICH on non-contrast CT-Scans shows that ML has an acceptable performance in diagnosing ICH. The use of ResNet for ICH detection remains promising, and prediction was improved via training in an Architecture Learning Network (ALN). Supplementary Information The online version contains supplementary material available at 10.1186/s12938-023-01172-1.
Background
A potentially fatal disorder, intracranial hemorrhage (ICH) has an estimated incidence of 25 per 100,000 per year and is related to 2 million strokes globally [1]. There is a variety of primary (80-85%) and secondary (15-20%) underlying causes of ICH [2]. The most frequent non-traumatic secondary causes include brain tumors, ischemic strokes, and vascular malformations. Hospital admissions for ICH have grown during the past ten years, primarily because of the aging population, insufficient blood pressure (BP) management, and increased use of blood thinners [3,4]. A rational reduction of BP is therefore an important factor in managing these patients, specifically for ICH volumes lower than 15 mL [5,6]. Revascularization in the acute phase of stroke can improve symptoms and lead to a better prognosis in these patients [7]. Tissue plasminogen activator (tPA) is the main treatment for ischemic stroke. Moreover, the clot in the blood vessel can be removed by thrombectomy, in which a catheter is introduced through the femoral artery; the blocked artery can then be opened up using angioplasty [8,9].
Neuroimaging is, therefore, essential for the diagnosis of acute ICH because it can be challenging to differentiate it from other diseases, such as ischemic stroke [10]. Non-contrast computed tomography (CT) of the brain, an accessible and quick technique for diagnosing ICH, is a crucial component of the ICH diagnostic process. Fundamental ICH features such as location, edema, ventricular system expansion, and midline shift are morphologically revealed by a CT-Scan [11]. However, greater CT-Scan usage could delay the identification of ICH, and a growing burden in radiology departments could lead to job-related stress and burnout. In contrast, it has been discovered that artificial intelligence (AI) can improve radiology practice by lowering the amount of effort required [12][13][14].
Today, machine learning (ML) algorithms, especially deep learning (DL) algorithms for computer vision, have advanced significantly. The CT-Scan, one of the most well-known imaging modalities, has seen considerable breakthroughs in ML and its application [15,16]. Support vector machines (SVM), convolutional neural networks (CNN), random forests (RF), and conditional random fields (CRF) are the most prominent ML algorithms for recognizing brain bleeding from visual data. Even though a great deal of work has already been accomplished in this field, there is still room for growth. Additional research is required to improve the accuracy, precision, and resilience of ML-based brain segmentation [17,18]. A previous meta-analysis reported the DTA of AI for the detection of ICH; however, that study did not report subgroups distinguishing between algorithms or types of ICH [19]. Therefore, this systematic review and meta-analysis were conducted to objectively evaluate the evidence for ML in the diagnosis of ICH on CT scans.
Risk of bias
The validity and the possibility of bias for the included studies were evaluated with the QUADAS-2 (Fig. 2). One high-risk bias was reported across all the included studies [20]. When publication bias is very low, the points are symmetrically distributed around the true effect in an inverted funnel shape, as shown in Fig. 3.
Discussion
Detection of ICH by ML, as examined in systematic studies, may decrease the time to diagnosis, which is clinically crucial because most ICH-related deaths occur within the first hours [53]. This meta-analysis demonstrated that ResNet algorithms can detect ICHs accurately with retrospective and non-randomized data [22,31,33,37,38,50].
Practical ML is characterized by high accuracy measures such as AUC, sensitivity, and specificity, which can accurately categorize illness suspects and non-suspects. This meta-analysis revealed a combined AUC of 0.971. On the other hand, the high AUC of the included trials may not correctly represent the therapeutic benefit of the algorithms [54]. The range of AUC among studies was 0.608 to 1, and neural network (NN) approaches such as CNN, ResNet, and RNN achieved higher rates than other ML algorithms [20, 21, 23, 24, 26-29, 31, 33, 37-39, 43, 44, 46, 49]. In other words, this result suggests that NN algorithms trained on big data can improve the AUC, which is a useful way to identify a good model and to separate positive and negative target classes.
To interpret the results, a DOR of 219.47 (95% CI 104.78-459.66, I2 = 100%) generally means that using ML in diagnosing ICH is valuable. Because the convergence of the results should be reported alongside the accuracy, precision is also mentioned. A precision of 76.24 (ranging from 66.71 to 86.32) indicates relative convergence, alongside an accuracy of 90.3 (ranging from 87.24 to 93.01). These results show that ML can distinguish patients with ICH from healthy patients. Likelihood ratios are also important factors that could help improve clinical judgment across the range of disease frequencies: an LR+ greater than 10 produces a conclusive increase over the pretest probability, and an LR− less than 0.1 produces conclusive changes in the post-test probability [56]. The pooled LR+ ranged from 12.639 to 20.784 with a mean of 16.208, and the pooled LR− ranged from 0.072 to 0.123 with a pooled mean of 0.094. The pooled LR+ of 16.208 means that a positive ML result is 16.208 times more likely in patients with ICH than in healthy patients; likewise, the pooled LR− of 0.094 means that a negative ML result is far less likely in patients with ICH than in healthy patients. The pooled F1 score of this study was 79.14 (ranging from 70.9 to 86.48). The F1 score is a numerical score between 0 and 100; the closer this number is to 100, the more valuable the method studied [57]. This score results from the weighted average of recall and precision, which has a significant place in data interpretation, as a high value reflects low numbers of false negatives and false positives.
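The likelihood ratios quoted above can be turned into post-test probabilities with simple odds arithmetic. The R sketch below is purely illustrative; the 30% pre-test probability is a hypothetical value, not one drawn from the included studies.

# Sketch: updating a pre-test probability with the pooled likelihood
# ratios quoted above. The 30% pre-test probability is hypothetical.
post_test <- function(pretest, lr) {
  odds <- pretest / (1 - pretest)  # probability -> odds
  post_odds <- odds * lr           # Bayes' rule on the odds scale
  post_odds / (1 + post_odds)      # odds -> probability
}
post_test(0.30, 16.208)  # positive ML result: ~0.87
post_test(0.30, 0.094)   # negative ML result: ~0.04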
The sub-group analysis based on the ML architecture and algorithms was done to assess these factors' influence on the DTA results. The network architecture analysis showed significance for the specificity of the different network architecture models (p-value = 0.0289). However, the results for sensitivity (p-value = 0.6417) and DOR (p-value = 0.2187) were not significant. Thus, the ResNet algorithm has a higher pooled specificity than other algorithms, at 0.935 (95% CI 0.854 to 0.973, I2 = 93%). Across studies, CNN architectures included specialized neural networks and ensemble learning [58]. However, this study focuses on CNNs for detecting ICHs in general, and it may not be acceptable to extend the results to other AI projects [25]. To increase the number of fully connected layers from one to five, Lee et al. (2019) combined a final CNN made up of VGG16, ResNet50, Inception-v3, and Inception-ResNet-v2, utilizing ResNet18 with only minor alterations [33]. It has been demonstrated that standard ImageNet architectures such as ResNet18 do not significantly outperform smaller and simpler CNNs [59]. However, the performance of an ensemble of transfer models may be enhanced by averaging many transfer models. Chang et al. (2018) used a hybrid 3D/2D CNN pyramid with a proprietary mask R-CNN architecture as its backbone to detect and segment ICHs [60]. Medical imaging can use finely tuned 3D networks, which have shown exceptional performance in a variety of applications; however, 3D networks need a large dataset and several training parameters, with the image depth volume varying from 20 to 400 slices per scan, which is more demanding in terms of computational efficiency [25].
Misdetection of ICHs, which are difficult to distinguish from bone or undiscovered microbleeds in trauma imaging, is another therapeutically significant and relevant issue [61]. Using image processing techniques, the skull and face were removed from NCTCs in the Kuo et al. (2019) research. They achieved 100% sensitivity in an external test set of 200 NCTCs, which was likely made possible by the simplicity of detecting bleeding when only intracranial structures were considered [62]. Excluding or removing patients because of picture artifacts might improve an algorithm's apparent performance. Patient-related imaging artifacts, such as metallic materials, patient movements, and incomplete projections, are familiar in NCTCs. In addition, the diversity of CT scanners and image reconstruction methods makes direct comparisons between research challenging [33].
Limitations
Developing a clinical environment where an ML system supports the radiologist could improve diagnostic efficacy and should be assessed from a socioeconomic and patient standpoint [63]. The deployment of ML in clinical operations necessitates a sophisticated configuration coupled with medical imaging systems. Just one of the included articles assessed midline shift [25]; therefore, this outcome could not be analyzed. This would be important clinically, as a midline shift > 5 mm may be an indication for urgent neurosurgical review.
Additionally, the findings of the I-squared analysis make it clear that combining the data from these studies may not be appropriate, underscoring the dearth of external validation research. Due to factors like scanning methodology, scanner types, algorithm designs, and reference standards, it is not easy to compare different research, which reduces the generalizability and validity of the findings. The judgment of articles may have been tainted by subjective bias since the writers' degrees of experience varied. The creation of additional prospective studies in this area may significantly advance future research since, in addition to the different causes of variability, the use of retrospective studies was the study's most noticeable limitation.
Conclusion
This meta-analysis of the DTA of ML algorithms for detecting ICH on non-contrast CT-Scans shows that ML has an acceptable performance in diagnosing ICH. The use of ResNet for ICH detection remains promising, and prediction was improved via training in an Architecture Learning Network (ALN). However, further studies with greater homogeneity are needed to draw more accurate conclusions about the results of the DTA of ML in ICH.
Protocol and registration
This meta-analysis study was reported according to the Preferred Reporting Items for Systematic Reviews-Diagnostic Test Accuracy (PRISMA-DTA) guideline [64].
Eligibility criteria
Original studies were eligible if they met all of the following predefined inclusion criteria: (a) patients undergoing non-contrast brain computed tomography (CT) scan for the detection of acute or chronic Intracranial hemorrhage (ICH), such as intraparenchymal hemorrhage (IPH), subdural hemorrhage (SDH), epidural hemorrhage (EDH), intraventricular hemorrhage (IVH), and subarachnoid hemorrhage (SAH), and (b) use of a gold standard (radiologists' reports) to report the ICH.
Information sources
Until May 2023, systematic searches were conducted in ISI Web of Science, PubMed, Scopus, Cochrane Library, IEEE Xplore Digital Library, CINAHL, Science Direct, PROSPERO, and EMBASE for studies that evaluated the diagnostic precision of ML model-assisted ICH detection.
Summary measures
ICHs versus HCs that were true positive (TP, true ICH, predicted to be ICH), true negative (TN, non-ICH predicted to be non-ICH), false positive (FP, non-ICH predicted to be ICH), or false negative (FN, ICH predicted to be non-ICH) were extracted for meta-analysis purposes. The original studies' inclusion criteria were utilized to obtain data for the meta-analysis on detecting ICH. In addition, the publication year, the nation where the research was conducted, the study methodology, the number of patients, and their ages were recovered. The primary outcomes were diagnostic accuracy = (TP + TN)/(TP + FN + FP + TN), specificity = TN/(FP + TN), sensitivity = TP/(TP + FN), precision = TP/(TP + FP), F1-score = 2 × (Precision × Recall)/(Precision + Recall), negative likelihood ratio LR− = (1 − sensitivity)/specificity, positive likelihood ratio LR+ = sensitivity/(1 − specificity), DOR = LR+/LR−, and the AUC of ML for detecting ICH in the patients, ICH versus healthy controls (HCs) [65,66]. Comparisons of the accuracy, sensitivity, and specificity of ML on CT-Scans constituted the subgroup analysis.
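These formulas can be bundled into a small helper for reproducing the summary measures from any study's 2 × 2 confusion table. The R sketch below is illustrative only; the counts in the example call are hypothetical.

# Sketch of the summary measures defined above, computed from a
# 2 x 2 confusion table. The example counts are hypothetical.
dta_metrics <- function(tp, fp, fn, tn) {
  sens <- tp / (tp + fn)
  spec <- tn / (tn + fp)
  prec <- tp / (tp + fp)
  lr_pos <- sens / (1 - spec)
  lr_neg <- (1 - sens) / spec
  list(accuracy = (tp + tn) / (tp + fp + fn + tn),
       sensitivity = sens, specificity = spec, precision = prec,
       f1 = 2 * (prec * sens) / (prec + sens),
       lr_pos = lr_pos, lr_neg = lr_neg, dor = lr_pos / lr_neg)
}
dta_metrics(tp = 90, fp = 5, fn = 10, tn = 95)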
Risk of bias across studies
Two independent reviewers utilized the updated Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) instrument to evaluate the quality and potential bias of all studies. Conflicts were resolved through communication with a third reviewer, and the reviewers independently assessed the initially included papers. Two categories were considered: susceptibility to bias and applicability, across the domains of patient selection, index test, and reference standard; bias was also evaluated in the flow and timing domain.
Additional analyses
Using the Random Effects (RE) model technique, a univariate meta-analysis was conducted for each modality's sensitivity and specificity to determine its diagnostic accuracy [67]. The RE model was chosen because of the suspected high proportion of heterogeneity. The primary endpoints were sensitivity, specificity, a summary receiver operating characteristics (SROC) curve, and the diagnostic odds ratio (DOR). Point estimates and 95% confidence intervals (CIs) for each study were calculated to ensure consistency of sensitivity and specificity. A bivariate meta-analysis of sensitivity and specificity used R version 4.1.2 (R Foundation for Statistical Computing, Vienna, Austria, 2021) and RStudio version 1.4.1717 to obtain the SROC curve. This was implemented with the "mada" and "meta" R packages. Then the average AUC of the SROC was estimated [68,69]. The secondary outcomes comprised the positive and negative likelihood ratios, precision, and F1 score. Cochran's Q test and I2 statistics were utilized to evaluate statistical heterogeneity between studies: an I2 of 0-40% indicates insignificant heterogeneity, 30-60% moderate heterogeneity, and 75-100% considerable heterogeneity. A funnel chart was used to examine and depict publication bias [32]. All p-values are derived from two-sided tests, and p-values below 0.05 are statistically significant. Subgroup analysis was performed based on machine learning algorithms, ICH types, retrospective or prospective study design, and acute or chronic ICHs.
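A minimal sketch of this bivariate workflow with the mada package is shown below. The per-study counts in the example data frame are hypothetical and serve only to show the shape of the input.

# Sketch of the bivariate DTA meta-analysis workflow described above
# ('mada' package). The per-study counts are hypothetical.
library(mada)
dta <- data.frame(TP = c(90, 120, 75, 140), FN = c(10, 15, 8, 12),
                  FP = c(5, 12, 9, 14),     TN = c(95, 140, 80, 150))

fit <- reitsma(dta)      # bivariate model of sensitivity/specificity
summary(fit)
plot(fit, sroclwd = 2)   # SROC curve
AUC(fit)                 # area under the SROC curve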
Using the Cochrane Review Manager version 5.4 (RevMan 5.4) program, cross-study risk of bias and applicability concern charts were assessed.
Fig. 2 (A) Risk of bias and applicability concerns graph: review authors' judgments about each domain presented as percentages across the included studies. (B) Risk of bias and applicability concerns summary: review authors' judgments about each domain for each included study.
Fig. 3 Funnel plot used to assess publication bias.
Fig. 5 Univariate sub-group analysis of specificity with random model based on retrospective studies.
Fig. 6 The SROC of the bivariate model for DTA based on retrospective studies.
Fig. 7 Univariate sub-group analysis of sensitivity with random model based on prospective studies.
Fig. 8 Univariate sub-group analysis of specificity with random model based on prospective studies.
Table 1. Summary of findings for all studies included in the meta-analysis.
Table 2. DTA estimated from all the studies included in the meta-analysis using a (2 × 2) confusion table | 2023-12-04T14:38:02.120Z | 2023-12-01T00:00:00.000 | {
"year": 2023,
"sha1": "58302903f5809964f25644de234f0bab766fd9ba",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "58302903f5809964f25644de234f0bab766fd9ba",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": []
} |
266566968 | pes2o/s2orc | v3-fos-license | Firm Structural Attributes and Capital Structure Adjustments among Listed Manufacturing Firms in Nigeria using Static and Dynamic Approaches
The study examined the effect of firm structural attributes on the capital structure adjustments of Nigerian listed manufacturing companies. Out of the 56 listed manufacturing firms, 35 were selected using the purposive sampling approach. Dynamic and static estimation techniques were applied. The results from both static and dynamic panel data revealed that assets tangibility had a positive and significant effect on capital structure adjustments (t = 4.463; t = 2.965; p < 0.05). Non-debt tax shields (t = -2.831; t = -4.478; p < 0.05) had a negative but significant effect on capital structure adjustments. Furthermore, the static result showed that firm size (t = -5.617; p < 0.05) had a negative but significant effect, while the dynamic results revealed that firm size (t = 6.956; p < 0.05) had a positive and significant effect on capital structure adjustments. This study concluded that structural attributes serve as firm-level determinants for understanding the factors influencing the capital structure and speed of adjustments of listed companies in Nigeria. It was recommended that management of firms expand in size and invest in tangible assets to enhance their profit levels; this will enable them to enjoy large profits with a large reduction in debt ratio.
Introduction
Companies, in particular those that produce goods, contribute significantly to a nation's economy through a variety of means. This is because the manufacturing industry is one of the drivers of economic recovery, and the prosperity of manufacturing businesses can stimulate the growth of other companies as well as the economy (Abate, 2012; Efuntade & Akinola, 2020). Thus, no organisation in the world can be an island unto itself without leverage or externally sourced funds to finance its assets or business operations. Corporate bodies are being prompted to pay attention to how business characteristics affect the adjustment of their capital structure. Financial constraints, financial surpluses or deficits, external financing costs, the difference between the observed and optimal debt ratios, economic distress, the ownership of the business, capital market access costs, macroeconomic factors, and corporate governance structures are some of the variables that led to the emergence of the issues surrounding capital structure adjustment. These factors influence how quickly the capital structure is adjusted to its optimum level (Buvanendra et al., 2017). Hence, capital structure adjustment is the way in which a company adjusts its debt and equity to achieve the optimum capital structure, thereby leading to a speed of adjustment. Wendy and Salim (2019) state that only when data from the prior period are available can the speed of adjustment towards target leverage be established. Mawitjere et al. (2016) state that a dynamic approach is implemented by tracking the direction of shifts and the company's rate of adjustment, i.e., the rate at which it achieves its optimal leverage. The rate at which the capital structure is balanced at the appropriate degree of leverage is known as the speed of adjustment (Surwanti, 2015).
The speed of adjustment is the rate at which businesses reduce the difference between the desired leverage for the current period and the leverage of the previous year (Mawitjere et al., 2016). The adjustment speed of a firm weighs the cost of adjustment incurred by the firm against the loss sustained when the firm's leverage deviates from its target. Targeted leverage levels can differ, allowing for a deviation between target and observed leverage (Heshmati, 2001). Firm structural attributes such as asset tangibility, non-debt tax shields and firm size are attributes that could have an effect on capital structure, as explained by the trade-off theory. Asset tangibility represents tangible assets used or pledged as security when a company sources funds. Due to the difficulty of obtaining public debt, manufacturing companies frequently turn to lease finance and loans from deposit money banks in countries with lax bankruptcy laws. Instead of using an interest tax shield, businesses can use a Non-Debt Tax Shield (NDTS) (Ramy, 2020). M'ng et al. (2017) state that when a company employs high depreciation as an NDTS, its capital structure has lower debt levels.
The optimal amount of leverage is obtained by striking a balance between the costs of raising debt and the advantages of interest payments. Firm size is a measure of an organization's total assets, both present and future (Okonkwo & Azolibe, 2020). A smaller company would be less able to take on debt than a larger one, since it would have fewer collateral assets with which to pay off debt in the case of bankruptcy. Financial variables that relate to capital structure modifications under management control, such as business size, assets tangibility, and non-debt tax shields, have been disregarded and have not received enough attention. Although there is a wealth of empirical information on capital structure adjustments in developed nations, research in developing nations like Nigeria is still at an early stage, and until recently, developing nations' manufacturing businesses were not given enough attention in capital structure adjustment research. Numerous studies, such as (Aggarwal & Padhan, 2017; Chang et al., 2008; Doorasamy, 2021), have concentrated on firms' capital structure in different parts of the developed and developing world. However, the reviewed studies (Kieschnick, 2017; Nguyen & Nguyen, 2020; Sathyamoorthi et al., 2019; Uddin et al., 2019; Wu, 2019) focused a great deal of attention on the variables that affect a firm's capital structure and/or financial performance in industrialized nations, but capital structure adjustments were overlooked. Due to differences in their approach and scope, the aforementioned research produced conflicting results. Furthermore, the prior studies did not take into account the connections between changes in a firm's capital structure and its structural attributes. Certain studies mentioned above failed to take into account the structural characteristics of organizations that are financial in nature, such as business size, non-debt tax shields, and asset tangibility.
The analyses in the aforementioned studies also did not consider both static and dynamic approaches. In light of this, the study addressed a research gap by investigating the impact of firm structural characteristics on changes to the capital structure of Nigerian listed manufacturing companies. Therefore, this study examined the effect of firm structural attributes (asset tangibility, non-debt tax shields and firm size) on the capital structure adjustments of listed manufacturing firms in Nigeria.
Literature Review

Theoretical Framework
The study was anchored on the trade-off theory. Modigliani and Miller established the trade-off theory in 1963 to bolster their argument that there is a target debt level that optimizes company value by weighing the benefits of debt against the expenses associated with debt financing. The advantages of debt include tax deductions for interest payments and a drop in free cash flows, which suggests that a high leverage ratio can lead to an increase in company value. On the other hand, a high level of leverage results in agency expenses and financial difficulties for the company.

Assets Tangibility and Capital Structure Adjustments

Abdullahi and Suleiman (2020) used a multiple regression method and discovered that assets tangibility had a positive effect on leverage while assessing how firm characteristics affected the capital structure of cement companies in Nigeria from 2010 to 2015. In addition, Ramy (2020) revealed that asset tangibility had a positive impact on capital structure decisions while reviewing the factors that affect capital structure and capital structure dynamics. Ezeani (2019) studied the determinants of capital structure and speed of adjustment in Nigerian non-financial firms; the author used GMM and disclosed that assets tangibility and firms' leverage are positively related. Kim (2017) argued that companies holding a variety of tangible assets should be able to obtain debt financing more easily and affordably even if the value of a particular asset declines. Belkhir et al. (2016) established that tangible assets are positively connected with leverage. Buvanendra et al. (2017) assessed the effect of firm characteristics and corporate governance on the capital structure adjustments of 90 listed firms in India and Sri Lanka between 2004 and 2013; the authors employed GMM and OLS regression and established that assets tangibility and firms' leverage are positively related. Based on the review of prior and empirical literature, the study hypothesized that there is no significant effect of assets tangibility on capital structure adjustments.

Non-Debt Tax Shields and Capital Structure Adjustments

M'ng et al. (2017) examined the determinants of the capital structure of public listed companies in Malaysia. The authors employed a panel data estimation technique and established that a firm can reduce debt levels in its capital structure by using high depreciation as an NDTS. Khan et al. (2020) assessed how firm characteristics determine the capital structure of Pakistani firms listed on the Pakistan Stock Exchange during the period 2008-2017. The authors used a quantile regression approach and revealed that NDTS had a negative and significant effect on financial leverage. This was also affirmed by Onofrei et al. (2015), who discovered a negative relationship between leverage and non-debt tax shields. Based on the review of prior and empirical literature, the study hypothesized that there is no significant effect of non-debt tax shields on capital structure adjustments.
Firm Size and Capital Structure Adjustments
A company that has more large assets will be better able to attract more financing, since it will have additional secured assets to pay off debt in the case of bankruptcy (Awan et al., 2011). Muigai and Muriithi (2017) investigated the moderating role of firm size on the link between capital structure and financial distress of forty (40) listed non-financial enterprises in Kenya from 2006 to 2015. The authors used panel regression estimation and established that business size considerably impacted the relationship between capital structure and financial distress of non-financial enterprises. The studies (Nguyen et al., 2017; Marughu & Nwaobia, 2020; Okonkwo & Azolibe, 2020) established that the size of a company positively affects its debt level. Memona et al. (2020) employed the generalized method of moments and discovered that the size of the firm had a positive impact on the adjustment speed. Razaq et al. (2023) affirmed that firm size had a positive and significant effect on sustainability reporting; this was established while examining the effect of corporate attributes on the sustainability reporting of listed non-financial firms in Nigeria from 2011 to 2020. However, Surwanti (2015) studied the speed of adjustment of leverage in Indonesian companies and used dynamic panel data to disclose that firm size negatively affects the speed of adjustment. The studies of (Mawitjere et al., 2016; Naveed et al., 2015; Uddin et al., 2019) used GMM and established that firm size negatively affects the leverage structure and speed of adjustment. Hence, the study hypothesized that there is no significant effect of firm size on capital structure adjustments.
Inflation and Capital Structure Adjustments
Aggarwal and Padhan (2017) established that increases in inflation tend to make firms borrow instead of raising equity, and that high economic growth makes firms raise more equity, while examining the impact of capital structure on firm value in the Indian hospitality industry. Ibrahim et al. (2023) employed the Generalized Method of Moments and discovered that inflation had a positive influence on capital structure adjustments while examining the influence of firm attributes on the capital structure adjustments of listed manufacturing firms in Nigeria from 2010 to 2019. Pervaiz et al. (2021) studied the adjustment speed towards target capital structure and its determinants; the authors employed a panel data estimation method and discovered that the inflation rate is related to capital structure adjustments. Kaloudis and Tsolis (2019) evaluated the capital structure and speed of adjustment in U.S. firms in a comparative study of microeconomic and macroeconomic conditions. The authors used a quantile regression approach and showed that GDP growth had a positive and significant impact on the capital structure of United States companies over 44 years. Also, Ibrahim et al. (2023) and Surwanti (2015) discovered that economic growth (GDPg) demonstrated positive effects on adjustment speed. Firms that operate in a country with increasing real GDP have a higher level of economic wealth and thereby tend to issue more debt than equity (Chipeta & Mbululu, 2013; Muthama et al., 2013). However, Annalien (2010) employed OLS and revealed that GDP growth does not have a statistically significant relationship with capital structure among South African firms. Kayo and Kimura (2011) found a negative correlation and made the case that companies typically earn higher net incomes and higher revenues during boom periods of economic activity.
Methodology
The research design used for this study was correlational. The Nigerian Exchange Group (NGX) listed 56 manufacturing companies, which constituted the study's population. A purposive sampling technique was employed to select thirty-five (35) listed manufacturing companies. Secondary data were collected from the published financial reports and accounts of the selected 35 listed manufacturing firms from 2010 to 2021. Descriptive statistics, correlation analysis, and static and dynamic panel estimation techniques were used. To validate the data, diagnostic tests for multicollinearity, heteroskedasticity and autocorrelation, in addition to other specification tests such as the Breusch-Pagan test, were conducted.
Measurement of Variables
The dependent variable was capital structure, proxied by financial leverage (FLEV), the ratio of total debt to total equity at book value (Abdullahi & Suleiman, 2020; Buvanendra et al., 2017; Ezeani, 2019). The independent variables are: non-debt tax shields (NDT), represented as the ratio of depreciation to total assets (Onofrei et al., 2015); asset tangibility (AST), represented as the ratio of non-current assets to total assets (Belkhir et al., 2016); and firm size (FIS), represented by the natural log of assets (Okonkwo & Azolibe, 2020). The control variables are: inflation (INF), represented by the consumer price index in % (Aggarwal & Padhan, 2017); and GDP growth (GDPg), represented by changes in GDP over time expressed in % (Muthama et al., 2013).
Model specification
The model of this study is specified as follows:

Flev_it = λ0 + λ1 Fsi_it + λ2 Ast_it + λ3 Ndt_it + λ4 Inf_t + λ5 Gdpg_t + ε_it .................. (1)

The explicit representation of the equation above in dynamic (partial adjustment) panel form is given as:

Flev_it = δ Flev_i,t-1 + (1 − δ)(λ0 + λ1 Fsi_it + λ2 Ast_it + λ3 Ndt_it + λ4 Inf_t + λ5 Gdpg_t) + ε_it .................. (2)

Where Flev_i,t-1 = lagged financial leverage, (1 − δ) represents the SOA, Fsi = firm size, Ast = assets tangibility, Ndt = non-debt tax shields, Inf = inflation, Gdpg = gross domestic product growth, λ0 = intercept, ε = error term, i = company, and t = time. (A sketch of one possible estimation routine for Equation (2) follows the descriptive statistics below.)

As shown in Table 1, financial leverage (Flev) has an average value of 0.614, which implies that the equity of the selected companies was, on average, greater than their liabilities; its maximum and minimum values were 3.327 and 0.021. Firm size (Fsi) had an average value of 7.306, with maximum and minimum values of 6.969 and 4.797, respectively. Asset tangibility (Ast) has an average value of 0.433, implying that non-current assets constituted 43.3% of the selected firms' total assets, with maximum and minimum values of 1.457 and -0.103. Non-debt tax shield (Ndt) has an average value of 3.815, with maximum and minimum values of 14.573 and 0.001, respectively. Inflation (Inf) had a mean of 12.351, implying that the average inflation rate from 2010 to 2021 was about 12%, which was very high; its maximum and minimum values were 16.95% and 8.06%, respectively. Gross domestic product growth (Gdpg) has a mean value of 3.19, implying that the average Nigerian GDP growth rate between 2010 and 2021 was not encouraging; its maximum and minimum values were 8.0 and -1.79, respectively. The kurtosis coefficients of Flev and Ndt were greater than 3, implying heavy tails and leptokurtic distributions, while Fsi, Ast, Inf and Gdpg have kurtosis coefficients less than 3, implying flatter, platykurtic distributions. The probabilities associated with the Jarque-Bera statistics for all the study's variables (Flev, Fsi, Ast, Ndt, Inf and Gdpg) were below 0.05, indicating that the variables depart from a normal distribution. Source: Authors' Computation (2023).
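The dynamic specification in Equation (2) is typically estimated by system GMM. The R sketch below, using the plm package, is one possible illustration only and is not the authors' code; the data frame 'panel' and its column names are hypothetical.

# Sketch: system GMM estimation of the dynamic leverage model in
# Eq. (2) with the 'plm' package; 'panel' and its columns are
# hypothetical.
library(plm)
pdat <- pdata.frame(panel, index = c("firm", "year"))

gmm_fit <- pgmm(
  flev ~ lag(flev, 1) + fsi + ast + ndt + inf + gdpg |
    lag(flev, 2:4),              # deeper lags of leverage as instruments
  data = pdat, effect = "individual",
  model = "twosteps", transformation = "ld"  # "ld" = system GMM
)
summary(gmm_fit)  # reports Sargan and AR(1)/AR(2) diagnostics

soa <- 1 - coef(gmm_fit)["lag(flev, 1)"]  # speed of adjustment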
Correlation Analysis
As indicated in Table 2, firm size and Flev have a negative link, as seen from the correlation coefficient (-0.261). This suggests that larger businesses rely less on external borrowing. This is supported by Ezeani (2019), who established that a negative association exists between firm size and leverage. The correlation coefficient (0.162) shows a positive relationship between asset tangibility (Ast) and Flev, implying that firms with large fixed assets have access to external borrowings. This finding is similar to the result of Efuntade and Akinola (2020), who found that asset tangibility significantly correlated with leverage. Non-debt tax shield (Ndt) has an inverse relationship with Flev, as depicted by the correlation coefficient (-0.068). This finding is in consonance with the outcome of Onofrei et al. (2015), who discovered a negative relationship between leverage and non-debt tax shields. Inflation (Inf) positively correlated with Flev at a coefficient of 0.175, suggesting that businesses typically borrow money rather than increasing equity when inflation rises. Gross domestic product growth (Gdpg) negatively correlated with Flev, as depicted by the correlation coefficient (-0.040).
Since no variable had a coefficient above the threshold of 10 on the VIF, multicollinearity problems are unlikely to occur in the study. Table 3 shows the p-value of the Hausman test (0.166), which implies that the random effect model is more appropriate than the fixed effect model. This is complemented by the Breusch-Pagan test, which was used to decide the appropriate panel regression among pooled OLS, random and fixed effect regression. The output of the Breusch-Pagan test, as shown in Table 3, suggests that the random effect model was the appropriate estimation method, as indicated by the p-value < 0.05. The probability value of 0.000 < 0.05 and the F-statistic of 10.788 show that the model is fit and significant at the 5% level and that the variables were properly selected and combined. This means that there is an association between firm structural attributes and financial leverage. The R2 shows that about 52.5% of the total variation in financial leverage is explained by the predictor variables, and the remaining 47.5% is unexplained, being accounted for by the stochastic error term. The Wald χ2 test reveals a p-value of 0.000 < 0.05, which indicates that all predictor variables can be taken as part of the determinant factors of financial leverage.
Furthermore, firm size (t = -5.617; p < 0.05) had a negative but significant impact on financial leverage. The result suggests that larger firms are more diversified in providing services and have sources of internal funding accessible. In this case, large Nigerian firms will likely issue more equity rather than debt. The findings of the studies (Ezeani, 2019; Khan et al., 2020; Uddin et al., 2019) demonstrated a significant and negative association between firm size and leverage structure. Asset tangibility had a positive and substantial effect on financial leverage (t = 4.463; p < 0.05). This suggests that the asset structures of the enterprises are important when it comes to seeking finance, because tangible assets act as collateral for debt financing. This result was in line with the implications of the trade-off theory, and the finding itself was consistent with (Buvanendra et al., 2017), who found that assets tangibility was positively and significantly related to leverage.
The impact of non-debt tax shields on capital structure was substantial but negative (t = -2.831; p < 0.05). A company with a higher share of tangible fixed assets records higher levels of depreciation and a larger tax credit; the negative and significant outcome of this study implies that firms with high depreciation figures may lower their debt. This finding agrees with the outcome of Onofrei et al. (2015), who discovered a negative relationship between leverage and non-debt tax shields. On the control variables, inflation (t = 3.874; p < 0.05) had a positive and significant effect on leverage, and it was claimed that as inflation rises, businesses typically borrow money rather than raising equity. This finding is similar to the study of Kaloudis and Tsolis (2019), who discovered that inflation had a positive influence on leverage. GDPg (t = 0.298; p > 0.05) had a positive but insignificant effect on financial leverage; when GDP decreases, companies may choose to restructure their finances by trading debt for equity, which lowers their leverage. However, the χ2 statistic (52.542, p-value = 0.000) disclosed that the firm structural attributes used in this study can be considered part of the determinant factors of financial leverage. The study then reported the adjustment speed (SOA) estimated using system GMM. Table 4 reports a lagged leverage coefficient of 0.115, which was positive and significant for all manufacturing business sectors at the 5% level. The outcomes align with the research findings published by Aderajew et al. (2017). There was a positive and significant effect of firm size on capital structure adjustment in Nigerian manufacturing firms (t = 6.956; p < 0.05). This is supported by the claim that, since the cost of adjustment is essentially fixed, large companies can alter their capital structures more quickly and at a lower cost than smaller ones. The finding was similar to the outcomes of (Ezeani, 2019; Uddin et al., 2019). Asset tangibility (t = 2.965; p < 0.05) had a positive and significant effect on the speed of adjustment, implying that creditors place a higher value on material possessions. The outcome bolsters the significance of using physical assets as security for loans and was in consonance with the result of (Buvanendra et al., 2017). Non-debt tax shields (NDTS) had a negative and significant effect on capital structure adjustment (t = -4.478; p < 0.05), indicating that firms with high depreciation may quickly adjust their capital. This was in line with the outcome of (M'ng et al., 2017). Inflation had a positive and significant effect on capital structure adjustments (t = 9.089; p < 0.05), and it was inferred that rising inflation generally causes businesses to borrow money rather than raise equity and to reduce their debt. GDPg contributes positively and strongly to changes in capital structure (t = 7.725; p < 0.05). This indicates that as the economy grows, so does the demand for goods and/or services, which forces companies to produce more and raises their need for money to fund operations.
This study employed a coefficient diagnostic test to identify errors in GMM estimation arising from the validity of the data via the Sargan J-statistic, which was 27.498 (p-value = 0.545), while the Arellano and Bond AR(1) test yielded a p-value of 0.618, indicating that the model is not affected by autocorrelation. This validates the efficiency and reliability of the estimates, as it shows no indication of first-order serial correlation in the outcome. At the 0.05 level of significance, Table 5 indicates that lagged leverage (Flev-1) is significant. Manufacturing firms adjust leverage towards the target capital structure at a speed of 89% (1 − λ) per year, as inferred from the estimated lagged leverage coefficient value of 0.115. This implies that it takes firms approximately 0.3 years to close half of the gap between their current and target leverage. The current findings of the study indicate that Nigerian industrial enterprises relied on bank credit, a pattern similar to markets with underdeveloped bond markets, where firms acquire debt finance through bank lending. The study further revealed that the SOA across the manufacturing firms in the study was 89%, which suggests an extremely high adjustment speed. This indicates that faster adjustments occurred, easing the means of acquiring financing through debt and lowering adjustment costs. This finding is similar to the work of Ezeani (2019), who reported SOAs of 83%, 72% and 63% for Nigerian oil and gas, industrial goods and non-financial firms, respectively; SOAs of 80% for Swiss firms and 79% for firms in Spain have also been reported.
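As a numerical check, the adjustment speed and the implied half-life follow mechanically from the lagged coefficient quoted above; a short sketch:

# Sketch: SOA and half-life implied by the reported lagged
# leverage coefficient (values as quoted in the text above).
lambda <- 0.115                       # coefficient on Flev(t-1)
soa <- 1 - lambda                     # 0.885, i.e., ~89% per year
half_life <- log(0.5) / log(lambda)   # ~0.32 years to close half the gap
c(soa = soa, half_life = half_life)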
Conclusion and Recommendations
It is deduced that firm structural attributes such as firm size, non-debt tax shields and assets tangibility have a significant effect on capital structure adjustment, because large firms with high tangible assets will have assets to use or pledge as security when sourcing funds. This study concluded that structural attributes serve as firm-level determinants for understanding the factors influencing the capital structure (financial leverage) and speed of adjustments of listed companies in Nigeria. It was recommended that management of firms expand in size and invest in tangible assets to enhance their profit levels; this will enable them to enjoy large profits with a large reduction in debt ratio. By presenting proof of the presence of capital structure adjustments, this study added to the body of knowledge already available on capital structure. This study departed from the prevailing pattern in methodology and modelling by combining both static and dynamic panel estimation techniques, which were employed separately by previous researchers.
"year": 2023,
"sha1": "2b5d8956811620a8b0eee22f85e0ba69c4129bc7",
"oa_license": "CCBYNC",
"oa_url": "https://fujafr.fudutsinma.edu.ng/index.php/fujafr/article/download/62/60",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "570133fe29b9e1f5d6f18ec58a60058b9b09879f",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": []
} |
225853329 | pes2o/s2orc | v3-fos-license | Experimental and numerical characterization of the interface between concrete masonry block and mortar
Abstract Masonry is a construction system that has been used since the beginning of civilization and is still used throughout the world. The finite element method is a more recent development that allows complex problems, including structural masonry problems, to be solved. A vast amount of literature exists on finite element modeling, using software such as ABAQUS, to represent experimental masonry models. Based on this established pattern, an experimental and analytical research program was designed and implemented. Thus, a set of tests was conducted to determine the compressive and tensile strengths of the masonry components, i.e., block, mortar, and grout. Bond wrench tests, diagonal tension tests, and horizontal joint shear tests were conducted to characterize the interface between the blocks and the mortar. A finite element model was then developed to represent the physical models, and the general conclusion is that the finite element model was able to represent the physical models reasonably well.
Introduction
A substantial volume of knowledge has been established with respect to individual masonry components (units, mortars and grouts) as well as the interaction between these components. For example, Barbosa (2005) investigated the shrinkage of concrete masonry units and of masonry walls constructed from those units; Santos (2014) examined the effects of the elastic properties of the block-grout interface on the stiffness of masonry walls; Izquierdo (2015) studied the interface between block and grout under direct compression and flexural compression; Barbosa (2000) and Madia (2012) studied the behavior and interaction of infill masonry walls and the surrounding concrete elements; Capuzzo Neto (2000), Maluf (2007) and Lopes (2014) investigated the behavior of masonry walls under compressive loadings; and Silva (2014) determined the distribution of vertical loads on masonry walls using experimental and numerical models. Studies have also been conducted on the behavior of masonry structural elements. Examples include Landini (2001) and Contadini (2014), where the behavior of masonry beams under flexural loads was studied, while Landini (2001) and Pasquantonio (2015) studied the behavior of masonry beams under shear loads. Masonry has been used in the construction of tall buildings, and in this application masonry elements are subjected to high compressive stresses. Fortes (2017) determined the properties of masonry constructed with high strength units, which can be used when masonry is subjected to high compressive stresses. While there is a large body of literature on experimental research on structural masonry components and elements, there is a distinct lack of research, and thus understanding, of the interaction between masonry elements, e.g., a masonry beam connected to a masonry wall. The increase in computational capability over the last forty years has led to a large base of numerical modeling of masonry structural elements together with increasing sophistication in the modeling. To capture the capacity and behavior of masonry accurately, these sophisticated models require many material parameters, which are extracted from experimental research endeavors. Due to a lack of standardization in testing, differences in the manufacture of local masonry components from local materials, and differences in local construction techniques and skills, there is a large dispersion worldwide in the values of the material parameters of interest for numerical modelling. This is especially true for the parameters that control the behavior of the interface between blocks and mortar in concrete masonry and the parameters that control the post-peak behavior. Herein we present a summary of three tests, namely the bond wrench, diagonal tension, and horizontal joint shear tests, as conducted on masonry constructed with half-scale (1:2) concrete blocks with dimensions of 203 × 102 × 102 (length × width × height) mm and type S mortar (1:0.5:4.5 PC:lime:sand by volume); some masonry specimens were hollow while others were grouted. In addition to the results of these masonry tests, results of tests conducted to determine the compressive strength of the masonry materials are also presented. Results from the tests are used to extract the parameters that are required for accurate numerical modeling of masonry elements and structures constructed from these materials. In addition to the results obtained from the tests described herein, the results obtained by other authors who conducted similar tests are also presented.
Tests to characterize the masonry and materials
In this section, the three tests that were conducted as part of this research to characterize the behavior and capacity of the block-mortar interface are described. The difficulty in representing the masonry behavior accurately lies in representing properly the block-mortar interface (Oliveira 2014, Bolhassani 2015, Santos 2016).
The tests that were conducted to determine the compressive strengths of the masonry materials are also described.
Flexural strength test (bond wrench test) -AS 3700 (2001)
The objective of this test is to determine the flexural tensile strength of the mortar-block interface. Specimens are tested at 7 days or as soon as practicable after 7 days; grouted prisms are tested 28 days after filling the cores. Specimens are transported to the testing location no more than 24 hours before the testing time. The testing apparatus, or wrench, must be able to do the following:
- Apply a bending moment to the joint to be tested in the prism;
- Have a retaining frame into which the prism is clamped;
- Have the means of applying and measuring the load to determine the flexural stress at failure to an accuracy of within 0.01 MPa.
Sketches of a typical bond wrench apparatus are shown in Figures 1 and 2. The flexural moment is applied to the test joint by means of four gripping points at the quarter points along the length of the masonry unit on both the tension and compression faces. The wrench must have the following parameters calibrated:
- The mass of the wrench (m1) to within ±25 g;
- The distance from the inside face of the tension gripping block to the center of mass of the bond wrench (d1) to within ±2 mm;
- The distance from the inside face of the tension gripping block to the loading handle (d2) to within ±2 mm.
The tensile strength of the specimen is calculated using Equation 1, which (consistent with the quantities defined below) combines the bending and axial stress components:

f_sp = M_sp / Z_d - F_sp / A_d (1)

Where:
f_sp = flexural tensile strength of the specimen, MPa;
M_sp = bending moment about the centroid of the bedded area of the test joint at failure, N·mm, calculated as M_sp = 9.81 · m2 · (d2 - tu/2) + 9.81 · m1 · (d1 - tu/2);
Z_d = section modulus of the cross-sectional area A_d of the member;
F_sp = total compressive force on the bedded area of the test joint, N, calculated as F_sp = 9.81 · (m1 + m2 + m3);
A_d = cross-sectional area of the member, mm2;
m1, m2 and m3 = the mass of the wrench, the mass equivalent of the applied load, and the mass of the block above the joint being tested, used in the flexural strength calculation, kg;
d1 = distance from the inside edge of the tension gripping block to the center of mass, mm;
d2 = distance from the inside edge of the tension gripping block to the loading handle, mm;
tu = width of the masonry unit, mm.
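For illustration, the calculation above fits in a few lines. The R sketch below assumes the combined-stress form of Equation (1) inferred from the listed quantities (bending stress minus axial compressive stress); the function and any example inputs are hypothetical.

# Sketch: bond wrench flexural strength from the quantities defined
# above, assuming f_sp = M_sp/Z_d - F_sp/A_d. Masses in kg, lengths
# in mm; Zd in mm^3, Ad in mm^2; result in MPa.
bond_wrench_fsp <- function(m1, m2, m3, d1, d2, tu, Zd, Ad) {
  Msp <- 9.81 * m2 * (d2 - tu / 2) + 9.81 * m1 * (d1 - tu / 2)  # N.mm
  Fsp <- 9.81 * (m1 + m2 + m3)                                  # N
  Msp / Zd - Fsp / Ad
}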
Initial shear strength test -BS EN 1052-3 (2002)
Several authors (e.g., Araújo 2002) have characterized the failure modes observed in this test; diagonal fracture, for example, is due to diagonal cracking of the units. The initial shear strength of the joint is obtained by linear regression of the shear strength against the precompression (normal) stress, extrapolated to zero normal stress. The testing machine must be able to apply the load while the specimen is subject to a pre-compression load. Two types of specimens can be used: type A consists of a prism assembled with three blocks with equal heights that are less than or equal to 200 mm, and type B consists of a prism assembled with two blocks with unequal heights that are greater than 200 mm. Research (Araújo 2002, Oliveira 2014, Drysdale et al. 1999, Vermeltfoort 2012) indicates that type A specimens are preferred. Specimens must be constructed as follows:
1. Set the first block on a firm, clean, flat, level surface;
2. Place a mortar bed on the face of the block with a final mortar joint thickness between 8 and 15 mm (in the research presented herein, the mortar thickness was 10 mm);
3. Place the next block on the mortar joint, checking for linear alignment and level;
4. Repeat steps 2 and 3;
5. Strike off excess mortar with a trowel without disturbing the blocks;
6. Take samples (cubes) of the mortar.
Immediately after assembling a specimen, pre-compress the specimen with a uniformly distributed mass to give a vertical stress between 2.0 × 10⁻³ N/mm² and 5.0 × 10⁻³ N/mm². Then cure the specimens and maintain them undisturbed until testing. When lime-based mortar is used, specimens should be covered with a polyethylene sheet to prevent the mortar from drying out during the curing period. Specimens are to be tested, as shown schematically in Figure 3, when they reach an age of 28 days ± 1 day, unless otherwise specified when lime-based mortar is used. The compressive strength of the mortar is to be determined at the same time as the prisms are tested. The standard specifies three precompression stresses and, for each, a minimum of three specimens must be tested. Precompression stresses (f_pi) are determined based on the compressive strength of the units used. For units with compressive strength less than 10 MPa, precompression stresses should be approximately 0.1, 0.3, and 0.5 MPa; for units with compressive strength greater than 10 MPa, these precompression stresses should be doubled. The rate at which shear stress should be applied to the specimens is between 0.1 (N/mm²)/min and 0.4 (N/mm²)/min. The following parameters must be recorded during a test:
1. the cross-sectional area of the specimen parallel to the shear force (A_i), with an accuracy of 1%;
2. the maximum applied load (F_i,max);
3. the precompression load;
4. the type of failure.
If a specimen experiences rupture type R4, the standard recommends that further specimens be tested until three shear ruptures of the other types are obtained. For each specimen and precompression stress, the shear strength (f_voi) is calculated using Equation 2.

f_voi = F_i,max / (2 · A_i) (2)

Shear strengths (f_voi) are plotted as a function of precompression stresses (f_pi), as shown in Figure 5, and a linear regression line is determined. The initial shear strength (τ_o) is the y-intercept of the regression line, while the angle of internal friction (α) can be determined from the slope of the regression line. The characteristic value of the initial shear strength is f_vok = 0.8 · f_vo, and the characteristic angle of internal friction can be obtained from tan α_k = 0.8 · tan α.
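The regression step described above can be reproduced in a few lines of Python; the f_voi values below are invented placeholders for demonstration only, not measurements from this study.

```python
import numpy as np

# Initial shear strength (tau_0) and internal friction angle from triplet
# tests: linear regression of shear strength on precompression stress.
f_pi = np.array([0.2, 0.2, 0.2, 0.6, 0.6, 0.6, 1.0, 1.0, 1.0])  # MPa
f_voi = np.array([0.45, 0.50, 0.48, 0.78, 0.74, 0.80,
                  1.05, 1.10, 1.02])                             # MPa, invented

slope, intercept = np.polyfit(f_pi, f_voi, 1)
tau_0 = intercept                        # initial shear strength, MPa
alpha = np.degrees(np.arctan(slope))     # angle of internal friction, degrees
f_vok = 0.8 * tau_0                      # characteristic initial shear strength
tan_alpha_k = 0.8 * slope                # characteristic friction coefficient
print(f"tau_0 = {tau_0:.3f} MPa, alpha = {alpha:.1f} deg")
```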
Diagonal tension (shear) test -ASTM E519-02
This test was developed to determine the diagonal tensile (shear) strength of masonry more accurately than was possible with other available methods. The specimen size was selected as the smallest that would be reasonably representative of a full-size masonry assemblage and that could be tested in the machines typical of laboratories. The height and width of the specimen are 1200 mm and 1200 mm, while the thickness depends on the thickness of the block. At least three specimens must be tested, all constructed using the same type of block and mortar. When two types of mortar are to be evaluated, two sets of three specimens must be constructed. Specimens should not be moved for at least seven days after construction and should be stored for at least 28 days in a controlled environment with a temperature of 24 ± 8 °C and relative humidity between 25 and 75%. In addition to testing the masonry specimen, the mortar and the block must also be tested according to the following:
1. Mortar - for each mortar type, three 50-mm cubes must be tested to determine the compressive strength of the mortar;
2. Units - at least six units must be tested to determine their compressive strength.
The procedure to test the specimens is as follows:
1. Placement of the loading shoes - the upper and lower loading shoes are centered on the upper and lower bearing surfaces of the testing machine and are placed on a diagonal of the specimen;
2. Specimen placement - the specimen is positioned such that the diagonal to be tested is centered with the axis of the testing machine. In some cases it is necessary to cap the specimen with gypsum in the lower loading shoe to obtain full contact between the shoe and the specimen;
3. Instrumentation - extensometers or LVDTs are to be used to measure the shortening or elongation of the two diagonals of the specimen.
The shear stress is calculated using Equation 3.
S_s = 0.707 · P / A_n (3)

where: S_s - shear stress, MPa; P - applied load, N; A_n - net area of the specimen, mm², calculated using Equation 4.

A_n = ((w + h) / 2) · t · n (4)

where: w - width of the specimen, mm; h - height of the specimen, mm; t - thickness of the specimen, mm; n - percent of the gross area of the unit that is solid, expressed as a decimal. The shear strain is calculated using Equation 5.

γ = (ΔV + ΔH) / g (5)

where: γ - shearing strain, mm/mm; ΔV - vertical shortening, mm; ΔH - horizontal extension, mm; g - vertical gage length, mm; ΔH must be based on the same gage length as ΔV. The modulus of rigidity, or modulus of elasticity in shear, is calculated using Equation 6.

G = S_s / γ (6)
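The following Python sketch strings Equations 3 to 6 together for a single load stage; all input numbers are illustrative placeholders rather than results from the wallettes tested herein.

```python
# Diagonal tension (shear) test data reduction, Equations 3-6 - a sketch.

def net_area(w, h, t, n):
    """Net area of the wallette, mm^2 (Equation 4)."""
    return (w + h) / 2.0 * t * n

def shear_stress(P, A_n):
    """Shear stress on the net area, MPa (Equation 3)."""
    return 0.707 * P / A_n

def shear_strain(dV, dH, g):
    """Shearing strain (Equation 5); dV and dH share the gage length g."""
    return (dV + dH) / g

# Illustrative values only: 1200 x 1200 mm wallette, 102 mm thick, 55% solid.
A_n = net_area(w=1200.0, h=1200.0, t=102.0, n=0.55)
S_s = shear_stress(P=150e3, A_n=A_n)              # applied load P in N
gamma = shear_strain(dV=0.30, dH=0.25, g=1000.0)
G_rigidity = S_s / gamma                          # modulus of rigidity (Eq. 6)
print(f"S_s = {S_s:.3f} MPa, G = {G_rigidity:.0f} MPa")
```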
Materials test
To determine the compressive strength of the masonry units, tests were conducted according to ASTM C140. Six specimens were tested as shown in Figure 6.a; the loading rate used was 1.27 mm/min. To determine the compressive strength of the mortar, tests were conducted as outlined in ASTM C109. Six specimens were tested as shown in Figure 6.b. To determine the compressive strength of the grout, tests were conducted as specified in ASTM C1019 and ASTM C39. Four specimens were tested as shown in Figure 6.c.
Specimen construction
The construction of the specimens for the bond wrench and shear strength tests and the construction of the wallettes for the diagonal tension tests are summarized in this section.
Bond wrench and shear strength specimens
Prisms were constructed with three blocks for both types of test. A simple wood jig, shown in Figure 7.a, was built to facilitate the construction of the prisms. The prisms were constructed as follows:
1. The top face of the block was moistened (to reduce absorption of the mortar water by the block) and the block was placed on the jig, as shown in Figure 7.b;
2. A full bed of mortar was applied to the surface of the block, as shown in Figure 7.c;
3. The next block was moistened and placed on the mortar bed, as shown in Figure 7.d;
4. A line was used to aid in leveling the block and to maintain the mortar bed thickness as close as possible to the specified thickness;
5. Steps 2 to 4 were repeated for the third block, as shown in Figure 7.e;
6. The excess mortar was struck off with a trowel without disturbing the blocks.
Diagonal tension specimens
Ten wallettes were constructed: five hollow and five grouted. The wallettes were three blocks wide and six blocks high, constructed as follows:
1. The first course was constructed by buttering the webs of the blocks, as shown in Figure 8.a;
2. The top face of the blocks was moistened and a full bed of mortar applied, as shown in Figure 8.b;
3. The second course of blocks was placed and the blocks leveled;
4. Steps 2 and 3 were repeated until the wallette was completed. A complete wallette is shown in Figure 8.c.
Approximately 48 hours after being constructed, five wallettes were grouted as follows:
1. The grout space was cleaned of mortar droppings;
2. The wallettes were moistened;
3. Grout was placed in layers of approximately the height of the block. Each grout layer was rodded 15 times with a tamping rod.
The first layer was rodded to its bottom, while each subsequent layer was rodded to about half the depth of the previous layer;
4. The process was repeated for the other wallettes.
Test results
In this section, the results of the tests conducted are presented.
Materials
The results of the materials tests (the compressive strengths of the blocks, mortar and grout) are presented in the corresponding figures.
Bond wrench
The testing apparatus is shown in Figure 10 with a specimen ready for testing. The specimen was loaded by means of slowly placing sand in the bucket located on the right side of the loading apparatus as shown in Figure 10b. The mortar separated from either the loaded block, as shown in Figure 11a, or from the block below, as shown in Figure 11.b. Thirty joints were tested in total. The average tensile resistance of the mortar joint was 0.08 MPa with a coefficient of variation of approximately 29.5%.
Initial shear strength
There were 30 specimens divided into three groups according to the precompression loading. Since the blocks used had a compressive strength greater than 10 MPa, precompression stresses were approximately 0.2, 0.6, and 1.0 MPa. A specimen ready to be tested is shown in Figure 12a, while specimens after testing are shown in Figures 12b and 12c. The underside of the same specimen is shown in Figure 12d: the cracks through the webs that developed during the testing are visible.
The shear stress-strain responses for precompression levels of 0.2, 0.6 and 1.0 MPa are presented in Figures 13 to 15, respectively, and the average shear stress as a function of precompression is shown in Figure 16.
Diagonal tension (shear)
Tests were conducted as shown in Figure 17. The positioning of the specimen is important since misalignment of the specimen can
cause a shear force or bending moment to be applied to the specimen, both of which are undesirable. Positioning of the instruments measuring the deformations is also important because the modulus of rigidity of the masonry will be based on these measurements.
Analysis
In addition to the results obtained in this research, results from tests conducted by others are also presented and discussed.
Blocks
Figure 19 presents the shear stress vs strain response for hollow masonry, and Figure 20 for grouted masonry.
As summarized in Table 2, the value obtained in this research represents approximately 82% of the average compressive strength obtained using the values obtained by others. When using the data for blocks of scale 1:1 only, as summarized in Table 3, the value obtained in this research represents approximately 83% of the average compressive strength obtained by others. This simple analysis shows that the ratio of the compressive strength of the blocks obtained herein to that obtained using results of tests conducted by others is independent of the block scale. Further, the analysis confirms that, in order to model a particular form of masonry, the data associated with that masonry need to be used: a single set of data, for example, will not represent all concrete blockwork from all over the world.
Initial shear strength
The parameters considered in this analysis are described below:
1. The initial shear strength and the angle of internal friction: these values are obtained from the regression of the average shear strength for each precompression stress;
2. The modulus of rigidity of the interface: this value is obtained from the stress vs strain curve. Herein, the modulus of rigidity is considered the same for both shear directions;
3. The modulus of rigidity of the interface, obtained using the equation proposed by Lourenço et al. (2004), considering the modulus of elasticity of the block, the modulus of elasticity of the mortar, and the Poisson's ratio.
In addition to the results obtained herein for the initial shear strength and the angle of internal friction, the results from tests conducted by five other researchers are considered. The initial shear strength and the angle of internal friction results are summarized in Table 4. The ratios between the initial shear strength and angle of internal friction obtained in this research and those obtained from tests by other researchers are 0.40 and 0.67, respectively. Most likely, the reason for the smaller values obtained herein is that half-scale blocks were used; the blocks used for the specimens in the other tests were full scale. Such a difference has also been observed by others, as summarized in Table 5. Based on the values shown, the relationship between the shear strength for a half-scale block and that of a full-scale block is approximately 0.5; the same ratio is obtained for the angle of internal friction. The values of the modulus of rigidity for each level of precompression are summarized in Table 6.
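The equation proposed by Lourenço et al. is referenced above but not reproduced in the text; the form commonly cited in the simplified micro-modelling literature for the normal and shear stiffnesses of a zero-thickness block-mortar interface is given below as an assumption, where E_u and G_u are the elastic and shear moduli of the unit, E_m and G_m those of the mortar, and h_m the joint thickness (the shear moduli follow from the Poisson's ratio mentioned in the text).

```latex
% Interface stiffnesses in simplified micro-modelling (commonly cited form,
% reproduced here as an assumption):
k_n = \frac{E_u \, E_m}{h_m \left( E_u - E_m \right)}, \qquad
k_s = \frac{G_u \, G_m}{h_m \left( G_u - G_m \right)}
```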
In general, the stress-strain relationship is related to the size (or scale) of the block used: as the size increases or decreases, the behavior changes, as depicted in Figure 21. The corresponding values reported in the literature are summarized in Table 7 and vary tremendously, again emphasizing the need for data for the particular masonry being modelled. These values have been used during numerical modeling, specifically for micro-modeling of the mortar-block interface.
Bond wrench
A literature search was conducted to find published results for the flexural tensile strength of the mortar joint determined according to AS 3700 (2001). Unfortunately, not many published results were found, and those that were are for full-scale blocks. The results are summarized in Table 8. When the number of specimens tested is less than 30, the standard specifies that the coefficient of variation must be less than 30%; the coefficient of variation for the results obtained during this research was 29.5%. Pavía and Henley (2010) concluded that the mortar strength depends on various factors, including the block geometry, the mortar type, and the curing time of the specimen. Barr et al. (2015) conducted tests immediately after, 1 minute after, and 15 minutes after construction of the specimens, and concluded that the strength of the mortar is a function of the water absorbed by the block. Thus, although the average value obtained herein is only about 40% of the average value obtained from the results of the tests conducted by other researchers, the value can still be considered satisfactory.
Diagonal tension
The results of the analysis are summarized in Table 9 for the hollow masonry and in Table 10 for the grouted masonry. There is a large scatter in the results obtained for hollow masonry. The reason is that only the mortar joint resists the applied load and, as mentioned earlier, mortar strength varies tremendously and is difficult to determine using current testing methods. For grouted masonry, however, the values are more consistent because the grout prevents large relative displacement between the block and the mortar at their interface. Thus, the grouted specimens resist larger applied loads but displace significantly less than their hollow counterparts. Although the grouted specimens can resist more load, their shear strength is lower than that of their hollow counterparts because the area resisting the shear is significantly larger.
Concluding remarks
Based on the results and analysis presented, the following conclusions can be made:
1. Blocks
- The stress-strain relationships of the tested blocks are in good agreement with each other;
- The compressive strength of the block is independent of the block scale;
- The modulus of elasticity of the block is within the range of values obtained by other researchers.
2. Initial shear strength
- The higher precompression level appears to change the mode of failure of the specimens slightly: cracks developed in the webs of the block at a precompression of 1.0 MPa, but no cracks were observed for the two smaller precompression levels;
- Once one of the mortar joints fails, the central block of the specimen locks, causing an increase in capacity;
- Considering the scale effect, the values for the initial shear strength and angle of internal friction obtained in this research are consistent with those obtained in other studies.
3. Bond wrench
- The test is very sensitive and must be conducted very carefully;
- The rupture observed was consistent with that observed by others;
- The flexural tensile strength of the block-mortar interface has significant variability;
- The value obtained herein differs slightly from those obtained by other researchers.
4. Diagonal tension
- The mode of rupture appears to be independent of the block scale used;
- The mode of rupture is independent of the type of masonry: both hollow and grouted masonry experienced the same type of failure;
- The value obtained herein is slightly smaller than that obtained by other researchers, due to the block size used;
- The moduli of rigidity obtained herein are smaller than those obtained by other researchers, due to the block size used;
- The modulus of rigidity of hollow masonry varies tremendously because it depends significantly on the mortar-block interface strength.
5. Modelling
- There is so much variation in the data that the values of the parameters needed to model a particular form of masonry should be determined for that masonry.
Acknowledgements
The authors are grateful for the financial support of CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) | 2020-07-16T09:06:19.386Z | 2020-06-01T00:00:00.000 | {
"year": 2020,
"sha1": "333c5d2b7d646ade0303703077bf3417828032c2",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/riem/a/k7dwBJkWHMg5zN9pYyGgqsc/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3fd0da40383033d1c8ad72c202674920afe2a593",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
216273452 | pes2o/s2orc | v3-fos-license | Multi-Criteria Optimisation of an Experimental Complex of Single-Family Nearly Zero-Energy Buildings
The Directive 2010/31/EU on the energy performance of buildings has introduced the standard of "nearly zero-energy buildings" (NZEBs). European requirements place the obligation to reduce energy consumption on all European Union Member States, particularly in sectors with significant energy consumption indicators. Construction is one such sector, as it is responsible for around 40% of overall energy consumption. Apart from a building's mass and its material and installation solutions, its energy consumption is also affected by its placement relative to other buildings. A proper urban layout can also lead to a reduction in project development and occupancy costs. The goal of this article is to present a method of optimising single-family house complexes that takes elements such as direct construction costs, construction site organisation, urban layout and occupancy costs into consideration in the context of sustainability. Its authors have analysed different proposals of the placement of 40 NZEBs relative to each other and have carried out a multi-criteria analysis of the complex, determining optimal solutions that are compliant with the precepts of sustainability. The results indicated that the layout composed of semi-detached houses scored the highest among the proposed layouts under the parameter weights set by the developer. This layout also scored the highest when parameter weights were uniformly distributed during a test simulation.
Introduction
Designing and building a complex of single-family houses, with a holistic focus on sustainable development, and construction and occupancy costs, is a problem that is very difficult to solve. It is a multi-aspect problem, in which decision variables include, among others: selecting construction materials, building services, the parameters of the shape of the buildings, their placement on the site, the sequence in which individual buildings are to be constructed and the completion deadlines for individual tasks. During the occupancy stage, one should add an analysis of the impact of the change in energy costs. When considering the entire life-cycle of the buildings, renovation and repair costs should also be included, in addition to any remodelling and demolition expenses. In recent years, the focus on the energy efficiency and pollution emission of buildings has grown exponentially and has had an immense impact on the real estate market and property values. The paper [1] presents an analysis of the profitability of investing in a selection of three systems based on renewable energy sources, using a semi-detached house as an example.
The Scale of the Problem
According to the Eurostat Agency [30], in 2017, as much as 33.6% of the population of the European Union resided in detached single-family houses, while 24.0% of the population lived in semi-detached single-family houses. According to data provided by the Polish General Construction Inspector's Office [31], in 2018, a total of 98,915 building permit applications were filed for residential buildings, out of which 93,714 building permits were issued for single-family houses (almost 94.8%), which constituted a 3% increase from the previous year [31]. The results of statistical studies presented by BPD Europe BV [32] indicate that 12% of respondents from The Netherlands, 16% of respondents from France and 17% of respondents from Germany who had been searching for a new place of residence pointed to a single-family house as a preferred option in a situation in which they would have to move to a different dwelling. This proves that single-family buildings remain popular in many developed countries, while in countries such as Poland they continue to enjoy very high popularity. The design of single-family building complexes can therefore be considered an important field, both to planners who design urban layouts for individual projects, as well as designers who work on projects composed of many such buildings.
The necessity of considering parameters associated with energy efficiency is dictated by, among other things, the current energy policy vector of the European Union, expressed by such documents as Directive 2010/31/EU on the energy performance of buildings [33]. This EU document introduced the following definition of nearly zero-energy buildings (NZEB): "'nearly zero-energy building' means a building that has a very high energy performance, as determined in accordance with Annex I. The nearly zero or very low amount of energy required should be covered to a very significant extent by energy from renewable sources, including energy from renewable sources produced on-site or nearby". However, every European Union Member State has adopted its own NZEB parameters, which are adapted to local conditions. The implementation of the Directive into Polish state law has introduced a new standard of nearly zero-energy buildings (NZEBs). The document that has been amended to include the standard, as stipulated in the Directive, is the Ordinance on the Technical Conditions that Must be Met by Buildings and their Placement [34]. In the Polish definition of NZEBs there are two criteria. The first criterion is ensuring proper building envelope thermal insulation. The second criterion for NZEB buildings in Poland applies to Primary Energy (PE), which cannot exceed a set value for different types of buildings. For single-family residential buildings without cooling installations, the maximum PE value is 70 kWh/(m²/annum). The buildings analysed in the article meet both criteria, thus being compliant with NZEB standards as stipulated in Polish regulations.
Selecting the appropriate orientation of buildings, their shape, construction materials, building services and shading system solutions leads to lower energy demand and costs and affects the comfort of use of the buildings themselves, reducing overheating. This subject has been discussed in, among others, [35][36][37][38][39][40][41][42]. It can therefore be argued that the application of research findings in the field of energy efficiency and thermal comfort optimisation in the design of buildings and complexes that enjoy significant popularity on the real estate market should be incorporated into current design methodology. It can also be argued that by increasing the energy efficiency of buildings forming market-preferred layouts one can contribute to increasing the popularity of such solutions in areas where detached or semi-detached houses are the most popular.
Cost Estimation Methods
Calculating the overall costs of construction is a difficult task. Typically, the overall costs of construction are divided into indirect and direct costs. However, this is a major oversimplification. Situations that can significantly affect the expenditures of the contractor can take place over the course of working on a construction project. Some of these are difficult to predict (e.g., natural disasters, economic and social changes), while others can be prepared for during a project's planning stage. The predicted additional costs can include penalties for failing to meet planned deadlines or penalties for failing to maintain the continuity of construction crew operation.
Furthermore, every project is subjected to certain technological and organisational constraints, such as the time during which construction crews or buildings are available, or technological and organisational requirements concerning ongoing work performed by a given crew on a specific building. Numerous cost estimation or total construction project cost optimisation methods have been developed. They usually utilise metaheuristic methods, such as simulated annealing, hybrid algorithms [43] and genetic algorithms [44]. Mathematical programming methods are also used, such as the Decision model for planning material supply channels in construction [45]. In [46], an innovative approach was proposed, in the form of a stochastic decision network. It enables the integration of random and decision-based aspects of planned projects. This approach allows the decision-maker to obtain a solution that is optimal in terms of the expected time and cost. They can do so by defining their preferences as to the results of the planned project and setting their risk aversion accordingly. In the case of multi-criteria analysis of time and cost, the decision-maker can also define weight values for the expected project time and cost. The methods listed above do not incorporate all of the essential costs of carrying out construction projects, such as direct costs, indirect costs, penalties for failing to meet contractual deadlines, as well as the costs of discontinuities in the work of construction crews. They also do not take into account technological and organisational constraints. Furthermore, metaheuristic methods involve lengthy calculations when compared to linear programming.
Previously Used Methods
Multi-criteria decision-making analysis methods that have been applied in urban design and spatial planning have been predominantly focused on large-scale planning. As Afshari and Vatanparast [6] have indicated, such proposals utilised the Preference Ranking Organisation method for Enrichment Evaluation (PROMETHEE and PROMETHEE-2) in combination with the Hasse Diagram Technique (HDT), the AHP method, fuzzy AHP, Weighted Linear Combination (WLC), a combination of the AHP and TOPSIS methods, the ELECTRE-2 method (ELimination Et Choix Traduisant la REalité), the EXPROM-2 method (an extended version of the PROMETHEE method), as well as evolutionary algorithms. Guarini, Battisti and Chiovitti [23] also pointed to the possible use of Multi-Attribute Utility Theory (MAUT) and Measuring Attractiveness by a Categorical Based Evaluation Technique (MACBETH).
The selection of the multi-criteria analysis method depends on the specificity of the decision problem at hand. For instance, in [47] the author referred to studies in which they used the ELECTRE III and AHP methods to analyse the possibility of using an alternative version of the BIPOLAR method to solve the problem of selecting a construction company for partnering cooperation during a construction project. The study included a calculation example and verified the results of previous studies by applying an alternative BIPOLAR method version. Ultimately, its use produced the same end result as when using two other methods.
Multi-criteria methods that incorporate co-dependencies between decision-making criteria and alternatives have seen considerable development in recent years. In [48] the authors analysed a multi-criteria problem of selecting a new function for historical buildings in light of the proposed selection criteria. Apart from the economic aspect and cultural heritage, these criteria also included sustainability. The Weighted Influence Non-linear Gauge System method (WINGS), which was extended to incorporate the uncertainty of expert opinions and their aggregation, was applied to model the structure of the problem and analyse the dependencies between alternatives and criteria. The method had to be extended because of the specificity of the problem discussed in the paper. The majority of previously used methods did not analyse interdependencies between decision-making criteria despite such interdependencies actually existing, as proven by the authors. Similarly, the authors of [49] proposed a multi-criteria hybrid model, using the Decision Making Trial and Evaluation Laboratory method (DEMATEL) and the Analytic Network Process (ANP) to select a utility function for the purpose of adapting the building of 'Stara Polana', located in Zakopane. Both in this publication and in [50], a group of experts and specialists used a questionnaire to assess dependencies between factors that affect the criterion of societal benefits and the benefits associated with preserving cultural heritage. These were some of the parameters selected for analysis and rating in association with the investigated alternative forms of adaptive reuse of the Stara Polana heritage site in Zakopane. The remaining parameters, associated with, among others, acoustics, vibrations or thermal comfort, were studied using specialist technical instruments. These interdisciplinary studies became a basis for a multi-criteria analysis performed using a proposed hybrid model of selecting a form of use for a heritage site adaptation project.
The Significance of the Method Proposed in the Article
The innovative method proposed in this paper is an important contribution to the professional toolkit of designers and can significantly aid real estate developers in tailoring their offering. The method presented by the authors includes elements and criteria that are important during many stages of the design process: the conceptual urban design and site plan stage, the conceptual and technical architectural stage, as well as the structural system design and building services design stages. It also pertains to project cost assessment in the context of building single-family residential NZEBs, which are an important part of the housing sector. This approach can be considered a holistic perspective on the design task and the project as a whole, which is both necessary and desirable from the point of view of sustainable design. The necessity of this approach has been confirmed by, among others, Garau and Pavan [51] in a broader context. The method developed by the authors assumes a multi-criteria analysis that includes aspects of sustainability, as well as the costs of the construction and occupancy of a complex of single-family houses. Sustainability parameters, such as the energy demand of the buildings and greenhouse gas emissions (depending on the given building variants and their siting), have been determined through analyses using the EnergyPlus tool [52]. The energy analysis took the life-cycle stages of the buildings into consideration. The cost parameter was determined by including direct and indirect costs, the costs of construction (taking technological and organisational constraints into consideration), as well as the costs of building occupancy; the authors performed calculations for different building types and urban layout alternatives. Indirect and direct costs, as well as construction costs, were determined using a specifically tailored optimisation model and the concept of priority scheduling. Occupancy costs were calculated using energy demand and the discount rate. Using the previously selected parameters as a basis, the authors carried out a multi-criteria analysis using the Weighted Aggregated Sum Product Assessment (WASPAS) method. This made it possible to determine the optimal solution for the problem in question.
Method Overview
The design optimisation task analysed by the authors is an actual design problem. The analysed complex of forty experimental buildings was, at the time of the writing of the paper, planned to be constructed in the municipality of Mogilany near Krakow, Poland. The developer must select one of several urban layout alternatives, which is to be optimal in terms of cost and energy demand minimisation, in addition to being compliant with the precepts of sustainable development and real estate market preferences. The urban layouts under analysis can differ in terms of building siting, as well as the size or type of buildings (detached, semi-detached, terraced). Criteria values were determined for these layouts. The criteria adopted for the analysis included: overall construction cost, building energy demand, greenhouse gas emissions associated with building occupancy, building thermal comfort and an assessment of the urban layout by potential buyers. After setting parameter values for all alternatives, the decision-maker (developer) can set the weights of each parameter. After preparing the data in the listed manner, the authors conducted a multi-criteria optimisation using the WASPAS method. A block scheme of the proposed method is presented in Figure 1. The method of setting the values of individual criteria is presented in the following sections of the article.
Overall Construction Cost
Despite the contribution of the aforementioned methods, the approach prepared by the authors of this paper features numerous advantages. Its greatest advantage is that it describes all of the included cost dependencies (direct costs, indirect costs, penalties for failing to meet contractual deadlines, penalties for failing to ensure continuity in the work of construction crews) and constraints in the form of a linear programming model. This allows one to quickly find the optimal solution (e.g., using the Simplex method). If we allow the possibility of altering the sequence of building the structures, the rapid calculation of the goal function in the discrete optimisation algorithm can be considered a significant advantage. The proposed model permits the overlap of the work of construction crews on different plots, which is a good reflection of the working conditions of an actual construction site. Furthermore, the model allows for flexible adherence to technological and organisational constraints. This means that the model permits certain departures from the constraints defined by the designer. However, these departures are optimal (minimal). This is of particular significance in the case of projects with a large (even exceedingly large) number of constraints, as one can encounter a situation where it is impossible to meet all of them. The model's disadvantage is that it describes time-cost dependencies in linear form, which is a considerable simplification. Furthermore, the sequence of building the structures cannot be changed in the model. Both of these disadvantages will be studied further by the authors. The individual sections of the model have been discussed in [53,54].
The model has been defined as follows. In order for the presented model to be applied, the following data are required (1)-(3):
• tgr_i,j - limit completion time for the task performed on building i by crew j;
• kgr_i,j - limit cost of the task performed on building i by crew j;
together with the corresponding standard (normal) completion times and costs for each task. Additional given parameters enable the establishment of technological and organisational constraints; they have been described in the article at length [52]. The model's decision variables (4) are as follows:
• t_i,j - task duration (i,j);
• R_i,j - task initiation time (i,j);
• Z_i,j - task completion time (i,j);
• p_i - delay value relative to the planned deadline for building i;
• in addition to the variables CK_OL(i,j), CK_OU(i,j), CK_BL(i,j) and CK_BU(i,j), which make it possible to establish the technological and organisational constraints.
The goal function (5) is the sum of all essential project costs. Direct cost K_dir (6) is the sum of the costs of all works performed by all crews on all buildings, depending on the completion time of a given task; direct cost includes the cost of labour, materials and equipment. It is assumed that a reduction in the task completion time, from the standard completion time down to the limit time, is possible but requires additional expense. The dependency between the cost of completing a given task and its duration is based on the set standard and limit costs and is assumed to be linear. Therefore, the popular Critical Path Method Cost (CPM-COST) approach was incorporated into the model [55]. The CPM-COST method enables the calculation of minimum direct costs, assuming the ability to shorten task completion times by incurring additional expenses when a fixed contractual deadline is imposed. Element (6) of the goal function and constraints (12)-(18) apply the assumptions of the CPM-COST method, extended to include flexible time couplings. Had the model included only (6) and (12)-(18), with parameters CT and CK removed and with the addition of the condition of meeting a contractual deadline, it would have been a classical CPM-COST model. The values a_i,j and b_i,j of the linear time-cost relationships can be calculated using equations (25) and (26).
Indirect cost K_indir (7) depends on the indirect cost per unit of time and on the completion time of the entire multiple-structure project: K_indir = Z_n,m · k_indir (7), where Z_n,m is the completion time of the last task of the project and k_indir is the indirect cost per unit of time. The indirect unit cost can be assessed using either a detailed or an indicator-based method.
Costs associated with failing to meet planned deadlines, K_p (8), are set as the sum of the products of the unit cost of failing to meet a deadline and the length of the delay, over all buildings. The unit cost of failing to meet a planned deadline can vary between buildings. The penalties for such failures are defined in the contract between the developer and the buyer and typically range between 0.05 and 1% of the contract value for every day of delay. For instance, in [56] the authors define the cost of failing to meet contractual deadlines as ca. 0.1% of contract value per day of delay. Additional conditions depend on the country and its legal environment.
Costs associated with pauses in the operation of crews, K_c (9), are calculated as the sum of the products of the unit cost of pauses in crew operation and the length of the pause, over all crews. The pause length is calculated on the basis of the earliest time the crew can complete its tasks on all buildings, the earliest time of initiation of work on the first building and the task completion times of the crew on all buildings.
In addition, the goal function includes the costs associated with failing to meet flexible time couplings between tasks performed on successive buildings by the same crew, K_CO (10), as well as flexible time couplings between tasks performed by successive crews on each building, K_CB (11). For both types of couplings, their contribution to the goal function is calculated as the sum of the product of the violation of the lower value of the flexible time coupling and the unit cost of coupling failure, and the product of the violation of the upper value of the flexible time coupling and the unit cost of coupling failure.
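For reference, the cost components described in this subsection can be written compactly as below. This is our reconstruction from the prose, so the exact notation of the original equations (5)-(9) may differ; c_j denotes the total pause length of crew j, and a_i,j and b_i,j are the intercept and slope of the linear time-cost relationship.

```latex
\min K = K_{dir} + K_{indir} + K_{p} + K_{c} + K_{CO} + K_{CB} \quad (5) \\
K_{dir} = \sum_{i=1}^{n} \sum_{j=1}^{m} \left( a_{i,j} - b_{i,j}\, t_{i,j} \right) \quad (6) \\
K_{indir} = Z_{n,m}\, k_{indir} \quad (7) \\
K_{p} = \sum_{i=1}^{n} k_{p,i}\, p_{i} \quad (8) \\
K_{c} = \sum_{j=1}^{m} k_{c,j}\, c_{j} \quad (9)
```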
The following constraints have been included in the model. Constraints (12) and (13) constrain the task completion time to the limits between limit time and normal time. Constraint (14) enables the calculation of task completion times. Constraints (15)-(18) make it possible to determine the value of failing to meet appropriate flexible time couplings. Constraint (19) makes it possible to calculate the value of failing to meet the deadline of completing a given building. Constraints (20)-(23) implement the assumptions concerning the availability of every building and every crew. All decision variables used in the model are to take on non-negative values (24).
The goal function and all constraints are linear, which makes this a linear programming model. The Simplex method was used to solve the model [57].
The model described above is used to optimise a pre-set sequence of constructing the buildings. In general, this sequence can be modifiable; changing the building completion sequence affects the time and cost of completing the project. The authors plan to expand the model so that it will be able to optimise the sequence of constructing the buildings. The proposed model was implemented using the Python programming language. The program can be obtained from the authors.
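Since the full implementation is only available from the authors, the sketch below shows a deliberately simplified version of the scheduling model in Python with PuLP: two crews flow through three buildings in a fixed sequence, direct cost follows the linear CPM-COST trade-off, and indirect cost and delay penalties are added. Flexible time couplings and crew-pause costs are omitted, and all numbers are illustrative.

```python
# Simplified sketch of the scheduling LP (not the authors' full model).
# Requires: pip install pulp
import pulp

buildings, crews = range(3), range(2)
tn = {(i, j): 10 for i in buildings for j in crews}    # normal durations, days
tg = {(i, j): 6 for i in buildings for j in crews}     # limit (crash) durations
kn = {(i, j): 50.0 for i in buildings for j in crews}  # normal direct costs
kg = {(i, j): 80.0 for i in buildings for j in crews}  # limit (crash) costs
k_indir = 3.0                                          # indirect cost per day
k_pen = {i: 5.0 for i in buildings}                    # penalty per day of delay
deadline = {i: 30 + 8 * i for i in buildings}          # planned handover days

prob = pulp.LpProblem("housing_schedule", pulp.LpMinimize)
t = pulp.LpVariable.dicts("t", (buildings, crews), lowBound=0)      # durations
S = pulp.LpVariable.dicts("start", (buildings, crews), lowBound=0)  # start times
p = pulp.LpVariable.dicts("delay", buildings, lowBound=0)
T = pulp.LpVariable("total_time", lowBound=0)

def direct_cost(i, j):
    # CPM-COST: direct cost linear in duration, k = a - b*t.
    b = (kg[i, j] - kn[i, j]) / (tn[i, j] - tg[i, j])  # cost slope
    a = kn[i, j] + b * tn[i, j]
    return a - b * t[i][j]

prob += (pulp.lpSum(direct_cost(i, j) for i in buildings for j in crews)
         + k_indir * T
         + pulp.lpSum(k_pen[i] * p[i] for i in buildings))

for i in buildings:
    for j in crews:
        prob += t[i][j] >= tg[i, j]
        prob += t[i][j] <= tn[i, j]
        if j > 0:   # crew j starts on building i after crew j-1 finishes there
            prob += S[i][j] >= S[i][j - 1] + t[i][j - 1]
        if i > 0:   # crew j moves to building i after finishing building i-1
            prob += S[i][j] >= S[i - 1][j] + t[i - 1][j]
    last = len(crews) - 1
    prob += p[i] >= S[i][last] + t[i][last] - deadline[i]  # delay beyond plan
    prob += T >= S[i][last] + t[i][last]                   # project duration

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("total cost:", pulp.value(prob.objective))
```

The same structure extends directly to the forty buildings, full crew set and coupling terms of the case study.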
Energy Consumption
One of the criteria for assessing nearly zero-energy buildings (NZEBs) that is applicable in Poland is Primary Energy (PE), which cannot exceed the values stipulated in [34] and which differs depending on building type. For single-family residential buildings without cooling installations, the maximum PE value is 70 kWh/(m²/annum). The Primary Energy (PE) values for the analysed buildings are lower than the maximum values allowed by Polish law. Primary Energy values predominantly refer to the building's sources of heating and electric power. Final Energy (FE) is a more precise indicator of material and building services solutions, as it results from a building's overall energy performance balance. This is why this indicator was incorporated into the multi-criteria analysis presented in the article.
The energy consumption simulation for all of the analysed alternatives was performed using DesignBuilder, a program enabling dynamic simulations based on an hourly calculation method. The calculation engine of the program is EnergyPlus. DesignBuilder uses EnergyPlus-format hourly weather data to define external conditions during simulations. These hourly weather data sets are derived from hourly observations at the chosen location, in this case Krakow, Poland.
CO2 Emissions
A building consumes various natural resources, including water, materials and energy, and releases many pollutants and emissions during its life-cycle, i.e., from raw material extraction to the final treatment of waste from the building's demolition [58]. The authors assume that the emissions generated by construction material production, the construction of the buildings or their demolition will be highly similar for both building variants (detached, semi-detached). This is why only occupancy-related CO2 emissions were incorporated into the analysis, particularly as this is concordant with the building energy performance calculations stipulated in Polish law [34].
The CO2 emission in EnergyPlus is calculated from fuel totals using a locally published CO2 emission factor for the specific region, in this case Krakow. The consumed fuel (electricity for lighting and gas for DHW and heating) is multiplied by the carbon emission factor. Based on the emission factors entered for a specific location, EnergyPlus calculates the mass or volume of CO2 (carbon dioxide) [52]. The amount of CO2 produced depends on the carbon content of the fuel, while the amount of heat produced depends on its carbon and hydrogen content. Because natural gas, which is mostly methane (CH4), has a high hydrogen content, combustion of natural gas produces less CO2 for the same amount of heat than burning other fossil fuels [59].
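A minimal Python sketch of this occupancy-stage accounting is shown below; the emission factors are placeholders, not the locally published values for Krakow used in the study.

```python
# Occupancy-stage CO2: annual fuel totals multiplied by fuel-specific factors.
EMISSION_FACTORS = {          # kg CO2 per kWh of delivered energy (assumed)
    "electricity": 0.70,
    "natural_gas": 0.20,
}

def annual_co2(fuel_totals_kwh):
    """Return total CO2 in kg from a dict of annual fuel use in kWh."""
    return sum(kwh * EMISSION_FACTORS[fuel]
               for fuel, kwh in fuel_totals_kwh.items())

# Example: lighting electricity plus gas for heating and DHW (invented totals).
print(annual_co2({"electricity": 1800.0, "natural_gas": 9500.0}))
```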
Thermal Comfort
The methodology for determining thermal comfort is based on PN-EN ISO 7730 [60]. The thermal insulation of clothing was determined based on the PN-EN ISO 9920:2009 standard [61].
Thermal comfort analyses were processed, in accordance with ANSI/ASHRAE Standard 55-2010, for indoor winter clothing (I_cl = 1 clo) over a typical winter week and for summer clothing (I_cl = 0.5 clo) over a typical summer week. For transitional periods of the year, a clothing insulation of I_cl = 0.7 clo (clothing worn indoors in the transitional season) is assumed. Determining the PMV (Predicted Mean Vote) indicator requires the calculation of the physical parameters of the analysed space, while considering the energy expenditure and the thermal insulation of the clothing of the persons inside. The calculated PMV value is compared with a seven-point psychophysical scale of thermal sensation developed by Fanger [62] (Table 1). A thermally neutral (comfortable) environment in terms of microclimate can be considered to be within the range -0.5 < PMV < +0.5 [63]. Thermal comfort sensations on the Fanger scale have been presented in Table 1. The thermal comfort evaluation analysis was performed for two periods:
• a typical summer week: a week identified by the weather data translator software as being typical of summer;
• a typical winter week: a week identified by the weather data translator software as being typical of winter.
The authors assumed that these periods will be suitable for the purpose of this paper as they represent weather conditions that are repeatable and most likely to happen every year. The Polish thermal comfort standard [60] does not indicate specific periods of the year for measurements.
DesignBuilder uses three different mathematical models to calculate thermal comfort: the Fanger Comfort Model, the Pierce Foundation model (the Pierce Two-Node Model), and the model developed by researchers from Kansas State University (the KSU Two-Node Model). For the purpose of this paper, the Fanger model was chosen and applied for the thermal comfort analysis, based on the equation from ISO 7730. All PMV values are taken from the dynamic simulation outputs.
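For readers who wish to reproduce PMV values outside DesignBuilder, the following self-contained Python sketch implements the Fanger equation following the ISO 7730 reference algorithm; the example inputs are illustrative, and any production use should be validated against the standard's test cases.

```python
import math

def pmv_iso7730(ta, tr, vel, rh, met=1.2, clo=0.7, wme=0.0):
    """Predicted Mean Vote per the ISO 7730 reference algorithm.

    ta  -- air temperature, C;   tr -- mean radiant temperature, C
    vel -- relative air speed, m/s;  rh -- relative humidity, %
    met -- metabolic rate, met;  clo -- clothing insulation, clo
    wme -- external work, met (usually 0)
    """
    pa = rh * 10.0 * math.exp(16.6536 - 4030.183 / (ta + 235.0))  # vapour pressure, Pa
    icl = 0.155 * clo               # clothing insulation, m2K/W
    m = met * 58.15                 # metabolic rate, W/m2
    mw = m - wme * 58.15
    fcl = 1.05 + 0.645 * icl if icl > 0.078 else 1.0 + 1.29 * icl
    hcf = 12.1 * math.sqrt(vel)     # forced convection coefficient
    taa, tra = ta + 273.0, tr + 273.0

    # Iterate for the clothing surface temperature.
    tcla = taa + (35.5 - ta) / (3.5 * icl + 0.1)
    p1 = icl * fcl
    p2 = p1 * 3.96
    p3 = p1 * 100.0
    p4 = p1 * taa
    p5 = 308.7 - 0.028 * mw + p2 * (tra / 100.0) ** 4
    xn, xf = tcla / 100.0, tcla / 50.0
    hc = hcf
    for _ in range(150):
        xf = (xf + xn) / 2.0
        hcn = 2.38 * abs(100.0 * xf - taa) ** 0.25  # natural convection
        hc = max(hcf, hcn)
        xn = (p5 + p4 * hc - p2 * xf ** 4) / (100.0 + p3 * hc)
        if abs(xn - xf) < 0.00015:
            break
    tcl = 100.0 * xn - 273.0

    # Heat loss components, W/m2.
    hl1 = 3.05e-3 * (5733.0 - 6.99 * mw - pa)          # skin diffusion
    hl2 = 0.42 * (mw - 58.15) if mw > 58.15 else 0.0   # sweating
    hl3 = 1.7e-5 * m * (5867.0 - pa)                   # latent respiration
    hl4 = 0.0014 * m * (34.0 - ta)                     # dry respiration
    hl5 = 3.96 * fcl * (xn ** 4 - (tra / 100.0) ** 4)  # radiation
    hl6 = fcl * hc * (tcl - ta)                        # convection

    ts = 0.303 * math.exp(-0.036 * m) + 0.028          # sensation coefficient
    return ts * (mw - hl1 - hl2 - hl3 - hl4 - hl5 - hl6)

# Example: a summer condition with light clothing (0.5 clo).
print(round(pmv_iso7730(ta=26.0, tr=26.0, vel=0.1, rh=50.0, clo=0.5), 2))
```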
The results of the thermal comfort simulation analysis will be compared with in-situ measurements when similar conditions occur once all of the buildings have been completed. One of the hottest days of the year was selected for the comparative analysis, i.e., 12.07.2018. The simulation was performed for the living rooms of each building.
Urban Layout
The selection of the urban layout of a housing complex must take the preferences of future residents into consideration. In order to determine the optimal layout, a survey was conducted among the potential users of the housing complex's buildings. The respondents were presented with three urban layouts (Layout A, Layout B and Layout C) and asked to select the optimal urban layout in terms of housing comfort, their preferences and convenience. The survey was filled out by 31 middle-aged persons (30-50 years of age). The survey featured a three-point scale: a rating of 1 was given to the layout considered the most desirable by the respondent, a rating of 2 to the next-best layout and a rating of 3 to the least desirable layout from the point of view of the adopted assessment criteria.
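Aggregating such a ranking survey is straightforward; the sketch below computes the mean rank per layout (lower is better) from invented example responses.

```python
from statistics import mean

# One dict per respondent, 31 in total in the actual survey; ranks 1 (best)
# to 3 (worst). The entries below are invented for demonstration.
responses = [
    {"A": 2, "B": 1, "C": 3},
    {"A": 1, "B": 2, "C": 3},
    # ...
]

mean_rank = {layout: mean(r[layout] for r in responses)
             for layout in ("A", "B", "C")}
ranking = sorted(mean_rank, key=mean_rank.get)  # ascending mean rank
print(mean_rank, "preferred:", ranking[0])
```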
MCDM Analysis-The WASPAS Method
The parameter sets selected by the authors (construction cost, energy demand, CO2 emission, thermal comfort, urban layout assessment) are diverse and cannot be directly compared. This is why the authors decided to utilise a multi-criteria decision-making approach. There are numerous MCDM methods, such as AHP, ANP, ELECTRE, BIPOLAR, TOPSIS and many others [64,65]. For this specific problem, the authors selected the WASPAS method, which combines two other methods: the Weighted Sum Model (WSM) and the Weighted Product Model (WPM) [66][67][68]. The WASPAS method was selected because of its mathematical simplicity and its capacity to provide highly precise results when compared to the WSM and WPM methods. The WASPAS method is currently widely accepted as an effective decision-making support tool [68] and is continuously being developed, e.g., by integrating a fuzzy approach [69,70]. The WASPAS method requires a decision matrix to be prepared, X = [x_ij] (m × n), where x_ij is the value of alternative i in relation to criterion j, m is the number of alternatives and n is the number of criteria. Matrix X is then standardised as follows: for criteria which are stimulants, x̄_ij = x_ij / max_i(x_ij); for criteria which are destimulants, x̄_ij = min_i(x_ij) / x_ij. The importance of alternative i is calculated as the joint criterion Q_i = λ · Σ_j w_j · x̄_ij + (1 - λ) · Π_j (x̄_ij)^w_j, i.e., a weighted combination of the WSM score Q_i(1) and the WPM score Q_i(2). The optimal value of parameter λ can be calculated as λ = σ²(Q_i(2)) / (σ²(Q_i(1)) + σ²(Q_i(2))), where the variances σ²(Q_i(1)) and σ²(Q_i(2)) are obtained by propagating the variances of the standardised criteria. The approximate variance value for the standardised criteria is calculated using the formula σ²(x̄_ij) = (0.05 · x̄_ij)². The criteria adopted for the analysis using the WASPAS method have been presented in Figure 2. Of note is the weight selection process in the WASPAS method. The proposed method has been formulated with the developer in mind, and it is they who make autonomous decisions concerning weight values, based on their own preferences. The method is a decision-making support tool for use by the developer and models their preferences. In specific cases, the developer can use a different method of setting weights, e.g., using experts' opinions. The WASPAS method was applied by the authors using a Microsoft Excel spreadsheet. It is a simple solution that can be helpful to the developer. All that needs to be done is to set the weights and the individual numerical values for the parameters; the spreadsheet will calculate a score for each layout and produce a ranking of said layouts. The Excel file can be obtained from the authors.
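A compact Python sketch of the WASPAS scoring described above is given below; the criterion values, weights and the choice of which criteria are stimulants are illustrative placeholders, not the study's data.

```python
import numpy as np

# Rows: layouts A, B, C. Columns: cost [PLN], final energy [kWh/m2/a],
# CO2 [t/a], share of comfortable hours [-], mean survey rank [-].
X = np.array([
    [9.8e6, 62.0, 3.1, 0.82, 1.8],
    [9.5e6, 58.0, 2.9, 0.88, 1.6],
    [9.9e6, 60.0, 3.0, 0.79, 2.6],
])
w = np.array([0.30, 0.25, 0.20, 0.15, 0.10])           # decision-maker weights
benefit = np.array([False, False, False, True, False])  # stimulant criteria

# Standardisation: x/max for stimulants, min/x for destimulants.
Xn = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)

Q1 = (Xn * w).sum(axis=1)    # Weighted Sum Model score
Q2 = (Xn ** w).prod(axis=1)  # Weighted Product Model score
lam = 0.5                    # the variance-based optimal lambda may be used instead
Q = lam * Q1 + (1 - lam) * Q2

for name, q in zip("ABC", Q):
    print(name, round(q, 4))
```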
Case Study-Housing Complex in Mogilany near Krakow, Poland
The housing complex that is the subject of the case study in this article is planned to be built on a 3-hectare plot in Libertów, in the municipality of Mogilany near Krakow, in the Lesser Poland Voivodship (Poland). The site is adjacent to an expressway which leads in the direction of Zakopane, a popular tourist destination located in the mountains. There is a lot of traffic along this road, particularly during weekends and holidays. The location of the experimental housing complex is shown in Figure 3. The site plan for the plot (MU1) is shown in Figure 3b, and Figure 3c shows an aerial view of the plot. The housing complex was assumed to be composed of 40 single-family residential buildings, with various building layouts (detached and/or semi-detached buildings).
Three proposals for the urban layout of the complex's buildings were analysed (Layouts A, B and C); they are presented in Figure 4. The expressway runs near the plot, to its south-east (SE); it leads north towards Krakow and south towards the holiday resort of Zakopane. The urban layouts that were analysed reflect the preferences of persons who buy single-family houses in Poland in terms of their view of property desirability. Their selection is the result of the actual housing market situation in Poland. The proposed urban layouts are options that the developer offers for sale to their clients on the open market. The authors are aware of the precepts of sustainable design concerning the planning of small-scale urban complexes, but in this case the intent is to provide an iteration of the method for a more market-oriented approach. The authors do not rule out preparing a modification of the method that would feature other sustainability metrics in the future.
Three proposals of the urban layout of the complex's buildings were analysed (Layout A, B and C). They will be presented in Figure 4. The expressway runs near the plot, to its south-east (SE). To the north of the plot is the direction towards Krakow, while from the south, to the holiday resort-Zakopane. The urban layouts that were analysed are a reflection of the preferences of persons who buy single-family houses in Poland in terms of their view of property desirability. Their selection is the result of the actual housing market situation in Poland. The proposed urban layouts are options that the developer offers for sale to their clients on the open market. The authors are aware of the precepts of sustainable design concerning the planning of small-scale urban complexes, but in this case the intent is to provide an iteration of the method for a more market-oriented approach. The authors do not rule out preparing a modification of the method that would feature other sustainability metrics in the future. The housing complex will be composed of buildings with different floor areas and layouts. The types of individual buildings will be presented in Figure 5. Table 2 features a listing of the number of detached and semi-detached buildings for each layout alternative.
The housing complex will be composed of buildings with different floor areas and layouts. The types of individual buildings will be presented in Figure 5. The housing complex will be composed of buildings with different floor areas and layouts. The types of individual buildings will be presented in Figure 5. The buildings under analysis were assumed to be built using timber framing technology with thermal insulation composed of wood wool. The ground floor plan, assumed to have a usable floor area of 110 m 2 , in a detached building, is to be presented in Figure 5a, and in a semi-detached building-in Figure 5b. The geometric models of the buildings subjected to the analysis have been presented in Table 3. B1 type buildings are linked with a common wall that separates the bathrooms of both buildings.
The cost of individual buildings was calculated based on "Biuletyn cen obiektów budowlanych (BCO) część I a-obiekty kubaturowe" (in English: Construction price bulletin, Part Ia-buildings) [71]. It is a cost bulletin that lists the mean prices of buildings in Poland. The BCO can be used to estimate the cost of constructing buildings or their fragments for the purposes of cost assessment or preparing financial schedules for construction projects. Direct cost was assessed using the construction price bulletin and includes labour, construction materials and machinery. The completion time for individual works was estimated on the basis of available material cost catalogues. The values for completion times and costs were assessed for both types of buildings featured in the case under analysis. Table 4 includes the values for a detached building with a usable floor area of 110 m².
In order to determine the overall construction costs, the following parameter values were assumed:
• indirect construction costs, estimated at 400 €/day. Indirect costs include the general costs of construction (wages for the site director, works managers, construction site infrastructure costs), as well as management costs (the cost of maintaining the project, company offices, costs of office work, etc.). Indirect costs were assessed as per the construction price bulletin, which lists the value of indirect costs at a level of ca. 15% of the total cost. The cost per day was calculated using an estimate of the project's duration.
• costs of failing to meet planned deadlines, estimated at 150 € per day of delay. Penalties for failing to meet planned deadlines are defined in contracts between the developer and the buyer of the building. There is no official data on this subject and, in the experience of the authors, such penalties in Poland can range from 0.05% to even 1% of a contract's value per day of delay. It should be noted that penalties amounting to 0.5% of a contract's value per day are not considered unjustified by Polish courts of law. Furthermore, the penalty is not applied when the developer cannot be held responsible for the cause of the delay. It is impossible to factor in, during the construction planning stage, unforeseen events that can cause delays and for which the developer is not responsible, which is why this aspect was ignored in the paper. A penalty value of 0.2% per day per individual building was assumed in the paper.
• it was assumed that agreements with some contractors specified penalties for failing to ensure the continuity of crew operation. For the third crew, a 150 € penalty for each day of pause in the work was assumed, with 200 € per day assumed for the fourth crew. The penalty value for failing to ensure crew work continuity is a direct result of contracts with specific contractors. If a contractor performs a highly specialised section of the work, these penalties will be higher. This is why the third crew, which was assumed to assemble the innovative roof structure, and the fourth crew, which was assumed to handle building services installation, were assigned such high penalties for work discontinuity.
• it was determined that the first building is to be completed within 90 days, with every subsequent building to be completed 14 days after the completion of the previous one. Planned deadlines are a direct consequence of contracts or agreements between potential buyers and the developer.
In addition, it was assumed that technological continuity was to be maintained between footing, foundation wall construction and insulation application, as well as the construction of load-bearing walls, decks, roof structure and roofing. Table 4 includes the standard and limit costs of construction work, as well as the standard and limit completion times for individual works.
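The following sketch shows how the cost components listed above can be rolled up. Only the unit rates (400 €/day indirect costs, 150 €/day delay penalty, 150 and 200 €/day crew-discontinuity penalties) come from the text; the direct cost, schedule length and delay figures are hypothetical inputs.

```python
# Illustrative cost roll-up for the parameters listed above; input figures
# other than the unit rates are made up for demonstration.

INDIRECT_PER_DAY = 400        # EUR, site overheads + management costs
DELAY_PENALTY_PER_DAY = 150   # EUR per day a building is completed late
CREW_PAUSE_PENALTY = {3: 150, 4: 200}  # EUR per idle day for crews 3 and 4

def total_cost(direct_cost, project_days, delay_days, crew_idle_days):
    """Direct cost + indirect cost + delay penalties + crew-discontinuity penalties."""
    indirect = INDIRECT_PER_DAY * project_days
    delays = DELAY_PENALTY_PER_DAY * sum(delay_days)   # one entry per late building
    pauses = sum(CREW_PAUSE_PENALTY[c] * d for c, d in crew_idle_days.items())
    return direct_cost + indirect + delays + pauses

# Hypothetical scenario: 2.4 M EUR direct cost, a 629-day schedule,
# two buildings late by 3 and 5 days, crews 3/4 idle for 2 and 1 days.
print(total_cost(2_400_000, 629, [3, 5], {3: 2, 4: 1}))
```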
The optimisation process produced schedules for each of the analysed layouts that were optimal in terms of cost. For layout A, an overall cost of 3034 thousand euro was obtained, along with an overall completion time of 629 days. For layout B, the costs amounted to 2928 thousand euro, while the overall completion time was 629 days. For layout C, the costs amounted to 3233 thousand euro and the overall completion time was 680 days. The completion time was taken into consideration when calculating the costs of failing to meet planned deadlines for each building.
The buildings under analysis were designed to have a timber frame structure; their external partitions were assumed to be built using timber framing technology. The technology selection was primarily dictated by the guidelines of the developer, who had the technical infrastructure enabling them to manufacture prefabricated timber elements. Figure 6 shows a cross-section of the external wall.
The U heat transfer coefficients [W/(m²K)] for elements of the buildings' envelope were calculated in accordance with the methodology outlined in [72,73]. The U heat transfer coefficients have been listed in Table 5, along with the standard requirements for NZEBs applicable in Poland [34]. The windows of the buildings meet Polish requirements for NZEBs. They were partially fitted with external blinds. The geometry of the buildings was also subjected to optimisation. Their massing is simple and thermal bridges were minimised. Their orientation relative to the cardinal directions is determined by local development guidelines. The structure of the buildings, material choices and their mass were determined by the developer. The air-tightness of the external envelope of the buildings was assumed to be n50 = 1.5 [1/h]. In Polish regulations concerning building design [34] there are two requirements concerning building air-tightness: for buildings with gravitational ventilation the value is 3.5 air exchanges per hour [1/h], while for buildings with mechanical ventilation it is 1.5 [1/h]. This is why the authors assumed a value of 1.5 [1/h]. The analysis was performed during the buildings' design stage. At present, the first buildings have already been built. Air-tightness inspections performed on site indicate that the precise construction of the buildings has ensured a much greater air-tightness than initially assumed during the design stage. The air-tightness achieved during construction has a value of n50 = 0.6 [1/h], which is at a level expected of passive buildings [34]. The internal temperatures were assumed in accordance with the PN-EN 12831 standard [74].
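As an illustration of the calculation behind Table 5, the sketch below computes a U-value per the layered-resistance method of [72,73]: the resistances R = d/λ of the layers are summed with the surface resistances. The layer build-up is a hypothetical timber-frame wall with wood-wool insulation, not the exact assembly of Figure 6, and stud thermal bridging is ignored.

```python
# Minimal U-value calculation for a layered external wall (EN ISO 6946 style).
# Layer data below is a hypothetical build-up, not the wall of Figure 6.

R_SI, R_SE = 0.13, 0.04  # internal/external surface resistances, m2K/W

layers = [               # (thickness d [m], conductivity lambda [W/(mK)])
    (0.0125, 0.25),      # gypsum board
    (0.200,  0.038),     # wood wool between studs (bridging ignored here)
    (0.015,  0.13),      # wood-based sheathing
    (0.060,  0.038),     # external wood-wool insulation layer
]

R_total = R_SI + sum(d / lam for d, lam in layers) + R_SE
U = 1.0 / R_total
print(f"U = {U:.3f} W/(m2K)")  # ~0.14, well below the Polish NZEB wall limit of ~0.20
```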
Occupancy-related (metabolic) gains were defined in accordance with the ASHRAE 140/BESTEST standard [75], depending on the form of use of the spaces.
The buildings were assumed to be equipped with mechanical supply and exhaust ventilation with heat recovery. Supply air streams were assumed as in [76].
The central heating installation was designed as gas-powered heating using a dual-purpose condensing gas boiler with a total efficiency rating of 0.95.
The indoor air temperature was assumed per PN-EN 12831:2006. Instalacje ogrzewcze w budynkach-Metoda obliczania projektowego obciążenia cieplnego-a Polish standard for heating installations governing methods of calculating heat loads during the design stage [74]. Per this standard, spaces for permanent occupancy by persons without outdoor clothing should be designed to have a temperature of +20 °C, while spaces meant for occupancy by persons potentially without clothing (such as bathrooms) should have a temperature of +24 °C. This temperature difference can be obtained by properly setting a building's thermostats or by using BMS installations. Concerning the schedule, it was assumed that such temperatures should be kept 24 h per day throughout the entire year. In residential buildings with building automation systems it is much easier to assume periods of lowered temperature (e.g., at night or when residents leave the building for work). However, most buildings in Poland do not feature elaborate BMS installations. The analyses assumed the buildings to have such systems, which is why no temperature drops were accounted for.
DesignBuilder software uses Occupancy model data, which defines the number of people in a space and the occupancy times according to a set schedule. The Occupancy schedule setting is also used to control internal gains and/or HVAC systems. In DesignBuilder, schedules can be used to define:
• occupancy times;
• equipment, lighting and HVAC operation;
• heating and cooling temperature setpoints.
For the purpose of the simulation, the authors used a slightly modified DesignBuilder template with default occupancy data for residential buildings, to better reflect Polish living conditions, hours of work, etc.
Lighting was designed in compliance with the PN-EN 12464-1:2012 standard [77]. Lighting was assumed to be energy-efficient, with a value of 3.3 [W/m²] per 100 [lux]. The lighting intensity level was assumed as in [78].
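A quick numerical reading of this assumption: at 3.3 W/m² per 100 lux, a 300-lux requirement translates into roughly 9.9 W/m² of installed lighting power, as the snippet below shows. The 300-lux target and the whole-house extrapolation are illustrative assumptions, not values from the paper.

```python
# Back-of-the-envelope check of the lighting power density assumption above.

power_density_per_100lux = 3.3   # W/m2 per 100 lx, as assumed in the paper
target_illuminance = 300         # lx, hypothetical room requirement
floor_area = 110.0               # m2, usable floor area of the 110 m2 house

lighting_power_density = power_density_per_100lux * target_illuminance / 100
print(lighting_power_density)               # ~9.9 W/m2
print(lighting_power_density * floor_area)  # ~1089 W installed, if every room needed 300 lx
```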
The remaining parameters were assumed in accordance with the principles of construction, technical knowledge and applicable standards.
Energy Consumption of Central Heating and Domestic Hot Water Preparation
The yearly final energy consumption for the purposes of heating, ventilation, domestic hot water preparation and lighting is presented in Figure 7. The yearly final energy consumption for the analysed buildings has been presented in Table 6.
In the case of the 110 m² house (B1/110), its external wall touches its "twin", which results in an adiabatic zone and a heat transfer value of zero for these walls. This results in a lower final energy demand. Table 7 presents the yearly energy consumption for heating, ventilation and domestic hot water preparation for the entire complex. The high operational energy demand has its source in the Polish definition of NZEBs. The general definition of nearly zero-energy buildings is provided in the Energy Performance of Buildings Directive 2010/31/EU as "'nearly zero-energy building' means a building that has a very high energy performance, as determined in accordance with Annex I. The nearly zero or very low amount of energy required should be covered to a very significant extent by energy from renewable sources, including energy from renewable sources produced on-site or nearby". However, every European Union Member State has adopted its own NZEB parameters, adapted to local conditions. In Poland, NZEBs are defined by the Ordinance on the Technical Conditions that Must be Met by Buildings and their Placement [34]. In the Polish definition of NZEBs there are two criteria. The first criterion is ensuring proper thermal insulation of the building envelope; the heat transfer coefficients for external partitions in the analysed buildings are compliant with the values for NZEB buildings in Poland. The second criterion for NZEBs in Poland pertains to the Primary Energy value (PE), which cannot exceed a value set for different types of buildings. For single-family residential buildings without cooling, the maximum PE value is 70 kWh/(m² annum). In the article, the authors based their analyses on operational energy (OE), even though it is not listed in the requirements of the Polish NZEB definition. However, operational energy is more useful for the analyses performed by the authors than primary energy, which is a criterion for NZEBs in Poland. A definition of nearly zero-energy buildings for Poland has been included in the article. PE values were compared with the maximum values of the Polish standard for NZEBs. The PE values for the analysed buildings are lower than the maximum values stipulated in Polish law, despite the fact that they are not fitted with renewable energy sources. This is why the authors define the buildings under analysis as NZEBs in their discussion. Table 8 shows the energy demand values for type A and type B buildings.
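To make the PE criterion concrete, the sketch below applies a Polish-style primary-energy check. The non-renewable primary energy factors used (1.1 for natural gas, 3.0 for grid electricity) follow the Polish energy-performance methodology, but the delivered-energy inputs are hypothetical, so this is a sketch of the check rather than a reproduction of the paper's Table 8 values.

```python
# Sketch of the Polish NZEB primary-energy check: PE = sum(w_i * final energy_i).
# The 70 kWh/(m2*a) limit is the single-family figure quoted in the text.

W_GAS, W_ELEC = 1.1, 3.0   # Polish non-renewable primary energy factors
PE_LIMIT = 70.0            # kWh/(m2*annum) for single-family houses without cooling

def primary_energy(gas_kwh_m2a, elec_kwh_m2a):
    return W_GAS * gas_kwh_m2a + W_ELEC * elec_kwh_m2a

# Hypothetical building: 45 kWh/(m2*a) of gas (heating + DHW) and
# 5 kWh/(m2*a) of electricity (ventilation fans, auxiliary equipment).
pe = primary_energy(45.0, 5.0)
print(pe, pe <= PE_LIMIT)  # 64.5 True -> meets the Polish NZEB criterion
```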
CO 2 Emission in the Analysed Building Types
Calculation of CO2 emissions was based on [52,59]. The calculation of CO2 emissions is shown in Table 9. The yearly CO2 emission for the entire complex (40 buildings) has been presented in Table 10. CO2 emissions are associated with the amount of fuel consumed for heating, as well as other energy carriers, such as electric power associated with, among other things, lighting. The low CO2 emission of the buildings is also a product of their structural system. A reinforced concrete structure is associated with greater CO2 emissions relative to a timber structure [78].
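The general shape of such a calculation is delivered energy per carrier multiplied by an emission factor, as sketched below. The energy inputs are hypothetical and the emission factors are rounded, commonly cited Polish values, not figures taken from [52,59].

```python
# Illustrative operational CO2 calculation: energy per carrier x emission factor.
# Emission factors are approximate, rounded Polish values (assumption).

EF = {"natural_gas": 0.201, "electricity": 0.70}   # kg CO2 per kWh (approximate)

def annual_co2(energy_kwh):
    """energy_kwh: {carrier: kWh per year} -> kg CO2 per year."""
    return sum(EF[c] * e for c, e in energy_kwh.items())

# Hypothetical 110 m2 house: 45 kWh/(m2*a) gas, 5 kWh/(m2*a) electricity.
one_house = {"natural_gas": 45.0 * 110, "electricity": 5.0 * 110}
print(annual_co2(one_house))              # ~1380 kg CO2 per house per year
print(annual_co2(one_house) * 40 / 1e3)   # ~55 t CO2/year for the 40-building complex
```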
Thermal Comfort in the Analysed Building Layouts
The thermal comfort assessment analysis was carried out for a typical summer week, characteristic of the climate data considered in this article (Southern Poland, around Krakow). The authors assumed that this period would be appropriate for the purposes of this paper, because it represents weather conditions that are repetitive and will most likely occur every year at a given location (this is also an option offered by the software). The thermal comfort analysis includes building use schedules and HVAC system operation, which are discussed in the article. Graph 11 shows the daily average PMV values that are used for comparison between the analysed buildings. Figure 8 presents the average values of thermal comfort for the analysed buildings. The buildings were designed as low-energy and have a high degree of air-tightness. In addition, they are not equipped with external solar screens, so they receive high solar heat gains through their large glazed surfaces. A PMV value of 0.7-1.27 during a sunny and warm summer week can be considered a predictable result [79].
Overheating in buildings, as simulations show, is a real problem in heavily insulated, airtight buildings with windows facing south. It was assumed that no shading elements, such as blinds, venetian blinds or brise-soleils would be installed on buildings. Such assumptions were intentional and are part of another research project, which aims to determine the extent to which solar barriers improve the comfort of using buildings with almost zero energy consumption. The buildings analysed in the article did not have solar barriers, which is why their interiors were overheated.
In order to improve the thermal comfort of the analysed buildings, additional shading mechanisms should be added (external louvers, blinds, etc.). The shape of greenery around the buildings should be adjusted to increase the amount of air supplied to the mechanical ventilation or air conditioning installations. The authors calculated the average value of thermal comfort for each layout in order to compare the different urban layouts. The thermal comfort value is 1.075 for Layout A, 1.21 for Layout B and 0.94 for Layout C.
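One plausible way to obtain such per-layout averages is to weight the mean PMV of each building type by how many buildings of that type a layout contains, as sketched below. The per-type PMV values and the building counts are hypothetical placeholders (the actual counts are in Table 2), so the printed numbers differ from those reported above.

```python
# Per-layout mean PMV as a count-weighted average over building types.
# All inputs below are hypothetical placeholders.

def layout_pmv(counts, pmv_by_type):
    """counts: {building_type: n}, pmv_by_type: {building_type: mean weekly PMV}."""
    total = sum(counts.values())
    return sum(n * pmv_by_type[t] for t, n in counts.items()) / total

pmv_by_type = {"detached_110": 1.05, "semi_110": 1.25}   # hypothetical simulation output
layouts = {
    "A": {"detached_110": 24, "semi_110": 16},
    "B": {"detached_110": 8,  "semi_110": 32},
    "C": {"detached_110": 40, "semi_110": 0},
}
for name, counts in layouts.items():
    print(name, round(layout_pmv(counts, pmv_by_type), 3))
```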
Assessment of the Analysed Urban Layouts
The results of the survey performed among the potential residents of the experimental housing complex have been presented in Figure 9. The number of persons listing a given layout as the most desirable one is indicated by the blue column (no. 1), the red column shows the number of persons who rated the given layout as the second-most desirable (no. 2), while the number of persons who rated the layout as the least desirable has been shown in green (no. 3).
The mean result of the surveys and the ranking of each layout has been presented in Table 11. The respondents who selected Layout C as the most desirable reported that it was the layout that provided the most privacy (no adjacent neighbour). This layout was ranked as the most desirable by the greatest number of respondents (16). Layout B was reported to be the least desirable by the largest number of respondents (19). The respondents who rated this layout as the most desirable (rank 1) stressed their opinion as to the lower construction and occupancy costs of semi-detached buildings when compared to other alternatives. Table 12 presents all of the results for each layout. As mentioned previously, the weights for each criterion, as included in the analysis, were set by the developer. They reflect the developer's preferences concerning each criterion. The developer decided that the most important criterion was total construction cost (a weight of 0.55). As the buildings will be sold to individual clients, their price was a key factor. The developer also took thermal comfort (a weight of 0.2) and energy consumption (0.15) into consideration, as this was intended to encourage potential clients and raise the price of buildings. The developer saw CO2 emissions and urban layout assessment (each with a weight of 0.05) as the least important. Other weight values would produce different multi-criteria analysis results, which is why the authors also performed an analysis of the impact that weights can have on scores, as presented later in the paper. The scores for each urban layout produced by the analysis have been presented in Table 13. The scores have been presented on the scale of the entire complex (40 buildings). The authors used the WASPAS method to obtain the results.
This made it possible to determine the optimal solution in light of the analysed factors (assessment criteria). The authors also performed a simulation for different criteria weight values using the WASPAS method; the randomly generated weights and their respective results are presented in Table 14.
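A sketch of such a weight-sensitivity simulation is given below: weight vectors are drawn from a uniform distribution, normalised to sum to one, and the share of draws in which each layout ranks first is counted. The decision matrix reuses the hypothetical one from the earlier WASPAS sketch, so the shares it prints are illustrative rather than the 80%/20% figures reported in the discussion.

```python
import random

def waspas_q(matrix, weights, benefit, lam=0.5):
    """WASPAS Q_i score: lam * weighted sum + (1 - lam) * weighted product."""
    cols = list(zip(*matrix))
    norm = [[(x / max(c)) if b else (min(c) / x)
             for x, c, b in zip(row, cols, benefit)] for row in matrix]
    scores = []
    for row in norm:
        wsm = sum(w * x for w, x in zip(weights, row))
        wpm = 1.0
        for w, x in zip(weights, row):
            wpm *= x ** w
        scores.append(lam * wsm + (1 - lam) * wpm)
    return scores

def first_rank_shares(matrix, benefit, trials=10_000, seed=1):
    """Share of random-weight draws in which each alternative ranks first."""
    rng = random.Random(seed)
    wins = [0] * len(matrix)
    for _ in range(trials):
        raw = [rng.random() for _ in benefit]    # uniform draws
        weights = [w / sum(raw) for w in raw]    # normalise to sum to 1
        scores = waspas_q(matrix, weights, benefit)
        wins[scores.index(max(scores))] += 1
    return [w / trials for w in wins]

# Same hypothetical 3 x 5 matrix as in the earlier WASPAS sketch (Layouts A, B, C).
matrix = [
    [3034, 1.075, 520, 95, 2.1],
    [2928, 1.210, 500, 92, 2.3],
    [3233, 0.940, 545, 99, 1.6],
]
benefit = [False, False, False, False, True]
print(first_rank_shares(matrix, benefit))  # share of draws where A, B, C rank first
```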
Discussion and Conclusions
The results presented in Table 13 can aid the decision-maker in selecting an architectural design alternative. The results were obtained for weights as defined by the developer and formulated according to their individual preferences. An initial analysis of the standardised values can provide valuable information about each alternative. Alternative B appears to clearly outrank the two other alternatives (highest value in four parameters). It also appears that alternative A is better than alternative C, as it has higher Qi scores in all parameters except thermal comfort. The WASPAS method confirms this analysis. Alternative B was selected as the best, followed by alternative A, with alternative C ranked the lowest. The differences in Qi scores were not significant (a difference of 1.5% and 2.8% relative to alternative B, for alternatives A and C, respectively).
An analysis of the impact of different weight values on the final results of the overall analysis was also performed (see Table 14). Weights generated with a uniform distribution, together with the values from the Case Study, were used to calculate Qi score values in the WASPAS method. Statistically, alternative B was indicated to be the best (it ranked first in 80% of cases). Of note is that alternative C (ranked first in 20% of cases) was shown to be markedly better than alternative A (no first ranks). Therefore, it can be concluded that alternative A was dominated by alternatives B and C, which indicates that it can be excluded from the decision-making process. Alternative C outranked alternative B when a sufficiently high value (ca. 0.3) was assigned to the weight associated with thermal comfort. It is interesting that alternative B received the lowest rank in 10% of cases. It was even outranked by alternative A when thermal comfort was assigned a weight with an appropriately high value (ca. 0.3) and urban layout assessment was assigned a low value (ca. 0.05). However, alternative C was shown to outrank the remaining alternatives each time this was the case.
In the presented Case Study, only three alternatives of the design of the housing complex were formulated. The proposed model can be used to support decision-making problems with a greater number of alternatives. The method can also support the decision-maker in rejecting dominated alternatives, reducing the number of analysed alternatives.
It should be noted that analysing such complex cases, which incorporate parameters with different units, value ranges and significance, is impossible without using multi-criteria analysis methods. Despite the significant contribution of these methods to analysis, the decision-maker is irreplaceable. It is up to the individual to quantify their preferences and expectations and it is they who make the final decision. The method presented in the paper can only support them in this.
The work presents pioneering research associated with formulating a method of determining the optimal urban layout of a complex of single-family houses. The study took various cost factors into consideration. The method presented in the article is compliant with the concept of sustainable development. The article does not refer to the results of the work of other scholars as no direct counterparts were found in the literature. The discussion presented in the article covers the results of a multi-criteria assessment meant to aid in the selection of an optimal urban layout of timber single-family houses while taking different types of building layouts into account. The discussion concerning the results of the study took into consideration energy demand, the direct costs of construction and its organisation, occupancy costs, thermal comfort, energy consumption, greenhouse gas emissions and the results of a survey concerning preferences as to the proposed architectural and urban layouts conducted among potential home buyers. The authors assumed a goal function in the form of a sum of all essential costs, which, along with the constraints, is linear. The Simplex method was used to solve the linear model. Sustainability parameters were determined through analyses employing EnergyPlus code. Afterwards, using the procured data, a multi-criteria optimisation was performed, using the WASPAS method. The results of this optimisation made it possible to report a solution that can be considered optimal in light of the adopted assessment criteria.
Future Research Directions
Future analyses should automatically include the possibility of selecting construction materials, the shape of the buildings themselves and their placement. The user would define permissible alternatives and a computer program would, through discrete optimisation, set parameters that could prove useful in multi-criteria optimisation. | 2020-04-02T09:37:35.807Z | 2020-03-25T00:00:00.000 | {
"year": 2020,
"sha1": "3c5ec11fcab3b879cf1b7864af22d5ab7544b9f4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/13/7/1541/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "683b7b2a206433d677e5205996006063ce42924a",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
238741737 | pes2o/s2orc | v3-fos-license | Preparation of Quaternary Amphiphilic Block Copolymer PMA-b-P (NVP/MAH/St) and Its Application in Surface Modification of Aluminum Nitride Powders
Poly(methyl acrylate)-b-poly(N-vinyl pyrrolidone/maleic anhydride/styrene) (PMA-b-P(NVP/MAH/St)) quaternary amphiphilic block copolymer, prepared by reversible addition-fragmentation chain transfer (RAFT) polymerization, was used to improve the anti-hydrolysis and dispersion properties of aluminum nitride (AlN) powders. Its structure was characterized by Fourier transform infrared spectroscopy (FT-IR) and hydrogen nuclear magnetic resonance spectroscopy (1H-NMR). The results demonstrate that the molecular weight distribution of the quaternary amphiphilic block copolymers is 1.35-1.60, which is characteristic of controlled molecular weight and a narrow molecular weight distribution. Through charge transfer complexes, NVP/MAH/St produces a regular alternating arrangement structure. After micro-crosslinking treatment, AlN powder modified by the copolymer PMA-b-P(NVP/MAH/St) exhibits outstanding resistance to hydrolysis and can be stabilized in hot water at 50 °C for more than 14 h, and the agglomeration of the powder particles is remarkably improved.
Introduction
Aluminum nitride (AlN) ceramics have a wide range of excellent properties, including high thermal conductivity, low dielectric constant and dielectric loss, and a thermal expansion coefficient similar to that of silicon, and are therefore considered a preferred material for heat sink devices such as new-generation high-performance ceramic wafers and electronic packaging [1][2][3]. AlN powder is the fundamental material used in the sintering process to create AlN ceramics. However, AlN powder can easily react with airborne water vapor, generating an AlN hydrolysate layer, and the resulting increase in oxygen concentration can reduce the thermal conductivity of aluminum nitride ceramics [4]. Its tendency to hydrolyse also significantly inhibits the development of water-based tape casting techniques for AlN ceramics, so the anti-hydrolysis property of AlN powders has become one of the industry's prime difficulties.
The anti-hydrolysis treatment of AlN generally involves coating the AlN particles or producing a thin reaction layer on them via chemical bonding or physical adsorption, in order to prevent the AlN powder from hydrolyzing when exposed to water. The hydrolysis behavior of AlN powder in water can be effectively inhibited using commonly used methods such as coupling agent modification, phosphoric acid treatment, and other small-molecule modifications [5][6][7][8]. However, problems remain in the wake of introducing silicon, phosphorus and other inorganic doping elements, and the modified powder is not easily dispersed in water-based systems. Low particle size and a highly dispersed slurry, on the other hand, are the foundations for the preparation of high-performance ceramics [6], and conventional anti-hydrolysis technologies struggle to meet the powders' combined anti-hydrolysis and high-dispersion requirements.
Although water-soluble polymers have been used to some extent in the dispersion of ceramic powders, reports on AlN powder modification are uncommon. Polyvinyl pyrrolidone (PVP) contains highly polar lactam groups and binds strongly to hydroxyl, amino and carboxyl groups, and its adsorption on ceramic powder surfaces is non-selective; it can reduce particle sizes without changing the particle crystal shape, so it has a wide range of applications in powder dispersion [9].
Due to the strong reactivity of the N-vinyl pyrrolidone (NVP) monomer, it very easily copolymerizes with other monomers containing vinyl unsaturated structures to generate polymer compounds carrying both NVP units and the other comonomer structures [10][11][12][13], resulting in products with a wide range of functional characteristics. Based on a mechanism of AlN powder surface modification, this paper describes the design and synthesis of a quaternary amphiphilic block copolymer, PMA-b-P(NVP/MAH/St), with alternating structures of NVP, maleic anhydride (MAH), styrene (St) and methyl acrylate (MA), using PVP as the main chain and RAFT polymerization. Both passivation and adsorption coating of the powder surface are achieved through chemical bonding between the lactam of NVP and the anhydride group of MAH in the molecular chain segment and the surface hydroxyl groups of the AlN powder. As hydrophobic groups, St and MA can form an amphiphilic block structure, improving the powder's hydrophobic properties and increasing its hydrolysis resistance. At the same time, the steric hindrance of these two groups combines with the electronegative charge repulsion of the lactam groups and of the carboxyl groups formed by anhydride hydrolysis, forming a double protection for the particles. As a result, issues such as easy powder aggregation and incomplete surface-modification coating are resolved.
Copolymer Design and Structure Analysis
In the design of the copolymer, based on the modification mechanism of AlN powder, the stable adsorption of the copolymer on the powder surface is enhanced beyond the PVP lactam hydrogen-bond adsorption by introducing maleic anhydride groups into the PVP structure, since the acid anhydride bonds chemically with the -OH and -NH groups on the AlN powder surface. Self-polymerization of the maleic anhydride monomer is challenging due to its high steric hindrance; however, it is an electron acceptor and can form charge transfer complexes (CTC) [14][15][16][17] with the strong electron donors NVP and St, as shown in Figure 1. Under the influence of the initiator, a copolymer with an excellent alternating structure is generated. The further introduction of MA and St can effectively adjust the hydrophilic and hydrophobic features of the polymer; beyond the electronegative repulsion provided by the pyrrolidone lactam alone, the copolymer's stable adsorption onto, and dispersion of, the powder can be improved, and the hydrolysis resistance of the copolymer-modified powder strengthened, through the solvating methyl acrylate chain, the steric hindrance of the styrene structure, and the ionic repulsion of the hydrolysed acid anhydride.
RAFT polymerization is one of the most important living polymerization technologies. RAFT has rapidly gained popularity because of its benefits, which include a broad application range, mild operating conditions, strong molecular structure designability, and a narrow product molecular weight distribution, while remaining operationally comparable to classical free radical polymerization [18]. The molecular weight and molecular weight distribution of block polymers, as determined by GPC, are significant parameters for characterizing polymer properties and applications, as well as for investigating the controlling effect of living radical polymerization. In this work the structure and elemental composition of the products were analysed via GPC and elemental analysis. Table 1 shows the copolymer composition. As can be seen from the table, the proportion of CTC1 to CTC2 has a significant impact on product composition. When BCSPA was used to conduct RAFT polymerization with CTC1 in the absence of CTC2, the reaction ceased shortly after initiation, demonstrating that BCSPA cannot control the polymerization of the CTC1 produced by NVP-MAH. The fundamental cause of this behavior is that NVP is a non-conjugated monomer, for which the BCSPA trithiocarbonate is not an effective RAFT agent [19,20]: it provides no mediating effect on the polymerization, but does have a polymerization-inhibiting effect.
With the addition of CTC2, the product complies with the basic molecular weight and molecular weight distribution features of RAFT polymerization. At the same time, when CTC1:CTC2 > 1, the reaction terminated after the consumption of CTC2, and the conversion basically remained at the level of the CTC1:CTC2 = 1:1 reaction. The acid value of the copolymer is around 530 mg/g, and the molar ratio of MAH in the copolymers is close to 50%, indicating that MAH in the copolymer demonstrates an obvious trend of alternating polymerization with NVP and St in the form of charge transfer complexes [21]. Combined with the acid value and N content, it can be concluded that, with CTC1 and CTC2 within a specific range of proportions and in the presence of the chain transfer agent BCSPA, CTC2 and CTC1 show a trend of alternating polymerization, and the product structure has a certain regular alternating structure of the charge transfer complexes. Meanwhile, for group 4, the sample was tested at different conversion rates, and the copolymer composition of the product can be seen in Table 2. The N content and acid value of the product were both very close to the theoretical values for the alternating structure, confirming the regular alternating structure of the product. The above behaviour also appears in the (NVP/MAH/St) chain extension experiment employing PMA-CTA: PMA-CTA has no initiating impact on NVP/MAH (CTC1), and the chain extension reaction can only happen when CTC2 is introduced. Table 3 illustrates the block reactions of the macromolecular chain transfer agent PMA-CTA (a) and (NVP/MAH/St) (b) in various proportions, yielding three amphiphilic block copolymers with varied molecular weights, where NVP:MAH:St = 1:2:1. As shown in Table 3, the molecular weight distribution of the macromolecular chain transfer agent PMA-CTA is very narrow (PDI (Mw/Mn) = 1.17), showing that the produced chain transfer agent BCSPA has a good activity and controlled polymerization effect on MA. The hydrophilic (NVP/MAH/St) part was introduced to carry out block copolymerization in varying proportions in the further chain extension, and the PDI values of the products obtained were 1.3-1.6. The activity of the ternary components was controlled by the macromolecular chain transfer agent, and the molecular weight and molecular weight distribution were within the controllable range. However, the (NVP/MAH/St) ternary system in the RAFT polymerization process cannot fully follow the alternating structure of the charge transfer complexes, resulting in a wider molecular weight dispersion after chain extension. There is also a minor amount of free radical polymerization, and the combination of the two polymerization processes can result in a wider molecular weight dispersion. The experiment included sampling inspection of the A3 process at various conversion rates, with PMA-CTA excluded from the total amount when the N content was calculated. As shown in Figure 2, the N content essentially maintained a balance at the various conversion rates. The polymerization process indicated a pattern of alternating polymerization of CTC1 and CTC2, consistent with the hypothesized alternating polymerization. When the conversion rate exceeds 70%, the nitrogen content decreases slightly. The fundamental reason for this is that, as the conversion rate rises, the concentration of CTC1 drops and the system's viscosity rises, making it difficult to maintain the alternating copolymerization trend.
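As a back-of-the-envelope consistency check, assuming the ideal alternating repeat unit with NVP:MAH:St = 1:2:1 for the hydrophilic P(NVP/MAH/St) segment and, for the acid value, full hydrolysis of each anhydride to two COOH groups (each titrating one KOH), the theoretical composition can be computed from standard monomer masses. The result lands close to the ~50 mol% MAH and ~530 mg/g acid value reported above.

```python
# Theoretical composition of the ideal alternating unit (NVP + 2 MAH + St),
# computed from standard monomer residue masses (a rough consistency check,
# not a calculation taken from the paper).

M_NVP, M_MAH, M_ST = 111.14, 98.06, 104.15   # g/mol of NVP, MAH and St residues
M_N, M_KOH = 14.007, 56.11

unit_mass = M_NVP + 2 * M_MAH + M_ST          # one NVP + two MAH + one St
n_content = 100 * M_N / unit_mass             # one N atom (from NVP) per unit
acid_value = 1000 * 4 * M_KOH / unit_mass     # 2 MAH -> 4 COOH -> 4 KOH equivalents

print(f"MAH mole fraction: {2 / 4:.0%}")                     # 50% by construction
print(f"theoretical N content: {n_content:.2f}%")            # ~3.40%
print(f"theoretical acid value: {acid_value:.0f} mg KOH/g")  # ~546, near the reported ~530
```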
There is a certain random polymerization tendency in addition to the alternating copolymerization. However, the overall structure remains relatively regular and adheres to the alternating composition. As a result, the designed and produced amphiphilic block copolymers exhibit a distinctive alternating structure, PMA-b-[(NVP-MAH)-alt-(St-MAH)]. In water, AlN powder is easily degraded and hydrolyzed into Al(OH)3 and NH3, causing the pH to rise. As a result, one of the important indicators for measuring the anti-hydrolysis performance of the powder is the investigation of pH change in the aqueous phase. Following the modification handling method, the A1 block copolymer was used to modify the surface coating of the AlN powder; then 2 g of modified AlN powder was added to 50 mL of deionized water and stirred evenly, the powder was placed in a 50 °C oven for heat preservation, and the pH of the suspension system was monitored over time. Figure 5 shows the change in pH over time of the suspension created in water by unmodified/modified/micro-crosslinked AlN powder at 50 °C. For the unmodified powder in the 50 °C water bath, the pH exceeds 10 within 1 h, indicating that hydrolysis is nearly complete. The pH of the copolymer-modified powder remained constant at around 6.0. In the beginning, there was a drop in pH due to the hydrolysis of unreacted anhydride. After 8 h, the pH shows a noticeable tendency to rise: through Brownian motion, water molecules diffused across the coating layer formed by the copolymer on the powder surface, permeated to the surface of the AlN powder, and hydrolysis occurred. Micro-crosslinking with hexylenediamine inhibited the movement of water molecules and increased the durability of the coated layer; the pH maintained basic stability within 14 h, showing good hydrolysis resistance.
XRD Analysis
The phase changes of the AlN powder before and after hydrolysis can be analysed using the XRD spectra. The XRD patterns of AlN powder modified by the block copolymer at various times are shown in Figure 6. Figure 6A(a) shows the diffraction pattern of the original AlN, and Figure 6A(b) shows the diffraction pattern of AlN powder modified by the block copolymer after 8 h of hydrolysis; patterns a and b are completely identical, without any new diffraction peak, indicating that modification of the AlN powder by the block copolymer has no effect on the phase lattice of AlN and occurs only on the surface of the powder. Additionally, AlN powders modified by the block copolymer showed a certain resistance to hydrolysis at 50 °C within 8 h. Figure 6A(c) is the diffraction pattern of AlN powder after hydrolyzing for 14 h; its phase has obviously changed, having been hydrolyzed to Al(OH)3 and AlO(OH), as water molecules infiltrated and diffused into the AlN powder, destroying the coating created via copolymer modification. Figure 6B shows the XRD patterns of AlN powder following micro-crosslinking treatment at various hydrolysis times. Figure 6B(a) is the original diffraction pattern, and Figure 6B(b,c) are the diffraction patterns of AlN powder modified by micro-crosslinking after 7 h and 14 h of hydrolysis, from which it can be observed that, after 7 h and 14 h of hydrolysis, the AlN powder diffraction patterns are basically consistent with the original AlN powder. Meanwhile, after hydrolyzing at 50 °C for 14 h, only a trace of AlO(OH) appeared; there is no impact on the phase lattice of the AlN powder after micro-crosslinking treatment, and no other new diffraction peaks occurred. This indicates that the surface of the AlN powder was more tightly coated after the micro-crosslinking treatment, and the diffusion of water molecules was limited by the crosslinking, which is more conducive to improving its hydrolysis resistance. Figure 7 shows the SEM and TEM morphology of the AlN powder. (a) shows the SEM image of the original AlN powder, which takes the shape of irregular smooth spheres with relatively close agglomeration between particles. (b) shows the SEM image after hydrolysis of the AlN powder, demonstrating a wrinkled, irregular character and considerable morphological changes. (c) shows the SEM image of AlN powder modified by the block copolymer, which is essentially the same as the original AlN, with a relatively smooth, irregular surface, demonstrating that modification by the block copolymer does not change the phase of the AlN powder; the particle agglomeration has been improved to some extent. (d) shows the SEM image of the modified AlN powder after micro-crosslinking. The morphology is the same as that of the original AlN powder, but the agglomeration between particles is significantly improved, indicating that the micro-crosslinking treatment improves particle repulsion and steric hindrance, and that the particle surface modification is more complete. This is also one of the reasons why the micro-crosslinking treatment is better at improving the hydrolysis resistance of the AlN powder. (e) shows the AlN/copolymer core-shell structure with a thin copolymer film of uniform thickness of about 8 nm; the block copolymer was well bonded to and coated on the surface of the AlN powder. The EDS element distribution of the block copolymer coated on the modified AlN powder is shown in Figure 8.
As modified AlN powder was coated on aluminum foil for EDS testing, the distribution of Al element was denser than that of N element, as seen in Figure 8c,d. Furthermore, the C and O components from the copolymer can be observed in Figure 8e,f, where they are evenly dispersed on the surface of the aluminum nitride particles without accumulation, demonstrating that the block copolymer uniformly encapsulates AlN powder.
Characterization of the Copolymer
FT-IR: The structure of the copolymer was characterized by FT-IR (KBr, Shimadzu Fourier transform infrared spectrometer IRSpirit-T, Shimadzu Corporation, Kyoto, Japan), with a scanning range of 4000-500 cm−1.
1H-NMR: The 1H-NMR spectra of the copolymer were recorded on a Bruker Advance spectrometer (Bruker Corporation, Billerica, MA, USA) at 400 MHz, with CDCl3 or DMSO as the solvent.
GPC: GPC was used to determine the molecular weight and polymer dispersity index of all polymers used in this work; a Shimadzu GPC (Shimadzu Corporation, Kyoto, Japan) was employed for the analysis. The copolymer was neutralized and dissolved in an appropriate amount of sodium hydroxide solution in a water bath at 60-70 °C, with the pH adjusted to 7-9, and filtered through a 0.45 µm filter membrane. The mobile phase was a 30% acetonitrile aqueous solution containing 0.6% NaCl, at a flow rate of 1.0 mL/min and a column temperature of 40 °C, with monodisperse PEG/PEO as the reference standard used to generate the calibration curve.
Following the method described in the reference literature [22,23], tert-butyl mercaptan (18.0 g, 0.2 mol) and 30 mL of distilled water were added to a round-bottomed flask, and 20.0 g of sodium hydroxide solution with a mass fraction of 40% was added to the flask after proper stirring. Then, 10 mL of acetone was added to obtain a clear, colorless solution, which was stirred for 30 min and cooled to room temperature. Carbon disulfide (17.2 g, 0.23 mol) was added to obtain an orange solution. After stirring for 30 min in an ice bath, 2-bromopropionic acid (31.4 g, 0.21 mol) and 20.0 g of sodium hydroxide solution with 40% mass fraction were slowly added in order, with the temperature controlled below 30 °C, and the mixture was stirred for 24 h at room temperature. At the end of the reaction, distilled water was added to dilute it. In an ice bath, 30 mL of concentrated hydrochloric acid was added to obtain a yellow oil layer, which was stirred until it solidified and plenty of yellow particles precipitated out. The solidified oil layer was filtered out, washed with a large amount of distilled water, and dried for 24 h in a vacuum oven to give 40 g of yellow powder, which was then recrystallized from n-hexane 3 times to obtain 35 g of a yellow solid.
Under a nitrogen atmosphere, PMA-CTA 1.3 g (1 mmol), AIBN 0.033 g (0.02 mmol), and appropriate amounts of NVP, MAH, St and dioxane were added to the flask. After nitrogen bubbling for 30 min, the oil bath temperature was raised to 60 °C and the reaction was run for 12 h. After cooling in an ice-water bath, the reaction was terminated, and the product was precipitated in ethanol. After dissolution in DMSO, the pure product was obtained by repeating the dissolution-precipitation operation 2-3 times. Then, after being washed with hot toluene several times, centrifugal separation was performed. A yellow powder was obtained by drying at 50 °C in a vacuum drying oven. Figure 9 shows the synthesis process of the block copolymer PMA-b-P(NVP/MAH/St).
Modified Treatment
In 100 mL of ethanol, 10 g of AlN powder and 1 g of block copolymer were mixed, dispersed by high-speed shearing at 2000 r/min for 10 min, followed by ultrasonic treatment for 5 min, and then modified by stirring for 6 h in an oil bath at 80 °C. The mixture was removed and centrifuged at 4000 r/min for 3 min, the dispersed precipitate was washed with ethanol and centrifuged again, the treatment was repeated 2-3 times, and the product was vacuum dried at 50 °C. Micro-crosslinking treatment: after the powder had reacted with the block copolymer at 80 °C for 6 h, 0.2 g of hexylenediamine was added, and the mixture was kept warm and stirred for another 2 h. It was then taken down and centrifuged at 4000 r/min for 3 min, the dispersed precipitate was washed with ethanol and centrifuged again, the treatment was repeated 2-3 times, and the product was vacuum dried at 50 °C.
Hydrolysis Test and Characterization
First, 2 g of modified AlN powder was added to 50 mL of water and dispersed by ultrasonic treatment for 2 min. After being evenly dispersed, the hydrolysis experiment was carried out in an oven at 50 °C. The pH of the dispersion was recorded at various time intervals, and the hydrolysis resistance of the modified powder was characterized via X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM) and energy dispersive spectroscopy (EDS).
Conclusions
(1) Using 2-(((tert-butylthio)carbonothioyl)thio)propanoic acid as the chain transfer agent and AIBN as the initiator, and based on the mechanism of powder surface modification via copolymers, the quaternary amphiphilic block copolymer PMA-b-P(NVP/MAH/St) was successfully prepared by RAFT polymerization, and its structure was characterized by FT-IR and 1H-NMR. The copolymer was found to be the target compound, with a narrow molecular weight distribution, predictable polymerization characteristics, and a regular alternating structure in the hydrophilic chain section of the product.
(2) The surface of the AlN powder was modified using the synthesized amphiphilic block copolymer. After micro-crosslinking, it remained stable in water at 50 °C for more than 14 h, demonstrating strong hydrolysis resistance. The modified powders were characterized by XRD, SEM, TEM and EDS. The surface modification treatment had no effect on the phase lattice of the AlN powders, and the copolymer was uniformly distributed on the surface of the powders, considerably improving the agglomeration phenomenon.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
Sample Availability: Please contact the authors if samples are needed. | 2021-10-14T05:31:48.549Z | 2021-09-28T00:00:00.000 | {
"year": 2021,
"sha1": "da0d52d0f7dab77fd4869c21be54d875b534c55a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/26/19/5884/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "da0d52d0f7dab77fd4869c21be54d875b534c55a",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251850859 | pes2o/s2orc | v3-fos-license | Research on Evaluation Method for Urban Water Circulation Health and Related Applications: A Case Study of Zhengzhou City, Henan Province
The acceleration of urbanization and climate change has increasingly impacted the health of urban dual water cycles. In order to accurately evaluate the health status of urban water cycles, an evaluation system was constructed covering four criterion layers (water ecology, water abundance, water quality and water use) and including 19 basic indicators such as water storage change and annual average precipitation. Three-scale AHP and EFAST algorithms were adopted to set the weights of the criterion and index layers. The water-cycle health assessment model is based on an improved TOPSIS model. The model was used to evaluate Zhengzhou's water cycle health from 2011 to 2021, and the TOPSIS model and the FCE method were compared to ensure the scientific objectivity of the evaluation results. The evaluation results indicated that the water cycle in Zhengzhou City improved annually; the relative closeness in 2020 was 0.567, corresponding to a sub-healthy state. The eco-environmental water demand, the green coverage rate of the built district, the amount of water resources, and the industrial water consumption per CNY 10,000 of value added were the major obstacle factors; these four factors preponderantly influenced Zhengzhou City's water cycle health. Our research results provide a scientific reference for Zhengzhou to achieve a healthy urban water cycle and regional sustainable development.
Introduction
The water cycle supports the formation of water resources and is the main driving force for the evolution of water environments and ecosystems. With the development of the social economy and the acceleration of urbanization and other human activities, the regional water cycle has gradually changed from a single "natural" water cycle model to a "natural-social" dual water cycle model. The basic process of the natural water cycle is precipitation-runoff-evaporation, while the basic process of the social water cycle is water intake-water use-water supply-water drainage-water reuse. The natural and social water cycles influence each other, and their system structures are coupled, forming a set of "natural-social" dual water cycle models based on "water supply-water use-water consumption-water drainage-water reuse".
Zhengzhou City is in the north-central part of Henan Province (Figure 1). The water cycle is greatly influenced by the urbanization rate and urbanization processes, with problems including severe water pollution, a sharp decline in groundwater depth, and water shortage. In 2020, Zhengzhou's total water resources were 859.12 million cubic metres. The per capita water resources were only 68 cubic metres, one-tenth of the national per capita share and far below the internationally recognized 500 cubic metres that signifies an extreme water shortage warning line. In general, Zhengzhou is in a state of severe water shortage. In recent years, the rapid economic development in Zhengzhou has led to increased water supply demand. In 2020, the total amount of sewage discharge reached 105.125 million tons, which restricts the long-term social development of Zhengzhou. Maintaining a healthy water cycle has become a significant proposition for the sustainable development of human society. At the beginning of the 21st century, China's urban water system developed rapidly. Many experts and scholars began to study the healthy circulation of urban water systems from the overall direction of the water environment and ecology. Zhang et al. [1] proposed a healthy water cycle according to the water crisis caused by human behaviours and emphasized the importance of recycling water models for water cycle health. Some scholars have adopted different methods to study this concept from different dimensions. For example, Jia et al. [2] evaluated the water cycle health in Ningxia and five other cities using principal component analysis. Zhang et al. [3] analysed the water cycle health in Beijing and comprehensively considered the influence of the South-North Water Diversion Project on the water cycle in Beijing, thus analysing the urban water cycle's influencing factors more scientifically. Luan et al. [4] proposed a farmland healthy water cycle and evaluated the water cycle health of the Jun Liu irrigation area. Research on the water cycle in foreign countries has also produced many results. Chu [5] established an index system based on "S-E" thinking to evaluate the health status of the urban water cycle. Maria et al. [6] used life cycle methods to evaluate urban water cycles in the Mediterranean region. Deng et al. [7] proposed an improved entropy-based fuzzy matter element model to evaluate the Taihu Lake Basin in China. Healthy water cycles are a fuzzy concept.
The traditional TOPSIS model ignores the linear correlation between indicators, which can invalidate the Euclidean geometry it relies on. In addition, the Euclidean distance reflects only the positional distance of an evaluation object, so the resulting closeness cannot capture similarity in shape between sequences. Mahalanobis distance and grey relational analysis were therefore used to improve the TOPSIS method, which can then evaluate healthy water circulation and provide references for water cycle health and ecological management in Zhengzhou.
In this context, Zhengzhou was chosen for case studies, and an evaluation system covering four criteria layers was established, namely water abundance, water ecology, water use, and water quality, including 19 evaluation indexes such as mean annual precipitation, annual water storage change of reservoir, and eco-environmental water demand. The combined weights of subjective and objective indexes were obtained by combining the three-scale AHP and EFAST algorithms. On this basis, we established an improved TOPSIS model to evaluate the water cycle in Zhengzhou. We used the TOPSIS model and FCE for comparison and verification, aiming to provide scientific reference for the realization of benign water cycles and sustainable development in Zhengzhou City.
The Connotation of Healthy Water Cycle
The traditional concept emphasizes that humans should not interfere with the natural water cycle to maintain its intrinsic natural state. However, with the intensification of human activities, the social water cycle's role has gradually enhanced. Healthy water cycles should consider natural and human factors. On the one hand, the natural water cycle must ensure the social "three-living" water use under the premise of health and support the long-term development of society. On the other hand, human society must use water resources efficiently and avoid irreversible damage to the natural water cycle [8,9]. A healthy water cycle is manifested in the interaction between the natural and social water cycles on the cycle path, achieving dynamic balance. A healthy water cycle is demonstrated in normalizing water ecology, reasonably developing water resource abundance, healthy water environment quality, and efficient resource utilization [10,11]. Therefore, maintaining a healthy water cycle mainly includes the following three points: Firstly, water resource management must ensure the normalization of ecological water levels and self-healing capabilities. Secondly, the amount of water resources satisfies the need for socially sustainable development. Thirdly, efficiency in water resource utilization patterns [12]. A diagram of the urban healthy water cycle is shown in Figure 2.
Data Source
The data derive from the China Environmental Statistics Yearbook.
Establishment of Evaluation Index System
The urban water cycle health assessment is a complex process. The natural and social water cycles should be included in the established evaluation index system. If the indicators are not comprehensive enough, the urban health status cannot be truly reflected.
Therefore, the selection of indicators should follow objective laws and be regionally representative [13,14]. We comprehensively considered the connotation and regional status of water cycles in healthy cities. The relevant basic data of Zhengzhou City were collected by referring to the Water Resources Bulletin of Zhengzhou City and the Water Resources Bulletin of Henan Province. The health evaluation system of the water cycle was established from four dimensions: water supply, water sources, water use, and water treatment. The water utility criterion layer mainly determined the index layer from the effective utilization coefficient of irrigation water and social "three-living" water use, reflecting the utilization efficiency of urban water resources. The water abundance criterion layer mainly considered the natural endowment conditions of urban water resources, such as the annual water storage variation of medium-sized reservoirs and the annual average precipitation. The water ecology criterion layer mainly established its indexes from human protection measures for water resources, river and lake storage capacity, and drainage pipeline density. The water quality criterion layer reflected the natural attributes of the water cycle and mainly involved urban greening and the status of water function areas. The evaluation index system is shown in Figure 3.
Threshold Standard of Evaluation Index
In this paper, we determined the index thresholds by combining field investigation with relevant research results and national regulations to ensure scientific objectivity [15]. Because the health degree of the water cycle is relative, the health standard was divided into five grades: excellent (grade I), healthy (grade II), sub-healthy (grade III), unhealthy (grade IV), and sick (grade V). The thresholds and attributes of each indicator in the evaluation system are shown in Table 1. According to the principle of minimum information entropy, the three-scale AHP and the EFAST algorithm were combined so that the subjective and objective components of the weights were coupled, making the evaluation index weights accurate and reasonable. The calculation steps were as follows (a consolidated code sketch of these steps appears after step 4).
1. Data standardization: Since annual torrential rain occurred in Zhengzhou City in 2021, the extreme value data were not representative. Therefore, we used a standardized transformation to eliminate the contingency of extreme values:

$$r_{ij} = \frac{x_{ij} - \overline{X}_j}{S_j}$$

where x_ij is the original value of the evaluation index; r_ij is the index value after standard transformation; X̄_j is the mean value of indicator j; and S_j is the standard deviation of indicator j.
2. Three-scale AHP to calculate subjective weight: The analytic hierarchy process (AHP) is a multi-objective decision analysis method that combines qualitative and quantitative analyses. AHP can realize the qualitative and quantitative determination of target weights and is widely applicable to management and decision-making [16]. The main calculation process is divided into three steps:

a. Establish the judgment matrix U. In the matrix, U_ij takes values from 1 to 9 and their reciprocals, representing the relative influence between two indexes. In this paper, we used only a scale of 0-2 to establish the judgment matrix, which improves on the shortcomings of the traditional analytic hierarchy process.

b. Calculate the maximum eigenvalue (λ_max) of the judgment matrix.

c. Test the consistency of the calculation results. When C.R. ≤ 0.1, the consistency test is passed; otherwise, the matrix must be adjusted until it passes. The consistency test index formula for the judgment matrix is

$$C.I. = \frac{\lambda_{max} - n}{n - 1}, \qquad C.R. = \frac{C.I.}{R.I.}$$

where R.I. is the random consistency index, which can be obtained from Table 2.
3. EFAST algorithm to calculate objective weight [17,18]: The EFAST algorithm reflects sensitivity through the variance of the coupling between indicators: an index with high sensitivity receives a higher weight, and an index with low sensitivity a lower weight. Briefly, the total variance of the model output can be represented as the sum of the variances of the coupling effects of every index:

$$V = \sum_i V_i + \sum_{i<j} V_{ij} + \sum_{i<j<k} V_{ijk} + \dots + V_{12 \dots n}$$

where V_i is the variance of index x_i; V_ij is the variance of the interaction between x_i and x_j; V_ijk is the variance of the interaction between x_i, x_j and x_k; and V_ijk...n is the variance of the interaction between index x_i and the remaining n − 1 indicators. The second-order M_ij, third-order M_ijk and higher-order M_ij...n sensitivity indexes of index x_i coupled with the other indexes are defined as the corresponding variances divided by the total variance, and the total sensitivity of each indicator is the sum of its sensitivity indexes. These coupled sensitivities were used to calculate the weights, normalizing the total sensitivity M_i of index i to the weight value E_i.

4. Calculation of the combined subjective and objective index weight:

$$w_j = \frac{\sqrt{W'_j W''_j}}{\sum_{k=1}^{n} \sqrt{W'_k W''_k}}$$

where w_j is the combined weight value of index j; n is the number of indicators; W'_j is the weight result obtained by the three-scale AHP; and W''_j is the weight result obtained by the EFAST algorithm.
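To make the weighting pipeline concrete, the Python sketch below strings the four steps together: z-score standardization, three-scale AHP weights with the consistency test, and the subjective-objective combination. The R.I. values are the commonly tabulated random consistency indices (assumed to match Table 2), the EFAST sensitivities are taken as precomputed inputs (a full EFAST implementation is beyond a sketch), and the combination uses the minimum-information-entropy closed form reconstructed above:

```python
import numpy as np

# Random consistency indices R.I. (assumed to match Table 2)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def standardize(X):
    """Step 1: z-score each indicator (column); X is (years x indicators)."""
    return (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

def ahp_weights(U):
    """Step 2: principal-eigenvector weights and consistency ratio
    for a positive reciprocal judgment matrix U (n x n, n >= 3)."""
    n = U.shape[0]
    eigvals, eigvecs = np.linalg.eig(U)
    k = np.argmax(eigvals.real)
    lam_max = eigvals[k].real                  # lambda_max
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    CR = ((lam_max - n) / (n - 1)) / RI[n]     # C.R. = C.I. / R.I.
    return w, CR                               # accept when CR <= 0.1

def efast_weights(total_sensitivity):
    """Step 3: normalize precomputed EFAST total sensitivities to E_i."""
    s = np.asarray(total_sensitivity, dtype=float)
    return s / s.sum()

def combine_weights(w_subjective, w_objective):
    """Step 4: minimum-information-entropy combination,
    w_j proportional to sqrt(W'_j * W''_j)."""
    w = np.sqrt(np.asarray(w_subjective) * np.asarray(w_objective))
    return w / w.sum()
```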
In this paper, the weighted Mahalanobis distance replaced the Euclidean distance of the original TOPSIS model, and grey correlation analysis was introduced to improve the TOPSIS model. Compared with the original model, the improved TOPSIS model comprehensively considers the correlation between the various indicators and intuitively shows the nonlinear relationship between sequences. The calculation process is as follows (a sketch follows the list):

1. Establish the standardized index matrix A_{m×n}, comprising the standardized index data x_ij, where x_ij is the standardized value of index j for evaluation scheme i, m is the number of evaluation schemes, and n is the number of indexes in the index layer.
2. Determine the positive ideal solution Y_j^+ and the negative ideal solution Y_j^-.

3. Construct the weighted standardized matrix P = (p_ij)_{m×n}, where m is the number of evaluation schemes, n is the number of indicators in each evaluation scheme, and p_ij is the standardized value multiplied by the corresponding weight.

4. Calculate the weighted Mahalanobis distances d_i^+ and d_i^- from each evaluation scheme to the positive and negative ideal solutions.
5. Calculate the grey correlation degrees:

$$r_i^{\pm} = \frac{1}{n}\sum_{j=1}^{n}\frac{\min_i \min_j \Delta_{ij}^{\pm} + \rho \max_i \max_j \Delta_{ij}^{\pm}}{\Delta_{ij}^{\pm} + \rho \max_i \max_j \Delta_{ij}^{\pm}}, \qquad \Delta_{ij}^{\pm} = \left| p_{ij} - Y_j^{\pm} \right|$$

where ρ = 0.5; r_i^+ is the grey correlation degree between the evaluation scheme and the positive ideal solution, and r_i^- is the grey correlation degree between the evaluation scheme and the negative ideal solution.

6. Apply dimensionless processing to the weighted Mahalanobis distances d_i^+, d_i^- and the grey correlation degrees r_i^+, r_i^-.

7. Calculate the relative progress: d_i^+, d_i^-, r_i^+ and r_i^- all reflect the distance between each evaluation object and the ideal solutions; C_i is the relative progress, and the larger its value, the higher the water cycle health level, and vice versa.
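The following Python sketch strings the improved TOPSIS steps together. The weighted Mahalanobis distance and grey relational coefficients follow the definitions above, but the exact dimensionless processing and the 50/50 blend (alpha) of position (distance) and shape (grey relational) information are assumptions, since the original closing formulas were not fully recoverable:

```python
import numpy as np

def improved_topsis(X, w, rho=0.5, alpha=0.5):
    """Improved TOPSIS with weighted Mahalanobis distance and grey
    relational analysis. X: (m schemes x n indicators), standardized
    so that larger is better; w: combined index weights."""
    y_pos, y_neg = X.max(axis=0), X.min(axis=0)        # ideal solutions

    # Weighted Mahalanobis distance to each ideal solution
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    W = np.diag(w)
    def mdist(row, ref):
        d = (row - ref) @ W
        return np.sqrt(d @ cov_inv @ d)
    d_pos = np.array([mdist(x, y_pos) for x in X])     # d_i^+
    d_neg = np.array([mdist(x, y_neg) for x in X])     # d_i^-

    # Grey relational degree against each ideal solution (rho = 0.5)
    def grey_degree(ref):
        delta = np.abs(X - ref)
        xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
        return xi.mean(axis=1)
    r_pos, r_neg = grey_degree(y_pos), grey_degree(y_neg)

    # Dimensionless processing, then blend position and shape information
    norm = lambda v: v / v.max()
    s_pos = alpha * norm(d_neg) + (1 - alpha) * norm(r_pos)
    s_neg = alpha * norm(d_pos) + (1 - alpha) * norm(r_neg)
    return s_pos / (s_pos + s_neg)                     # relative progress C_i
```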
The obstacle degree model was then used to identify the main factors restricting water cycle health (a sketch follows below):

1. Calculate the factor contribution F_j of indicator j:

$$F_j = w_j \, w_i^*$$

where w_i^* is the weight value of the criterion layer corresponding to the index.

2. Calculate the deviation I_j:

$$I_j = 1 - x_{ij}$$

3. Calculate the obstacle degree P_j of each evaluation indicator:

$$P_j = \frac{F_j I_j}{\sum_{j=1}^{n} F_j I_j} \times 100\%$$

The value of C_i represents the degree of water cycle health and lies between 0 and 1. A higher C_i value indicates a higher water cycle health rating, whereas a lower C_i value indicates a lower rating. Based on existing research results and related data, water cycle health was divided into five grades, as shown in Table 3. The flow chart of the water cycle health evaluation is shown in Figure 4.
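A compact sketch of the obstacle degree calculation above; the closed forms implement the standard obstacle degree model and are an assumption about the paper's exact Formulas (17)-(19):

```python
import numpy as np

def obstacle_degrees(r, w_index, w_criterion):
    """Obstacle degrees for one evaluation year.
    r: standardized indicator values (n,), in [0, 1];
    w_index: combined index weights (n,);
    w_criterion: weight of the criterion layer each index belongs to (n,)."""
    F = w_index * w_criterion          # factor contribution F_j
    I = 1.0 - r                        # deviation I_j
    return F * I / np.sum(F * I) * 100 # obstacle degree P_j (%)
```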
Weight Calculation Results
The subjective weights of the evaluation indexes were calculated using the three-scale AHP. After calculation, C.R. = 0.05 < 0.1; therefore, the consistency test was passed. The objective weights of the evaluation indexes in different years were determined by the EFAST algorithm, and the subjective and objective weight values were combined according to the minimum information entropy principle. Finally, the combined weight value of each evaluation index was calculated. The weight values of the four criterion layers and 19 indicators of the evaluation system are shown in Figure 5.
Evaluation of Index Layer
The health state for the index layer is shown in Figure 6 according to the threshold of each indicator. According to evaluation results of the index layer, most indicators were developing towards a healthy trend annually.
Among them, the indicators D2, D3 and B4 were healthy throughout the past decade, showing that the sewage treatment rate in Zhengzhou was high, the industry's water consumption per unit of value (CNY 10,000) was reasonable, and the safety of drinking water sources was high. In contrast, A1, A5 and B3 were in sub-morbid and morbid states for long periods over the past decade. The health level of B3 was poor, with a health score of only 1.0. Apart from a general health level in 2016, A5 was sub-morbid for nine years, with a score of only 2.0. The health level of A1 improved in the past three years but remained sub-morbid. These three indicators indicated that Zhengzhou had to improve the drainage pipeline density, increase the water consumption of the ecological environment, and pay attention to the annual water storage change of reservoirs. The status of D5, D6 and A4 continued to improve and tended towards health. A2 experienced three consecutive years of health from 2011 to 2013, but the next seven years were sub-healthy. Meanwhile, the protection of water resources in Zhengzhou City was increased, resulting in improved water ecological functions such as river and lake storage capacity and water function area compliance rates. C3 was morbid in 2021 due to the once-in-a-millennium heavy rainfall in Zhengzhou on 20 July 2021, showing that the precipitation in Zhengzhou is unstable. Therefore, the precipitation trend in Zhengzhou should be better monitored, and if precipitation is excessive, a timely warning should be issued to avoid the adverse effects of flooding. The health status of C4 was volatile, reflecting Zhengzhou's dependence on groundwater and the sensitivity of the indicator.
Factors Analysis of Water Circulation Health Disorders
According to Formulas (17)-(19), the obstacle degrees of the 19 evaluation indexes in Zhengzhou City from 2011 to 2021 were calculated. The six indexes with the largest obstacle degrees were selected for statistical analysis to accurately understand the distribution of obstacle degrees. The calculation results are shown in Table 4. The greatest obstacle factor was the ecological environment's water consumption (A1), with an average obstacle degree of 19.65. The main obstacle factor with the minimum obstacle degree was the green coverage rate in the built-up area (A2), with an average obstacle degree of 7.3. Table 4 shows that the eco-environmental water demand (A1), the proportion of river length above class III water quality (B3), the industry's water consumption per unit of value (CNY 10,000) (D2), the water resources amount (C1), the effective utilization coefficient of irrigation water (D1), and the green coverage rate of the built district (A2) had high obstacle degrees. These six factors were the main influencing factors. D1 and D2 belonged to the water utility criterion layer, C1 belonged to the water abundance criterion layer, A1 and A2 belonged to the water ecology criterion layer, and B3 belonged to the water quality criterion layer.
In order to visually display the obstacle degree change of the six main obstacle factors, the trend was plotted as a broken line, as shown in Figure 7.
From 2011 to 2021, the eco-environmental water demand (A1), the green coverage rate of built districts (A2), the water resources amount (C1), and the industry's water consumption per unit of value (CNY 10,000) (D2) showed an annually increasing trend in obstacle degree. Therefore, Zhengzhou should increase ecological environment water consumption and improve the green coverage of built-up areas. In residents' daily life, more water-saving instruments should be used to alleviate the water resources shortage in Zhengzhou City. In terms of industrial structure, the production mode should be improved to reduce the industry's water consumption per unit of value (CNY 10,000).
Evaluation of Target Layer
Through the value of each index and the corresponding weight, we applied the improved TOPSIS model using Formulas (1)-(16) and calculated the relative progress of Zhengzhou City from 2011 to 2021. The specific results are shown in Table 5. From 2011 to 2021, the overall health status of the water cycle in Zhengzhou City developed in a healthy direction. Due to the low eco-environmental water consumption from 2011 to 2012 and the low compliance rate of the water function zones, the health level of the water cycle in Zhengzhou City in 2011-2012 was grade IV. From 2020 to 2021, the health degree of the water cycle in Zhengzhou City decreased, mainly due to the rare heavy rainfall on 20 July 2021; the annual average precipitation rose to more than 1000 mm, resulting in severe urban waterlogging and a slightly lower water cycle health degree in 2021 than in 2020. In recent years, with the government's implementation of the most stringent water resources management system and the strengthening of natural environmental protection, the water cycle health of Zhengzhou City has developed healthily and steadily.
Based on our analysis of the evaluation results, the countermeasures to improve Zhengzhou City's water cycle health are as follows: (1) Continue to implement the most stringent water resources management system. In terms of natural environmental protection, we must increase the ecological protection of rivers and lakes and reduce groundwater exploitation. (2) The precipitation in Zhengzhou City is higher in summer; therefore, reservoir water storage and other water storage works should be conducted for subsequent use. If the precipitation is excessive, the reservoir should be discharged to avoid danger. (3) Zhengzhou's advantages as a transportation hub should be considered, increasing idea exchange, and continuously exploring new ways to develop green ecology. (4) Government departments should improve the temporary emergency response capacity for natural disasters, further improving the flow capacity of urban drainage channels, and accelerating the establishment of safety monitoring facilities for small and medium-sized reservoirs.
Method Validity Test
We selected two conventional assessment methods, the FCE method and the traditional TOPSIS model, to compare against the improved TOPSIS model and verify its effectiveness. The results are shown in Table 6.

Table 6. Water cycle health grades of Zhengzhou City, 2011-2021:

Year | Improved TOPSIS | Traditional TOPSIS | FCE
2011 | IV  | IV  | IV
2012 | IV  | IV  | IV
2013 | III | IV  | III
2014 | III | IV  | III
2015 | III | IV  | III
2016 | III | III | III
2017 | III | III | III
2018 | III | III | III
2019 | II  | II  | II
2020 | II  | II  | II
2021 | III | II  | III

The evaluation results calculated by the three methods were broadly consistent. There were some differences between the traditional TOPSIS model and the improved TOPSIS model in the evaluation results for 2013 to 2015 and 2021 because, in the improved TOPSIS model, the traditional Euclidean distance was replaced by the weighted Mahalanobis distance and the grey correlation degree; as a result, the correlation between some indexes was comprehensively considered, and both position and shape informed the health grades of each year. The FCE method incorporates the subjective opinions of experts and is therefore strongly subjective [20], whereas the improved TOPSIS method requires no subjective judgment and reflects the relative progress of each evaluation scheme directly from the original data. The improved TOPSIS model is thus applicable and scientific for water cycle health evaluation.
Conclusions
The water cycle plays an important role in promoting the sustainable development of nature and human society. The evaluation and definition of a healthy water cycle relate to the development direction of society and are of great research significance. In this paper, we defined the concept of a healthy water cycle from several perspectives, and an index system was established based on the natural and social attributes of the water cycle. We analysed and compared the evolution of the water cycle in Zhengzhou City from 2011 to 2021 using the improved TOPSIS model, the traditional TOPSIS model, and FCE. Our research led to the following conclusions.
(1) In recent years, the health degree of the water cycle in Zhengzhou City was between general (level III) and sub-health (level II), and the health score increased annually. It reached the sub-health state in 2020, and the relative progress was the highest in the past decade. However, it is still necessary to increase the ecological environment water consumption and the proportion of river length above class III water quality to improve water resources ecology and quality.

(2) In the health evaluation system of Zhengzhou City's water cycle, the obstacle degrees of ecological environment water consumption, built-up area greening coverage rate, total water resources, and the industry's water consumption per unit of value (CNY 10,000) increased annually. The evaluation results demonstrated that increasing human protection of the ecological environment and improving water-saving awareness can improve the water cycle's health level.

(3) The three methods were highly consistent in Zhengzhou's water cycle health evaluation results; however, some corresponding health grades differed. The FCE method has strong subjectivity, and the Euclidean distance gives little consideration to the correlation between some indicators. The improved TOPSIS model uses the weighted Mahalanobis distance and grey relational analysis in place of the Euclidean distance, reducing subjective judgment and comprehensively considering the correlation between indicators. The improved TOPSIS model is therefore more scientific.

(4) In this paper, the study area was only Zhengzhou City. In future research, multiple research areas can be selected for comparative study to better judge the applicability of different methods. | 2022-08-27T15:11:04.374Z | 2022-08-24T00:00:00.000 | {
"year": 2022,
"sha1": "1f843de4592ea1f1e56bbfab05a1d1bc1747c36e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/19/17/10552/pdf?version=1661488982",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1c98bd168a537e360a454030453809017dfdf711",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251474760 | pes2o/s2orc | v3-fos-license | Acute and long-term effects of psilocybin on energy balance and feeding behavior in mice
Psilocybin and other serotonergic psychedelics have re-emerged as therapeutics for neuropsychiatric disorders, including addiction. Psilocybin induces long-lasting effects on behavior, likely due to its profound ability to alter consciousness and augment neural connectivity and plasticity. Impaired synaptic plasticity in obesity contributes to ‘addictive-like’ behaviors, including heightened motivation for palatable food, and excessive food seeking and consumption. Here, we evaluate the effects of psilocybin on feeding behavior and energy metabolism, and its potential as a weight-lowering agent in mice. We demonstrate that a single dose of psilocybin substantially alters the prefrontal cortex transcriptome but has no acute or long-lasting effects on food intake or body weight in diet-induced obese mice or in genetic mouse models of obesity. Similarly, sub-chronic microdosing of psilocybin has no metabolic effects in obese mice and psilocybin does not augment glucagon-like peptide-1 (GLP-1) induced weight loss or enhance diet-induced weight loss. A single high dose of psilocybin reduces sucrose preference but fails to counter binge-like eating behavior. Although these preclinical data discourage clinical investigation, there may be nuances in the mode of action of psychedelic drugs that are difficult to capture in rodent models, and thus require human evaluation to uncover.
INTRODUCTION
Obesity and addiction share neurobiological and behavioral similarities [1]. An excessive craving for food is perpetuated by the activation of mesolimbic reward circuits, where drugs of abuse also have their effect [2]. This may contribute to an inability to curb eating or a 'relapse' in feeding habits, as it becomes difficult to maintain the lowered food intake required to sustain weight loss [3]. Adaptations to neural circuitry mean that in both conditions, behaviors become increasingly compulsive and difficult to correct [2]. Substance use disorders and obesity can to some extent be considered diseases of plasticity; both chronic drug exposure [4] and long-term exposure to a high-fat diet [5] lead to persistent synaptic reorganization. Moreover, many of the genes associated with body mass index (BMI) encode proteins involved in glutamatergic signaling and synaptic plasticity [6]. These parallels suggest that similar therapeutic approaches could be applied to the treatment of obesity and addictive disorders, and psilocybin presents a promising candidate due to its remarkable ability to alter synaptic architecture. Psilocybin, and other serotonergic psychedelics, have re-emerged as therapeutics for neuropsychiatric disorders, where the development of new pharmacological treatments has stalled due to a lack of long-term efficacy for treatment-resistant patients [7]. Psilocybin is systemically metabolized to psilocin which activates the serotonin 2A receptor (5-HT 2A R), altering perception to induce its hallucinogenic effect [8]. The downstream consequence is the promotion of functional and structural neuroplasticity [9], facilitating enduring effects on behavior, for example, long-lasting improvement in depressive symptoms [10].
As a serotonin receptor agonist, psilocybin is especially promising for the treatment of obesity. The natural ligand for 5-HT 2A R, serotonin, acts on several receptors to have a wide range of biological effects, including modulating food intake [11,12]. Clinical studies have found that cerebral 5-HT 2A R binding is positively correlated with body mass index (BMI) [13] and that high presurgical 5-HT 2A R binding (as well as the change in 5-HT 2A R binding) correlates with weight loss after gastric bypass surgery [14]. Serotonin integrates metabolic signals that convey energy status but also has an effect on reward-related feeding [15]. Decreased serotonin signaling is associated with obesity, and modulation of the serotonergic system affects food intake [16][17][18]. Early work identified that alongside D-amphetamines, the classic serotonergic psychedelics lysergic acid diethylamide (LSD) and psilocybin reduce food consumption in rats and dogs [19,20]. Additionally, activation of 5-HT 2A R has anti-inflammatory effects in a mouse model of cardiovascular disease [21]. Self-reported data from psychedelic users show that one-time use of a classic psychedelic reduces the risk of becoming obese [22]. Taken together, these results imply a potential for psilocybin to be therapeutically promising for the treatment of obesity.
In the present study, we used mouse models of genetic obesity, diet-induced obesity, and binge-eating disorder to examine the effect of psilocybin on body weight and food intake. A single high dose of psilocybin did not have an effect on body weight or food intake in diet-induced obese (DIO) mice or genetic mouse models of obesity. Microdosing also failed to correct obesity in DIO mice and in obese melanocortin 4 receptor (MC4R)-deficient mice. No effect was observed on binge-eating behavior. Explorative RNA sequencing revealed only subtle transcriptional changes in the hypothalamus, 3 h or 4 weeks after a single injection with psilocybin. In contrast, 3 h following psilocybin treatment more than 1500 transcripts were significantly changed in the prefrontal cortex (PFC). A single administration of psilocybin, however, did not produce any long-lasting transcriptional changes in the PFC. In summary, the presented preclinical evidence is not in favor of the use of psilocybin for the management of obesity. However, further clinical investigation might shed light on the use of psychedelics for the treatment of human obesity or obesity-related disorders.
MATERIALS AND METHODS Animals
Wild type male C57BL/6J mice (Janvier, FR), male leptin-deficient ob/ob mice (The Jackson Laboratory, stock no. 00632), and male melanocortin-4 receptor knockout mice (The Jackson Laboratory, stock no. 032518) were used. Except where otherwise stated, mice were group-housed. All mice were kept at normal room temperature (22°C) on a standard 12 h light-dark cycle, with ad libitum access to water and a chow diet (Altromin D30, Brogaarden). Diet-induced obese mice were switched from a chow diet to a high-fat diet at 8 weeks of age (D12331; Research Diets, New Brunswick, NJ, USA) in order to facilitate the development of adiposity. Prior to experiments, mice were acclimatized to housing conditions for a minimum of one week and sham injected with isotonic saline on the 3 days preceding study start. Mice used in the head-twitch study were not sham-injected. Sample sizes were selected on the basis of previous experiments using similar methods. In experiments with lean C57BL/6J mice, mice were block-randomized into groups. Where diet-induced obese C57BL/6J mice, ob/ob mice, or MC4R mice were used, mice were stratified into groups to ensure body weight was matched. All in vivo experiments were carried out in accordance with regulations regarding the care and use of experimental animals, and the Danish Animal Experimentation Inspectorate approved the experimental procedures (2018-15-0201-01457).
Head-twitch response
Head-twitch response was evaluated in 17-week-old male C57BL/6J mice on a chow diet (group-housed). Mice were randomized into treatment groups (n = 4-5) and received a single intraperitoneal injection of 0.3 mg/kg psilocybin, 1 mg/kg psilocybin, 3 mg/kg psilocybin, or isotonic saline. Following injection, each mouse was placed into its own new cage with bedding. Two Logitech C920 HD webcams were mounted onto cages, and the behavior of each mouse was recorded for 20 min. The number of head twitches (defined as 'rapid paroxysmal rotational movement of the head') [23] was counted by two observers blinded to the experimental conditions.
Open field test
Locomotor activity was assessed in an open field arena (w: 50 cm, h: 50 cm) under normal lighting conditions. Fourteen-week-old male C57BL/6J mice (maintained on a chow diet, single housed) received a single intraperitoneal injection of either 3 mg/kg psilocybin or isotonic saline (n = 8 per group). Thirty minutes post-injection, mice were placed in the open field arena, and their locomotor activity was recorded for 20 min with a ceilingmounted Logitech C920 camera. Distance moved and time spent in the center zone was calculated using a computer-based tracking system (Ethovision XT, Netherlands).
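Ethovision computes these measures internally; purely as an illustration, total distance moved and center-zone time can be derived from tracked coordinates roughly as follows (the arena geometry and center-zone definition below are assumptions, as zone definitions vary by setup):

```python
import numpy as np

def open_field_metrics(x, y, t, arena=50.0, center_frac=0.5):
    """Distance moved and center-zone time from tracked coordinates.
    x, y: positions in cm (arrays over frames); t: timestamps in s.
    The center zone is assumed to be a centered square whose side is
    center_frac of the arena side."""
    dist = np.sum(np.hypot(np.diff(x), np.diff(y)))    # total path length, cm
    half = arena * center_frac / 2.0
    cx = cy = arena / 2.0
    in_center = (np.abs(x - cx) < half) & (np.abs(y - cy) < half)
    center_time = np.sum(np.diff(t) * in_center[:-1])  # seconds in center
    return dist, center_time
```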
Voluntary running
Ten-week-old male C57BL/6J mice on a chow diet were single-housed in cages equipped with running wheels (23 cm in diameter, Techniplast I). Running distance was measured by a computer (Sigma Pure 1 Topline 2016, D). After 5 days of familiarization with the wheels, the running distance was measured on days −2 and −1. These data were used to create two groups of mice with similar baseline running distances (n = 8 per group). On day 0, two hours prior to the dark phase, mice were injected intraperitoneally with either psilocybin or isotonic saline, and subsequent running distance was measured 4, 6, 8, 16, and 24 h after injection and again on days 2, 3, 4, and 5. Two samples were removed from the psilocybin group (day 2 and day 5) as the system failed to detect any activity.
Metabolic assessment
To assess the effect of psilocybin on whole-body energy expenditure and substrate utilization, 9-week-old male C57BL/6J mice were single-housed in indirect calorimetry chambers (TSE System, Germany). Mice were habituated to the indirect calorimetry chambers from day −7 to day 0. During these days, body weight was measured on a daily basis in the afternoon, and data on food intake, water intake, locomotor activity, oxygen consumption, and respiratory exchange ratio were automatically collected every 20 min. These data were used to create two groups of mice (n = 8 per group) with similar average body weights and whole-body metabolic profiles. Mice were sham injected on days −3, −2, and −1. The collection of experimental data started on day −1. On day 0, 45 min before the onset of the dark phase, mice in one group were injected intraperitoneally with psilocybin (3 mg/kg) while mice in the other group were injected with isotonic saline. The automatic 20-min interval measurements of oxygen consumption, carbon dioxide release, and food intake continued for 4 days. On day 5, the chow diet was switched to a high-fat diet, and the automatic measurements were continued until the experiment was terminated on day 8. Two animals were excluded from the analysis of water intake due to leaky water bottles.
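The TSE system derives whole-body metabolic measures from the gas exchange data internally; as an illustration of the underlying arithmetic, the respiratory exchange ratio and an energy expenditure estimate via the abbreviated Weir equation (an assumption about the exact formula applied by the system) can be computed as follows:

```python
def rer(vo2, vco2):
    """Respiratory exchange ratio from O2 consumption and CO2 production
    (same units); ~0.7 indicates fat oxidation, ~1.0 carbohydrate."""
    return vco2 / vo2

def energy_expenditure_kcal_per_h(vo2_l_per_h, vco2_l_per_h):
    """Abbreviated Weir equation: EE (kcal/h) from VO2 and VCO2 in L/h."""
    return 3.941 * vo2_l_per_h + 1.106 * vco2_l_per_h
```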
Body temperature
Body temperature was assessed via inserting a probe (BIO-TK8861, Bioselab, France) into the rectum of 8-week-old male C57BL/6J mice (double-housed) on a chow diet. On the day of the experiment, four hours after the onset of the light phase, mice were treated with a single intraperitoneal injection of 3 mg/kg psilocybin or isotonic saline (n = 8 per group). Rectal temperature was measured before injection (0 min) and at 15, 30, 60, 120, 180, 360 min, and 24 h following treatments.
Blood biochemistry
To assess the circulating levels of corticosterone, cholesterol, triglycerides, and insulin, 8-week-old male C57BL/6J mice (double-housed) on a chow diet were randomized to treatment with a single intraperitoneal injection of either 3 mg/kg psilocybin or isotonic saline (n = 8 per group). Blood samples were drawn (from the tail tip) before treatment (0 h) and again 3 and 24 h following treatment. Samples were immediately centrifuged at 3000 rpm at 4°C for 10 min (to obtain plasma) and stored at −20°C for analysis. Blood glucose was measured in a drop of whole blood at each time point using a handheld glucometer (Contour XT, Bayer). Plasma corticosterone was measured at 0, 3, and 24 h using the DetectX Corticosterone assay kit (Arbor Assays, USA). Plasma cholesterol, triglycerides, and insulin were measured only in 24 h samples using Infinity Cholesterol and Infinity Triglycerides kits (Thermo Fisher Scientific) and the Ultrasensitive Mouse Insulin ELISA Kit (Mercodia).
Food intake and body weight studies
Male DIO C57BL/6J mice at 46 weeks of age (maintained on a high-fat diet) were single-housed and divided (based on body weight) into two groups (n = 8 per group). Mice received a single intraperitoneal injection of either 3 mg/kg psilocybin or isotonic saline. In the 12 subsequent days, food intake and body weight were measured daily, before the onset of the dark phase.
A similar study was conducted in 21-week-old leptin-deficient ob/ob mice (single and double housed) that received either a single intraperitoneal dose of 3 mg/kg psilocybin (n = 9) or isotonic saline (n = 7). Food intake and body weight were measured daily, before the onset of the dark phase, for 7 days following injection.
Dietary intervention study
To assess whether psilocybin enhances diet-induced weight loss, a new experimental paradigm was designed. Twenty-four single-housed male DIO C57BL/6J mice at 47 weeks of age were divided into three groups based on body weight (n = 8 per group) as follows: Group 1: Ad libitum access to a chow diet and high-fat diet for the duration of the experiment; these mice were treated at day 0 with a single intraperitoneal injection of isotonic saline. This group served as a control for baseline food intake and body weight following injection stress. One mouse in group 1 was excluded from the analysis due to excessive weight loss. Group 2: The high-fat diet was removed on day 0 of the experiment in order to stimulate 'diet-induced weight loss', and mice were injected with 3 mg/kg psilocybin, to identify whether psilocybin treatment could potentiate the diet-induced weight loss. The high-fat diet was returned seven days after injection. Group 3: The high-fat diet was removed on day 0 of the experiment in order to stimulate 'diet-induced weight loss' as in group 2, but mice were injected with isotonic saline. The high-fat diet was returned 7 days after injection. This group served as a control to determine psilocybin's effect on body weight following the dietary intervention and body-weight regain. Two data points were removed from the analysis due to scale mis-readings at the time of measurement.
Binge eating paradigm
A binge eating paradigm was established according to an existing protocol [24]. Twenty-one-week-old male C57BL/6J mice (single-housed, on a chow diet) were divided into one of two experimental groups termed 'continuous' (n = 20) or 'intermittent' (n = 19) as follows: Group 1, 'Intermittent': Ad libitum access to chow diet and high-fat diet for 48 h, after which the high-fat diet was removed for 5 days (mice had ad libitum access to chow diet). The high-fat diet was returned to the 'intermittent' group for 24 h, during which chow and high-fat diet intake were measured at 2.5 and 24 h. The high-fat diet was then removed for 6 days (during which mice had ad libitum access to chow diet), constituting the first binge cycle. A second binge cycle was initiated immediately after the first. Cycles of short-term exposure to a high-fat diet establish a 'binge-like' phenotype in the 'intermittent' group. Group 2, 'Continuous': Ad libitum access to a chow diet and high-fat diet for the duration of the study (3 weeks).
Following two binge cycles, mice were randomized to groups for pharmacological evaluation of the model. On the day of the experiment, five mice from the 'continuous' group were selected as controls, to establish baseline intake of chow and high-fat diet with ad libitum access. These mice received an intraperitoneal injection of isotonic saline. Eighteen mice from the 'intermittent' group had access to a chow diet only in the 6 days preceding the experiment. On the day of the experiment, the selected mice received either a single intraperitoneal injection of fluoxetine (30 mg/kg), psilocybin (3 mg/kg), or isotonic saline (n = 6 per group). Thirty minutes following treatment, mice in the intermittent group were given 24 h access to a high-fat diet and chow. Food intake was measured in all groups at 2.5 and 24 h after injection.
Sucrose preference test
Fourteen-week-old chow-fed male C57BL/6J mice (single-housed) were acclimated to the presence of two water bottles in their home cage, for 24 h. The water bottles were weighed to obtain a baseline intake. On the test day, 2 h prior to the onset of the dark phase, bottles containing fresh water and a 2% sucrose solution were added to the home cages, and mice were subsequently injected intraperitoneally with psilocybin (3 mg/kg, n = 10) or isotonic saline (n = 10). Mice were free to consume liquid from either bottle for 24 h, after which the bottles were weighed to measure consumption. The study was repeated the next day, with the bottles reversed in position (in the cage) to account for side preference. Sucrose and water intake over two study days were averaged.
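Sucrose preference is conventionally expressed as the percentage of total fluid intake taken from the sucrose bottle; a small sketch of that calculation follows (the exact readout used here is an assumption):

```python
def sucrose_preference(sucrose_g, water_g):
    """Sucrose preference as percent of total fluid intake for one
    test day; days 1 and 2 (bottle sides swapped) are then averaged
    to control for side preference."""
    return 100.0 * sucrose_g / (sucrose_g + water_g)

# e.g., per-mouse average across the two test days:
# pref = (sucrose_preference(s1, w1) + sucrose_preference(s2, w2)) / 2
```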
The sucrose preference test was repeated in the second cohort of chowfed male mice. Mice received daily intraperitoneal injections of psilocybin (0.3 mg/kg) or isotonic saline (n = 8 per group) for 5 days, and food intake and bottle weight were recorded daily.
Bulk RNA sequencing
Ten-week-old chow-fed male C57BL/6J mice (group-housed) were randomized into one of four treatment groups (n = 8 per group) as follows: Group 1: A single intraperitoneal injection of 3 mg/kg psilocybin, followed by tissue harvest 3 h after treatment. Group 2: A single intraperitoneal injection of isotonic saline followed by tissue harvest 3 h after treatment. Group 3: A single intraperitoneal injection of 3 mg/kg psilocybin followed by tissue harvest 4 weeks after treatment. Group 4: A single intraperitoneal injection of isotonic saline followed by tissue harvest 4 weeks after treatment.
On the day of the experiment, mice received a single intraperitoneal injection according to their grouping and were sacrificed (decapitation) according to the above schedule. The prefrontal cortex and whole hypothalamus were immediately dissected out and flash frozen on dry ice. Mice were euthanized in the morning, 4 h after the onset of the light phase, to ensure minimal diurnal variation in gene expression. Dissected brain regions were stored at −80°C until isolation of RNA was performed.
Gene expression analysis (qPCR)
Brown adipose tissue (BAT) was dissected from all mice used in the bulk RNA-seq study and stored at −80°C until analyzed. For gene expression analysis, total BAT RNA was isolated according to the manufacturer's protocol using the RNeasy Mini kit (Qiagen, #74106). After quantification of RNA with a NanoDrop 2000 (Thermo Fisher), cDNA was synthesized by mixing RNA with FS buffer, DTT (Thermo Fisher), and Random Primers (Sigma-Aldrich) and incubating for 3 min at 70°C. SuperScript III (Thermo Fisher), RNaseOUT, and dNTPs were added before samples were placed into a thermocycler for 5 min at 25°C, 60 min at 50°C and 15 min at 70°C. cDNA was diluted 1:100 and stored at −20°C until further processing. qPCR was performed using SYBR green (Thermo Fisher), and gene expression was normalized to the housekeeping gene 36b4 according to the delta-delta Ct method.
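For reference, the delta-delta Ct calculation normalized to 36b4 can be sketched as follows; the variable names and array structure are hypothetical:

```python
import numpy as np

def delta_delta_ct(ct_target, ct_hk, ct_target_ctrl, ct_hk_ctrl):
    """Relative expression by the delta-delta Ct method, normalized to
    the housekeeping gene (36b4) and to the mean of the control group.
    Returns fold change = 2^(-ddCt) per sample."""
    d_ct = np.asarray(ct_target) - np.asarray(ct_hk)        # normalize to 36b4
    d_ct_ctrl = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_hk_ctrl))
    dd_ct = d_ct - d_ct_ctrl                                # relative to controls
    return 2.0 ** (-dd_ct)
```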
Transcriptomic analysis by RNA sequencing
Total RNA was isolated using an RNeasy mini kit (Qiagen, #74106) according to the manufacturer's protocol. Messenger RNA sequencing libraries were prepared using the Illumina TruSeq Stranded mRNA protocol (Illumina). Poly-A containing mRNAs were purified by poly-T attached magnetic beads, fragmented, and cDNA was synthesized using SuperScript III Reverse Transcriptase (Thermo Fisher Scientific). cDNA was adenylated to prime for adapter ligation and after a clean-up using AMPure beads (Beckman coulter), DNA fragments were amplified using PCR followed by a final clean-up. Libraries were quality-controlled using a Fragment Analyzer instrument (Agilent Technologies) and subjected to 52-bp paired-end sequencing on a NovaSeq 6000 (Illumina). A total of 2.7 billion reads were generated.
Bioinformatic analyses
Fastq files were generated using bcl2fastq2 v. 2.20.0, and reads were aligned to the GRCm38 primary assembly using STAR v. 2.7.2b [25] against a reference built using the GENCODE vM25 gene model [26]. Aligned reads were counted against the same gene model using featureCounts v. 1.6.2 [27], counting only reads where both ends mapped to the same strand. The separation between tissues and the homogeneity within tissues were visually inspected using MDS plots. Two samples were excluded for having almost no reads, and one sample was excluded for being heavily enriched in genes related to wound healing and immune system processes. Visual inspection revealed that extraction pool separated samples. Genes with sufficient counts were selected using the filterByExpr function from the edgeR v. 3.30.2 [28] package and were transformed using the voomWithQualityWeights function [29]. Differential expression was calculated using limma v. 3.44.0 using a model of the form ~group + extraction_pool, where group encoded tissue, treatment, and timepoint and extraction_pool encoded the extraction pool. Animal-specific effects were modeled using the duplicateCorrelation function, which is part of limma [30]. Contrasts were specified as described in the limma manual. Gene ontology and Reactome enrichments were found using the camera function [31]. All plots were generated using ggplot2 [32].
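filterByExpr is an R/Bioconductor function; purely to illustrate the filtering idea (this is not the actual edgeR implementation, which additionally adapts the CPM cutoff to library sizes), a rough Python re-implementation might look like this:

```python
import numpy as np

def filter_by_expression(counts, groups, min_cpm=1.0):
    """Keep genes whose counts-per-million exceed a threshold in at
    least as many samples as the smallest experimental group.
    counts: (genes x samples) integer matrix; groups: sample labels."""
    lib_sizes = counts.sum(axis=0)                       # reads per sample
    cpm = counts / lib_sizes * 1e6
    min_group = min(np.sum(np.asarray(groups) == g) for g in set(groups))
    keep = (cpm >= min_cpm).sum(axis=1) >= min_group
    return counts[keep]
```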
Statistical analyses
Statistical analyses were performed in GraphPad Prism version 9. All experiments were performed once. Statistics for each experiment can be found in respective figure legends. All data are presented as mean ± SEM and all P values ≤ 0.05 were considered statistically significant. *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001, ****P ≤ 0.0001.
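As a side note, the mean ± SEM summaries reported throughout can be reproduced with a couple of lines; the replicate values below are hypothetical.

```python
import numpy as np

x = np.array([2.1, 2.4, 1.9, 2.6, 2.2])   # hypothetical replicate values
sem = x.std(ddof=1) / np.sqrt(x.size)      # standard error of the mean
print(f"{x.mean():.2f} ± {sem:.2f} (mean ± SEM, n={x.size})")
```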
RESULTS
Establishing a psychoactive dose of psilocybin for metabolic studies
To test the bioactivity of psilocybin and determine an appropriate dose for assessing the metabolic effects of psilocybin in mice, we measured the head-twitch response following a single administration of 0.3 mg/kg psilocybin, 1 mg/kg psilocybin, 3 mg/kg psilocybin, or vehicle (isotonic saline). The head-twitch response is a rapid rotational head movement [23], stereotypical in rodents following activation of the serotonin 5-HT2AR with psychedelics (Fig. 1A). Consistent with existing literature, 1 and 3 mg/kg induced a greater number of head twitches than the other doses (Fig. 1B, C) [33]. To further confirm that the 3 mg/kg dose was sufficient for inducing a 'psychoactive' effect in mice, we performed an open-field test. Mice were injected intraperitoneally with 3 mg/kg psilocybin or saline, and their behavior was recorded for 20 min after treatment (Fig. 1D). Psilocybin-treated mice exhibited significantly less movement than mice treated with saline (Fig. 1E) and spent significantly less time in the center zone (Fig. 1F). The increase in thigmotaxis (time spent close to the walls) following psilocybin treatment could be interpreted as anxiogenic [34], but in this instance seems to relate to reduced total locomotion. Based upon these data and existing literature (which supports the use of 1-3 mg/kg in rodents [35]), we selected 3 mg/kg as the dose for subsequent metabolic in vivo studies. Selection of this dose was supported by dose-titration pilot studies in DIO C57BL/6J mice, showing that a single subcutaneous injection of psilocybin at 0.3, 1, and 3 mg/kg had a marginal effect on subchronic weight loss and no effect on food intake (Fig. S1A, B).
Psilocybin does not affect energy metabolism in mice
In a study assessing alcohol consumption following psychedelic use, alongside recidivism, participants reported an increase in exercise [36]. To examine the effect of psilocybin on voluntary physical activity, male C57BL/6J mice (maintained on a chow diet) were adapted to running wheel access and injected with a single dose of psilocybin (3 mg/kg) or vehicle (Fig. 1G). Consistent with existing literature [37], mice did not exhibit differences in acute or chronic wheel running following administration of psilocybin (Fig. 1H, I). Human data suggest transient effects of psilocybin on blood pressure and on the hypothalamic-pituitary-thyroid (HPT) axis [38], indicating that psilocybin might also regulate metabolic rate. Whether these changes coincide with increased energy expenditure is unknown. To study psilocybin's effects on whole-body energy metabolism, we employed indirect calorimetry metabolic cages (Fig. 1J). In the 5 days following a single injection of psilocybin, energy expenditure and substrate metabolism were similar between groups (Fig. 1K, L). The mice injected with psilocybin displayed slightly increased food intake and water intake whilst maintained on a chow diet (Fig. 1M, N). The increase in water intake was also observed following a 1 mg/kg dose of psilocybin in DIO mice in metabolic cages (Fig. S1E). Serotonin 5-HT2A receptors are implicated in thermoregulation, and the 5-HT2AR agonist lysergic acid diethylamide (LSD) causes hyperthermia in humans [39]. Analysis of brown adipose tissue in mice injected with a single dose of psilocybin (3 mg/kg) showed no differences in 'thermogenic' gene expression compared to control-treated counterparts (Supplementary Fig. 1F, G). Data on the physiological effects of psilocybin in humans are limited, but in one study, psilocybin administration did not have an effect on body temperature [38]. Similarly, we observed no effect on body temperature at the selected dose (Fig. 1O). Blood glucose and plasma corticosterone were recorded at baseline, 3 and 24 h (in two separate cohorts of mice). To date, no effect of psilocybin on blood glucose has been observed in humans, but the 5-HT2AR agonist DOI ((2,5-dimethoxy-4-iodophenyl)-2-aminopropane) restores glucose homeostasis in apolipoprotein E (ApoE) knockout mice [40,41]. Here, we demonstrate no significant effect of psilocybin on blood glucose or plasma corticosterone (Fig. 1P, Q). In contrast, plasma cortisol is increased following the administration of both psilocybin and LSD in humans [38,42]. We report no effect of psilocybin on plasma levels of cholesterol, triglycerides, or insulin (Fig. 1R-T).
Psilocybin does not affect body weight or food intake in mouse models of obesity
In humans, drugs that increase central serotonin have a marked anorectic effect [43], whilst BMI is correlated with cerebral 5-HT2AR expression [13]. Mice treated with the selective 5-HT2AR agonists DOI and TCB-2 ((4-bromo-3,6-dimethoxybenzocyclobuten-1-yl)methylamine hydrobromide) exhibit hypophagia [44,45]. Although the 5-HT2C receptor is more widely studied in the context of energy balance, there are indications that the 5-HT2AR plays an underappreciated role, especially as it has been shown to be important for mediating the weight-lowering effect of GLP-1 [46]. In the present study, we wanted to examine whether psilocybin, as an agonist of 5-HT2ARs, has observable effects on body weight or food intake in a mouse model of diet-induced obesity (Fig. 2A). Following peripheral injection of 3 mg/kg psilocybin, no difference was observed in body weight or food intake in the 12 days following injection (Fig. 2B, C). On the final day of the experiment, an intraperitoneal glucose tolerance test was performed to investigate whether psilocybin has long-term effects on glucose homeostasis, but there was no difference compared to vehicle-treated mice (Fig. S2A, B).
Similarly, we wanted to assess the effect of psilocybin in leptin-deficient ob/ob mice, a commonly used genetic model of obesity (Fig. 2D) [47]. Ob/ob mice are reported to have reduced 5-HT transporter mRNA in the dorsal raphe nucleus, which is associated with a 'depressive' phenotype alongside impaired metabolism [48]. Also, administering dexfenfluramine (which increases levels of extracellular serotonin) to ob/ob mice attenuates weight gain on a moderate-fat diet [49]. In contrast, we demonstrate that a single dose of 3 mg/kg psilocybin does not influence the body weight or food intake of ob/ob mice on a high-fat diet (Fig. 2E, F).
Fig. 2 A Schematic for single injection study in diet-induced obese (DIO) mice (n = 8 per group). B Body weight in the 12-day period following injection of psilocybin (3 mg/kg) or vehicle. C Cumulative food intake in the 12 days following injection of psilocybin or vehicle. D Schematic for single injection study in ob/ob mice (n = 7-9 per group). E Body weight. F Cumulative food intake. G Schematic for microdosing study in DIO mice. Mice were injected daily with psilocybin (0.3 mg/kg; n = 7) or vehicle (n = 6). H Body weight following daily injection of psilocybin (0.3 mg/kg) or vehicle for 7 days (n = 6/7 per group). I Food intake. J Schematic for microdosing study in MC4R KO mice (n = 8 per group). K Body weight. L Food intake. Data are presented as mean ± SEM (analyzed by RM two-way ANOVA with Bonferroni's test).
In addition to the single-injection protocol, we investigated the effect of a microdosing-like strategy on body weight (Fig. 2G). Psychedelic microdosing has been popularized for its benefits on creativity, problem-solving ability, and energy levels [50,51]. In a separate cohort of DIO mice, intraperitoneal injections of psilocybin (0.3 mg/kg) or vehicle were administered daily. No significant effect was observed on either body weight or food intake using this approach (Fig. 2H, I). Previous work identified that serotonin-mediated hypophagia requires downstream activation of melanocortin 4 receptors (MC4R) [12]. A microdosing-like approach was also applied to an MC4R KO mouse model (Fig. 2J). In agreement with the studies in DIO mice and ob/ob mice, systemic administration of psilocybin did not alter food intake or body weight in MC4R KO mice (Fig. 2K, L).
Psilocybin has no effect on body weight or food intake in combination with dietary or pharmacological interventions
Studies that assess the potential of psychedelic drugs as therapies for depression and addiction increasingly use a combination approach, e.g., psilocybin treatment in addition to cognitive behavioral therapy. Psilocybin is thought to enhance 'cognitive flexibility', facilitating revision of long-held patterns of behavior [52]. In such a paradigm, psilocybin acts to 'prime' the brain, increasing the efficacy of the secondary intervention. In the present study, we wanted to investigate whether psilocybin could provide the same neural flexibility in response to a dietary intervention (switch from HFD to chow only) delivered on the same day as a dose of psilocybin (Fig. 3A). Thus, high-fat diet-fed obese mice were exposed to a low-fat diet-induced weight loss paradigm. After weight loss was achieved, the mice were returned to the high-fat diet in order to examine whether psilocybin has the ability to facilitate enduring changes following the dietary intervention. In this experiment, mice that received psilocybin did not maintain the diet-induced weight loss more successfully than vehicle-treated mice (Fig. 3B, C). We performed a similar combination intervention study using the weight-lowering agent liraglutide (a GLP-1 receptor agonist) alongside psilocybin (Fig. 3D). Some of the weight-lowering effects of GLP-1 receptor agonists have been attributed to interaction with 5-HT2ARs, but this idea has also been contested [46,53]. Diet-induced obese mice received a single injection of psilocybin (or vehicle), alongside a daily injection of liraglutide (or vehicle). Mice dosed with liraglutide lost 9% body weight following 7 days of treatment, but this effect was not augmented by co-administration of psilocybin (Fig. 3E, F).
Psilocybin does not alter binge-eating behavior, but reduces sucrose preference in lean mice
Early research into psychedelics as medicine showed promise for the use of psilocybin as a treatment for tobacco and alcohol dependence [54,55]. A repeat study in a mouse model of alcohol relapse failed to replicate this benefit, but recent work identified that psilocybin targets a neural circuit involved in craving, highlighting its potential as a treatment of addiction [56,57]. No studies to date have evaluated the effect of psilocybin on compulsive feeding behavior.
To assess whether psilocybin could have an effect on binge-eating behavior, we used a previously established experimental paradigm to stimulate binge-like eating behavior in a cohort of lean mice, without causing excessive stress [58] (Fig. 3G). Once the binge-eating behavior was established (Fig. S3A, B), we tested the pharmacological potential for psilocybin to reverse the phenotype (Fig. 3H). Fluoxetine hydrochloride was used as a positive control, as it effectively ameliorates binge-like eating behavior in mice [59] (Fig. 3I). After successfully establishing the binge-eating phenotype in two consecutive cycles (Fig. 3A-C), mice were dosed with either fluoxetine, psilocybin or saline during a third binge cycle. Administration of fluoxetine hydrochloride did ameliorate the binge-eating phenotype, but a 3 mg/kg dose of psilocybin failed to do so (Fig. 3J).
To determine whether psilocybin could alter sweet taste preference, a sucrose preference test was performed. Psilocybin did lower sucrose preference following a single injection; mice treated with psilocybin drank significantly less sucrose without reducing their water intake (Fig. 3K). A reduction in sucrose preference could be indicative of anhedonia [60], or supportive of the notion that psilocybin might lower the hedonic value of rewarding stimuli [57]. If the latter is true, psilocybin may be of benefit for treating compulsive reward-seeking behavior or excessive consumption of sweet foods. Repeated administration of 0.3 mg/kg psilocybin did not have an effect on sucrose preference in lean mice (Fig. S3C-F), which suggests that a high-dose regimen would be necessary to achieve a reduction in food-motivated reward seeking.
Psilocybin causes acute transcriptional changes in the prefrontal cortex, but not in the hypothalamus
The widespread transcriptional effects of psilocybin on the brain have not been comprehensively studied. In rats, a single dose of psilocybin rapidly induced gene expression in the PFC compared to the hippocampus [35], with an increase in the expression of genes associated with neuronal plasticity (Psd95) and neuron activation (c-Fos). To our knowledge, the transcriptional changes in the hypothalamus following administration of psilocybin have not yet been characterized. To assess the transcriptional response to psilocybin (acute and long-term effects), chow-fed mice were randomized into two groups (psilocybin-treated or vehicle). Given that a single dose of psilocybin is reported to cause structural changes to dendritic spines in the cortex that persist even after a month [61], we dissected the hypothalamus and PFC at both 3 h and 4 weeks following a single administration of psilocybin (Fig. 4A). Significant differential expression was observed in the PFC 3 h following treatment, with 400 upregulated and 1155 downregulated genes (Fig. 4B). Among the differentially expressed genes were neuroplastin (Nptn), brain-derived neurotrophic factor (BDNF), and neuronal growth regulator 1 (Negr1), genes associated with cognition, neuronal growth, plasticity, and obesity (Fig. 4C) [62][63][64]. Reactome pathways upregulated in the PFC include gluconeogenesis and glycolysis (Fig. 4D). Given the expression of 5-HT2AR in the hypothalamus, we were surprised to find no effect of psilocybin on the hypothalamic transcriptome at either 3 h or 4 weeks post-administration of psilocybin (Fig. 4E-G) [65].
DISCUSSION
Fig. 3 Psilocybin displays no benefits in combinatorial therapeutic paradigms, and has no effect on binge-eating-like behavior, but a single high dose of psilocybin reduces sucrose preference in mice. A Schematic overview of dietary intervention study (n = 8 per group). B Body weight of mice in the 14-day period following injection. C Caloric intake following a single injection of psilocybin (3 mg/kg) or vehicle alongside dietary intervention. D Schematic for therapeutic intervention study (n = 8 per group). Diet-induced obese mice were injected with psilocybin (3 mg/kg) and liraglutide (0.09 mg/kg), liraglutide (0.09 mg/kg) and vehicle, or vehicle only. E Body weight. F Cumulative food intake. G Schematic showing set-up of binge-eating-like behavior model. First binge cycle: mice in the intermittent group had 48-h free-choice access to standard chow and high-fat diet. Successive binge cycles (binge cycles 2 and 3): mice in the intermittent group had free-choice access to standard chow for 24 h, following 6 days with ad libitum access to a chow diet. H Schematic showing pharmacological intervention in the binge-eating model. Mice were cycled through 3 binge cycles, and pharmacological intervention occurred in the 3rd cycle. Following pre-treatment with psilocybin (3 mg/kg), fluoxetine (30 mg/kg), or vehicle, mice in the intermittent access group received a high-fat diet for 24 h. I Establishment of binge-eating-like behavior in a cohort of mice. J Food intake 2.5 h following access to HFD was recorded in each group. K Sucrose preference test after treatment with psilocybin (3 mg/kg) or vehicle (n = 10 per group). L Sucrose consumption after daily administration of psilocybin or vehicle. Data were analyzed by mixed-effects analysis (missing values due to erroneous food scale measures) with Bonferroni's multiple comparison test (I, J), one-way ANOVA with Bonferroni's multiple comparison test (E, K), and unpaired t-test (L). Data are presented as mean ± SEM. *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001.
Obesity is a relatively treatment-resistant condition; many patients do not reap the benefits of pharmacological or lifestyle-based interventions [66]. Following dietary, therapeutic, and even surgical interventions, homeostatic mechanisms defend the elevated body weight, hindering weight loss efforts [67]. Recent advancements in the development of anti-obesity medications have given rise to new therapies that safely deliver sizeable weight loss, but require a chronic treatment regime for weight loss maintenance [68,69]. Thus, the lofty vision of therapeutically lowering body weight in a manner that persists remains unmet.
The psychedelic renaissance has sparked new hope for the treatment of many behavioral and neuropsychiatric conditions that have, similarly to the management of obesity, proven extremely treatment-resistant [7]. For the treatment of addiction [70] and depression [10], a combination of psilocybin and psychotherapy has been shown to produce clinical remission that, in some cases, persists for years. Given the neurobiological and behavioral similarities between obesity and addiction, we conducted the first preclinical evaluation of psilocybin for obesity and binge-like eating behavior in mouse models of genetic and diet-induced obesity. We demonstrate that neither a single 'high' dose of psilocybin (3 mg/kg) nor a daily microdosing approach induces metabolic or behavioral changes in mouse models of obesity and binge eating. During our studies, we did not observe any adverse effects of psilocybin, suggesting that the doses and treatments applied here are safe and well tolerated.
Psilocybin's unique ability to enhance cognitive flexibility [52] has made it a relevant candidate for the treatment of depression and of disorders with compulsive symptomatology [57]. Deficits in cognitive flexibility are associated with alcohol use disorder [71], obesity [72], and eating disorders [73]; hence, psilocybin could prove to be beneficial for the management of these conditions. In the present study, we found no evidence that psilocybin could provide a neural background that supports weight loss, maintenance of weight loss, or reductions in binge eating. We did observe a reduction in sucrose preference following administration of psilocybin, which suggests that it could be useful for treating compulsive behavior or for dampening excessive consumption of sugar and sweet foods in susceptible individuals.
The question remains: are rodent models applicable for translational evaluation of 5-HT2AR-targeting psychedelics as tools for weight management and eating disorders? A similar question was posed in a study assessing the effect of psilocybin on an animal model of alcohol use disorder, where no effect was found [56]. A later study by the same authors combined animal models with genetic manipulation of prefrontal cortex glutamate receptors and elucidated a deeper understanding of psilocybin's effect [57]. Based on the findings of the present study, it may be premature to discard the use of psilocybin for the treatment of obesity and disordered eating in humans. Psilocybin may be particularly efficacious in subpopulations of obese patients with a disease etiology rooted in disordered eating. The first clinical trials assessing the use of psilocybin for the treatment of anorexia nervosa are starting, based on this very premise [74] (clinical trial numbers: NCT04661514, NCT04052568, and NCT04505189).
Given that psilocybin (via psilocin) is a 5-HT2AR agonist and a weak agonist of the 5-HT2CR, it was surprising that psilocybin had no impact on food intake or body weight in the wide variety of conditions tested, including lean mice, diet-induced obese mice, ob/ob mice, and melanocortin 4 receptor knockout mice. Obesity-susceptible mice have increased 5-HT2A/2C receptor density in the ventromedial hypothalamus (VMH) [75], and serotonin release from the VMH is reduced in animal models of obesity. Perhaps psilocybin's effect on the homeostatic regulation of food intake is limited, as expression of the 5-HT2AR in the hypothalamus is relatively low [76]. We observed no short- or long-term changes in transcriptional programs in the hypothalamus 3 h or 4 weeks after peripheral administration of psilocybin. In contrast, substantial transcriptional changes were observed in the prefrontal cortex 3 h after administration of psilocybin. These acute changes included alterations in canonical gene programs related to cellular substrate metabolism, signaling, and neuroplasticity. A relationship between obesity and the dorsolateral prefrontal cortex has been established; lower activation is observed in obese patients compared to lean individuals [77], but this has never been targeted therapeutically. Psilocybin presents a novel approach for altering executive function to reduce impulsive food intake behavior. However, the observed transcriptional changes did not endure and, as such, cannot explain the long-lasting benefits of a single administration of psilocybin.
In summary, our preclinical evaluation of psilocybin for the treatment of obesity and binge-eating disorders demonstrates that psilocybin does not lower body weight and food intake in obese mice, but a single high dose of psilocybin does lower the preference for sucrose in lean mice. This effect may be linked to acute neuroplastic changes in the prefrontal cortex and suggests that psilocybin could have therapeutic value for lowering the hedonic value of certain hyperpalatable foods. Despite the decreased sucrose preference in lean mice, neither a single high dose nor a microdosing approach was sufficient to reduce body weight and food intake, nor was a single high dose able to attenuate binge-eating behavior. Given the uncertainties about psilocybin's therapeutic benefit in this context, further studies are required. In order to fully evaluate psilocybin, or similar 5-HT2AR-targeting psychedelics, for perturbed eating behavior and weight management, a combinatorial approach comprising psychedelic therapy, psychotherapeutic support, and lifestyle intervention could be considered.
DATA AVAILABILITY
Data generated from the bulk RNA-seq study are available in GEO under accession number GSE209859. | 2022-08-11T13:47:17.297Z | 2022-08-11T00:00:00.000 | {
"year": 2022,
"sha1": "5e661ae0fb85412148f86b08d5163d42491e9422",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "5e661ae0fb85412148f86b08d5163d42491e9422",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": []
} |
235828260 | pes2o/s2orc | v3-fos-license | Information system for diagnosing Neonatal Jaundice using rule-based algorithm
Neonatal jaundice often occurs in newborns and is characterized by yellow discoloration of the sclera and the baby's skin due to high levels of bilirubin in the blood. The occurrence of jaundice needs to be identified before it develops into hyperbilirubinemia, which can become toxic and cause kernicterus. Manual (visual) detection of jaundice is still often performed by parents and health workers, so the diagnostic results obtained are less accurate. This study proposes an innovative use of an information system for detecting jaundice, built using the SDLC (Systems Development Life Cycle) method with a waterfall model. Data were collected in a quasi-experiment using non-probability sampling with consecutive sampling of 48 newborn respondents. The results of this study indicate that the information system can detect jaundice 2.1 minutes faster, with an accuracy rate of 91.7%, can provide appropriate solutions, and achieves an effectiveness of use of 90.5%, so it can serve as an innovation to help overcome jaundice problems.
Introduction
In 2017, globally, the neonatal mortality rate in the first 28 days of life was estimated at 18 per 1,000 live births, with the highest rates occurring in Sub-Saharan Africa (27 deaths per 1,000 live births), followed by South Asia (26 deaths per 1,000 live births) [1]. The main causes of neonatal mortality, accounting for as much as 93% of deaths, were asphyxia, infection, and prematurity [2]. About 80% of preterm infants have jaundice problems in the first week of life, while about 60% of term babies experience jaundice [3]. Jaundice is a problem associated with the infant mortality rate, with as much as 24% of neonatal deaths caused by kernicterus [4].
The problem of jaundice can be addressed by using an information technology system that can detect, diagnose, treat, and educate effectively, easily, and quickly [5]. Advances in information technology play a very important role in the field of health services, including telemedicine, biological signal processing (medical image processing), health information security, and health information systems [6]. The use of information systems in health services can shorten the care process, improve the quality of service, and maximize the efficiency of time, medical devices, and workforce in preventive and curative care [7]. An information system aims to improve the ability to collect, store, and analyze health data accurately, intervene effectively, provide services efficiently, increase data accuracy, learn about trends, and increase accountability [8]. Improving the quality of health services is now being pursued by developing and developed countries using information systems to address emergencies in obstetrics [9]. Applying information systems to improve the effectiveness of health services enables data to be processed properly in public health management [10].
Overcoming problems using information systems is needed as a breakthrough that can motivate, provide convenience, connect, teach, foster understanding, and empower individuals [11]. The success of an information system in patient care consists of six dimensions: system quality, usage, information quality, user satisfaction, organizational impact, and individual impact, while the quality of an information system can be assessed from response time, ease of access, ease of use, system integration, system accuracy, and flexibility [12].
Building a rule-based system is needed to conduct a specific assessment of all possibilities that occur in the field [13]. Automatic system development can support decision-making and analysis [14]. The use of a system as a non-invasive method based on image processing can measure bilirubin levels in a person's body by looking at the degree of jaundice in the sclera [15]. A non-invasive system is more appropriate for the assessment of jaundice because invasive methods can cause pain and a risk of infection [16]. The increasingly rapid development of science and technology has made a very large contribution to the health sector through information systems. The use of information systems can help diagnose the incidence of jaundice in newborns. The occurrence of jaundice must be detected and diagnosed quickly so that medical treatment is not delayed. A delay in taking action can adversely affect the health of the baby. This study discusses an information system to diagnose neonatal jaundice using a rule-based algorithm.
Methods
Data for this study were collected in a quasi-experiment using non-probability sampling with consecutive sampling of 48 newborns aged 0-15 days, with two measurements: manual (visual) and using the information system. The information system framework built in this study is shown in Figure 1.
Figure 1. Information systems framework
The design of the information system for the detection of neonatal jaundice used the System Development Life Cycle (SDLC) model with the waterfall method, which consists of several stages, as shown in Figure 2 [17]. The stages of building the information system include:
1. Analysis: identifying user needs in developing the information system.
2. Design: creating a framework that matches the needs of users in the field, to facilitate building the system.
3. Implementation: building the system according to the design that has been made.
4. Testing: determining whether the system that has been built meets expectations.
5. Evaluation: after field testing, evaluating the system to find out whether it contains any errors.
The application-based and website-based information systems used in jaundice detection involve several steps to produce a diagnosis and provide solutions. The steps for detecting jaundice are shown in Figure 3.
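The paper does not publish its actual rule base, so the following is only a hedged illustration of what a rule-based inference step of this kind might look like. The zones and bilirubin estimates are the widely used Kramer approximations from the clinical literature, and the thresholds and recommendation messages are hypothetical, not the system's published rules.

```python
# Hypothetical rule-based inference step for visual jaundice screening.
# Kramer zones map the cephalocaudal extent of yellowing to approximate
# serum bilirubin levels (textbook estimates, in mg/dL); the referral
# thresholds below are illustrative only.
KRAMER_ZONES = {
    1: ("face and neck", 6.0),
    2: ("upper trunk", 9.0),
    3: ("lower trunk and thighs", 12.0),
    4: ("arms and lower legs", 15.0),
    5: ("palms and soles", 18.0),
}

def diagnose(zone: int, age_days: int) -> str:
    """Return a screening-level recommendation from observed jaundice extent."""
    region, est_bilirubin = KRAMER_ZONES[zone]
    if age_days < 1 or zone >= 4:
        return f"Zone {zone} ({region}, ~{est_bilirubin} mg/dL): urgent referral"
    if zone >= 2:
        return f"Zone {zone} ({region}, ~{est_bilirubin} mg/dL): recheck within 24 h"
    return f"Zone {zone} ({region}, ~{est_bilirubin} mg/dL): routine follow-up"

print(diagnose(zone=3, age_days=4))
```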
Results and discussion
The results of this study were validated by an expert, namely a midwife, to assess the speed, accuracy, and effectiveness of using the information system. The speed of early detection in diagnosing neonatal jaundice was compared between the manual (visual) approach and the information system; the difference in mean time is shown in Table 1. The average time to determine a diagnosis of jaundice was 5.7 minutes using the information system versus 7.8 minutes manually, so it can be concluded that the information system is 2.1 minutes faster in determining the diagnosis of jaundice than visual assessment. Medical image analysis can achieve high diagnostic speed with expert-level accuracy, which can influence medical practice; by applying natural language processing to read the growing scientific literature and to compile electronic medical records, a machine that learns from medical data can prevent clinical errors due to human cognitive bias and can thereby positively affect patient care [18]. Artificial intelligence applications in biomedicine can efficiently use large data sets and provide fast access to data to solve health-related problems [19]. Prompt treatment or timely prevention of jaundice can save costs by reducing hospitalization of newborns, so evaluating the incidence of neonatal jaundice in jaundice services should be considered a fundamental policy [20]. Web-based computing is fast, has high capacity, enables the dissemination of knowledge without the limitations of time and space, and can be accurate [21]. In addition, the use of the system can save time, cost, and energy for doctors and patients [22]. The diagnoses made in jaundice detection are shown in Figure 5. The jaundice detection assessment was carried out twice, manually and using the information system, and the diagnostic results obtained in this study were validated by the expert, namely the midwife. The diagnosis was evaluated by comparing the manual method with the expert, the information system with the expert, and the manual method with the information system, tested statistically to see whether there were differences. The results of the difference tests for the diagnosis of jaundice are shown in Table 2.

Table 2. Results of the difference tests of diagnosis in the detection of jaundice

Variable                                                        p value
Manual diagnosis vs. expert (midwife)                           0.002
Diagnosis using the information system vs. expert (midwife)     1.000
Manual diagnosis vs. information system                         0.002

The manual diagnosis differed from the expert (midwife) diagnosis (p value 0.002), whereas diagnosis using the information system did not differ from the expert diagnosis (p value 1.000); the manual diagnosis also differed from the information system (p value 0.002). It can therefore be concluded that the information system is better for establishing a diagnosis in the detection of jaundice, because there is no significant difference between jaundice detection using the information system and the expert, whereas jaundice detection done manually differs from the expert.
Jaundice detection was performed manually and using the information system, and the results were validated by the expert to assess the accuracy of the diagnosis in newborns. The level of accuracy is shown in Table 3. Based on the results, the accuracy of the manual method was 72.9% and that of the information system was 91.7% in establishing the diagnosis; it can be concluded that early detection of neonatal jaundice is more accurate using the information system, which can therefore be used as a monitoring tool by parents at home. An accurate diagnosis is very important for treatment, so clinical decision support systems such as this application have seen improvement in recent years [23]. Image processing in diagnostic methods has a high level of accuracy and is widely used to assist experts in diagnosing diseases and choosing the right treatment [24].
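As a small illustration, the accuracy figures above amount to agreement with the expert reference; a toy computation follows, with hypothetical labels standing in for the study's actual diagnoses.

```python
# Hypothetical diagnoses compared against the expert (midwife) reference;
# accuracy is simply the proportion of matching labels.
expert = ["jaundice", "jaundice", "normal", "jaundice", "normal", "normal"]
system = ["jaundice", "jaundice", "normal", "normal",   "normal", "normal"]

accuracy = sum(e == s for e, s in zip(expert, system)) / len(expert)
print(f"accuracy = {accuracy:.1%}")  # 83.3% for this toy data
```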
Solutions to the problem of neonatal jaundice can be provided using the information system, which supports decision-making to prevent adverse effects that could harm the baby. The provision of solutions using the information system is shown in Figure 6.
Figure 6. Administration of jaundice solutions
Accuracy in providing solutions to the problem of jaundice is very important because if the solution is not given the right one, the action taken will be wrong which can endanger the baby's health condition. The accuracy of providing solutions is in accordance with the diagnostic results obtained in the detection of jaundice. Providing the right solution can make it easier to treat jaundice problems.
Measurement of the effectiveness of the information systems aims to see the extent to which the application can be accepted among the public and can be effective in early detection of neonatal jaundice in newborns. The effectiveness of the information systems is shown in table 4. The results of the effectiveness of the information systems carried out by parents using the TAM (Technology Acceptance Model) score based on the aspects of uses, convenience, attitude, behavior and actual aspets are 90.5% which states that the information system is very effective in early detection of jaundice neonatal. The Technology Acceptance Model (TAM) has been widely used to explain user behavior and assist in understanding an information system such as the convenience factor and perceived usefulness factors in computer use are the main factors based on the technology acceptance model [25]. Technology Acceptance Model in each individual is influenced directly or indirectly by behavioral intentions, attitudes, perceptions of the ease and usefulness of the technology system [26]. A person's behavioral intention to use technology and interest in behavior can be seen from the predictable level of | 2021-07-15T20:07:05.132Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "41ed3aee30e9f0f5fa5ddb76904b822a7dee2fa2",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/1943/1/012037/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "41ed3aee30e9f0f5fa5ddb76904b822a7dee2fa2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Physics"
]
} |
238742275 | pes2o/s2orc | v3-fos-license | The Impact of SARS-CoV-2 Infection on Fertility and Female and Male Reproductive Systems
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection remains a huge challenge for contemporary healthcare systems. Apart from the widely reported acute respiratory distress syndrome (ARDS), the virus affects many other systems, inducing a vast range of symptoms: gastrointestinal, neurological, dermatological, cardiovascular, and many more. It has also been hypothesized that the virus might affect the female and male reproductive systems, and SARS-CoV-2 infection could play a role in potential disturbances to human fertility. In this article, we aimed to review the latest literature regarding the potential effects of SARS-CoV-2 infection on the female and male reproductive systems as well as fertility in general.
Introduction
At the end of December 2019, a new virus, officially recognized as the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), emerged. In March 2020, the World Health Organization (WHO) announced a global pandemic related to the appearance of SARS-CoV-2. The SARS-CoV-2 virus mainly affects the respiratory system, causing COVID-19 disease [1]. It causes cough, fever, difficulties in breathing such as shortness of breath, and severe conditions such as pneumonia [1,2]. It has been reported that the death rate from COVID-19 in various countries ranges from 1 to 4%, with an increased risk in the elderly and in patients with co-existing diseases. SARS-CoV-2 primarily spreads via the airborne route [3]; however, other transmission routes have been hypothesized as well. The presence of SARS-CoV-2 in blood samples was confirmed by positive PCR test results [4][5][6]. The virus affects target host cells by binding to the ACE2 (angiotensin-converting enzyme 2) molecule and modifies the expression of this receptor in host cells [7]. ACE2 is a member of the angiotensin-converting enzyme (ACE) family and converts angiotensin I into angiotensin-(1-9) as well as angiotensin II into angiotensin-(1-7) [8]. ACE2 receptors take part in physiological functions and are expressed in numerous organs, including the ovaries, uterus, vagina, and placenta [9]. Therefore, SARS-CoV-2 may have a potential influence on female reproductive organs [10]. Current data on the clinical features of SARS-CoV-2 also show that co-expression of ACE2 and transmembrane serine protease 2 (TMPRSS2) may facilitate viral entry [11,12]. TMPRSS2 is necessary to cleave the viral S protein and facilitate fusion with the host cell [13]. This protease is broadly expressed in many human organs compared to ACE2. Hence, the ACE2 receptor may primarily determine whether respective cells are vulnerable to SARS-CoV-2 infection [14]. It is crucial to investigate the potential effects of SARS-CoV-2 infection on female and male fertility, as so far no comprehensive evidence has been provided for such an association.
Aim of the Review and Search Strategy
The major objective of this review was to investigate the potential role of SARS-CoV-2 infection in the functioning of female and male reproductive organs (with an emphasis on the accumulation of viral particles within the reproductive organs), disruptions concerning female and male fertility, hormonal dysfunctions, and disruptions of the course of pregnancy. An analysis of the articles available in the PubMed, Web of Science, Scopus, and Cochrane databases was performed by four independent researchers on 30 August 2021. Articles for the final analysis were identified using the following keywords: (SARS-CoV-2 OR COVID-19) AND (reproductive organs OR sex hormones OR pregnancy OR fertility). The literature search included studies performed on both humans and animals; concerning language restrictions, the authors chose only those written in English. Finally, 95 articles were included in this narrative review.
SARS-CoV-2 in Female Reproductive Organs
The influence of SARS-CoV-2 infection on the genitourinary system remains unknown. Data from several papers indicate that the virus may be present in vaginal swabs [9]. Schwartz et al. analyzed both vaginal and nasopharyngeal swabs obtained from 35 premenopausal and postmenopausal women, indicating that two of them had a positive vaginal RT-PCR for SARS-CoV-2 [15]. Alexandre J. Vivanti presented a case reinforcing the possibility of vertical transmission [16].
SARS-CoV-2 uses ACE2 as a receptor for entry into the host cell; ACE2 is widely present in human ovaries as well as in other species, such as rats [4,17]. Therefore, it was assumed that female reproductive organs might be highly vulnerable to SARS-CoV-2 entry. Gonadotropin facilitates changes in the ovarian expression of ACE2, which is a crucial factor in controlling the maturation of the follicle [18]. Analysis of RNA sequencing data allowed the expression of ACE2, TMPRSS2, the cysteine protease cathepsin L (CTSL), and the receptor basigin (BSG), which may influence viral entry, to be estimated. ACE2, CTSL, and BSG were detectable in most samples, whereas TMPRSS2 expression was very low or undetectable [13]. Analysis of scRNA-seq data shows the scarcity of data related to the inner cortex of human ovaries [19]. Several samples were obtained from women undergoing fertility preservation procedures, but the inner cortex lacks germ cells. Oocytes from non-human primate ovarian tissue exhibit co-expression of ACE2 and TMPRSS2. Very low co-expression was found in oocytes from primordial follicles. The level of co-expression seems to increase during follicular development, as it was detected in more than half of the antral follicles. These oocytes undergo atresia or ovulation, so the risk of infection is relatively low. Also, the surrounding environment, the cumulus cells, lacks ACE2 receptors [13]. Such data cannot exclude the possibility that SARS-CoV-2 attacks ovarian tissue and granulosa cells, whereupon ovarian function may become abnormal and oocyte quality may decrease. The downregulation of the ACE2 receptor after viral infection may cause variations in follicular development and oocyte maturation [20]. In many fertility treatment centers, IVF procedures were postponed. Either a direct or an indirect effect of SARS-CoV-2 on gametes and embryos cannot be excluded [21]. The embryology laboratory and staff must respect the precautions taken during all procedures, e.g., oocyte recovery [22].
The ACE2 RNA expression level was also determined in the human fallopian tube, endometrium, uterine cervix, and vagina [23]; low expression levels were observed in all these tissues. TMPRSS2 RNA has also been noticed in all reproductive organs, but it demonstrated significantly higher expression in male tissues [23,24]. It seems that SARS-CoV-2 is unlikely to infect the ovaries or interrupt oogenesis [25]. Abhari et al. investigated the number of ACE2 transcripts in the endometrium and the expression of the protein form. The expression of ACE2 RNA was low, and no protein was detected. TMPRSS4 was highly expressed during menstruation; by cleaving the S protein, it may enable SARS-CoV-2 to bind to ACE2. The renin-angiotensin system is present in the uterus, and this presence is limited to the endometrium (epithelial and stromal cells in particular). Other researchers indicate a positive correlation between ACE2 expression and age, especially in the early secretory phase of the endometrium. Generally, the endometrium seems to present low susceptibility to SARS-CoV-2 infection because of low ACE2 and TMPRSS2 expression. Therefore, it is proposed that a woman's age might constitute a risk factor in SARS-CoV-2 infection [26]. Other published studies on SARS-CoV-2 RNA presence in vaginal fluid and cervical smears revealed that, respectively, 98.3% and 100% of samples tested negative [27].
It is crucial to estimate and predict whether and how SARS-CoV-2 targets and attacks the human reproductive system, ultimately decreasing fertility. The available data and evidence provide no clear suggestion that the male or female genital tracts might transfer the virus. Limited data hinder the estimation of the influence of the SARS-CoV-2 virus on female reproductive organs. Considering the aforementioned information, it can be assumed that our knowledge regarding the possible risk of SARS-CoV-2 infection within the female reproductive organs is still highly limited. Thus, the outbreak of the COVID-19 pandemic imposes new conditions and significant challenges on the reproductive healthcare of the community.
The Multi-Faceted Effects of COVID-19 on Women, Including Sex Hormones
Given the rapidly spreading coronavirus pandemic and its impact on the global healthcare system, innovative therapeutic strategies should be continually sought. It is estimated that the number of people who die from COVID-19 infection is lower among women than among men [28]. So far, differences between men and women in response to induced inflammation have been documented; these differences can in part be attributed to sex steroid hormones [29].
In previous studies in female mice, a protective role of estrogens against SARS-CoV infection was documented [10]. Estrogens have been hypothesized to be crucial in modulating viral infection and the progression of the disease via an action on immune/inflammatory responses and ACE2 expression [28]. Given the above evidence clearly demonstrating the main role of estrogen in immune and non-immune responses to viral infections, it is not surprising that postmenopausal women who developed COVID-19 had a more severe course of infection [30]. Additionally, ovariectomy or treatment of female mice with an estrogen receptor antagonist significantly increased the death rate, indicating a protective effect of estrogen receptor signaling in COVID-19. Overall, these data suggest that the sex differences in susceptibility to COVID-19 in mice are similar to those seen in humans, and indicate that estrogen receptor signaling is a critical protective factor [31].
In a study by Ding et al., patients infected with COVID-19 were hospitalized at Tongji Hospital. The patients were divided into three groups: a first group of non-menopausal women, a second group of menopausal women, and a third group of men of approximately the same age as the women. In this study, no significant differences between the menopausal women and the men were found. On the other hand, there were significant differences between the non-menopausal women and men of the same age, both in terms of disease severity and clinical symptoms: a less severe course of COVID-19 was observed in the women. These results suggest that non-menopausal women might have protective factors guarding them against disease severity [32].
In another study, Phelan et al. examined the potential effects of the pandemic (changes in lifestyle, increased pandemic-related stress, and insecure existence) on the general population of women of reproductive age in terms of the menstrual cycle, libido, and lifestyle changes [33]. This study included 1031 women who completed the questionnaire via an online link. Some of them (35; 3.4%) were positive for COVID-19, some (63; 6.1%) had symptoms that were not confirmed by a test, while the majority (870; 83%) did not have COVID-19. The age of the women included in the study ranged from 15 to 54 years; the mean BMI was 25.8 ± 5.5 kg/m2. Looking at the effects of the pandemic on menstrual disorders, 46% of women reported an overall change in their menstrual cycle, 53% reported an increase in premenstrual symptoms (PMS), while only 7% reported an improvement in PMS. Cycle length varied, whereas the duration of bleeding remained the same regardless of the pandemic [33]. This large, anonymous, observational study described that a large proportion of women experienced reproductive complications from the COVID-19 pandemic, while a minority of women experienced improvements in their health during the pandemic.
Some scientists, such as Li et al., took a step further by linking COVID-19 infection to genes encoded on the X chromosome, which may underlie the reduced mortality in women [34]. To understand the X-linked pathomechanism, we need to start with the biological role of ACE2. The biological functions of ACE2 can be divided into two categories: peptidase-dependent and peptidase-independent [8]. The peptidase-independent function of ACE2 mainly refers to the mediation of coronavirus infection. The gene encoding ACE2 is located on the X chromosome and is thus present in two copies in females [35]. According to Lyon's theory, one of the two X chromosomes, inherited from either the mother or the father, is transcriptionally silenced [36]. This complex silencing process, X-chromosome inactivation, is a fundamental mechanism for ensuring balanced gene expression between the sexes [35]. Some genes (15-30%), located mainly on the short arm of the chromosome, escape inactivation [37]. ACE2 maps to the Xp22.2 band and can therefore escape inactivation; this phenomenon could explain the differences observed between sexes regarding ACE2 expression [35]. Therefore, future research on ACE2 levels in the genomic context, copy number variations, X-inactivation, and existing comorbidities is essential. This will help us understand the sex differences in ACE2-related pathophysiology, and how they relate to the SARS-CoV-2 pandemic [38].
COVID-19 and Pregnancy
Pregnancy is generally considered a high-risk condition with respect to infectious diseases, as immune changes in pregnancy can greatly increase a woman's susceptibility to pathogens and their potential complications [12]. Pregnancy establishes a unique immunological condition, enabling protection of the fetus from maternal rejection and providing adequate fetal development while protecting against microorganisms [39]. So far, approximately 55 pregnant women infected with SARS-CoV-2 have been reported, and no mortality among these individuals has been associated with the disease course [40]. A total of 46 neonates have so far been reported as being infected by SARS-CoV-2, but there is no evidence indicating vertical transmission of the virus [40].
Scientists are trying to understand the impact of the COVID-19 pandemic on pregnant women and the fetus. There are few confirmed cases in the literature related to women who tested positive by PCR during pregnancy (Figure 1).
Preliminary reports from China suggested that pregnant women do not face an increased risk of complications related to SARS-CoV-2 relative to the general population [9]. Further cases from the United States, however, provided contradictory information, suggesting that pregnant women may require mechanical ventilation and, in addition, are at a higher risk of death due to the disease [41]. In a study by Chen et al., nine women were diagnosed with COVID-19 infection in the third trimester of pregnancy [42]. In this relatively small group, the clinical presentation was similar to that observed in non-pregnant adult women, i.e., fever (seven women), cough (four women), myalgia (three women), and sore throat accompanied by malaise (two women). Additionally, five out of the nine women had diagnosed lymphopenia. All of them suffered from pneumonia, but none required additional intervention, including mechanical ventilation, and none of them died. All women had a cesarean delivery, and Apgar scores were 8-9 at 1 min and 9-10 at 5 min [43]. In another study, by Breslin et al., no increased susceptibility of pregnant women to a severe course of the disease was found. Interestingly, most of the pregnant women who tested positive for COVID-19 in the same study were asymptomatic [11].
In a recent review by Allotey et al. including 11,432 women, the most frequently reported symptoms were fever (40%) and cough (39%) [44]. Interestingly, pregnant women are more likely to remain asymptomatic as compared to non-pregnant women [45]. Additionally, pregnant women report muscle pain, diarrhea, headache, and sore throat less frequently [44]. In a review by Hong Liu et al., 18 pregnant women from the coronavirus pneumonia were considered. Their average age was 30 years of age. Each of these patients had one or two common clinical symptoms such as fever, cough, sore throat, diarrhea, and cholecystitis [46]. The birth weight (BW) of newborns from these mothers ranged from 1520-3820 g, and twins showed the lowest birth weight [47]. However, these women also had obstetric complications, such as pre-eclampsia, premature rupture of the mucosa, irregular contractions, and a history of stillbirth, indicating early intervention in the pregnancy [46]. Whether these complications were causally related to COVID-19, which in turn led to preterm labor, requires further investigation, although the characteristic immune response during pregnancy, which may result in a potential cytokine storm due to COVID-19 infection, needs to be considered. In a recent review by Allotey et al. including 11,432 women, the most frequently reported symptoms were fever (40%) and cough (39%) [44]. Interestingly, pregnant women are more likely to remain asymptomatic as compared to non-pregnant women [45]. Additionally, pregnant women report muscle pain, diarrhea, headache, and sore throat less frequently [44]. In a review by Hong Liu et al., 18 pregnant women from the coronavirus pneumonia were considered. Their average age was 30 years of age. Each of these patients had one or two common clinical symptoms such as fever, cough, sore throat, diarrhea, and cholecystitis [46]. The birth weight (BW) of newborns from these mothers ranged from 1520-3820 g, and twins showed the lowest birth weight [47]. However, these women also had obstetric complications, such as pre-eclampsia, premature rupture of the mucosa, irregular contractions, and a history of stillbirth, indicating early intervention in the pregnancy [46]. Whether these complications were causally related to COVID-19, which in turn led to preterm labor, requires further investigation, although the characteristic immune response during pregnancy, which may result in a potential cytokine storm due to COVID-19 infection, needs to be considered.
In the case of pregnant women who have been diagnosed with severe or critical COVID-19 infection, they are suspected to be at increased risk, and therefore more likely to be at risk of pregnancy loss, and preterm births. In studies conducted during hospitalization of pregnant women diagnosed with COVID-19, including 240 to 427 women, the risk of preterm labor ranged from 10 to 25%. At the same time, the percentage of preterm labor reached 60% in the women with a critical course of disease [48,49]. In an analysis of national surveillance data including pregnancy status of 409, 462 women with symptomatic COVID-19 illness, the adjusted risk ratio in pregnant women (vs. those of similar age and not pregnant) was 3.0 for intensive care unit admission, 2.9 for mechanical ventilation, and 1.7 for death [50]. Therefore, it is important to prevent critical COVID-19 infection for the mother and the fetus [49].
There are also studies that have examined the placenta of women infected with COVID-19. The study of three placentas delivered and taken from pregnant women with confirmed SARS-CoV-2 infection, infected in the third trimester of pregnancy, and performed with an unplanned cesarean section, describes various degrees of fibrin deposition [39]. In all samples taken from the aforementioned placenta were negative for the presence of SARS-CoV-2 nucleic acid [51]. In another study by Sahanes et al., which included 16 placentas taken from pregnant women with positive SACR-CoV-2 were examined, and the most significant finding is an increase in the rate of features of maternal vascular malperfusion (MVM), most prominently decidual arteriopathy including atherosis, fibrinoid necrosis, and mural hypertrophy of membrane arterioles [52]. No pathognomonic features were identified in the above study. In addition, there was also no increased rates of maternal vascular malperfusion, and intervillous thrombi, suggesting a common theme of abnormal maternal circulation, as well as an increased incidence of chorangiosis. The above results suggest that it is justified to increase the observation and supervision of women with a confirmed diagnosis of SARS-CoV-2 [52].
SARS-CoV-2 and Testes
The human testes are the male gonads which are composed of a highly integrated system of specialized cells ensuring a proper microenvironment to produce sperm cells. Each testicular lobule contains seminiferous tubules responsible for the process of spermatogenesis and steroid-secreting interstitial cells (Leydig) producing testosterone [53].
In seminiferous tubules, the maturing spermatocytes are separated from peripheral spermatogonia and spermatocytes by a blood-testis barrier formed by tight junctions between adjacent Sertoli cells. This barrier is crucial for the creation of an immune-privileged compartment that protects spermatogenic cells from the body's immune system [54]. Mature spermatids are released into the lumen of the seminiferous tubules and pass into the epididymis for storage [55].
The capability of SARS-CoV-2 endocytosis into target cells depends on the binding of the viral S protein to the ACE2 receptor [7]. In 2004, Douglas et al. reported the expression of ACE2 in adult Leydig and Sertoli cells of the human testis [56]. Wang et al. examined ACE2 expression levels across testicular cell types: Sertoli and Leydig cells presented the highest expression, followed by spermatogonia [57]. Moreover, ACE2 expression in the testes is age-dependent and peaks in men in their 30s [58].
Nevertheless, for ACE2-mediated SARS-CoV-2 endocytosis, the viral spike protein must be primed by the transmembrane protease, serine 2 (TMPRSS2) [7,59]. The mRNA levels of TMPRSS2 in the testes have been reported to be high in spermatogonia and spermatids [59]. Although both ACE2 and TMPRSS2 are expressed in testicular cells, high co-expression of the two proteins is not observed [13]. In a cohort study of 34 adult males, Pan and colleagues found, by RNA profiling, that simultaneous expression of both proteins was rarely displayed in testicular cells [60].
Other potential cell receptors and cofactors of SARS-CoV-2 endocytosis are also present in testicular cells. CD147 and cathepsin L are mainly expressed in spermatocytes, and cathepsin B in testicular macrophages [58]; the exact distribution of CD26 and neuropilin-1 in the testes remains to be established. The presence of ACE2 and other potential receptors in testicular cells implies a possible vulnerability of the testes to SARS-CoV-2 infection. However, the low co-expression of ACE2 and its cofactors minimizes the probability of direct invasion of the testes.
Several histopathological studies directly investigated the presence of SARS-CoV-2 in testicular tissue. Song et al. reported that the virus was not present in testis samples from a patient who died in the acute phase of COVID-19 [61]. However, in four other studies, samples positive for virus particles were identified. Ma et al. found SARS-CoV-2 nucleic acid in two out of five testis samples by RT-PCR. This was further confirmed by immunohistochemistry using an anti-SARS-CoV spike S1 antibody, which suggests that SARS-CoV-2 infects testicular cells through the spike glycoprotein binding mechanism. Final TEM analyses revealed coronavirus-like particles in the interstitial compartment of the testes of COVID-19 patients, providing evidence that SARS-CoV-2 enters human testicular tissue [62]. A study by Bian et al. also detected the virus in one autopsy case of testis tissue via RT-PCR, TEM, and immunohistochemistry [63]. Yang et al. performed autopsies of the testes of 12 COVID-19 patients; SARS-CoV-2 was detected by RT-PCR in one patient who had a high viral load, although TEM failed to identify the virus [64]. In the study conducted by Achua et al., TEM revealed SARS-CoV-2 in the testis tissue of one of six COVID-19 patient autopsies [65]. However, whether the virus can enter the testes does not depend entirely on viral mechanisms. A vital role may be played by an imperfect blood-testis barrier, local inflammation, and high viremia. Scrotal heat stress, such as occurs during viral infections, may cause leakage of the blood-testis barrier and passage of the virus into the testes [66,67].
However, it has been observed that direct virus access is not required to damage the male reproductive system. SARS-CoV-1 infection can cause severe orchitis with damaged germ cells, reduced mature spermatozoa, thickened basement membranes, and significant leukocyte infiltration. These histopathological changes arise in the absence of SARS-CoV-1 in the testicular tissue, which implies that the damage is caused by inflammation rather than direct viral infection [68]. Similar changes have been observed in COVID-19 patients, including interstitial edema, congestion, inflammatory cell infiltrates, and red blood cell exudation in the testes and epididymis, as well as thinning of the seminiferous tubules [64,69,70].
Men with a severe COVID-19 course presented a significantly higher possibility of orchitis than non-severe COVID-19 patients [71]. Patients with severe COVID-19 infection had high levels of inflammatory cytokines: IL-2, IL-6, IL-7, IL-10, TNF-α, and MCP-1 [71,72]. High blood plasma concentrations of these cytokines may damage the blood-testis barrier and induce inflammation in the seminiferous tubules [66,73,74]. Moreover, inflammation-induced blood-testis barrier damage could be a potential explanation for the presence of SARS-CoV-2 in the testes of patients with a severe form of the disease [63].
SARS-CoV-2 and Semen
Li et al. identified SARS-CoV-2 RNA in the semen samples of four patients with acute infection and two who had recovered from the disease. The intervals from onset of symptoms to semen testing were 6-11 and 12-16 days, respectively [75].
However, other studies have identified no virus in the semen of convalescent men or during acute infection [76][77][78][79][80][81][82][83][84]. Most of these studies collected semen samples at a longer interval from the onset of symptoms, which may suggest that SARS-CoV-2 penetrates into the semen at an earlier stage of the disease. Nonetheless, a study by Kayaaslan et al. collected semen within 0-7 days and still reported negative viral RNA [85]. Therefore, possible reasons for false-positive results of SARS-CoV-2 in semen should be considered. These include unknown conditions of sperm collection and the detection limits of RT-PCR, imperfect specificity of the commercial kits available for the detection of SARS-CoV-2 [86], and a possible urine origin of the viral RNA [87].
Apart from the presence or absence of the virus in semen, COVID-19 may impact spermatogenesis. The reported changes in semen include poor sperm morphology and decreased concentration and motility, as well as increased sperm DNA fragmentation [69,[77][78][79]83,84]. Additionally, immune factors, including IL-6, TNF-α, and MCP-1, were increased in the semen [69]. A decline in sperm quality has been found in patients with a moderate course of the disease, which may be associated with fever and inflammation [77]. Fever is one of the most common COVID-19 symptoms [72]; it impairs spermatogenesis and reduces sperm quality [88]. However, high temperature usually does not result in irreversible damage to male fertility, and the return of sperm parameters to the basal state can take up to 3 months [89]. It should be emphasized that increased sperm DNA fragmentation leads to decreased fertility, poor embryo quality, and increased embryonic loss [90,91]. Therefore, it has been recommended to monitor sperm parameters and to delay ART management for three months in SARS-CoV-2-infected males who developed a fever [92].
SARS-CoV-2 and Male Hormonal Dysfunction
Male sex hormones secreted by the testes are crucial to the process of spermatogenesis. Their synthesis is regulated by the hypothalamic-pituitary-gonadal (HPG) axis. Changes in the levels of gonadotropins and testosterone in male COVID-19 patients might be caused by dysregulation of the HPG axis [93]. Schroeder et al., in a cohort study of 45 critically ill male COVID-19 patients, found that 68.6% had low testosterone and 48.6% had low dihydrotestosterone levels [94]. Rastrelli et al. reported that lower total testosterone (TT) and calculated free testosterone (cFT) were found in the group of clinically deteriorating or deceased patients (n = 4) as compared to the group of patients with stable or improving clinical conditions (n = 27) [95].
A retrospective single-center study involving 119 COVID-19 patients and 273 age-matched control men reported that infected men had no statistically significant differences in serum testosterone and FSH levels compared with the controls, yet they showed a higher serum LH level and significantly decreased T/LH and FSH/LH ratios. This study implies that SARS-CoV-2 tends to impair the function of Leydig cells [96]. In addition to Leydig cell dysfunction, there is increasing evidence that hypothalamic dysfunction is responsible for the impairment of male sex hormone secretion [97]. Moreover, the substantial ACE2 expression and the local lesions recorded in the hypothalamus of COVID-19 patients increase the risk of HPG axis dysfunction [98,99].
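The derived quantities in that study, the T/LH and FSH/LH ratios, are simple quotients of the measured hormone levels. The illustrative Python sketch below uses hypothetical values, not data from the cited study:

# Ratios used as indices of Leydig cell function: T/LH and FSH/LH.
# Hormone values below are hypothetical placeholders in typical units
# (T in ng/mL, LH and FSH in mIU/mL), not data from the cited study.

def gonadal_ratios(t_ng_ml: float, lh_miu_ml: float, fsh_miu_ml: float):
    return {"T/LH": t_ng_ml / lh_miu_ml, "FSH/LH": fsh_miu_ml / lh_miu_ml}

control = gonadal_ratios(t_ng_ml=5.0, lh_miu_ml=4.0, fsh_miu_ml=5.5)
covid   = gonadal_ratios(t_ng_ml=4.8, lh_miu_ml=7.5, fsh_miu_ml=5.2)

for name, r in (("control", control), ("COVID-19", covid)):
    print(f"{name:9s} T/LH = {r['T/LH']:.2f}, FSH/LH = {r['FSH/LH']:.2f}")
# A raised LH with unchanged T lowers both ratios, the pattern reported
# in the text as suggestive of impaired Leydig cell function.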
Apart from the direct impact of SARS-CoV-2 infection on patients, the dysregulation of the HPG axis can be also induced by psychological factors. COVID-19 may trigger several mental problems, including anxiety, depression, post-traumatic stress disorder, and sleep disturbances, which negatively affect semen parameters [100,101].
It is necessary to further investigate the relationship between male sex hormones level and SARS-CoV-2 infection in prospective long-term studies to evaluate hypogonadism and sexual dysfunction caused by abnormal sex hormone levels.
COVID Vaccine and Fertility
The COVID-19 vaccine acts by training the human organism to develop immunity and fight the virus that causes COVID-19, which may prevent future infection. The COVID-19 vaccines target the spike (S) protein of the SARS-CoV-2 virus and induce the production of neutralizing antibodies in the human body. mRNA vaccines are characterized by high potency connected with an adjuvant effect. These vaccines activate two complementary arms of immunity: B cell responses through antibody production and cytotoxic T cell responses as the cellular component. The main purpose of vaccination is to induce herd immunity, which should contribute to the elimination of, or a significant decline in, disease transmission in a population. Researchers estimated that 67% of the population is required to be vaccinated for herd immunity to SARS-CoV-2 [102]. The mRNA (Pfizer-BioNTech and Moderna) and viral vector (AstraZeneca) COVID-19 vaccines are novel in design and, to date, are the first mRNA and viral vector vaccines to have been comprehensively evaluated for disease prevention in people. The latest publication of the Association of Reproductive and Clinical Scientists and the British Fertility Society indicates that there is no evidence that any form of COVID-19 vaccination can affect male or female fertility [103]. In the ongoing research, there are no data indicating that the COVID-19 vaccines influence fertility. It is also hard to find in the literature any credible scientific theories explaining a mechanism by which COVID-19 vaccines could influence female fertility.
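The 67% figure quoted above is consistent with the classical herd immunity threshold 1 − 1/R0 evaluated at a basic reproduction number of about 3; the R0 value in the Python sketch below is an assumption for illustration, not a number taken from [102]:

# Classical herd-immunity threshold: the fraction of the population that
# must be immune so that each infection causes fewer than one new case.
# Assumption: SARS-CoV-2 basic reproduction number R0 ~ 3, which reproduces
# the ~67% estimate cited in the text; the exact R0 is not given there.

def herd_immunity_threshold(r0: float) -> float:
    """Return the immune fraction needed for herd immunity, 1 - 1/R0."""
    if r0 <= 1.0:
        return 0.0  # an epidemic with R0 <= 1 dies out without immunity
    return 1.0 - 1.0 / r0

for r0 in (2.0, 2.5, 3.0, 4.0):
    print(f"R0 = {r0:.1f} -> threshold = {herd_immunity_threshold(r0):.0%}")
# R0 = 3.0 gives 67%, matching the estimate quoted in the text.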
Researchers from Charles River Laboratories France Safety Assessment SAS performed an animal experiment using COVID-19 vaccines; in this experiment, female rats were administered BNT162b2 (30 µg mRNA/dose) intramuscularly 21 and 14 days prior to the start of mating and on gestation days 9 and 20 [104]. BNT162b2 is a lipid nanoparticle (LNP)-formulated, nucleoside-modified mRNA encoding the SARS-CoV-2 spike protein. This vaccine preparation was authorized for emergency use in the United States and received conditional approval in Europe. It demonstrated 95% efficacy in a clinical trial and over 90% effectiveness in preventing COVID-19 in patients aged 16 and older [105]. According to the data obtained in the experiment, no BNT162b2-related effects on female fertility were observed. The presence of neutralizing antibodies was confirmed in dams, fetuses, and offspring. All examined parameters and behaviors, including the estrous cycle, pre-coital interval, mating, fertility, and pregnancy indices, showed no departure from normal as compared to the control group. These results are very promising and support early assumptions about COVID-19 vaccines. Shortly after vaccination against SARS-CoV-2, some side effects may occur: injection site pain, fatigue, fever, swelling, muscle pain, headache, cough, disturbed appetite, diarrhea, and nausea [106].
Conclusions
According to the currently available literature, it seems reasonable to assume that SARS-CoV-2 infection might affect both female and male reproductive organs, and to some extent could also affect human fertility and the pregnancy course and outcome. The latter is highly possible due to the reported possibility of vertical transmission of the virus. Based on the studies' results, the elements of both male (testes, semen) and female (ovaries, uterus) reproductive organs might be the potential target of SARS-CoV-2 infection. Even though there are reports indicating the possible role of the viral infection on the reproductive organs, very little is known about the long-term effects of the infection. Therefore, it is of major importance to provide in-depth research explaining the underlying mechanism of SARS-CoV-2 infection and its impact on human reproductive organs and fertility. | 2021-10-06T13:08:56.462Z | 2021-09-29T00:00:00.000 | {
"year": 2021,
"sha1": "357a9bcf911a47c425360fbcd187f929489b9c43",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/10/19/4520/pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "49db0cc292308095d7398c9ba984702b6ddbdcd6",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247134598 | pes2o/s2orc | v3-fos-license | Metformin ameliorates polycystic ovary syndrome in a rat model by decreasing excessive autophagy in ovarian granulosa cells via the PI3K/AKT/mTOR pathway
Polycystic ovary syndrome (PCOS) is a common gynecological disease accompanied by a variety of clinical features, including anovulation, hyperandrogenism, and ovarian abnormalities, resulting in infertility. PCOS affects approximately 6%–15% of all reproductive-age women worldwide. Metformin, a popular drug used to treat PCOS, has beneficial effects in reducing hyperandrogenism and inducing ovulation; however, the mechanisms by which metformin ameliorates PCOS are not clear. Hence, we aimed to explore the mechanisms of metformin in treating PCOS. In the present study, we first treated a letrozole-induced PCOS rat model with metformin and examined the pathological recovery of PCOS, and then assessed the effects of metformin on H2O2-induced autophagy in ovarian granulosa cells (GCs) by measuring the level of oxidative stress and the expression of autophagy-associated proteins and key proteins in the PI3K/AKT/mTOR pathway. We demonstrated that metformin ameliorated PCOS in a rat model by downregulating autophagy in GCs, and that metformin decreased the levels of oxidative stress and autophagy in H2O2-induced GCs and affected the PI3K/AKT/mTOR signaling pathway. Taken together, our results indicate that metformin ameliorates PCOS in a rat model by decreasing excessive autophagy in GCs via the PI3K/AKT/mTOR pathway, and this study provides evidence for targeted reduction of excessive autophagy in ovarian cells to improve PCOS.

Key words: Polycystic ovary syndrome, Ovarian granulosa cells, Excessive autophagy, PI3K/AKT/mTOR
vascular syndrome, etc. [2]. In addition, PCOS is accompanied by some psychological symptoms, such as increased anxiety and depression and reduced quality of life [3]. PCOS is an endocrine disorder that is considered to be the main cause of infertility in reproductive-age women, and it presents long-term health risks throughout life. Although numerous studies have shown that environmental, genetic, and hormonal factors are important in PCOS development, the etiology of PCOS is still unclear due to its clinically heterogeneous characteristics [4]. Clinically, the aim of PCOS treatment is to decrease hyperandrogenism and improve insulin sensitivity; however, overcoming these symptoms remains challenging because of the complex etiology and variable responses among PCOS patients [5]. Recently, clinical studies have shown that obvious autophagy occurs in ovarian granulosa cells (GCs) of patients with PCOS [6,7], and reviews have reported that autophagy in GCs is an important cause of PCOS [8,9]. In addition, studies have shown that imbalanced oxidative stress occurs in ovarian GCs of PCOS patients [10][11][12]. Therefore, these investigations suggested that autophagy of GCs induced by oxidative stress may be an important inducer of the occurrence and development of PCOS pathology.
Metformin, one of the most widely used insulin-sensitizing drugs for PCOS treatment, can increase the insulin sensitivity of the ovaries to enhance glucose uptake and improve steroidogenesis and the menstrual cycle [13]. Furthermore, metformin can ultimately recover ovulatory function, enabling embryo implantation and improving pregnancy rates in women with PCOS [14]. Metformin has become a popular drug for treating PCOS patients with symptoms of insulin resistance, as it has beneficial effects on reducing hyperandrogenism and inducing ovulation [15]. In addition, metformin can reduce endogenous mitochondrial reactive oxygen species (ROS) involved in maintaining cell survival [16][17][18]. Although numerous studies have confirmed that metformin exerts its anticancer function by enhancing autophagy in cancer cells, it has recently been reported that metformin can also protect cells by reducing autophagy rates [19][20][21]. However, the mechanisms of metformin action on GCs in the therapy of PCOS patients remain unclear. Furthermore, metformin can protect against oxidative stress-induced autophagy through the PI3K/AKT/mTOR pathway [19][20][21][22][23]. These results indicate that metformin may play an important role in reducing autophagy in GCs induced by oxidative stress through the PI3K/AKT/mTOR pathway.
In this study, to test our hypothesis, we investigated the therapeutic effect of metformin and found that metformin decreased the autophagy level in the GCs of PCOS rats. Subsequently, we used a cell model of ovarian granulosa cell autophagy induced by H2O2 to study the mechanism by which metformin reduces autophagy.
Experimental animals and metformin treatment
Twenty-four six-week-old specific pathogen-free (SPF) inbred female Sprague-Dawley (SD) rats (mean body weight of 180 ± 20 g at the onset of the experiments) were obtained from the Experimental Animal Center of Ningxia Medical University. All treatments and animal care procedures were performed in accordance with the National Institute of Health guidelines and were approved by the Medical Ethical Committee of Ningxia Medical University, China (NXMU-2021-009). All animals were fed a free-access commercial diet and sterile water at a controlled temperature of 22 ± 2°C in a 12-h light/dark cycle environment.
Before the experiment, all rats were confirmed to have normal estrous cycles by examination of vaginal smears for two sequential cycles. The rats were randomly assigned to two groups: the control group (Control, n = 8) and the PCOS group (PCOS, n = 16). The rats in the PCOS group received a gavage of letrozole (Hengrui Pharmaceutical Co., Ltd., Jiangsu, China) suspended in 0.5% carboxymethylcellulose at a dose of 1 mg/kg once daily for 21 consecutive days, and the control group rats received an isopycnic placebo [24]. Letrozole treatment was then terminated, and the PCOS group was randomly divided into two groups: the PCOS control group (PCOS, n = 8) and the metformin-treated group (PCOS + MET, n = 8). The rats in the PCOS + MET group were treated orally with 500 mg/kg/day metformin (suspended in 0.5% carboxymethylcellulose) once daily for 21 consecutive days, and the PCOS control group rats received an isopycnic placebo [25]. After 21 days of metformin treatment, oral glucose tolerance tests (OGTTs) were performed before all the rats were sacrificed. Briefly, the fasting blood glucose of all the rats was measured after fasting for 12 h; then 30% glucose (0.5 mL) was given by gavage, and the blood glucose level of each rat was measured with a blood glucose monitor (Yuyue710, Jiangsu, China) at 0.5, 1, and 2 h [26]. After 21 days of metformin treatment, cardiac blood and ovarian tissue samples were obtained for subsequent experiments. The ovarian organ coefficient was evaluated as the ratio of the weight of the two ovaries to body weight.
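As an aside, glucose tolerance curves sampled at 0, 0.5, 1, and 2 h are commonly summarized as an area under the curve by the trapezoid rule; the Python sketch below illustrates the calculation with invented glucose readings, not data from this study:

# Trapezoidal area under the curve (AUC) for an OGTT sampled at fixed times.
# Assumption: illustrative glucose readings in mmol/L; replace with measured
# values from the glucose monitor at 0, 0.5, 1, and 2 h.

times_h = [0.0, 0.5, 1.0, 2.0]
glucose_mmol_l = [5.1, 11.8, 9.2, 6.4]  # hypothetical single-rat readings

def trapezoid_auc(t, y):
    """AUC by the trapezoid rule; units are (y-unit) * (t-unit)."""
    return sum((t[i + 1] - t[i]) * (y[i + 1] + y[i]) / 2.0
               for i in range(len(t) - 1))

print(f"OGTT AUC = {trapezoid_auc(times_h, glucose_mmol_l):.2f} mmol/L * h")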
Measurement of hormone levels
Blood samples (n = 8 per group) were used to measure hormone levels, including estradiol and testosterone, with commercially available enzyme-linked immunosorbent assay (ELISA) kits (USCN Life Science, Wuhan, China) according to the manufacturer's instructions. The intra- and inter-assay coefficients of variation for estradiol and testosterone were less than 8% and less than 10%, respectively.
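The intra- and inter-assay coefficients of variation quoted for the ELISA kits are simple ratio statistics (100 × SD/mean over replicates); the Python sketch below illustrates the calculation with hypothetical optical densities:

import statistics

# Coefficient of variation (CV%) = 100 * SD / mean, computed over replicate
# ELISA readings of the same sample. Values below are hypothetical optical
# densities, not data from this study.
replicates = [0.412, 0.398, 0.421, 0.405]

cv_percent = 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)
print(f"intra-assay CV = {cv_percent:.1f}%")  # acceptance here was < 8%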
Histology and immunostaining
Ovary tissues were fixed in 4% paraformaldehyde for 24 h, embedded in paraffin, sectioned at a thickness of 5 μm, stained with hematoxylin-eosin (H&E) according to standard procedures, examined under a microscope, and photographed using a Micro Image system (Olympus, Japan). Follicles were counted in every fifth ovarian section. For immunostaining, the ovarian tissue sections were deparaffinized in xylene and rehydrated with graded concentrations of ethanol, endogenous peroxidase was blocked with 3% hydrogen peroxide, and nonspecific binding was blocked with 2% BSA for 20 minutes. Subsequently, the sections were incubated with a rabbit polyclonal anti-LC3II antibody (Beyond Biotech, Jiangsu, China) at a dilution of 1:100. After washing, the sections were incubated with an anti-rabbit secondary antibody conjugated with horseradish peroxidase for 30 minutes and washed with PBS, and protein expression was then visualized with diaminobenzidine and hematoxylin counterstaining. After mounting, the sections were photographed with a photographic system (Nikon) under a light microscope, and quantification of LC3-II and Beclin-1 expression in the immunohistochemical images was performed using ImageJ software (National Institutes of Health, USA).
Western blotting
The protein expression levels of PI3K, p-AKT, p-mTOR, LC3, and Beclin-1 were detected by western blotting. For tissue samples, ovaries were added to 400 μL of lysis buffer and homogenized with a homogenizer. The samples were centrifuged at 4°C for 10 minutes at 12,000 rpm, and the supernatant was collected in a new centrifuge tube for the protein concentration assay. To obtain cell proteins, 200 μL of lysis buffer supplemented with protease inhibitors and phosphatase inhibitors was added on ice for 10 minutes, the lysates were then centrifuged at 4°C for 10 minutes at 12,000 rpm, and the supernatant was collected in a new centrifuge tube for the protein concentration assay. After boiling for 5 minutes, 20 μg of protein was loaded in each lane of a 12.5% sodium dodecyl sulfate polyacrylamide gel, electrophoresis was performed, and the proteins were blotted onto a polyvinylidene difluoride (PVDF) membrane. Subsequently, the PVDF membrane was preincubated in blocking buffer containing 5% nonfat milk and 0.05% Tween 20 and incubated with the appropriate primary and secondary antibodies. The proteins were visualized with an ECL western blotting detection kit (Amersham Biosciences) on a ChemiDoc MP imaging system (Bio-Rad). Band densitometry was performed using ImageJ software. Assays were independently performed in triplicate for each sample. Antibodies against p-PI3K (p110α), PI3K, p-AKT (Ser473), AKT, p-mTOR (Ser2448), mTOR, LC3, LC3-II, and Beclin-1 were purchased from Abcam Co., and antibodies against α-tubulin, β-actin, and GAPDH were purchased from Beyond Biotech Co.
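Band densitometry of the kind described, ImageJ intensities normalized to a loading control together with the LC3-II/LC3-I ratio, reduces to a few ratios; the Python sketch below uses invented band intensities, not measurements from this study:

# Normalize western blot band intensities to a loading control and form the
# LC3-II/LC3-I ratio used in the text. Intensities are hypothetical ImageJ
# integrated densities, not measurements from this study.

lanes = {
    "Control":  {"LC3-I": 1800, "LC3-II": 600,  "beta-actin": 5000},
    "PCOS":     {"LC3-I": 1500, "LC3-II": 1900, "beta-actin": 5100},
    "PCOS+MET": {"LC3-I": 1700, "LC3-II": 900,  "beta-actin": 4950},
}

for name, bands in lanes.items():
    loading = bands["beta-actin"]
    lc3_ratio = bands["LC3-II"] / bands["LC3-I"]   # autophagy conversion marker
    lc3ii_norm = bands["LC3-II"] / loading         # loading-normalized level
    print(f"{name:10s} LC3-II/LC3-I = {lc3_ratio:.2f}, "
          f"LC3-II/actin = {lc3ii_norm:.3f}")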
Isolation, culture, and treatment of GCs
Female SD rats (21-29 days old) were obtained from the Experimental Animal Center of Ningxia Medical University. After 1 week of acclimation, the rats were injected intraperitoneally with 40 IU of pregnant mare serum gonadotropin in saline. Forty-eight hours after the injection, the ovaries were separated from the anesthetized rats under sterile conditions. After washing twice with PBS, the GCs were released after puncturing the follicles with a needle under a stereoscopic microscope.
Monodansylcadaverine (MDC) staining of autophagic vacuoles
The GCs were divided into a series of groups, including control, H2O2, and H2O2 + MET groups, and the cells were adhered to and grown on glass coverslips for 24 h. After washing with PBS, autophagic vacuoles were labeled with 0.05 mmol/L MDC at 37°C for 10 minutes. Then, the cells were washed with PBS and observed under a fluorescence microscope (Nikon, Japan), and the fluorescence intensity of the MDC-stained autophagic vacuoles was measured at an excitation wavelength of 380 nm and an emission wavelength of 530 nm.
Statistical analysis
All data were analyzed with GraphPad Prism 6.0 software and are presented as means ± SD. Student's t test and one-way ANOVA were used to determine significant differences, and a p value less than 0.05 was considered to indicate a significant difference. Each experiment was repeated 3 times.
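For the comparisons described (Student's t test between two groups and one-way ANOVA across three), the corresponding SciPy calls are illustrated below with invented placeholder data:

from scipy import stats

# Hypothetical per-rat measurements (e.g., an ovarian organ coefficient),
# n = 8 per group as in the study design; values are illustrative only.
control  = [3.1, 2.9, 3.3, 3.0, 3.2, 2.8, 3.1, 3.0]
pcos     = [2.2, 2.4, 2.1, 2.3, 2.5, 2.0, 2.2, 2.3]
pcos_met = [2.8, 3.0, 2.9, 2.7, 3.1, 2.9, 2.8, 3.0]

t_stat, p_two = stats.ttest_ind(control, pcos)             # two-group comparison
f_stat, p_anova = stats.f_oneway(control, pcos, pcos_met)  # three groups

print(f"t test control vs PCOS: p = {p_two:.4f}")
print(f"one-way ANOVA:          p = {p_anova:.4f}")
# As in the text, p < 0.05 is taken as significant.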
Metformin alleviated the pathology of the PCOS model
After letrozole gavage for 21 days, the PCOS model rats were evaluated, and the models were confirmed to have been successfully established (data not shown). Subsequently, the PCOS model rats were orally treated with metformin (MET) for 21 consecutive days. After treatment, we found that the body weight of the PCOS + MET group was reduced from day 14 compared with that of the PCOS group, and the body weight of the PCOS group was significantly higher than that of the control group (Fig. 1A). An OGTT was performed, and the results showed that glucose tolerance was recovered in the PCOS + MET group compared with the PCOS group, with values close to those of the control group (Fig. 1B). The estradiol level in the PCOS model group was lower than that in the control and PCOS + MET groups, and the testosterone level in the PCOS model group was higher than that in the control and PCOS + MET groups (Fig. 1C and D). The ovaries of the three groups were collected, and the organ coefficient of each ovary was determined. The results showed that the ovarian organ coefficient in the PCOS group was lower than that in the control and PCOS + MET groups (Fig. 1E). In the ovarian histopathology, we found subcapsular cyst formation, increased capsular thickness, corpus luteum reduction, and increased numbers of atretic and cystic follicles in the PCOS group compared with the control, and these phenotypes were improved after metformin treatment (Fig. 1F-J). Collectively, these data indicate that body weight, glucose tolerance, serum hormone levels, the ovarian organ coefficient, and histopathological characteristics were affected in the PCOS group, while metformin treatment alleviated these alterations, with the assay results nearly reaching the levels of the control group. In addition, the estrous cycle, insulin, FSH, and LH were evaluated, and the results showed that the phenotypes of PCOS were alleviated by metformin (as shown in Fig. S1 of the supplementary materials).
Metformin improved ovarian autophagy in the PCOS model
Autophagy plays an important role in PCOS occurrence. To determine whether metformin alleviates PCOS by reducing autophagy, we detected the total ovarian expression of the autophagic markers LC3 and Beclin-1 in the control, PCOS, and PCOS + MET groups. The results showed that the LC3-II/LC3-I ratio was significantly higher in the PCOS group than in the control and PCOS + MET groups, and the expression of Beclin-1 was also significantly higher in the PCOS group than in the control and PCOS + MET groups; conversely, the levels of LC3-II/LC3-I (Fig. 2A and C) and Beclin-1 (Fig. 2B and D) were decreased after metformin treatment compared with those in the PCOS group. These results suggested that metformin can improve ovarian autophagy in the PCOS model.
To investigate whether metformin improves the autophagy of ovarian cells in the PCOS model, we detected the expression of LC3-II and Beclin-1 by immunohistochemistry. The results showed that LC3-II and Beclin-1 were predominantly expressed in GCs and were higher in the PCOS model than in the control group, while their expression was obviously decreased in the PCOS + MET group compared to the PCOS model (Fig. 3). Therefore, it is likely that the downstream targets of metformin regulation are involved in decreasing autophagy in the GCs of the ovary.
Metformin attenuated oxidative stress in GCs in the PCOS model
To determine the causes of ovarian autophagy and the mechanisms by which metformin improves autophagy in the GCs of the PCOS model, we measured the MDA level in ovarian tissue from the different groups. We found that the MDA level in the PCOS group was higher than that in the control and PCOS + MET groups (Fig. 4A). This result suggested that the oxidative stress level of the PCOS model was higher than that of the control and that metformin can attenuate the oxidative stress of the PCOS model. Additionally, studies have shown that the PI3K/AKT/mTOR pathway plays an important role in the actions of metformin and in PCOS pathogenesis; therefore, the related proteins of the PI3K/AKT/mTOR pathway were detected, and the results showed that metformin ameliorated PCOS via the PI3K/AKT/mTOR signaling pathway (Fig. 4B-E). Combining these findings with the immunostaining results, we hypothesized that metformin reduces GC autophagy caused by oxidative stress in the PCOS model via the PI3K/AKT/mTOR signaling pathway.
Metformin attenuated H2O2-induced oxidative stress in rat GCs
To confirm the abovementioned hypothesis, we used H2O2 to treat rat GCs to establish a cell model of oxidative stress. In the H2O2-treated group, the fluorescence intensity of MDC-stained autophagic vacuoles was higher than in the control and H2O2 + MET groups (Fig. 5A). Subsequently, markers of oxidative stress, including MDA, GSH, and SOD, were measured: the MDA level in the H2O2 group was higher than in the control and H2O2 + MET groups (Fig. 5B), whereas the GSH level was lower (Fig. 5C), and SOD activity was also lower than in the control and H2O2 + MET groups (Fig. 5D). These results suggested that metformin can decrease excessive autophagy caused by oxidative stress.
Metformin decreased oxidative stress-induced excessive autophagy via the PI3K/AKT/mTOR pathway in GCs
Numerous studies have reported that the PI3K/AKT/mTORC1 signaling pathway plays a central role in autophagy [28,29]. Therefore, the present study investigated whether metformin affects oxidative stress-induced excessive autophagy through the PI3K/AKT/mTOR pathway. First, the autophagy marker proteins LC3 and Beclin-1 and the phosphorylation of PI3K, AKT, and mTOR were detected after metformin treatment. As shown in Fig. 6, LC3-II, Beclin-1, and p-PI3K were significantly lower in the H2O2 + MET group than in the H2O2 group. In the metformin-treated H2O2-exposed GCs, the levels of AKT phosphorylation (p-AKT) and mTOR phosphorylation (p-mTOR) were lower in the PI3K/AKT/mTOR pathway interference groups (H2O2 + 3-MA, H2O2 + LY294002, and H2O2 + Rapa) than in the metformin-treated H2O2-exposed group. In addition, Beclin-1 and LC3-II/I were increased after adding the PI3K and mTOR inhibitors. These results suggested that metformin alleviated H2O2-induced autophagy of GCs through PI3K/AKT/mTOR. Furthermore, in contrast to the H2O2-exposed autophagy group, the levels of p-PI3K, p-AKT, and p-mTOR were
Discussion
PCOS is a common and complex gynecological disease in reproductive-age women and is characterized by hyperandrogenemia, hyperinsulinemia, and abnormal ovulation [4]. Although metformin has been widely used for PCOS patients with hyperglycemia and insulin resistance and is beneficial for ovarian function in fertility clinics, the mechanisms by which metformin ameliorates PCOS and improves ovarian function are still unclear. In the present study, we used metformin-treated, letrozole-induced PCOS model rats in vivo and metformin added to H2O2-induced rat GCs in vitro to investigate the mechanism by which metformin improves PCOS. The results indicated that metformin ameliorates PCOS by decreasing oxidative stress-induced autophagy through the PI3K/AKT/mTOR pathway. We demonstrated for the first time the regulation of GC autophagy by metformin in vivo and in vitro.
Metformin is beneficial to PCOS patients and can improve glucose metabolism and fertility. Numerous studies have shown that metformin can increase the ovulation and pregnancy rates and enhance the live birth rates among PCOS patients [30,31]. Compared with the letrozole-induced PCOS animal model, the heterogeneity of PCOS determines the complexity of the inducing factors and pathogenesis in clinical outcomes. However, most studies have shown that the letrozole-induced PCOS animal model can largely simulate the characteristics of PCOS patients, including hyperandrogenemia, hyperinsulinemia, and abnormal ovulation; in addition, this PCOS model has been widely used and accepted in numerous studies [32][33][34]. In evaluating the therapeutic effect of metformin on a PCOS rat model, this study showed that metformin could decrease body weight, improve insulin resistance, restore sex hormones, improve ovarian weight, and reduce cystic follicle numbers, which is consistent with previous studies [35,36]. It has been reported that metformin has a direct effect on steroidogenesis in cultured GCs and theca cells, and studies have suggested that steroidogenic enzymes such as CYP17A1 and CYP11A1 are potential targets through which metformin regulates related signaling pathways [37,38]. However, the precise signaling pathways or sites of action of metformin on steroidogenic enzymes and the mechanisms of its effects remain to be determined. In addition, the phenotype caused by letrozole, an aromatase inhibitor that blocks estrogen synthesis, is very similar to the clinical symptoms of PCOS, including increased body weight, glucose intolerance, and altered sex hormone levels. Therefore, these results indicated that the effect of metformin in the treatment of the PCOS model rats can better simulate the clinical outcome. Currently, various studies have shown that autophagy of GCs is an important factor in PCOS occurrence [39,40]. In addition, a study reported that oxidative stress participates in the pathophysiology of PCOS independently of weight excess [41]. Therefore, these results indicated that autophagy in GCs induced by oxidative stress may be the main factor in the occurrence of PCOS. In the present study, the letrozole-induced PCOS rat model also showed the characteristics of GC autophagy and a high level of ovarian oxidative stress, which indicated that this letrozole-induced PCOS model is valid and is consistent with the clinical pathological features of PCOS. Oxidative stress in GCs originates from endogenous and exogenous factors: the biosynthesis of sex steroids and other accelerated metabolic processes can generate high levels of intracellular reactive oxygen species (ROS), while exogenous factors such as environmental pollutants, ionizing radiation, and unhealthy lifestyle factors can also promote excess production of ROS in GCs and potentially induce PCOS [42,43]. A recent study showed that pineal gland-secreted melatonin, an endogenous mitochondria-targeted antioxidant, can ameliorate excessive mitophagy in the GCs of PCOS patients, indicating that oxidative stress is the main cause of mitochondrial autophagy in the GCs of patients with PCOS [44]. In the present study, the levels of oxidative stress and autophagy in ovarian GCs and the androgen level in the PCOS model were decreased by metformin, and an earlier study showed that androgen levels play a critical role in GC autophagy in patients with PCOS [6].
Additionally, the oxidative stress level increase in GCs was considered to be the primary pathogenesis of PCOS [45], implying that androgen may be the key trigger leading to the increase in oxidative stress in ovarian GCs and autophagy, which results in the occurrence and development of PCOS; however, the detailed mechanisms need to be further studied.
While investigating the effect of oxidative stimulation on GCs, Shen et al. found that melatonin, a robust antioxidant, could repress H2O2-induced GC autophagy by upregulating antioxidative enzymes and scavenging ROS directly or indirectly [42,46]. H2O2, as an exogenous inducer, has been widely applied in many studies to investigate damage mechanisms, such as apoptosis and autophagy, caused by oxidative stress in GCs [47,48]. In this study, we established a rat GC model of H2O2-induced autophagy that can be reversed by metformin, implying that metformin plays a role as an antioxidant in scavenging ROS. Subsequently, we found that the levels of the oxidative stress markers MDA, GSH, and SOD were restored in the metformin treatment group compared with the H2O2 group. In the in vivo results, only MDA levels showed a significant difference, which may be related to the limited number of GCs in the ovarian tissue and the loss of GCs in PCOS. Furthermore, the antioxidant system in the tissue is more complex than that at the cellular level, which requires further study.
Metformin, a drug widely used to treat type 2 diabetes and PCOS, has been found to improve endothelial function and protect against oxidative stress by increasing AMP-activated protein kinase activity and promoting antioxidant protection at the cellular level [49]. In addition, a recent study showed that metformin has a protective effect on myocardial ischemia-reperfusion injury through an AMPK-dependent pathway, upregulating antioxidant enzymes to reduce the accumulation of oxidative damage [50]. In a study of metformin-treated cerebral ischemia, indicators of oxidative stress, including the antioxidant enzyme activities of catalase, SOD, and glutathione peroxidase, as well as MDA, were affected by metformin treatment [51]. In the present study, we found that metformin can decrease the MDA level and increase the levels of GSH and SOD. Therefore, these results indicate that metformin has an underlying function in regulating antioxidant enzymes and subsequently plays an antioxidant role [52]. However, this potential function of metformin still needs to be further studied.
It has been reported that the PI3K/Akt/mTOR signaling pathway is essential for autophagy, and activation of the PI3K/Akt/mTOR pathway can inhibit autophagy [53,54]. Akt is a principal mediator, and mTOR plays a targeted role downstream of the PI3K/Akt pathway; moreover, it is accepted that the process of autophagy is negatively regulated through mTOR activation [55]. Rapamycin is an inhibitor of mTOR and an agonist of autophagy [56]. 3-MA can block the transition from LC3-I to LC3-II, inhibit PI3K, and abolish the antagonistic function of mTOR against autophagy [57]. LY294002 is a broad-spectrum PI3K inhibitor that activates autophagy [58]. Numerous studies have reported that the PI3K/AKT signaling pathway plays a critical role in PCOS pathogenesis, likely because PCOS can be alleviated by activating the PI3K/AKT signaling pathway in the ovary [59][60][61]. Zhao and colleagues reported that the traditional Chinese medicine Heqi San has beneficial effects on PCOS: it restores serum hormone levels, reverses ovarian morphological lesions, and attenuates insulin resistance, effects mediated through the PI3K/AKT pathway in ovarian tissue of a PCOS rat model established with dehydroepiandrosterone [62]. Similarly, another traditional Chinese medicine, Liuwei Dihuang pills, obviously improved ovarian polycystic pathogenesis and alleviated insulin resistance by acting on the PI3K/Akt signaling pathway in a letrozole-established PCOS rat model. Furthermore, the development of follicles was regained by upregulating cytochrome P450 family 19 subfamily A member 1 (CYP19A1), which plays an important role in estrogen synthesis in ovarian GCs and whose irregular expression is involved in the development of PCOS [63,64]. In addition, metformin was used to treat PCOS in model rats induced by insulin plus hCG, and metformin improved uterine dysfunction and inhibited insulin resistance-induced inflammation through downregulation of the PI3K/AKT pathway [61]. This outcome suggests that the PI3K/AKT pathway in GCs is a potential target for PCOS therapy. In our study, intervention in autophagy and the PI3K/AKT/mTOR pathway suggested that metformin decreased H2O2-induced autophagy in GCs through the PI3K/AKT/mTOR pathway, and these results are consistent with previous findings that PI3K/AKT/mTOR pathway activation has beneficial effects on PCOS [65][66][67]. Therefore, in summary, the PI3K/AKT/mTOR pathway of GCs is a possible therapeutic target in the treatment of PCOS with metformin. In the present study, we explored the detailed mechanism of metformin in the treatment of PCOS; however, there were some limitations. First, the subsequent pregnancy outcomes of the PCOS rats were not observed; second, primary rat granulosa cells do not fully reflect human GCs in PCOS; therefore, these two areas should be further investigated in the future. Furthermore, whether metformin therapy can improve ovarian function in infertile patients with PCOS needs to be addressed in large clinical trials.
In conclusion, this study demonstrated a beneficial effect of metformin on PCOS pathology in a rat model, including recoveries in body weight, glucose tolerance, serum hormone levels, and ovarian morphology. Subsequently, we found that autophagy of GCs and oxidative stress levels were restored in the ovaries of PCOS model rats after metformin treatment, and an H2O2-induced GC model was then established to investigate the mechanisms by which metformin decreases the oxidative stress that results in excessive autophagy of GCs. We found that metformin could decrease excessive autophagy as well as oxidative stress through the PI3K/AKT/mTOR signaling pathway in GCs. Therefore, we propose the novel view that metformin improves PCOS by altering the oxidative stress level and upregulating antioxidant enzymes in GCs through the PI3K pathway.
Disclosure of Interest
The authors declare that there are no conflicts of interest. | 2022-02-27T16:17:27.975Z | 2022-02-26T00:00:00.000 | {
"year": 2022,
"sha1": "4b840b57cb424aafb8342bacdf14bc5c2b2add9d",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/endocrj/advpub/0/advpub_EJ21-0480/_pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "095c5f28eedbd5011f7792603517a9830d2da248",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235313845 | pes2o/s2orc | v3-fos-license | Decoupling environmental modes from tunneling electrons in a partially wet phase molecular mechanically controllable break junction
A quantum effect at ambient conditions is presented. A benzene dithiol (BDT) molecule or a tetrahydrofuran (THF) molecule is used as a barrier molecule bridging the gold electrodes of a mechanically controllable break (MCB) junction. It has long been known that the environment of a junction couples to the conduction. With a microscopic layer of fluid it is actually possible to influence this coupling. The electrode-molecule-electrode configuration is lined with a microscopic layer of THF fluid, also known as the partially wet phase, effectively decoupling the electrode-molecule-electrode system from environmental modes outside of this system. The effects on coherence and quantum interference are exceptional.
Introduction
More than 60 years ago von Hippel [1] laid out his vision where he saw a world using the intrinsic properties of atoms and molecules: "One could build materials from their atoms and molecules for the purpose at hand ... . He can play chess with elementary particles according to their prescribed rules until engineering solutions become apparent." It was a time when radio tubes were being replaced by integrated circuits and transistors. In contrast to the logical and, already in those days, destined successful path of ongoing miniaturization for ICs, von Hippel was advocating a bottom-up approach which he called Molecular Engineering. With this view von Hippel laid the foundation for a field which we now call Molecular Electronics [2].
During that same time frame, although seemingly unrelated at the time, Gorter [3] established an important finding on the resistivity-temperature curve of granular Sn particle films: "The granularity of the transported charge manifests itself in the conductance as a result of the Coulomb repulsion of individual electrons. The transfer by tunneling of one electron between two initially neutral regions, of mutual capacitance C, increases the electrostatic energy of the system by an amount e²/2C. At low temperatures and small applied voltages, conduction is suppressed because of the charging energy." This phenomenon is now known as the Coulomb blockade of single-electron tunneling [4]. Although the physics of granular films and systems needs to account for a distribution in particle size and thus capacitance, theoretical elements used in the early descriptions [3,5,6] are still present in today's theory for more well defined systems, for example a single junction.
A single small-capacitance junction, consisting of superconducting material, was predicted in 1975 to show a new type of voltage oscillation [7], transposing the current and voltage character as compared to the normal AC Josephson effect. This is reminiscent of Bloch electrons in a perfect crystal with lattice constant a, moving through the energy band representation and being reflected at the Brillouin zone edge at π/a to the opposing zone edge at −π/a, the two being physically identical in nature. A conduction electron under the influence of a constant electric field E will start to oscillate in real space, as a result of interaction with the periodic lattice potential, with a Bloch frequency f_B = aeE/(2πℏ). In real life this effect is not observed in crystals, as the conduction electrons are scattered before they can complete the cycle through the Brillouin zone in k space.
In superconducting small-capacitance current-biased junctions the elementary frequency equals f_SC = I/(2e), with a corresponding band-zone edge at Q = +e or −e in the energy band representation, Q being the charge on the capacitor. The origin of these predicted voltage oscillations is the continuous charging until the charge reaches +e or −e, after which a tunnel event occurs and the recharging starts again. At the tunnel event the system jumps from the band-zone edge to the opposing zone edge, both physically identical in nature, and the cycle starts again.
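The magnitudes of these elementary frequencies are easy to estimate numerically; the short illustrative Python sketch below assumes a bias current of 1 nA, a typical experimental value not taken from the text:

# Elementary oscillation frequencies for small-capacitance junctions:
#   normal metal SET oscillations:   f_N  = I / e
#   superconducting Bloch osc.:      f_SC = I / (2e)
# The 1 nA bias current is an assumed, typical experimental value.

E_CHARGE = 1.602176634e-19  # elementary charge, C

def f_set(current_a: float) -> float:
    return current_a / E_CHARGE

def f_bloch_sc(current_a: float) -> float:
    return current_a / (2.0 * E_CHARGE)

I = 1e-9  # A
print(f"f_N  = I/e  = {f_set(I)/1e9:.2f} GHz at I = 1 nA")
print(f"f_SC = I/2e = {f_bloch_sc(I)/1e9:.2f} GHz at I = 1 nA")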
The charging of the capacitor with a continuous charge Q smaller than the 2e Cooper pair charge is physically possible: a moving charge carrier can create an influence charge far smaller than the unit charge of a charge carrier on the small capacitor while moving toward the location where it will eventually tunnel, hence Q can be treated as a classical variable.
The interest in this system originates from the uncertainty relation between ϕ, the phase of the superconducting order parameter, and Q: [ϕ, Q] = i2e. When ϕ is well defined, the system is described by the AC Josephson effect; this implies that Q is undefined, which is equivalent to a large-capacitance system. In the other extreme, for small-capacitance systems, theoretically Q is well defined and ϕ is undefined, leading to the small-capacitance effects as described above.
In the mid-1980s several authors of the first hour [8][9][10][11] theoretically analyzed small-capacitance effects in both normal metal and superconducting metal junctions; they predicted that the superconducting as well as the normal metal small-capacitance junction would each have their own type of voltage oscillation. The normal metal type single electron tunneling (SET) oscillations will have a frequency f_N = I/e; the tunnel event occurs at the zone edge +e/2 or −e/2 in the energy band representation. A rigorous theoretical underpinning of these predicted effects [12,13] initiated widespread experimental research; for extensive early reviews see [14,15]. Defining a phase ϕ_S(t) = (e/ℏ) ∫_{−∞}^{t} V(t′) dt′, the integral over the voltage across the junction, Schön [16] demonstrated the commutation rule [ϕ_S, Q] = ie for normal metal junctions. It was well recognized that with a small-capacitance junction we might have a system with the possibility to complete the full cycle through the energy band representation and arrive at a novel quantum effect.
Experiments on small-capacitance single junctions became possible due to advanced e-beam lithography capabilities in the 1980s. A capacitance of the order of 10⁻¹⁵ F leads to a charging energy equivalent to 1 K, a temperature well accessible by cryogenic techniques. Results showed that the direct electrical environment of the junction played a crucial role in the occurrence of a Coulomb blockade gap in the current-voltage (IV) curve. The parasitic capacitance of the electrical leads connecting to the junction is in the pF range, much larger than the junction capacitance, leading to a high capacitance value as seen by the junction and thus easily destroying any E_C = e²/2C charging-energy effects. This prompted experiments where the junction was decoupled from the leads by including large resistors, with values much larger than R_Q = h/e², in the electrical leads within micrometers of the junction [17].
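The quoted equivalence between a 10⁻¹⁵ F junction capacitance and a temperature of about 1 K follows directly from E_C = e²/2C divided by the Boltzmann constant; an illustrative Python check:

# Charging energy E_C = e^2 / (2C) expressed as a temperature E_C / k_B.
# C = 1 fF is the order of magnitude quoted in the text.

E = 1.602176634e-19   # elementary charge, C
KB = 1.380649e-23     # Boltzmann constant, J/K

def charging_temperature(capacitance_f: float) -> float:
    return E**2 / (2.0 * capacitance_f) / KB

for c in (1e-15, 1e-16, 1e-18):
    print(f"C = {c:.0e} F -> E_C/k_B = {charging_temperature(c):.2f} K")
# C = 1e-15 F gives ~0.93 K, consistent with the ~1 K figure in the text.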
An alternative, elegant way to decouple a small-capacitance junction from the leads is by implementing two highly resistive tunnel junctions, with tunnel resistances R_T ≫ R_Q, on either side of a conductive island [18], very similar to Gorter's use of granular films where small Sn particles are connected via oxidized tunnel barriers. These structures are known as "dots". The low-capacitance central island of the dot is perfectly isolated from environmental modes in the leads by the two high-impedance tunnel junctions. The coupling or tunnel rates of these model systems to the leads can be tuned by the characteristic resistance of the tunnel junctions. The capacitance can be low, as it scales with the size of the dot. The condition R_T ≫ R_Q for the two tunnel junctions ensures that after tunneling to the dot, the electron is localized there. The dot system thus exhibits both the wave character of the electron, tunneling to and from the dot, as well as its particle character, as there can only be an integer number of electrons on the dot. If R_T becomes smaller than R_Q, the charge becomes delocalized along the system and charging effects will be suppressed.
There are a number of possible dot regimes [19]. A metallic grain of µm size holds billions of electrons occupying billions of states up to the Fermi energy. The Coulomb repulsion associated with putting one additional electron on this grain requires the electron to have an excess energy of e²/2C before being able to enter the grain. The Fermi wavelength is of the order of the interatomic distance, much smaller than the particle size, ensuring that the metal electron states are as in bulk metal, finely dispersed with energy separations much smaller than the e²/2C value.
A different regime holds in a semiconductor dot, also of µm dimensions. The Fermi wavelength in semiconductors is of the order of 100 nm. Now the charging energy as well as finite-size effects in this dot interact with one another. When the charging energy of these dots becomes comparable to the energy level spacing, it is predicted that the spin degeneracy is lifted and a regular renormalized level spacing results [20]. Because, next to the charging effects, the finite-size effects of these types of dots start to become a crucial factor in the description, these dots are called quantum dots.
Where Gorter meets von Hippel we arrive at the ultimate quantum dot: one molecule attached via two tunnel junctions to the leads. Note that this dot diameter is smaller by more than a factor of 1000 compared to the quantum dot regime detailed above. HOMO and LUMO orbitals interact with the charging energy and are expected to play a role in the conduction. Free molecules have a discrete set of electronic levels, with typical level spacings in the eV range. Molecules exhibit vibrational and rotational degrees of freedom, where the typical vibrational energies are in the 0.1 eV range and the rotational energies are in the meV range. One of the early and most spectacular successes of quantum theory is the answers it provided to questions related to molecular structure and spectra. Now we are faced with questions related to a single connected molecule. Will such a system still be described by an electrostatic capacitance? As the molecule only measures a few Å, it is not safe to assume the screening length in the molecule to be smaller than the size of the molecule; it cannot simply be assumed to be an electrostatic capacitance. For simplicity, this system is treated as if it can be described by a single electrostatic capacitance value.
In all of the above described systems, the environment of the junction has played an important role ever since the first experiments on small capacitance junctions. It is important to define the quantum mechanical environment [21][22][23]. Next to the system defining modes, determined for example by junction capacitance, inductance, resistance, the tunneling electron can couple to other modes. These modes can couple to the junction via the leads (lead environmental modes) or via the vacuum (vacuum environmental modes). The vacuum environmental modes reflect the coupling between tunneling electrons and modes in the medium enclosing the electrode-molecule-electrode configuration. These modes are present in every enclosing medium, even in vacuum, hence "vacuum environmental modes" even though the enclosing medium may technically not be a vacuum.
In reality the different types of modes are hard to distinguish; a given environment encompasses a coupling of various modes and mode types [21][22][23]. Quantum dots with R_T ≫ R_Q are well decoupled from lead environmental modes. The decoupling from vacuum environmental modes is, however, difficult to influence from an experimental point of view. It is important to realize that a tunneling electron can couple to vacuum environmental modes and lose or gain energy to or from these modes. With increasing frequency, the coupling to vacuum environmental modes becomes easier, as high-frequency processes experience very low impedance in vacuum; the impedance of vacuum equals only 377 Ω, far less than R_Q.
A study of partially wet phase molecular junctions is presented, where a molecule bridges the gap between two electrodes of an MCB junction. In the ideal situation, the system-defining modes are the only relevant modes for tunneling electrons to couple to; ideally all lead environmental modes and vacuum environmental modes are decoupled from the electrode-molecule-electrode system. Below it will be shown that nature provides for such a system.
Experimental Setup and Way of Work
Experiments are executed at ambient conditions. The classical notched filament MCB junction [24] has been used. A phosphor bronze bending beam is laminated with a 200 µm thick insulating Kapton layer, glued to the bending beam with Stycast epoxy. The filament material is either 99.99% hard temper gold (Au_H) or 99.99% soft temper gold (Au_S), glued with Stycast epoxy to the Kapton-insulated bending beam. A glass liquid cell is mounted surrounding the junction area using a liquid rubber. Fig. 1 shows the bending beam assembly. Two types of cells have been used: one with a 4.5 mm inner diameter (φ 4.5 in), which releases the solution upon pivoting the setup toward the cell opening, where the solution can be absorbed; the other cell has an inner diameter of 2 mm (φ 2 in) and needs to be drained with a paper tip due to the capillary forces holding the fluid in the cell upon tilting. In the measurements, no dependence on the temper of the gold filament could be detected. Some dependence was detected related to the inner diameter of the cell, φ 2 in or φ 4.5 in. For completeness, the temper of the gold and the inner cell diameter are listed for the reported data of every bending beam assembly.
The experimental setup makes use of a pivoting MCB setup, enabling the partially wet phase [25], as detailed in ref [26]. In this setup the MCB bending beam assembly can be pivoted to a desired angle, to either submerge the junction in fluid, or to drain the junction area from fluid while the adjusted junction remains intact during pivoting. For bending beam assemblies using the φ 4.5 in cell, the pivoting was used to drain the cell, after draining the cell was placed in vertical position again. For bending beam assemblies using the φ 2 in cell the setup was maintained in a fixed vertical cell position.
The IV curves are recorded with a HP4155 Semiconductor Parameter Analyzer. The junctions are voltage biased: one side of the junction is kept at zero while the other side is voltage scanned and the current is measured. The HP4155 provides 4 independent source/monitor units (SMUs), of which 2 are required for a measurement. The reproducibility of the obtained results has been validated on 2 independent sets of SMUs. The HP4155/measurement setup attached to the MCB setup has been regularly tested for linearity with a 50 MΩ resistor connected at the device location.
Double shielded cables are used to connect the HP4155 to the device; the last few centimeters to the junction are unshielded. No additional filtering is used. The piezo HV supply, normally in use for fine tuning of a MCB junction, is not used for the reported experiments. IV traces (single direction, indicated by small arrows) consist of 1000 data points, recorded in 45 seconds. The time between consecutive data points remains 45 ms where partial traces are shown. Scans which are recorded faster are indicated. On some occasions two consecutive scans with opposing scan direction are shown; in these cases each scan also consists of 1000 data points and there is no delay at the point of scan reversal.
Two different types of experiments have been performed. In one type a BDT/THF solution is used with a concentration of approximately 1 mM, where the intent is to create a THF partially wet phase electrode-BDT molecule-electrode structure. The other experiment type makes use of the pure THF fluid with the intention to create a THF partially wet phase electrode-THF molecule-electrode structure. In each type of experiment the solution/fluid is injected into the cell prior to breaking the filament. The molecules and the THF fluid are handled under inert Argon gas. Standard cannulation techniques for handling air sensitive reagents have been used. Only when injecting the BDT/THF solution or the pure THF fluid into the cell is it exposed to air. The following experimental way of work has been followed in the described experiments:
• The MCB setup is pivoted such that the liquid cell is in upright, vertical position.
• The liquid cell is injected with either the 1 mM BDT solution or the THF fluid by a glass syringe with stainless needle, while the bending beam assembly filament is still unbroken.
• The motor is switched on while monitoring the conductivity of the filament; once the filament is broken the motor is immediately switched off. Based on the spindle velocity as well as the bending beam assembly attenuation factor it is estimated that the electrode separation at this stage is in the order of nanometers.
• In this situation the junction is completely immersed in fluid and thus in the completely wet phase. Approximately 2-5 minutes settling time is provided for wetting the newly formed electrodes and for molecules to adhere, prior to draining.
• The bending beam assembly is drained, either by pivoting the setup (φ4.5_in cell) or by a paper tip (φ2_in cell). Draining is performed in such a way that it takes about half an hour to get to the completely dry phase.
• Directly after draining, the setup is pivoted back to the original position in case the φ4.5_in cell has been used; IV measurements can be recorded while the junction area slowly dries. Alternatively a current measurement can be taken (for example at −5 V) every few minutes, where full IV measurements are only started once the partially wet phase has been reached.
• Once the setup is in the original position, it is not to be touched any more. The drying process progresses the junction through the following phases:
- The completely wet phase: once broken and still immersed in fluid, and also just after draining.
- The partially wet phase: a layer with a thickness of only 1 or a few THF molecules lines the electrode-molecule-electrode system.
- The completely dry phase: defined as completely dry from THF.
• After reaching the completely dry phase the MCB junction has to be discarded and replaced with a new (unbroken filament) bending beam assembly.
A limited number of experiments are carried out in the H₂O partially wet phase. In order to arrive at this phase a regular household humidifier is placed together with the MCB setup and a hair hygrometer in an acrylate plastic enclosure.
The majority of the experiments are performed in the THF partially wet phase, limiting measurement time to only a few minutes before entering the completely dry phase. During this time the IV traces still change as a result of the continuous evaporation of THF molecules. Reproducing the results within the same bending beam assembly is therefore inherently not possible. For this reason only results are shown which have been demonstrated in multiple bending beam assemblies. The periodicity of the presented data, the data validated on two SMU sets, the specific reproducing line shape of the oscillations and, in some cases, the demonstrated progression of the effects over a short time lapse should all provide confidence in the presented data despite the lack of reproducibility within the same bending beam assembly.
Field emission oscillations
Once the bending beam assembly filament is broken in pure THF, IV curves are as indicated in Fig. 2a. The IV curves show a cyclic voltammetry component, exhibiting a non-zero current at V = 0. Structure in the IV curve is also observed to reproduce at a voltage value which switches polarity once the scan direction is reversed. Towards higher voltages a more or less linear part shows in the IV curve which exhibits no or only minor hysteresis. This IV curve is very similar in shape to an IV curve of a junction immersed in a 1 mM BDT/THF solution, of which the conduction at 10 V is approximately 100 nA, see the inset of Fig. 3. These IV curves of devices submerged in fluid are independent of electrode gap distance, unlike the exponential dependence of the tunnel current on a vacuum gap. In this case we cannot draw any conclusion from these curves as to how large the gap between the electrodes actually is. Crashing the electrodes into each other leads to a metallic constriction. The resistance of the smallest possible one atom point-contact amounts to approximately h/2e² ≈ 12.9 kΩ, which would show as a vertical line through the origin on the scale of Fig. 2a. The limits of cyclic voltammetry will be investigated under the continuous reduction of the amount of fluid, until we reach a single layer of fluid molecules, solidly adhered to the electrodes. In Fig. 3 this is demonstrated during the drying process for a BDT/THF solution. The effect of the draining process can be registered during a measurement, indicated in the inset of Fig. 3 as a steep decrease of the current. In general a current reduction of about 50-65 % is observed at draining. After draining, the wet electrodes and junction area will slowly start to dry, which lasts approximately half an hour. The conductance of the junction increases continuously during the drying process. The IV curves can be recorded during this process, as the duration to take a trace is short compared to the duration of the drying process. The IV curves in Fig. 3 are recorded at fixed time intervals for a positive scan direction during the drying process. Just after draining the current is approximately 50 nA at 10 V. Increases during the drying process to values of 3 µA at 10 V have been measured, prior to a sudden conductivity decrease upon entering the completely dry phase.
A law of nature seems to hold for two electrodes spaced at nanometer distance during the drying process. Be it soft temper or hard temper Au used as the MCB electrode material, be it broken in air and subsequently immersed in fluid or broken directly in a THF environment, the observed electrode behavior during the drying process is the same. Once the excess fluid has been drained, the continuous increase of the conductivity is attributed to a constantly decreasing distance between the two closest points on the opposing electrodes due to the lasting evaporation of the fluid on and between the electrodes. Apparently there is enough elasticity in the nanometer separated electrodes such that the exact initial separation does not matter. The continuously decreasing amount of fluid between the electrodes pulls the electrodes ever closer together. This process continues until the fluid is getting depleted and a molecule is squeezed between the electrodes, blocking any further movement. The adhesion forces of the molecules (either BDT or THF) to the gold electrode surface, together with the partially wet phase layer which lines the entire electrode-molecule-electrode system, are such that a metal-molecule-metal junction is favored over a metallic junction.
The above also holds for electrodes broken in air where a metallic microscopic contact is re-established by reversing the motor action and subsequently exposing the junction to the solution. Once the solution is introduced the microscopic metallic contact is "broken" because fluid gets in between the electrodes. Fig. 2b shows data from a bending beam assembly in the drying process after being immersed in a BDT/THF solution. Although the conductivity has increased by a factor of 10 compared to the value just after draining, there is still a large similarity with the curve in Fig. 2a immersed in pure THF. The conductance of the IV traces in Fig. 2b is shown in Fig. 2c and d. Typical voltammetry behavior is observed in the structure between 1-2 V, which reproduces in the opposing scan direction at opposite polarity. In addition an increase in noise towards higher voltages shows. This is consistent with Fig. 4, which shows partially wet phase data of a junction which has also been immersed in a BDT/THF solution. In the conductance data an increase in noise is present for positive polarity in panel b. Conductance oscillations show in both panels b and c for negative voltages in excess of −4 V. In panel c the conductance oscillations are bounded by a smooth envelope, sometimes called "wave package" or "frequency beating". Oscillations in the conductance at voltages in excess of the work-function Φ_Au, approximately 5 V for gold, are well known and are usually explained in the context of field emission resonance [29][30][31].
The IV curve in the positive scan direction shows a steep decrease in conduction which is typical for entering the completely dry phase. In the completely dry phase the THF is fully evaporated. The remaining molecules on and between the electrodes adhere strongly to the electrodes and are now exposed to water and air molecules. The current is quickly reduced from a maximum value just prior to entering the completely dry phase to pA values in a matter of minutes. In this situation the electrodes are still in contact, with undefined molecules between them acting as glue. After entering the completely dry phase the bending beam assembly can no longer be used, as the electrodes are contaminated with undefined molecules and have been exposed to air and water molecules. Fig. 5 presents data in the partially wet phase from a bending beam assembly which has been exposed to the THF fluid. The IV curve over the full −10 V to 10 V range is shown in the upper inset; an enlargement of the central section is shown in the lower inset. This IV curve also shows pronounced oscillations; in this case, however, the current oscillates, showing in the IV curve as negative differential resistance parts. This is uncommon in devices where the electrodes are made from metal and where the density of states in the electrodes is expected to be fairly constant at the Fermi energy. It is posed that partially wet phase molecular junctions created in the way described above should not be described by a fairly constant density of states in the electrodes. Current oscillations can also be observed in Fig. 6a and Fig. 6b, which detail IV traces of two different bending beam assemblies in the partially wet phase after being exposed to a BDT/THF solution. The insets show the corresponding large scale IV curve for both bending beam assemblies. Fig. 6a and Fig. 6b again show "wave package" or "frequency beating" behavior, where alternating parts of the IV curve do and do not show current oscillations. Fig. 6a shows an abrupt transition to the completely dry phase near 8 V. Although gradual transitions to the completely dry phase also occur, this shows that defining the point in time where the partially wet phase ends is not difficult.
Normally the partially wet phase exists just prior to entering the completely dry phase. The transition between an increasing and a decreasing conductance is an indication that the completely dry phase nears. In addition, the partially wet phase in general shows oscillations in the current at larger (eV > Φ Au ) voltages.
Oscillations in a THF barrier THF partially-wet-phase junction
In this section all devices have been immersed in pure THF fluid; after draining and drying, the results are recorded in the partially wet phase. The bridging THF molecule(s) between the electrodes are considered to constitute the tunnel junction barrier. The THF layer lining the device provides the partially wet phase.
The experiments presented in the following sections zoom in at a high voltage offset with a 1 mV resolution for a 1 V scan and a 2.5 mV resolution for a 2.5 V scan. Due to the nature of the experiment and the short duration of the partially wet phase, no data for the large scale −10 V to 10 V total IV curve have been measured. In general the IV curve at the large −10 V to 10 V scale for THF barrier junctions can be expected to be similar in shape to the one shown in Fig. 5. Data from Fig. 7 panels a, b and c are measured for the same junction at about 1 min intervals. In panels b and c an increasing and decreasing voltage scan are part of the same scan; in panel a only the increasing scan is shown. The curve in panel a starts with a steep decrease in current; this "switch on" effect is followed by a linear increase. Initially the drying process increased the conductivity going from panel a to panel b, as can be observed from the current levels. Going from panel b to panel c the overall conduction has started to decrease. A clear pattern of oscillations is revealed. On a timescale of several minutes the pattern changes, with the overall trace changing as a result of the drying process. The amplitude of the oscillations is more or less constant over the larger stretches in panels b and c. Panel c in addition shows that the period of the oscillations is not constant. The lower curve shows periods from 21 mV at 9 V to 12 mV at 10 V.
In the upper curve the period varies from 14 mV at 9.35 V to 12 mV at 10 V. Parts of Fig. 7 have been enlarged in Fig. 9. The line-shapes of the oscillations in the different panels have a sharp "V shaped" minimum in common. The top part of the line-shape is rounded. The line-shape in Fig. 9b2 is more or less vertically oriented; approximately 2 minutes later the increasing voltage scan in Fig. 9c shows line-shapes leaning to the left while the decreasing scan shows line-shapes leaning to the right. Apart from panel b1, which shows the developing oscillations near 10 V, the panels show sharply defined "V-shaped" oscillations of similar amplitude. The period in panel a is seen to gradually reduce for increasing voltage.
A scan from 9 V-10 V with a 1 mV resolution in an alternative bending beam assembly is shown in Fig. 8. The oscillations attain a period of 12 mV at 10 V. An enlargement of the 9.75 V-10 V range from panel a is shown in panel b, where the increasing voltage scan has been offset by −1 nA for clarity. Similar to Fig. 9, this line shape also has a sharp "V shaped" minimum. A spike of a single data point stands out on one of the flanks and reproduces over the stretch of the oscillations. The line-shape in the increasing voltage scan continues, current-axis mirrored, in the decreasing voltage scan, inclusive of the (1-2) mV features which are mirrored as well.
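The quoted periods (21 mV narrowing to 12 mV) can be extracted from a trace by measuring the spacing between successive sharp minima. Below is a minimal sketch of such an extraction; the trace is synthetic (a drifting background plus an oscillation with a slowly shrinking period), since the measured data are not reproduced here.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic stand-in for a 9-10 V IV trace at 1 mV resolution.
V = np.linspace(9.0, 10.0, 1000)
period = np.interp(V, [9.0, 10.0], [0.021, 0.012])   # 21 mV -> 12 mV
phase = 2 * np.pi * np.cumsum((V[1] - V[0]) / period)
I = 1e-9 * (V - 8.0) + 0.2e-9 * np.cos(phase)        # drift + oscillation

# Period estimate: spacing between successive sharp minima of the trace.
minima, _ = find_peaks(-I)
spacings = np.diff(V[minima])
print(f"period near 9 V:  {spacings[0] * 1e3:.0f} mV")
print(f"period near 10 V: {spacings[-1] * 1e3:.0f} mV")
```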
A scan which has been taken in 5 seconds in one direction is presented in Fig. 10. The increasing and decreasing voltage scan have been recorded in one turn without interruption. A clear "switch on" effect is also present in this IV curve. The amplitude is of the same order as that of the oscillations in the preceding figures.
Oscillations in a BDT barrier THF partially-wet-phase junction
The junctions in this section have all been immersed in a 1 mM BDT/THF solution. All results are obtained in the partially wet phase, after the draining and drying process. The goal is to capture an island in the form of a phenyl ring from the BDT molecule, coupled via two barriers to the opposing electrodes.
In Fig. 11 an increasing and decreasing voltage trace is shown for two bending beam assemblies, one in panel 11a and the other in panel 11b. The lower IV curves with a decreasing voltage are offset by −0.2 µA for clarity. The increasing and decreasing voltage scan are both part of the same scan. Clear discontinuous current steps are present in the observed structure. For the increasing scans a negative differential resistance section follows after each step; this section has some curvature before the next step occurs. For the decreasing scans the behavior is simply a continuation of the behavior in the increasing scan, but mirrored in the current axis. The curvature of the segments gets somewhat steeper towards higher voltages, which coincides with smaller voltage increments in subsequent steps. The insets show the IV curve over the full −10 V to 10 V range; the lower curve is also offset by −0.2 µA. The structure at positive bias does not reproduce at negative bias in either of the shown traces.
Also for the high voltage offset BDT barrier junctions the scans are not accompanied by the −10 V to 10 V scan. These scans are similar to the ones shown in the insets of Fig. 6 and Fig. 11. Fig. 12 presents data from a trace between 8 V and 9 V. Repetitive structure is present in both the increasing and the decreasing voltage scan. Panels b and c provide an enlargement; in panel c the lower IV curve has been offset by −3 nA for clarity. Over 40 oscillations in the upper curve and over 30 oscillations in the lower curve are counted. As in Fig. 11, the shape of the oscillation for the decreasing voltage scan is a current-axis mirrored version of the shape in the increasing scan.
The oscillation shape also has some resemblance to the line shapes shown in Fig. 11: there is a distinct discontinuous step present in every period. The periodicity of the oscillation changes from 12 mV at 8.4 V to 14 mV at 10 V. In Fig. 13 data from 2 additional bending beam assemblies, a/b and c/d, show a similar result as indicated in Fig. 12. A regular pattern of peaks and discontinuous current jumps is clearly visible. Also in this case asymmetry in the peaks indicates a negative differential conductance trailing the current jump for the increasing scan and a positive differential conductance trailing the jump for the decreasing scan.
Fig. 13: Common features with Fig. 11 are current discontinuities, trailing negative and positive differential resistance after a discontinuity depending on scan direction, and current-axis mirrored structure upon scan reversal. In panel c the current of the lower curve is offset by −3 nA for clarity. Au_S, φ2_in.
The complete (9-10) V scale of the data in Fig. 13a and b is presented in Fig. 14. Over 100 peaks are counted in this scan; the periodicity changes from 24 mV near the start of the scan to 6 mV at 10 V.
A markedly different line-shape is observed at the start of the oscillations in Fig. 15a, with a scan extending from 7.5 V to 10 V. The initial line shape exhibits a very sharp "V" shaped minimum and a somewhat rounded top, as observed in the previous section. For increasing voltages exceeding 8.75 V a "shoulder" develops which shifts with respect to the sharp bottom of the line, as shown in Fig. 15b. Fig. 15c shows the development of a smooth line-shape into a more chaotic one, although the repetition of the sharp minima reflects the presence of the periodicity all the way to 10 V.
Fig. 14: A high voltage scan over a 1 V range; current discontinuities show as narrow spikes. The periodicity exceeding 9.65 V is regular, as was shown in Fig. 13a and b. The lower curve has been offset by −5 nA for clarity.
The transition to the completely dry phase is indicated in Fig. 16. A steep decrease in conduction, with a lot of structure, is abruptly transformed into a line without any structure. This is the point where the partially wet phase is expected to transit abruptly to the completely dry phase, as for example also noted in the inset of Fig. 6a.
Large voltage range molecular spectra
In the previous sections it has been shown that some of the high voltage fine structure of the overall line-shape for THF barrier and BDT barrier junctions in the THF partially wet phase reproduces. Is it possible to also reproduce partially wet phase molecular spectra over a large voltage range? The devices in Fig. 17 have been immersed in the BDT/THF solution in the φ4.5_in cell. Two consecutive voltage scans for bending beam assemblies a and b in the partially wet phase are presented. The IV curves are offset by 0.4 µA with respect to each other. For the increasing scans two discontinuities are visible at negative bias, for which the amplitude and exact positions differ. The observed structure at positive bias shows some resemblance to what was shown in Fig. 11. In this case there are plateaus, there is structure on the plateaus, and there are discontinuous steps as well, but the repetition is more irregular and fewer in number compared to the structure in Fig. 11. For the decreasing voltage scans the structure at positive bias resembles that in Fig. 11 in that the discontinuous steps are trailed by a positive differential resistance; however the repetition is more irregular and there are far fewer steps. For the scans in negative direction there are inflexion points at approximately −2 V and −5 V. The inset shows how the IV curve from assembly a has progressed approximately 15 minutes after recording the IV curves in the main figure. The structure has disappeared and the current extends to 3 µA at 10 V. IV traces of two bending beam assemblies, a and b, with an H₂O partially wet phase are shown in Fig. 18. These junctions are created by starting off with a bending beam assembly immersed in a BDT/THF solution. The procedure to arrive at a molecular junction in the THF partially wet phase as described above has been followed, up to the point where during the formation of the junction the ambient H₂O humidity was increased to 95 % by using a humidifier within a contained environment. The 95 % humidity level was reached in a matter of seconds; the humidifier was then switched off and the system was allowed to gradually transit to an ambient humidity level of 60 % in a few hours' time. A gradual transition from the default BDT IV curve (see for example the insets of Fig. 6 and Fig. 11) to the IV curves shown in Fig. 18 took place. The IV curves have been recorded in a continuous scan mode between −5 V and 5 V; the current level compliance was set at −3 µA and 3 µA. Once reaching their final shape the IV curves a and b1 were stable for hours. Doubling the voltage scan speed, see curve b2, left the IV curve as is; however the subgap structure near −1 V and 1 V approximately doubled in height. The insets show an enlargement of the central section of curves a and b1. Fig. 18 shows the only data concerning H₂O partially wet phase junctions.
One-dimensional conduction
From the data in section 3 it is clear that the conductance of the devices under study is of the order of, or well below, 10⁻³ × 2e²/h. This indicates that the physics is described by transmission and reflection, deep in the Landauer regime [32]. Fig. 19 provides a schematic representation of the junctions which are studied.
Fig. 19: A schematic representation of the junctions under study. The molecule is connected via two barriers to a quantum wire on either side.
The central molecule is connected via tunnel barriers to the two quantum wires, extending in the x-direction with a length L_x, a width L_z and a height L_y. For a free electron gas the energy of the conduction modes in the quantum wires is described by E_{n,p} = E_c + n²ε_y + p²ε_z + (ħk_x)²/2m; likewise the relation for the valence modes is given by E_{n,p} = E_v − n²ε_y − p²ε_z − (ħk_x)²/2m. Here E_c − E_v is the gap between the lowest conduction band and the highest valence band, m is the electron mass, and ε_y, ε_z are provided by (πħ)²/2mL_y² and (πħ)²/2mL_z² respectively; n, p are integers > 0. Neglecting spin-degeneracy and taking L_z = L_y equal to 8.6 Å, or 3 Au atom diameters, leads to 0.5 eV for ε_y, ε_z. The density of states of the quantum wires is provided in Fig. 20a, where the estimate for E_c − E_v is 4 eV, n and p run from 1 to 4, and all modes are counted. It is noted that the inputted numbers are rough estimates, intended to provide a qualitative picture of the conduction through a single molecule. In Fig. 20b only one conduction mode is populated; the lowest n = 1, p = 1 mode is responsible for the conduction and the higher energy conduction modes are empty. Finally in Fig. 20c the filled valence modes are allowed to interact and couple, leading to a smoother density of states as indicated qualitatively. Derived from first principles, Grigoriev et al. [33] arrived at a gap pinned to the Fermi level for a BDT molecule bridged between two Au electrodes, which supports the described experimental observations.
Fig. 20: a. One dimensional density of states for a free electron gas, estimated input parameters: L_z = L_y equal to 8.6 Å leads to ε_y = ε_z = 0.5 eV, E_c − E_v = 4 eV. Panel b shows the filling of the modes up to and including the lowest conduction sub-band. In panel c the valence sub-bands are allowed to interact and couple together.
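The 0.5 eV subband scale quoted above is straightforward to verify numerically. The sketch below uses the paper's estimates (L_y = L_z = 8.6 Å, E_c − E_v = 4 eV) and builds the free-electron 1D density of states, which diverges as 1/√(E − E₀) at each subband edge E₀; it is a qualitative illustration, not a fit to any device.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg
eV = 1.602176634e-19     # J per eV

L = 8.6e-10              # wire width/height, 8.6 Angstrom
eps = (np.pi * hbar)**2 / (2 * m_e * L**2) / eV
print(f"subband energy eps_y = eps_z = {eps:.2f} eV")   # ~0.51 eV

# Conduction subband edges E_np = E_c + (n^2 + p^2) * eps, n, p = 1..4;
# each contributes ~1/sqrt(E - E_np) to the 1D density of states.
E_c = 2.0                # half of the assumed 4 eV gap above mid-gap
edges = sorted(E_c + (n**2 + p**2) * eps
               for n in range(1, 5) for p in range(1, 5))
E = np.linspace(0.0, 12.0, 2400)
dos = np.zeros_like(E)
for E0 in edges:
    above = E > E0
    dos[above] += 1.0 / np.sqrt(E[above] - E0)   # arbitrary units
print(f"lowest conduction subband edge at {edges[0]:.2f} eV")
```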
For the voltage biased systems under study it is notable that almost the entire voltage will drop across the molecule and associated barriers. The voltage over the quantum wire typically amounts to less than one or a few mV even if the molecule is biased at 10 V. This is due to the total conductance being far smaller than the conductance of 2e²/h per mode in the quantum wire. The density of states of the quantum wire in Fig. 20c is voltage independent and will simply shift "up and down" with an increasing or decreasing voltage bias over the contacts, even if this amounts to 10 V. Is 10 V not an extremely high voltage to bias a single molecule with? Molecules exhibit efficient tunnel barriers, capable of sustaining high eV molecular energy levels. In addition, a single molecule can consist of multiple barriers [33]; for example a bridging BDT molecule exhibits 4 tunnel barriers, reducing the bias per barrier:
• Barrier 1 from the 1st Au electrode to the 1st S atom
• Barrier 2 from the 1st S atom to the phenyl ring
• Barrier 3 from the phenyl ring to the 2nd S atom
• Barrier 4 from the 2nd S atom to the 2nd Au electrode
Finally, the experiment allows and caters for this. Dissipation will, in accordance with the Landauer theory, take place far away from the molecule and quantum wires, in the electrode banks where energetic electrons will be able to get into thermal equilibrium with the lattice.
What is the experimental evidence in support of one dimensional conduction? To answer this question the process of how to arrive at partially wet phase junctions is detailed. The process is split into two: a formation process followed by a nucleation process. During the formation process excess fluid is slowly evaporating, BDT molecules are attaching to surfaces and the two electrodes are slowly getting ever closer together, continuously increasing the conductivity. The conduction still increases over time for a THF partially wet phase BDT junction after the formation process has stopped. This increase is attributed to physisorption of one or more molecules onto the bridging molecule. The increasing size of the central island leads to a denser energy level spacing, thus increasing conductivity while still maintaining a single point of tunneling related to both electrodes. This process continues until the nucleated island becomes too big and starts to touch one of the two electrodes, creating multiple conduction points and thus losing the one dimensionality.
The data in Fig. 17 have been carefully selected to be similar at a specific point in the nucleation process from a data set of 20 different bending beam assemblies. The data exhibit structure in the IV curve which disappears altogether into a higher conductivity curve after waiting 15 minutes, see the inset in Fig. 17. Even though the a and b curves are not identical, they do show similar features for the increasing as well as the decreasing curves. In general the φ4.5_in cell shows more nucleation compared to the φ2_in cell, as defined by the conductivity of the final IV curve just prior to entering the completely dry phase. The φ4.5_in cell can conduct 3 µA at 10 V for a BDT barrier junction; for the φ2_in cell a 1 µA level is the norm. This is attributed to an easier access of air molecules to the junction in the φ4.5_in cell. The data in Fig. 18 stand out, as halfway through the BDT/THF formation process the H₂O humidity was increased to 95 %. It is astonishing that this system reorganizes itself into a molecular junction that is stable for hours with an H₂O partially wet phase layer. The collapse into a completely dry phase is not observed, nor is the nucleation phase observed. Apparently the H₂O partially wet phase layer protects the bridging BDT molecule from other molecules nucleating on it. The fact that the H₂O partially wet phase remains intact indefinitely at ambient conditions is to be expected [25]. These data have not been selected. The same reproducible, enduring end-state IV curves indicated in Fig. 18 result from this experiment in 4 out of 8 bending beam assemblies.
The sub-gap current peak in the curves in Fig. 18 shows voltammetry behavior: the current peak at approximately 1 V in the positive direction reproduces in the opposing scan direction at reverse polarity. What is the source of this current? As the electrodes are not in the wet phase there can be no chemical electrode surface reaction responsible. The dipole moment of the H₂O molecule is expected to play a determining role. At a certain voltage bias the dipole moments of all the partially wet phase H₂O molecules on the biased electrode will direct themselves with their electropositive or electronegative side towards the electrode, depending on the polarity of the bias. These aligned dipoles will hold some opposite charge, residing on the electrodes, captive. Once the dipoles are flipped by a changing polarity on the electrode, all the charge that was held captive becomes available and creates a displacement current proportional to dV/dt. From Fig. 18 it is obvious that the flipping of the dipoles occurs in a range around −1 V and 1 V, releasing about 10⁻⁶ C as obtained from integrating the sub-gap current peak over the time in which it was measured. Doubling the voltage scan rate in curve b2 in Fig. 18 also approximately doubles the measured sub-gap current, which is consistent with a constant total captive charge, released upon flipping of the dipoles, independent of scan rate. Partially wet phase voltammetry can thus be used as a means of detecting H₂O in, for example, the THF partially wet phase layer, by studying the presence of structure at −1 V and at 1 V.
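The ~10⁻⁶ C estimate amounts to integrating the sub-gap current peak over the time in which it was traversed, Q = ∫ I dt. The sketch below illustrates the bookkeeping on a synthetic peak (the ~40 nA Gaussian is a placeholder, not the measured data) and reproduces the consistency check from the text: doubling dV/dt doubles the peak current but halves the traversal time, leaving Q unchanged.

```python
import numpy as np

scan_rate = 0.022                    # V/s, the standard scan rate used here
V = np.linspace(0.5, 1.5, 1000)      # voltage window around the 1 V peak
t = V / scan_rate                    # time axis for a linear voltage ramp
I_peak = 40e-9 * np.exp(-((V - 1.0) / 0.15)**2)   # synthetic sub-gap peak

Q = np.trapz(I_peak, t)
print(f"released charge Q ~ {Q:.1e} C")

# Doubling the scan rate doubles I but halves the time spent on the peak:
Q2 = np.trapz(2 * I_peak, V / (2 * scan_rate))
print(f"at double scan rate Q ~ {Q2:.1e} C")   # same value
```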
The two main experimental findings in support of 1D conduction are: 1. Obtaining the same IV curves for multiple BDT barrier H₂O partially wet phase junctions is a strong indication that the same singular molecular junction is created. This indicates that the THF barrier and BDT barrier THF partially wet phase junctions are also of a singular nature.
2. The regular occurrence of negative differential resistance in the IV curves in sections 3.2 and 3.3 is also a strong indication of 1D conduction.
Gundlach oscillations
Oscillations in the conductance as a function of energy in a planar metal tunnel junction biased with a voltage larger than the work functions of the electrodes were predicted by Gundlach in 1966 [29]. Gundlach oscillations, also known as field emission resonances, can be viewed as electron standing waves in the resonator bounded by the barrier created by the field on one side and the surface potential step on the other side, see Fig. 21. Gundlach oscillations have been observed in planar junctions [34], in high vacuum STM junctions [30] as well as in MCB junctions at cryogenic temperatures [31]. Solving the 1 dimensional Schrödinger equation with a trapezoidal barrier and a surface potential step at electrode 2 leads to standing waves at specific energy eigenvalues E_n in the well, approximately E_n = Φ_Au + (3πħeε/(2√(2m)))^(2/3) n^(2/3) for a triangular well. The one dimensional density of states peak at the Fermi level is ideally suited to probe these states, explaining the observed current oscillations in IV curves.
An effective electric field inside the barrier is represented by ε. At the surface potential step in electrode 2 there is no barrier; the energy spectrum is continuous and electrons can enter electrode 2 at any energy above Φ_Au. Fig. 21 shows the junction conductance peaking whenever the bias voltage reaches one of the levels E_n.
In Fig. 21 the one dimensional density of states from Fig. 20c is included for electrodes 1 and 2. For the vast majority of the data obtained from partially wet phase junctions created in the way described above, current oscillations are detected. This is consistent with a one dimensional density of states. With an increasing bias the oscillations in the current, for example in Fig. 5, are explained in a natural way. Whenever the peak at the Fermi level coincides with an energy state the current is maximized; if the Fermi level is in between states the current is reduced. At some bias greater than E_c − E_v the valence band states start to gradually contribute to the current as well.
In Fig. 22 the peak number n and peak energy values are displayed for data from Fig. 4c and Fig. 5. The work-function of gold, Φ_Au, as well as the effective electric field ε can be directly determined from the intercept with the energy axis and the slope of the graph respectively. Values are indicated in the graph; the values for ε are low, and the work-function values are in line with what is reported in [31].
Fig. 22: Energy of the observed peak in the IV curve versus peak index, revealing the work-function Φ_Au as well as the effective electric field ε.
The "beating" or "wave package" visible in the oscillations in Fig. 4c has been observed in various experiments [31,35]. Fig. 6 seems to indicate a number of these wave packages; however it is not only the amplitude being modulated, the period of the oscillations shows modulation as well. On the basis of Fig. 21 we expect the periods closest to zero to be the largest, with the periods getting smaller as the bias increases away from zero. In contrast we observe in Fig. 6a a decreasing period getting closer to zero, and for Fig. 6b an increasing period for voltages increasing towards 8 V. Both IV curves in Fig. 6a and Fig. 6b show regions where there seems to be destructive interference over a substantial voltage range. The full explanation of this behavior is unclear at present.
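A minimal sketch of the extraction behind Fig. 22, assuming the triangular-well form quoted above so that the peak energies are linear in n^(2/3): a straight-line fit then yields Φ_Au from the intercept and ε from the slope. The "measured" peaks below are synthetic placeholders, not the paper's data.

```python
import numpy as np

hbar = 1.054571817e-34
m_e = 9.1093837015e-31
e = 1.602176634e-19

# Synthetic peak energies E_n = Phi + c * n^(2/3) (in eV, placeholders).
Phi_true, field = 5.0, 2e8          # work function [eV], field [V/m]
c = (3 * np.pi * hbar * e * field / (2 * np.sqrt(2 * m_e)))**(2 / 3) / e
n = np.arange(1, 11)
E_n = Phi_true + c * n**(2 / 3) + np.random.normal(0, 0.01, n.size)

# Linear fit of E_n versus n^(2/3): intercept -> Phi, slope -> field.
slope, intercept = np.polyfit(n**(2 / 3), E_n, 1)
field_fit = (slope * e)**1.5 * 2 * np.sqrt(2 * m_e) / (3 * np.pi * hbar * e)
print(f"Phi_Au ~ {intercept:.2f} eV, eps ~ {field_fit:.2e} V/m")
```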
What is the effect of the partially wet phase?
This is explained in the framework of the Fermi Golden Rule for tunnel junctions and the more generic form which takes into account energy exchange between tunneling electrons and the environment. The Golden Rule describes a transition rate between an initial state i and a final state f which correlates with the strength of the coupling between the states i and f. The Golden Rule is used successfully in a large variety of physical transitions:

Γ_{i→f} = (2π/ħ) |⟨i|H_t|f⟩|² δ(E_i − E_f).    (3)

The delta function represents the condition on the energies of the initial and final states: only energy conserving transitions contribute. Applied to the tunnel barrier, the Golden Rule states that the transition rate Γ_{i→f} represents the transition from an electron in state i on one side of the barrier to state f on the other side of the barrier. For elastic tunneling, where the energy of the electron prior to and after tunneling is identical, equation 3 is described in the following well-known form, at a voltage V for the forward rate Γ(V) only:

Γ(V) = (1/e²R_T) ∫ dE f(E) [1 − f(E + eV)].    (4)

Usually it is assumed that the electrodes are large on an atomic dimension, to ensure the electrodes contain a large number of free electrons and possess quasi continuous energy spectra. The tunnel resistance R_T is defined by 1/R_T = (4πe²/ħ) |T|² D_l D_r, where |T|² is the average tunneling matrix element squared and D_l, D_r represent the densities of states in the left and right electrode, expected to be approximately the same and constant; f(E) = (1 + exp(Eβ))⁻¹ is the Fermi Dirac distribution at inverse temperature β = 1/k_B T. In a full quantum mechanical description the tunneling electrons are allowed to interact and exchange energy with the environment. For example the system defining modes related to the charging energy may be excited by exchange of energy quanta e²/2C, just as other environmental modes may be excited by energy quanta absorption from the tunneling electron. Likewise the energy of the tunneling electron can be increased by transfer of energy quanta from the environment to it. Under the 2 conditions that R_T ≫ h/2e² (weakly coupled electrodes) and that charge equilibrium is established, i.e. the time between tunnel events is larger than the charge relaxation time, equation 4 takes on the following form for the forward rate Γ(V) [21][22][23]:

Γ(V) = (1/e²R_T) ∫∫ dE dE′ f(E) [1 − f(E′)] P(E + eV − E′).    (5)

The corresponding physics can be described by a tunneling electron in an initial state with energy E having a multitude of probabilities of tunneling to a multitude of final states, thereby transferring an energy (E − E′). If (E − E′) is positive the electron transfers energy to the environment; if it is negative the electron absorbs energy from the environment. Quantum mechanics teaches us that we can only know the probability P(E − E′) of the tunneling electron ending up in a final state E′, thus ∫_{−∞}^{+∞} P(E) dE = 1. For every initial state tunneling to a final state, an integration over P(E − E′) needs to be performed. P(E) is provided for a number of different environments in reference [22].
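As a numerical check on the elastic form (equation 4 in the reconstruction above), the energy integral evaluates analytically to eV/(1 − e^(−βeV)), which reduces to the ohmic result I = eΓ ≈ V/R_T for eV ≫ k_B T. The sketch below verifies this, working in units where energies are measured in eV.

```python
import numpy as np

kT = 0.025   # k_B T at room temperature, in eV

def fermi(E):
    return 1.0 / (1.0 + np.exp(E / kT))

E = np.linspace(-2.0, 2.0, 200001)   # integration grid, eV
for V in (0.05, 0.5, 1.0):
    integrand = fermi(E) * (1.0 - fermi(E + V))
    numeric = np.trapz(integrand, E)
    analytic = V / (1.0 - np.exp(-V / kT))
    print(f"V = {V:4.2f} V: numeric {numeric:.4f} eV, "
          f"analytic {analytic:.4f} eV")   # -> I ~ V/R_T for eV >> kT
```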
At this stage a "gedankenexperiment" related to the influence of the partially wet phase is postulated. It is assumed that charge is able to accumulate at the THF partially-wet-phase/air interface. The charge is able to move freely at this interface, without being able to flow to the underlying metal conductor. Under these conditions the function P(E) in equation 5 is dramatically impacted, since the conducting layer at the partially-wet-phase/air interface effectively prohibits any interaction of tunneling electrons with vacuum environmental modes. The device, enclosed by the partially wet phase layer, provides the only surviving modes that tunneling electrons can couple to. In other words, the device is caged. A miniature Faraday cage prevents tunneling electrons inside the cage from interacting with low impedance vacuum environmental modes outside of the cage.
Continuous Q charging of a THF barrier THF partially-wet-phase junction
A THF barrier/partially-wet-phase junction is schematically shown in Fig. 23c. The shield lining the junction indicates the junction is in the partially wet phase and thus shielded from vacuum environmental modes. The well-known unshielded counterpart of this junction is shown in Fig. 23a. A sudden switch to a large voltage bias often leads to a switch on effect as demonstrated in Fig. 7a. The current adapts over seconds or tens of seconds to the sudden high voltage bias. Other presented graphs may not show this, as these graphs may be recorded during continuous scanning after the first "switch on" scan. The switch on effect is attributed to a redistribution of charge at the THF partially-wet-phase/air interface. The partially wet phase layer is finite: it stops at the cell wall, and charge on it will try to counter the electrode charge, acting as a capacitor C_d. The reported systems may show a displacement current I_d = C_d dV/dt, or a continuously increasing current as in Fig. 8 or decreasing current as in Fig. 7b and c, depending on the stage the drying process is in. The resulting drift in the measurements is a given; it does however not prevent the study of the processes taking place inside the junction, which in general correlate to fast timescales in relation to the scale at which the drift takes place.
The charging energy E_CH required to charge the junction capacitance C with one additional electron on top of the n − 1 electrons already on the capacitor relates to:

E_CH(n, Q) = (ne − Q)²/2C,    (6)

where Q represents the total charge on the capacitor. This equation has been used for the single electron-box decoupled capacitor [36]; it will in general define a decoupled capacitor. The energy E_CH is drawn in Fig. 24a as a function of Q for various occupation electron numbers n. With increasing Q, the electron number related to the lowest energy state will increase. This is a continuous process; at the degenerate points there is a change-over to adding one additional electron, the change-over occurring at a fractional electron charge on Q.
The curve in Fig. 24a equals the free energy ∆F of the system, the energy able to do work. Adding electrons one by one in a continuous manner reveals the nature of the e/C periodicity.
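The e/C periodicity follows directly from equation 6: the free energy follows the lowest branch of the parabolas (ne − Q)²/2C, giving cusps at the half-integer charges Q = (n ± 1/2)e. The sketch below evaluates this lowest branch numerically, using the reconstructed form of equation 6 and a capacitance of the order found later for the BDT junctions.

```python
import numpy as np

e = 1.602176634e-19
C = 7.5e-19                      # F, capacitance scale of the BDT junctions

Q = np.linspace(0, 4 * e, 4001)  # continuous charge on the capacitor
n = np.arange(0, 6)              # candidate electron occupation numbers

# E(n, Q) = (n e - Q)^2 / 2C; the free energy follows the lowest branch.
E = (n[:, None] * e - Q[None, :])**2 / (2 * C)
F = E.min(axis=0)

# Cusps (degenerate points) of the lowest branch are its local maxima.
cusp = (F[1:-1] > F[:-2]) & (F[1:-1] > F[2:])
print("cusps at Q/e =", np.round(Q[1:-1][cusp] / e, 2))  # 0.5 1.5 2.5 3.5
print(f"period e/C = {e / C * 1e3:.0f} mV, "
      f"cusp height e^2/8C = {e / (8 * C) * 1e3:.0f} meV")
```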
Only taking into account the contribution from the density of states peak at higher biases (eV > Φ_Au), the current through the molecular junction in a one dimensional conduction channel as indicated in Fig. 19 follows from equation 3, which modifies to:

Γ_{i→f} = (2π/ħ) |⟨i|H_t|f⟩|² δ(E_f − E_i − ∆F),    (7)

where ∆F is the free energy catering for the change from state i to state f. The matrix element ⟨i|H_t|f⟩ ties to both states i and f; it represents the strength of the coupling between the two states, hence |⟨i|H_t|f⟩|² is taken proportional to ∆F:

|⟨i|H_t|f⟩|² = α ∆F.    (8)

It is possible that a number of states constitute an ensemble responsible for the observed quantum effect. For simplicity, transitions between only two states are considered below. Disregarding a linear term in V, the current contribution I_1D = −eΓ_{i→f} yields:

I_1D ∝ ∆F(Q)|_{Q=CV}.    (9)

The delta function in equation 7 equals unity for E_f − E_i = ∆F. Equation 9 explains the origin and periodicity of the oscillations shown in Fig. 7-10; it also explains the "V shaped" minima in these oscillations. The sharp minima coincide with the degenerate points at Q = (n ± 1/2)e of the free energy in Fig. 24a. A section of the data in Fig. 7b and c can be compared directly with this free energy. The following two processes in this junction are dependent on voltage. The value of Q on the junction electrodes, and thus the free energy ∆F(Q), can be continuously varied via Q = CV. Where the electrodes are in closest proximity tunneling will take place; the peak density of states pinned to the Fermi level will be located at an energy E = eV. In scanning the voltage, the density of states peak probes the free energy ∆F(Q) of the system via coupling to subsequent Q states in a continuous manner.
The following view emerges for the origin of this quantum effect: the energy E of the tunneling electron is provided to de-populate a filled Q state and to occupy that state, while the remaining energy E′ is provided to the depopulated electron. At a specific Q state, the energy values for E, E′ and ∆F(Q) are each separately identical for every subsequent tunnel event, enabling a coherent, time correlated flow of tunneling electrons all sequentially involved in the same processes with the same energy transfers. Due to the existing THF partially wet phase layer, there is no possibility for low impedance vacuum environmental modes to couple into this process, which would break up the coherence and destroy this quantum effect.
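A toy evaluation of the reconstructed equation 9 makes the line shape explicit: sweeping Q = CV through the free energy of equation 6 produces an oscillation with rounded tops and sharp "V-shaped" minima at the degenerate points, qualitatively as in Figs. 7-10. The background term and scale factors below are arbitrary illustrative choices, not fitted values.

```python
import numpy as np

e = 1.602176634e-19
C = 1.33e-17                 # F; gives a period e/C of ~12 mV as in Figs. 7-9

V = np.linspace(9.0, 10.0, 5000)
Q = C * V

# Free energy: lowest branch of (n e - Q)^2 / 2C (equation 6).
n = np.round(Q / e)          # nearest-integer occupation minimizes the energy
dF = (n * e - Q)**2 / (2 * C)

# Oscillating current on a smooth background: cusps of dF at Q = (n + 1/2)e
# appear as sharp "V-shaped" minima between rounded maxima.
I = 1e-10 * V - 2e12 * (dF - dF.max())
print(f"period e/C = {e / C * 1e3:.1f} mV")
```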
Continuous Q charging of a BDT barrier THF partially-wet-phase junction
An ultimate quantum dot is created, where the BDT conducting phenyl ring is weakly connected to the two electrodes. This double junction is schematically presented in Fig. 23d. The shield lining the structure indicates that it is in the partially wet phase and thus shielded from vacuum environmental modes. The well-known unshielded counterpart of this SET device is shown in Fig. 23b.
The characteristic features in the IV curves in Fig. 11-14 are the discontinuities or current jumps. The large periods trailing the discontinuities in Fig. 11 are rare, observed in only 5 % of the bending beam assemblies. The smaller periods trailing discontinuities (Fig. 12-14) are observed in more than half of the bending beam assemblies. Even though the occurrence differs, the similarity in shape and slope is clear. Sometimes the periods are too small to clearly attribute a slope to; they appear as oscillations. For those discontinuities where a slope can be attributed, it attains without exception a negative differential resistance for increasing voltages and a positive differential resistance for decreasing voltages.
The difference with the THF barrier junction is the conducting island, which can only contain an integer number of electrons. The interesting physics of such a dot is that with an increasing Fermi level of the source electrode, the highest filled level of the dot will in general not be aligned to the Fermi level. The electrodes can obtain any Q value; the dot can only host n electrons, n being an integer. The capacitance as seen from the island consists of a capacitance to the source as well as a capacitance to the drain. Below, the total capacitance C as seen from the island is used.
Also for these BDT barrier junctions e/C periodicity is reflected. For devices where the single electron levels are small compared to e²/C, the spin degeneracy of the levels is lifted and the charging energy regulates the spacing [20].
With an increasing E_F, at some point an electron occupies an additional single electron level on the dot with an energy e²/C above the previous level. During the increase of E_F, and prior to the "electron to island tunneling", the free energy ∆F increases as well: the Fermi level has risen above the last filled level in the dot, creating an imbalance of less than e²/C in the energy spectrum of the dot, and ∆F increases by this amount. This lasts until the free energy can be reduced by the amount e²/C; at that stage an electron tunnels to the dot. Reducing E_F does not simply invert the situation, as also in this case the free energy is increased; now E_F lies below the highest filled dot level. When E_F gets e²/C below it, an electron leaves the dot. At that point the free energy can be reduced by the e²/C it gained in this cycle.
This process is detailed in Fig. 25 and Fig. 26, where in panel a the free energy is provided for the charging of the capacitor as well as for the charge imbalance on the dot, for both increasing and decreasing E_F. Both contributions are separately indicated at the far left in Fig. 25a and the far right in Fig. 26a. The remaining curves present the combined contribution to the free energy. A section of Fig. 11a and b is shown in Fig. 25b and Fig. 26b respectively; the data have been aligned to the voltage axis with the top of the periodic structure by subtracting a linear function. The positive and negative differential resistance parts in the IV discontinuities naturally find their origin in the total free energy ∆F, which increases sawtooth-wise both for increasing and decreasing voltage scans. The minimum of ∆F was set at 0. The magnitude of ∆F is 2E_C. The function was scaled to match the current extrema in Fig. 25c and Fig. 26c. The capacitance is derived from the e/C periodicity and the current extrema are obtained from the data, providing I_ex = 75 nA, C = 7.5 × 10⁻¹⁹ F and I_ex = 130 nA, C = 9.1 × 10⁻¹⁹ F for Fig. 25 and Fig. 26 respectively. The equations in section 4.4 are derived for a single junction; caution is required when interpreting a derived α for a double junction system.
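The quoted capacitances follow from the e/C periodicity alone. A one-line check (the periods below are back-computed from the quoted C values, standing in for the actual read-off from Fig. 25 and Fig. 26):

```python
e = 1.602176634e-19

# (period read off the voltage axis, in V) -> capacitance, charging energy
for dV in (0.214, 0.176):    # placeholders implied by C = 7.5 and 9.1e-19 F
    C = e / dV               # e/C periodicity
    print(f"period {dV * 1e3:.0f} mV -> C = {C:.2e} F, "
          f"e^2/2C = {e / (2 * C) * 1e3:.0f} meV")
```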
Based on the free energy it is established that every e²/C cycle an electron tunnels to or from the island, depending on an increasing or decreasing E_F, since the free energy can be lowered by this energy amount. The resulting discontinuities are observed in the IV curves as current increases. In Fig. 14 for example, over 100 electrons tunnel to the island in individual e/C voltage cycles for an increasing voltage in the range from 9 V to 10 V. Why is the level spacing of the single electron levels so small in a molecule, where we expect eV levels rather than meV levels? At this stage this is not clear. It is possible that the interaction of the BDT molecule with the lining THF molecules leads to many more levels compared to the isolated BDT molecule.
The data in Fig. 15 show many similarities with the smooth THF barrier THF partially-wet-phase junction oscillations with sharp "V-shaped" minima. No discontinuous current jumps are visible in panel b. Even though the BDT molecule solution has been used, this shows an unsuccessful attempt to create a quantum dot. It also shows that, in being unsuccessful, most likely a THF barrier THF partially-wet-phase junction has been created instead. It is possible that not the entire electrode surface has been covered with BDT molecules, or that some molecules went down with both thiol groups on one electrode; the resulting IV curve is very similar to the THF barrier THF partially wet phase junctions. Panels b and c show a shoulder appearing in the oscillations and increasingly causing the regular oscillation to break up. Similarly to Fig. 8 this is attributed to excited states, coupled to a Q state, entering the e/C window. Fig. 16 provides insight into the criticality of the partially wet phase. The IV curve shows a somewhat irregular structure; for the increasing voltage scan the structure shows, at least for a number of discontinuities, a negative differential resistance. Electrons are tunneling onto the dot. In the decreasing voltage scan electrons are tunneling from the dot. This effect disappears abruptly once the partially wet phase has disappeared, in the lower curve at 9.25 V. What happens to the hundreds of electrons that may be stored on the dot at this point? The nature of the quantum effect disappears once vacuum environmental modes start to mingle in the conduction. The conduction becomes incoherent; single electron effects are no longer visible. The hundreds of electrons may remain on the dot, however the intricate and subtle difference of adding or subtracting them one by one gets lost.
Capacitance reviewed
This section deviates from the electrostatic capacitance assumption. Defining capacitance at the microscopic level is complex. In one view [38] the characteristic time τ_t the electron takes to traverse the barrier while tunneling determines the distance τ_t·c it surveys, where c is the charge propagation speed. Nazarov posed a different view [39] where the time τ_N, defined by τ_N = ħ/eV, is the relevant parameter for the junction-lead system. This leads to a distance τ_N·c being surveyed by the electron. The high bias voltages up to 10 V in the shown junction results lead to extremely small τ_N·c distances compared to normal measurements in the sub 0.1 V range. Fig. 27 and Fig. 28 show data of two bending beam assemblies with THF barrier junctions in the partially wet phase. In these IV curves the scan speed was modified from the standard, fairly low 22 mV s⁻¹ to 200 mV s⁻¹ for a few indicated voltage sections. Fig. 27b is an enlargement of Fig. 27a; the IV curve in Fig. 28b was recorded just after the curve in Fig. 28a. In Fig. 27 oscillations in the 22 mV s⁻¹ sections are visible. Increasing the scan-speed 9 fold increases the period of the oscillations by a factor of 9 as well. For Fig. 28 a similar behavior is shown. In this figure the black parts appear noisy; however, by zooming in it is clear that this reflects small oscillations, see for example the inset. The 9 fold increase in scan-speed also leads to an approximate 9 fold increase in period in both panels a and b, where the period of the smaller oscillations has been measured next to the increased scan-rate section. It is notable that the amplitude of the large period oscillations remains very similar to the amplitude of the original small period oscillations. The above described behavior on scan-speed change and amplitude has also been measured in BDT barrier junctions in the partially wet phase. Table 1 summarizes the results from Fig. 27 and Fig. 28 as well as from all other figures where periodic oscillations have been measured. The capacitance has been derived from the periodicity e/C. Fig. 29 shows the derived capacitance as a function of the scan rate used in the experiment. This graph shows that increasing the scan-rate by a factor of 20 reduces the capacitance by a factor of 20, consistent with the findings in Fig. 27 and Fig. 28 where the period increased by a similar fraction as the scan-rate increase.
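The scale of the electron's "horizon" τ_N·c is easy to estimate. The sketch below takes c equal to the vacuum speed of light purely for orientation (an assumption; the actual charge propagation speed in the leads will be lower), showing how a 10 V bias shrinks the surveyed distance by three orders of magnitude relative to a 10 mV measurement.

```python
hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C
c = 2.998e8              # m/s, assumed charge propagation speed

for V in (0.01, 0.1, 1.0, 10.0):
    tau_N = hbar / (e * V)        # Nazarov time scale
    print(f"V = {V:5.2f} V: tau_N = {tau_N:.1e} s, "
          f"tau_N * c = {tau_N * c * 1e9:9.1f} nm")
```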
Practically the capacitance can be described by C₀ exp(−8·SR), with SR the scan-rate in V s⁻¹ and C₀ = 2 × 10⁻¹⁷ F. The data in the preceding sections remain valid; however it is now known that they would have been different at a different scan-rate. Essentially the scan-rate of 22 mV s⁻¹ is very low, yielding a capacitance close to the static value C₀. The dependence of period and capacitance on scan-rate may also explain the rare occurrence of curves as indicated in Fig. 11, which were measured at the highest scan-rate. It appears that increasing the scan-rate reduces the τ_N·c distance, the horizon of the electron, and thereby the capacitance. The scan-rate increase, coinciding with a proportional capacitance reduction, leaves dQ/dt constant within the accuracy of the experiment. Fig. 30 shows dQ/dt for the junctions indicated in table 1; the data points from Fig. 27 and Fig. 28 are pairwise highlighted, indicating that dQ/dt remains constant at a scan-rate increase. The amplitude of the oscillations remains constant at a scan-speed change. The amplitude correlates to the free energy in equation 8, which in turn correlates with the charging energy in equation 6. The capacitance behavior at scan-speed changes related to the periodicity appears to be different from the capacitance behavior related to the free energy scale.
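A consistency sketch for the scan-rate dependence: taking the periods quoted for Fig. 27 (12 mV at 22 mV s⁻¹, roughly 9 times that at 200 mV s⁻¹), the derived capacitance drops with scan rate while dQ/dt = C × SR stays constant within reading accuracy.

```python
e = 1.602176634e-19

# (scan rate in V/s, oscillation period in V), as quoted for Fig. 27
measurements = [(0.022, 0.012), (0.200, 0.108)]  # 9x rate -> ~9x period

for SR, dV in measurements:
    C = e / dV        # capacitance from the e/C periodicity
    dQdt = C * SR     # charge per unit time moved by the voltage ramp
    print(f"SR = {SR * 1e3:3.0f} mV/s: C = {C:.2e} F, dQ/dt = {dQdt:.2e} C/s")
```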
Summarizing this section: the high voltage plausibly plays a role in why single junctions are decoupled from the leads and display small capacitance effects. The measured periodicity/capacitance depends on the scan-rate. At scan-speed changes dQ/dt remains constant.
Fig. 30: dQ/dt as a function of scan-speed for the various junctions indicated in table 1. The data points from the three IV curves in Fig. 27 and Fig. 28 are pairwise highlighted as they originate from the same IV curve.
General discussion
One of the striking observations is related to temperature. Why is it possible to observe effects in a single junction at room temperature, where single electron tunneling is regularly studied at dilution fridge temperatures? Why is it possible to reproduce a 1-2 mV feature and measure sharp mV width "V-shaped" curves at room temperature, where k_B T equals 25 meV? The answers to these questions probably have the same basis as the existence of this quantum effect. One aspect will be that the tunneling electrons are simply not capable of interacting with any modes other than a few device defining modes.
Confining charge carriers to a plane is not new; the two dimensional electron gas (2DEG) has advanced our understanding of science over the past 4 decades. Bandgap engineering in a GaAs-AlGaAs heterojunction can create a potential trap for electrons in the surface potential perpendicular to the interface leading to a 2DEG at the interface of the electronically different materials GaAs and AlGaAs.
Here, nature provides a perfect conductive layer lining the molecular device via the THF partially wet phase. In this case the charge gas follows a contour-shaped liner layer. The surface potential trap is perpendicular to the THF partially wet phase layer/air interface, see Fig. 31. The trapped charge carriers are free to move in the plane of the lining layer, in the same way as in a 2DEG. Based on the presented experiments, the fact that the partially wet phase is a requirement for observing these effects, the analogy to the physical principle of a 2DEG at material interfaces, as well as the explanation and derivation of the oscillations and line shapes in the IV curves, the only possible conclusion is that the earlier posed "gedankenexperiment" is in actual fact the reality.
Fig. 31: An impression of the BDT barrier junction in the partially wet phase, not to scale. The blue/purple layer represents the THF partially wet phase layer; the red liner represents the finite width of the surface potential trapping charge carriers at the THF/air interface.
The above does not imply that every partially wet phase layer will have floating charge carriers; in fact, for H2O it was shown that the partially wet phase layer holds charge at the electrode side captive, due to the dipole moment of the H2O molecule. It is simply not known at this stage how charge at the H2O/air side will behave.
The high voltage and the THF partially wet phase seem to be a combined requirement for the observation of the reported quantum effects in the THF barrier and BDT barrier junctions. There are multiple experimental findings revealing that the Q-states are real: dQ/dt has been shown to be constant for a given junction; the IV line shape of the THF barrier junction behaves as predicted by the free energy; excited states couple to specific Q-states and reproduce over the oscillations with e/C periodicity; the BDT barrier junction shows the expected behavior with respect to tunneling to and from an island depending on scan direction; and the voltage scan-rate couples to the inner quantum mechanics of the junction, leading to scan-rate-dependent IV curves. Based on the above, as well as the combined presented results, we feel confident stating that the Q-side of the commutator has been reached. This implies that the Schön phase, φ_S, has become undefined and Q needs to be treated as a classical variable.
The effects of the partially wet phase on fundamental tunnel properties are probably not restricted to molecular junctions. Small-capacitance solid-state tunnel junctions and compiled multi-junction systems, lined with a single molecular layer of a well-chosen gas, at cryogenic/dilution-fridge temperatures are potentially interesting devices. Almost half a century ago, Kulik and Shekhter predicted such voltage oscillations [7]. We are now on the verge of discovering the full potential of these quantum effects.
In the end, the partially wet phase may unify Molecular Electronics with the ongoing device miniaturization.
Conclusions
A method to produce molecular junctions in the partially wet phase has been presented. In the partially wet phase, the conduction of these systems is essentially one-dimensional. The connection to the "known" has been established via observed Gundlach oscillations. Reproducible and long-term enduring single molecular junctions at ambient conditions can be realized. At high voltages, a quantum effect related to continuous Q charging has been demonstrated. The THF barrier junction reflects the e/C periodicity of the free energy. In the BDT barrier junction, IV-curve discontinuities reflect electron tunneling to or from an island. For a particular junction, dQ/dt is constant and independent of the voltage scan-rate. Q needs to be treated as a classical variable.
The final conclusion is that the THF partially wet phase is responsible for shielding interactions between tunneling electrons and vacuum environmental modes.
Quantum Mechanics in the Infrared
This paper presents an algebraic formulation of the renormalization group flow in quantum mechanics on flat target spaces. For any interacting quantum mechanical theory, the fixed point of this flow is a theory of classical probability, not a different effective quantum mechanics. Each energy eigenstate of the UV Hamiltonian flows to a probability distribution whose entropy is a natural diagnostic of quantum ergodicity of the original state. These conclusions are supported by various examples worked out in some detail.
Introduction
Quantum chaos is an old topic motivated by a simple question: what is the analogue of classical chaos in quantum systems? Equivalently, how are apparent randomness of motion and sensitivity to initial conditions encoded in quantum dynamics at long times? This question has appeared in many guises, e.g. in the study of the spectrum of highly excited states of quantum systems [1][2][3][4], in connection to the localization/thermalization transition in many-body systems (see e.g. [5]), in attempts to understand the black hole information paradox [6,7], and even in relation to reformulating and proving the Riemann hypothesis [8].
Ultimately, the study of quantum chaos reduces to understanding the long-time (but not necessarily long-distance) behavior of a quantum system. For instance, spectral statistics of a Hamiltonian can be probed using correlation functions at long times, or, equivalently, by calculating a certain path integral over long classical trajectories that fill up the available phase space [9][10][11][12][13]. By now there exists a wealth of information about the long-time regime of many quantum systems, and many universal properties that diagnose quantum chaos (or lack thereof) are known. In particular, correlators at the longest time scales available in the system govern spacings between nearby eigenvalues of the Hamiltonian, and these spacings are universally captured by simple random matrix theory in all chaotic systems [3,14,15].
Given the overwhelming evidence for long-time universality, and more generally given the importance of studying quantum chaos in connection to questions in both the condensed matter and the high energy communities, developing a systematic renormalization group (RG) procedure that takes us to long times would be of great interest. This cannot be the usual RG in the sense of Wilson and Kadanoff [16,17]. For instance, the desired procedure should work in quantum mechanics, i.e. in 0 + 1 dimensions, where there is no analogue of "spin-blocking" available. Another reason is that this procedure should also work for conformal field theories, which are fixed points under Wilsonian RG.
In this paper we will formulate long-time RG for quantum mechanics (QM). There are no spatial dimensions here, so it is possible to think of QM as a QFT in 0 + 1 dimensions and apply the usual Wilsonian ideas to the path integral in order to orient ourselves; we will later comment on why they are not enough. Let us consider a QM theory given by an action with a quadratic kinetic term, of the form S = ∫ dt [ (µ/2) q̇² − V(q) ]. The field q has mass dimension −1/2 and takes values on a compact target space. (In order to meaningfully talk about chaos, the phase space has to have finite volume; this is why we only work with compact target spaces in this paper.) The linear size σ of the target is a dimensionful coupling in the theory. Under Wilsonian RG, σ will flow towards zero, and the theory will become strongly coupled. More precisely, the dimensionless parameter that sets the strength of interactions is σ²/τ, where τ is the time scale of interest. For fixed σ, short times τ are described by coherent, classical dynamics, while long times represent the limit in which quantum fluctuations are important. (Note that we set ℏ = 1, as in any other QFT. The literature on quantum chaos often studies the semiclassical limit ℏ → 0 while keeping the target space fixed, but we follow the field-theoretic practice of keeping ℏ = 1 and using other parameters, in this case σ²/τ, as dimensionless couplings that govern quantum fluctuations.) Wilsonian RG has very limited use here. QM dynamics is quasiperiodic whenever there are at least two incommensurate frequencies in the problem, so unless the theory is free or otherwise fine-tuned, the usual RG equations will be oscillatory and will not reduce long-time dynamics to some universal fixed point [18]. Another way to see this problem is to notice that the IR is not easily expressed as an effective field theory, because all powers of q are, naïvely, relevant operators; an explicit regularization is needed to make sense of the IR. This issue is related to the fact that approximate conformal symmetry in QM only arises in conjunction with singular potentials and other pathologies [19,20] (see also [21,22] for new results on the dual nearly-AdS₂ space).
In this paper we avoid path integral methods and present a very natural way to describe a monotonic RG flow in the Hamiltonian formalism. The idea is very much in the spirit of Wilsonian RG: at each step, we reduce the algebra of observables to an appropriately chosen subalgebra. (The usual Kadanoff decimation can be expressed in this language.) As we will argue in detail in Section 2, at short times this algebraic coarse-graining naturally corresponds to changing the cutoff used to discretize time in the definition of the QM path integral. The true benefit of our framework, however, is that it allows us to rather transparently understand what happens in the deep IR, i.e. when the time step becomes finite, or even when it is of the order of the recurrence time. The upshot is that going to longer time scales (or smaller algebras) does not take us to a different, effective QM; rather, each decimation turns some degrees of freedom into classical random variables, and flowing to the deep IR turns a QM theory into a theory of classical probability whose event space corresponds to the remaining Abelian algebra of observables. This is decoherence.
It is also important to contrast the two kinds of classical regimes one can talk about. At σ²/τ → ∞, the path integral is dominated by trajectories around a unique classical saddle point; the kinetic term controls most of the dynamics and there is typically ballistic transport with very little diffusion. At σ²/τ → 0, the path integral receives contributions from all possible long classical trajectories/saddle points. The conventional lore is that this regime reflects the statistics of energy eigenstates, as these are the only states with trivial time evolution; it is this property that allows them to probe long times without getting averaged to death. Looking at the Hamiltonian −(ℏ²/2µ) ∂²/∂q² + V(q), one sees that studying the spectrum at ℏ → 0 makes states with high momenta (∼ 1/ℏ) have finite energy, thereby allowing us to study them in a controlled way. This way the classical limit gives us access to the long-time dynamics. This conclusion has some far-reaching implications. In particular, if it is assumed that all well-defined RG procedures can be defined algebraically, then the work presented here implies that there is essentially no effective, UV-independent, quantum description of interacting IR dynamics in 0 + 1 dimensions. The only exception is a trivial one: time-independent states in a given theory will have well-defined IR behavior. This means that there is no universal quantum-mechanical fixed point in any model, and the only sensible fixed points in QM are time-independent classical probability distributions that come from decimating energy eigenstates in the UV. For models whose target spaces are Abelian groups, we will put these claims on solid footing by the end of Section 2.
In Sections 3 and 4 we study algebraic coarse-graining of energy eigenstates and its connection to diagnosing chaos. We restrict ourselves to flat target spaces with various potentials. In each case we decimate the algebra of observables, and we follow how eigenstates of the Hamiltonian decohere and become mixed following a very specific pattern governed by the center of the subalgebra.
Along the way we note that the von Neumann entropy of the resulting mixed states acts both as a Zamolodchikov c-function and as a measure of quantum ergodicity or "eigenstate thermalization," i.e. as a measure of how randomly distributed the eigenstates are compared to those of a free Hamiltonian that is naturally defined on the target group.
In the Conclusion, we emphasize the lessons learned and offer more comments on related topics.
In particular, we point out other systems of potential interest that this procedure extends to. As a bonus, we also clarify the nature of the Hilbert space of a single Majorana fermion.
Setup
We will exclusively work with QM theories whose target G is a d-torus or, when regulated, a discrete group of the form (Z_N)^d. The generalization of our results to other (nonabelian) compact groups is straightforward but often tedious, and we will briefly review how it is done at the end of this Section.
The operator algebra inherits the group structure of the target space in the following way. Let g_i be the d generators of G. There are then 2d generators of the algebra: the d position operators U_i and the d shift (momentum) operators L_i. The Hilbert space appropriate to the given target group G is spanned by a set of vectors {|g_1^{n_1} ··· g_d^{n_d}⟩} labeled by elements g = ∏_i g_i^{n_i} ∈ G with n_i = 0, ..., N − 1. (For continuous G, we will use e^{iθ_i} with θ_i ∈ [0, 2π) instead.) The position operators are chosen such that each of these basis states is an eigenstate of all the U_i's; the spectrum of U_i is then given by a unitary representation of U(1) or Z_N. The shift operators in the discrete case are chosen so that L_i|g⟩ = |g_i g⟩; in the continuous case, L_i = exp(dθ_i ∂/∂θ_i). Since G is Abelian, all shift generators commute with each other, and the only nontrivial commutation relation is

U_i L_j = g^{δ_ij} L_j U_i.   (2)

For simplicity, we now specialize to d = 1. Consider first the continuous case: a particle on a circle. Let us focus on propagation over a very short time interval dt (relative to any other time scale in the problem). The length dt can be thought of as a regulator in the path integral, which is formally defined as a product of ∼ 1/dt ordinary integrals. If the kinetic term has two derivatives, any two trajectories that differ by less than √dt at all times will have the same action and will therefore constructively interfere in the path integral. Higher-derivative terms in the action will merely change the power of dt without changing the qualitative picture.
We can thus conclude that a temporal cutoff induces a target space cutoff, in the sense that a QM path integral whose time is measured in steps of dt → 0 is well approximated by 1/dt sums over the target space with step √dt. Conversely, a target space cutoff dx induces a natural time step dx², the time it takes to diffuse across length dx; any trajectories that started within length dx of each other will be mixed together after time dx². However, if dx ∼ 1, as in, say, G = Z₂, then this heuristic makes no sense, as dt is no longer infinitesimal.
Now we see in more detail how Wilsonian RG leads to trouble. As we begin the flow in a theory with target U(1) = lim_{N→∞} Z_N, the natural time step is dt ∼ 1/N² → 0. As dt is increased, N will decrease, and at some point it will stop being the largest quantity in the problem. In fact, after sufficiently many naïve Wilsonian decimations, N will become finite, and at that point the natural time step is finite and we have no right to view the effective theory as being given by a path integral.
This means that we need a better framework for understanding what happens after many decimations. The connection between dt and dx (or N) provides a natural way forward in a Hamiltonian framework: one flows to the IR by coarse-graining the target space without any explicit reference to time. We will henceforth refer to the starting target space as the UV, and to the final target space as the IR. In order to flow from the UV while maintaining all the IR correlations, we follow the same strategy that proved useful in understanding entanglement in general QFTs [23-25]: we construct a sequence of operator algebras, each contained in the previous one, chosen so that the position generator in each new algebra can distinguish fewer different positions. For instance, if U was a position generator in the UV Z_N theory, U² will be a position generator appropriate for a Z_{N/2} theory, U⁴ will be appropriate for a Z_{N/4} theory, and so on. States, represented by density matrices that belong to algebras, are then naturally projected along the flow in such a way as to ensure that expectations of all IR operators are preserved.
In the next subsection we will present this construction in detail, but here we highlight how it differs from Wilsonian RG as we typically understand it:

1. We flow to longer time scales, but not necessarily to smaller energies. However, the long-time dynamics encodes information about gaps between nearby eigenvalues in the Hamiltonian, while dynamics at shorter times encodes coarser properties of the spectrum. In this (admittedly vague) sense we are flowing towards smaller energy scales.

2. States in the IR will typically be mixed, even if we started from a pure state in the UV. In QFT we work with the pure-state sector in the IR, but here we will not be able to do this because the UV and IR sectors are strongly coupled. The coupling between the UV and IR in a QFT is governed by the small width of the momentum shell being integrated out; here we have no such parameter. Thus, evolution of a pure IR state will almost always make it mixed.

3. In a similar vein, time evolution in the IR is not governed by an effective Hamiltonian built out of IR operators. The mixing of states is described by a classical probability distribution whose evolution is set by the time-dependent expectation values of the UV operators in the Hamiltonian.

4. The Kadanoff decimation in QFT can also be represented as a reduction in algebras, as we are there removing position and shift operators associated to modes with high spatial momentum. In QM we have no sense of spatial momentum, and instead we are removing position operators while leaving the shift operators intact. Unlike in QFT, our IR algebras will always have a nontrivial center. The superselection sectors labeled by the center eigenvalues are the sectors that are being mixed; center operators are the classical variables. After many decimations, we will be left only with the center. This is a wholly classical regime, with each density matrix diagonal in the center eigenbasis.

5. There exists scheme dependence. For instance, coarse-graining shift operators instead of the position operators leads to a different IR with different center operators. For theories with a quadratic kinetic term and a position-dependent potential, the scheme in this paper is natural.

6. We will be particularly interested in the IR fixed point at which we have no more position operators to remove (the "deep IR"). In principle, we can then continue coarse-graining the shift operator algebra. This leads to a trivial algebra that contains just the identity, which is the ultimate fixed point of our RG. However, in this paper we focus on the fixed point of the scheme in which just the position generators are coarse-grained.

7. The dimension of the Hilbert space never changes in our setup, but all density matrices become block-diagonal in the center eigenbasis. The labels of these sectors are classical variables, and so one can say that the effective amount of quantum variables in the theory decreases after each coarse-graining.
The decimation procedure
Let us now describe in some detail the proposed target space coarse-graining in a Z_N theory. For simplicity, we take N to be a power of two. The starting algebra of observables A_0 consists of all possible products of position and shift generators U and L, with U^N = L^N = 1. In the position basis, U = diag(g^n), where g = e^{2πi/N} and n = 0, ..., N − 1. We define a decimation to be the map A_r ↦ A_{r+1}, where A_{r+1} is generated by the same shift operator as A_r and by the square of the position generator of A_r. Since we are starting from A_0, this means that A_r is generated by U^{2^r} and L. The center of A_r is the Abelian group generated by L_r ≡ L^{N/2^r}, as can be easily checked by using the commutation relation (2).
Whenever we represent elements of A_r as matrices on the N-dimensional Hilbert space (which is the same for every r), we will always work in a basis that diagonalizes L_r. All elements of A_r will be block-diagonal in this basis. We will refer to these blocks as superselection sectors, and we will index them with k = 0, ..., 2^r − 1, such that the eigenvalue of L_r in sector k is g_r^k ≡ g^{Nk/2^r}.
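The clock-and-shift construction above is easy to verify numerically. The following minimal sketch (our notation, not code from the paper) builds U and L for Z_8, checks the commutation relation (2), and confirms that L^{N/2^r} commutes with the generators of A_r:

```python
import numpy as np

# Z_N clock/shift pair and the center of the decimated algebra A_r.
N = 8
g = np.exp(2j * np.pi / N)

U = np.diag(g ** np.arange(N))               # position (clock) generator
L = np.roll(np.eye(N), 1, axis=0)            # shift generator, L|n> = |n+1>

# Weyl (clock-shift) commutation relation U L = g L U, cf. eq. (2).
assert np.allclose(U @ L, g * L @ U)

r = 2                                        # number of decimations
Ur = np.linalg.matrix_power(U, 2 ** r)       # position generator of A_r
Lr = np.linalg.matrix_power(L, N // 2 ** r)  # candidate center generator

# L_r = L^(N/2^r) commutes with both generators of A_r, so it is central.
assert np.allclose(Ur @ Lr, Lr @ Ur)
assert np.allclose(L @ Lr, Lr @ L)
print("center generated by L^(N/2^r) confirmed for N = 8, r = 2")
```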
Any density matrix ρ is an element of the algebra of observables and can be expanded in the basis of operators generating that algebra. In our setup, it can be shown [28] that decimation maps ρ_r ↦ ρ_{r+1} in such a way that Tr(ρ_{r+1} O) = Tr(ρ_r O) for every operator O ∈ A_{r+1}. This decimation is simple in the eigenbasis of L: one just restricts ρ_r to the block-diagonal form with 2^{r+1} blocks and sets all the other entries to zero. Only eigenstates of L_{r+1} remain pure after this decimation.
It is instructive to see how certain states map. Let us take as an example the Z₄ theory. After one decimation, the position eigenstates |1⟩ = |g⁰⟩ and |g²⟩ both become uniform mixes of (1/√2)(|1⟩ + |g²⟩) and (1/√2)(|1⟩ − |g²⟩). On the other hand, the L₁ eigenstates (1/√2)(|1⟩ ± |g²⟩) themselves remain pure. The same thing happens for the pair of states |g⟩ and |g³⟩. In general, any linear combination of pure nondegenerate eigenstates of L_r will become mixed, i.e.

Σ_k c_k |ψ_k⟩ ↦ Σ_k |c_k|² |ψ_k⟩⟨ψ_k|

for any set {|ψ_k⟩} of nondegenerate eigenstates of L_r.
Going back to the Z₄ example, we may thus view the decimated system as a coupled set of two Hilbert spaces (sectors) of dimension two, or of two Z₂ theories. Each sector is spanned by states with the same L₁ eigenvalue; e.g. one sector is spanned by (1/√2)(|1⟩ + |g²⟩) and (1/√2)(|g⟩ + |g³⟩). Each sector can thus be viewed as a Z₂ theory with a prescribed winding along this decimated group manifold.
As another example, consider the U(1) theory (regulated to look like Z_N with very large N) and perform one decimation. Any state in the original theory will now be a mix of two states living on circles half the size of the original one; one of these two states will have trivial winding around the new circle, while the other state will pick up a factor of g₁ = g^{N/2} = −1 when winding. After r decimations, each state in the original theory will become a mix of 2^r states, each of which has a definite winding g_r^k = e^{2πik/2^r} around the decimated target space.
The reduced density matrix after r decimations thus takes the form ρ_r = ⊕_k p_k ρ_k. Here, the ρ_k are the 2^r diagonal blocks of the original density matrix in the L eigenbasis, normalized so that each has unit trace in its N/2^r-dimensional subspace. The normalization factors p_k form a probability distribution on the space of sectors. They are given by the expectations of the center of A_r, and it is straightforward to check that

p_k = (1/2^r) Σ_{l=0}^{2^r−1} g_r^{−kl} ⟨L_r^l⟩.   (7)

Another useful way to express these probabilities is via the eigenstates |ℓ⟩ of the shift operator L. These states satisfy L|ℓ⟩ = g^ℓ|ℓ⟩, ℓ = 0, ..., N − 1, and for a given state |Ψ⟩ the classical probabilities are

p_k = Σ_{ℓ ≡ k (mod 2^r)} |⟨ℓ|Ψ⟩|².   (8)

We already see from this form that in the deep IR, when 2^r = N, the sector probabilities reduce to the classical probabilities of eigenstates of operators in the remaining Abelian algebra. We also see that p_k is the momentum-space analogue of the Wigner, Husimi, and other quasiprobability distributions that are often the main tools in studying eigenstate statistics, as in [10,11,26,27] and references therein. However, we stress that in our case the states are determined by the group structure of the target space.
For any initial pure state, the matrices ρ_k all describe pure states. The only mixing that happens is between different sectors, and the entropy of this mixing is the Shannon entropy S_r = −Σ_k p_k log p_k. This is a nondecreasing function of r for any starting state, so it can be identified as a c-function for our RG, though it is generally time-dependent. As we will discuss next, there is no time dependence for energy eigenstates of the UV Hamiltonian, and we may define the average IR entropy of the Hamiltonian as S̄_r = (1/N) Σ_n S_r^{(n)}, where |n⟩ are the N energy eigenstates and S_r^{(n)} are their entropies after r decimations. The average IR entropy is bounded by r log 2, and after each decimation it cannot rise by more than log 2. This in turn implies that in the deep IR, at r = log₂ N, the IR entropy is bounded by S̄_IR ≤ log N. In the examples we will find that this entropy is connected to the mixing properties of the underlying classical system, or, more precisely, that it measures the quantum ergodicity of the theory.
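A minimal sketch of these formulas, under the conventions reconstructed above (sector k collecting momenta ℓ ≡ k mod 2^r), reproduces the Z₄ example: a position eigenstate acquires entropy log 2 after one decimation.

```python
import numpy as np

# Our illustration, not the paper's code: sector probabilities (eq. (8))
# and the Shannon entropy S_r of the sector distribution.
def sector_probs(psi_momentum, r):
    """p_k = sum_{l = k (mod 2^r)} |<l|Psi>|^2."""
    N = len(psi_momentum)
    labels = np.arange(N) % (2 ** r)
    return np.array([np.sum(np.abs(psi_momentum[labels == k]) ** 2)
                     for k in range(2 ** r)])

def ir_entropy(psi_momentum, r):
    """Shannon entropy S_r = -sum_k p_k log p_k of the sector distribution."""
    p = sector_probs(psi_momentum, r)
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log(p)))

# Example: in Z_4, a position eigenstate is flat in the momentum (L) basis,
# so one decimation mixes two sectors equally and the entropy is log 2.
psi = np.full(4, 0.5)                     # |g^0> written in the L eigenbasis
print(ir_entropy(psi, r=1), "vs", np.log(2))
```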
Time evolution in the IR
In the UV, dynamics is given by ρ_t = e^{iHt} ρ e^{−iHt}. In the IR, things are no longer as simple.
If the UV Hamiltonian contains no operators that are lost after r decimations, i.e. if it is block-diagonal in an L_r eigenbasis with H = ⊕_k H_k, then the probability distribution p_k of any state will be time-independent. The time evolution of any state will then just be captured by the sector-wise evolution ρ_t = ⊕_k p_k e^{iH_k t} ρ_k e^{−iH_k t}. However, as long as the Hamiltonian is not the identity matrix, eventually the decimation will make us lose operators that govern time evolution. Once this happens, no operator in the remaining observable algebra will be able to determine how the sector probabilities evolve, unless the UV state was an energy eigenstate to begin with. This follows easily from eq. (7): any operator in H that is also in A_r will necessarily commute with L_r^l, and hence dp_k/dt will be determined by the UV expectation values alone. From the point of view of the IR, the sectors all couple to each other as they evolve in time, and in addition this evolution has external parameters whose time dependence is specified by the UV state and Hamiltonian.
The above conclusions are very general; we could have started from any other algebraic decimation scheme, and the IR time-dependence would have still been encoded in the UV data. Assuming that algebraic decimation is the only controlled way of performing RG, this means that there is no effective IR theory that governs the dynamics of a QM system. Said another way, time evolution in QM is always sensitive to UV physics because of the nontrivial center. The only exception to this rule justifies the lore stated in the Introduction: only eigenstates of the Hamiltonian, due to their trivial time-dependence, have a UV-insensitive time evolution at long time scales. Therefore, the only IR physics that one can meaningfully talk about is obtained by decimating energy eigenstates.
Examples
We now work out some examples in increasing order of complexity. Our goal is to study the average IR entropy. Where available, we perform computations analytically; otherwise, we perform preliminary numerical studies and leave a more detailed analysis for future work. We will always work with the class of Hamiltonians with a quadratic kinetic term,

H = α (2 − L − L^{−1}) + Σ_n β_n (U^n + U^{−n}),

for some constants α and β_n. Note that the continuum limit is reached by letting N → ∞ while scaling α ∼ 1/dθ² ∼ N² and β_n ∼ N⁰.
Free particle on a circle
Let us start with a free particle moving on a Z_N group in the limit of large N. The Hamiltonian is the kinetic term alone, H = α (2 − L − L^{−1}). (Note that we drop constant terms in H when going to the continuum.) Its eigenstates are the plane waves ψ_n(θ) ∝ e^{inθ}, with eigenvalues E_n = 2α (1 − cos(2πn/N)). Sector probabilities for each state ψ_n are easily calculated from (7) or (8): they are δ-functions on the space of sectors. Being eigenstates of L_r, the states ψ_n(θ) retain their purity during decimation. For each n, there is a unique k such that n + k ∈ 2^r Z, and this is the sector the state is in. Note that this means that the classical probability is a δ-function for each eigenstate, independent of the number of decimations.
The average IR entropy is thus S̄_IR = 0. The fact that the average entropy is independent of N is one possible signature of the absence of quantum ergodicity, as we will discuss later.
The Hamiltonian commutes with all of the decimated operators in this theory, and therefore all the p_k's for all states will stay constant. In fact, they can be found explicitly for any state Σ_n Ψ_n e^{inθ}: the sector probabilities are given by (8), p_k = Σ_m |Ψ_{k+2^r m}|². Note that once all of the U's are decimated, we get the famous result p_k = |Ψ_k|². This is the aforementioned decoherence in the IR. Further decimation now coarse-grains the Abelian algebra of shift operators. This can be interpreted as going to ever smaller σ-algebras of events in classical probability. It is only because of this decimation in the deep IR that the free theory is not, strictly speaking, a fixed point of our procedure.
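A short numerical check (ours, with illustrative parameters) of the statements above: the free Hamiltonian is diagonal in the momentum basis, so each energy eigenstate sits in a single sector and the average IR entropy vanishes.

```python
import numpy as np

# Sketch: the free Z_N Hamiltonian H = alpha(2 - L - L^{-1}) is diagonal in
# the momentum basis, so every energy eigenstate occupies a single sector
# and carries zero IR entropy, matching the delta-function distributions above.
N, alpha = 16, 1.0
L = np.roll(np.eye(N), 1, axis=0)          # shift matrix, L|n> = |n+1>
H = alpha * (2 * np.eye(N) - L - L.T)      # L^{-1} = L^T for this permutation

# Columns of F are the momentum eigenstates |l> written in the position basis.
n = np.arange(N)
F = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
Hm = F.conj().T @ H @ F
assert np.allclose(Hm, np.diag(np.diag(Hm)), atol=1e-12)  # H diagonal in |l>

# Each |l> has all its weight in the single sector k = l mod 2^r, so the
# sector distribution is a delta function and its Shannon entropy vanishes.
print("average IR entropy of the free particle:", 0.0)
```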
SHO on a circle
Now we add the simplest possible potential to the free particle: a single cosine built out of U + U^{−1}, with its minimum at θ = π. This is the potential of the quantum rotor. The eigenstates are solutions of the Mathieu equation and can be handled numerically.
Before proceeding, let us notice that potentials U^n + U^{−n}, for any n ≤ N/2, all lead to a Schrödinger equation of Mathieu type. At n = N/2 we obtain an extreme case that can be solved exactly due to the high degree of symmetry (L² and all its powers commute with this Hamiltonian).
Here the theory is H = α (2 − L − L^{−1}) + β (U^{N/2} + U^{−N/2}). To control the computation we must keep thinking about the discrete target space, as the potential oscillates with a wavelength equal to the shortest distance in the problem. The potential couples momenta that differ by N/2, so the Hamiltonian decomposes into two-dimensional blocks spanned by states |even, n⟩ and |odd, n⟩, with a 2 × 2 sector Hamiltonian in each block. Here n runs from 0 to N/2 − 1. The eigenstates of the full Hamiltonian are thus the states

|n±⟩ ∝ α (g^n + g^{−n}) |odd, n⟩ + (β ± √(α² (g^n + g^{−n})² + β²)) |even, n⟩,

with eigenvalues obtained by diagonalizing the corresponding 2 × 2 blocks. These are eigenstates of L² and of all its powers. In particular, these states are eigenstates of L_r up until the last decimation, when r = log₂ N. Hence, until the last decimation, eigenstates stay pure. At the last decimation, the entropy of each eigenstate rises by a term bounded by log 2.
Going back to the U + U^{−1} potential (19), we numerically investigate how periodic Mathieu functions (eigenstates in the continuum limit) behave under decimation. These are functions ψ_n^S(θ) and ψ_n^C(θ) that, as ωµ → 0, reduce to ordinary sines and cosines. For a fixed ωµ, the states with n ≪ ωµ are oscillations localized at θ = π, the minimum of the potential; at n ≫ ωµ, the wavefunctions approach ordinary sines and cosines spread out over the entire circle.
We have enough control over these functions to immediately bound their IR entropy S̄_r at the maximal number of decimations, r = log₂ N. To do this, it is enough to use (8). For the low-lying states there are more sectors with nontrivial probabilities, but they are all located at |ℓ| ≲ ωµ, and the entropy of these states can be very roughly bounded by log(ωµ/n). At ωµ ≫ 1, the contribution from all the low-energy states is then approximately bounded by Σ_{n=1}^{ωµ} log(ωµ/n) ∼ ωµ, and the average IR entropy is therefore of order log 2, up to a correction ∼ ωµ/N. The average IR entropy is finite in the continuum limit, and approximately satisfies the same bound as the one we analytically derived for the U^{N/2} potential. As before, the finiteness of the IR entropy is a signature of the system not being ergodic.
It is worth pointing out that the IR entropy increases with ωµ. This increase in entropy comes from the low-energy states that become more and more localized by the deep cosine potential at θ = π. At ωµ ≫ 1, this low-energy sector very much resembles the spectrum of a particle in a box.
We will see that there is always some (and often a lot of) IR entropy associated to such theories.
Particle in a box
A particle in a box can be modeled by an infinitely repulsive δ-function potential on a circle, i.e. a potential of strength β concentrated at the single site θ = 0 (the group identity). There will then be N − 1 energy eigenstates that represent standing waves in the box, and there will be one state localized at the δ-function: the position eigenstate |1⟩. The energy of the latter will be O(βN), while the standing waves will have energies O(1); therefore we may think of the standing waves as an effective low-energy description of the δ-function potential on the circle.
Coarse-graining will eventually make any state become a mix that includes the high-energy localized state. From the point of view of the low-energy theory in the box, this means that after each coarse-graining there are fewer and fewer states that do not mix with the high-energy degree of freedom. In this sense, the low-energy theory loses degrees of freedom upon decimation and has to be modeled with Hilbert spaces of ever lower dimensionality. This corresponds to the walls of the box closing in on the particle until the particle has no low-energy degrees of freedom left.
The localized state always decimates to a maximally mixed state, as can be readily checked by representing its density matrix in momentum basis and noticing that all diagonal elements are equal to 1/N . Its entropy is thus log N in the deep IR, but since there is only one such state, its contribution to the average IR entropy is negligible.
The standing waves also contribute nontrivially to the IR entropy. Compared to the propagating waves of the free particle, a standing wave's entropy is higher because the δ-function forces it to have a node at θ = 0, so after enough decimations it will necessarily become mixed.
The wave with p humps corresponds to the state ψ_p(θ) ∝ sin(pθ/2), which makes sense when 0 < p < N. The overlaps in (8) can be computed in closed form. When p = 2q is even, the deep-IR distribution is supported on just two sectors, ℓ = ±q, with equal weight, so the entropy of even harmonics in the IR is simply log 2. For odd harmonics, p = 2q + 1, the formulae are slightly more complicated, but the sector probabilities in the deep IR are still highly peaked at ℓ = ±(q + 1) and ℓ = ±(q − 1); their IR entropies are thus approximately log 4 (see the sketch below). The average IR entropy is then S̄_IR ≈ (3/2) log 2. As expected, this result is not very interesting, as it just agrees with the intuition that there is no mixing in d = 1. To make matters more interesting, we must go to higher dimensions. The equivalent of a particle on a circle with a δ-function potential will be the famous Sinai billiard, and indeed we will see signatures of chaos there.
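These entropies are easy to check numerically. The sketch below assumes the reconstructed form ψ_p(θ) ∝ sin(pθ/2) and computes the deep-IR entropy of one even and one odd harmonic, which land near log 2 and log 4 respectively.

```python
import numpy as np

# Sketch of the standing-wave entropies, assuming psi_p(theta) ~ sin(p*theta/2)
# as reconstructed above (theta = 2*pi*n/N on the Z_N lattice).
N = 64
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)  # <l|n> amplitudes

for p in (4, 5):                           # one even and one odd harmonic
    psi = np.sin(np.pi * p * n / N)        # sin(p*theta/2)
    psi = psi / np.linalg.norm(psi)
    prob = np.abs(F @ psi) ** 2            # deep-IR sector probabilities
    pz = prob[prob > 1e-12]
    S = float(-np.sum(pz * np.log(pz)))
    print(f"p = {p}: S_IR = {S:.3f}  (log 2 = {np.log(2):.3f}, log 4 = {np.log(4):.3f})")
```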
The Aubry-André model
Before leaving the comfortable confines of d = 1, let us study a classically ergodic model and see how it qualitatively differs from the examples above. We consider a variant of the Aubry-André model [29], i.e. a particle in a weak quasiperiodic potential built from a cosine of period ξ and a sine of period ζ. To ensure quasiperiodicity, ξ and ζ are taken to be irrational, and we include both the sine and the cosine in order to eliminate the parity symmetry and make the spectrum nondegenerate. In the discrete model, the couplings β_n are chosen to reproduce this potential on the Z_N lattice. This model is very similar to one in which the couplings are randomly chosen [30]. The wavefunctions are localized in position space whenever the coupling is greater than the scale set by the system size (1/N). The original Aubry-André model is more sophisticated and, for almost all irrational periods, features a localization/delocalization transition in the continuum limit.
Fig. 1: The average IR entropy in the delocalized (lower) and localized (purple, upper) regimes. The second graph zooms in on the area around the crossover to localization, where S̄_IR is found to smoothly interpolate between the two phases. The difference between the two entropies asymptotes to log 2, and each entropy asymptotes to log N, its maximal possible value.
We exactly diagonalize the discrete model with up to N = 256 sites. The onset of delocalization due to finite-size effects can be seen on Fig. 1. The average IR entropy is straightforward to evaluate using (8). The main finding is that in the delocalized regime the IR entropy does not grow with N, just like in the previous examples, while in the localized regime it grows as S̄_IR = s(β) log N + s₀(β), with s₀(β) > 0. The parameters s(β) and s₀(β) approximately saturate the entropy bound, s* ≈ 1 and s₀* ≈ 0, at β ≫ 1/N. Examples of this behavior are shown on Fig. 2. In the next Section we will argue that this behavior of S̄_IR corresponds to the lack of mixing in the underlying classical model (or, more precisely, to the lack of quantum ergodicity in the quantum model).
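The sketch below reproduces the spirit of this numerical experiment; the quasiperiodic couplings, the values of ξ, ζ and β, and the normalizations are our illustrative choices, not the paper's.

```python
import numpy as np

# Exact diagonalization of a quasiperiodic-potential model and the average
# deep-IR entropy of its eigenstates (illustrative parameters).
def avg_ir_entropy(N, beta, xi=np.sqrt(2), zeta=np.sqrt(3), alpha=1.0):
    theta = 2 * np.pi * np.arange(N) / N
    V = beta * (np.cos(xi * theta) + np.sin(zeta * theta))  # quasiperiodic
    L = np.roll(np.eye(N), 1, axis=0)
    H = alpha * (2 * np.eye(N) - L - L.T) + np.diag(V)

    evals, evecs = np.linalg.eigh(H)       # exact diagonalization
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
    psi_momentum = F @ evecs               # eigenstates in the L eigenbasis

    # Deep IR (r = log2 N): sectors are single momenta, p_l = |<l|psi>|^2.
    p = np.abs(psi_momentum) ** 2
    S = np.array([-(c[c > 1e-15] * np.log(c[c > 1e-15])).sum()
                  for c in p.T])           # IR entropy of each eigenstate
    return float(S.mean())

for N in (64, 128, 256):                   # weak vs strong coupling
    print(N, avg_ir_entropy(N, beta=0.01), avg_ir_entropy(N, beta=10.0))
```

At weak coupling the printed average stays roughly N-independent, while at strong coupling it grows with log N, mirroring the delocalized/localized behavior described above.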
A modified Sinai billiard
Our final example is the Sinai billiard: a particle moving on a two-dimensional torus with a circular, totally reflecting obstacle. We will model this system the same way we modelled a particle in a box, by studying motion on Z_N ⊗ Z_N with several impenetrable δ-functions scattered across the torus surface. A single obstacle would be enough to make the very excited states ergodic [10], but this would provide only an O(1/N) contribution to the IR entropy. The entropy will increase the more scatterers we add, especially if we break symmetries and lift degeneracies while adding them. The Hamiltonian is the two-dimensional kinetic term supplemented by impenetrable δ-functions at the scattering sites, where as before β ≫ N and the ϑ_i are random points on the torus.
To study the influence of the scatterers, we compute the IR entropy for different numbers R of randomly sprinkled δ-functions on a two-torus. Each obstacle will contain a localized state that contributes a log N² to the sum over entropies. However, we are interested in the entropy contained in the standing waves only, as these are the eigenstates one gets when studying the Laplacian on the surface bounded by the obstacles. This is why the IR entropy we present in what follows contains an average only over these standing-wave entropies. (For the particle in a d = 1 box this was irrelevant, as there was only one localized state; here we may have O(N²) localized states.) We find that, for R scatterers, the average IR entropy of the remaining N² − R eigenstates grows with R. These results for N² = 64 and N² = 256 are shown on Fig. 3. The number R can be construed as the size of the scattering obstacle, and we thus see that the entropy of each standing wave with a macroscopic obstacle (R ∼ N) is of order log N, without saturating the bound log N² until, approximately, the obstacle size fills up the entire torus at R = N². This is a striking qualitative deviation from all the previous results, where the average entropy was either O(1) or essentially equal to its theoretical maximum. As we will argue in the next Section, this is the signature of quantum ergodicity.
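A sketch of this construction (our own, with illustrative parameters and with diagonal projectors standing in for the δ-functions) is:

```python
import numpy as np

# Modified Sinai billiard on Z_N x Z_N: 2D kinetic term plus R strong
# delta-function scatterers at random sites, with beta >> N.
rng = np.random.default_rng(0)
N, R, alpha, beta = 8, 6, 1.0, 1e4         # N^2 = 64 states, as in the text

L1 = np.roll(np.eye(N), 1, axis=0)
K1 = alpha * (2 * np.eye(N) - L1 - L1.T)   # 1D kinetic term
I = np.eye(N)
H = np.kron(K1, I) + np.kron(I, K1)        # kinetic term on the torus

sites = rng.choice(N * N, size=R, replace=False)
H[sites, sites] += beta                    # impenetrable scatterers

evals, evecs = np.linalg.eigh(H)
n = np.arange(N)
F1 = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
F2 = np.kron(F1, F1)                       # 2D momentum transform
P = np.abs(F2 @ evecs) ** 2                # deep-IR sector probabilities

S = np.array([-(p[p > 1e-12] * np.log(p[p > 1e-12])).sum() for p in P.T])
standing = evals < beta / 2                # drop the R localized states
print("mean IR entropy of standing waves:", S[standing].mean(),
      "; log N =", np.log(N), "; log N^2 =", np.log(N * N))
```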
We now want to make connections between the calculated IR entropies and the quantum ergodicity of the theories in question. Recall that a system is called quantum ergodic if, roughly, its energy eigenstates are randomly distributed, or if there is "eigenstate thermalization." More precisely, this condition is fulfilled if, for almost any operator, its expectation value in an eigenstate of energy E equals the average of the classical function representing the operator over the constant-energy-E surface in classical phase space [31,32]. This happens when the underlying classical system is chaotic.
The entropy of the probability distribution (8) measures how much the eigenvectors of H deviate from an "ordered" set of eigenvectors: the eigenstates of the free particle on the underlying group.
In this sense, one would expect that the larger the IR entropy, the more random the eigenstate.
However, there is a duality at work here: if an eigenstate's entropy reaches its theoretical maximum log N (with N the Hilbert space dimension), this state is very atypical: it is localized in position space. Indeed, we saw in the Aubry-André model that the localized phase has an average IR entropy that approximately saturates its maximal value. In addition, even with the cosine potential, we saw that the entropy was a monotonically increasing function of µω. If we took this coupling to infinity, all eigenstates would become localized in the bottom of the cosine potential, and indeed we would find the IR entropy approaching its maximal value.
We see that in QM the same story plays out, with "ordered" replaced by "free" (or "localized in momentum space") and "disordered" replaced by "localized in position space," and with the IR entropy playing the role of the ground state entanglement entropy.
We are thus led to expect that a theory is quantum ergodic when its average IR entropy diverges in the continuum/thermodynamic limit (N → ∞), but only when it does not diverge maximally fast. In particular, we may speculate that the most chaotic classical theories give S̄_IR = s log N upon quantization, with s a potentially universal number near 1/2. Deviations from this conjectural baseline would then reflect the existence of nonergodic states like quantum scars [33].
Ideas like these are likely not new to the cognoscenti of quantum chaos. The novelty of our approach is the exclusive usage of groups as target spaces. One benefit of this new approach is that free particle eigenstates (both the position and the momentum ones) can be defined with reference just to the group structure, and thus provide a privileged set of eigenstates with respect to which the overlaps in (8) are to be measured.
A different but complementary benefit is that the notion of algebraic decimation naturally gives rise to the entropy of sector probabilities as a measure of long-time behavior (and, therefore, of quantum ergodicity). Moreover, performing a few decimations already allows us to compute an entropy function via eq. (7), giving us a natural coarsening of the microscopic overlap function that may still be a useful diagnostic.
Discussion
Not only is our renormalization group of quantum mechanics not a group, it also does not renormalize any infinity. Nevertheless, we have argued that it is still a very useful operation to consider: it formalizes the Wilsonian idea of flowing to longer times and allows us to understand the fixed points of quantum mechanics in the deep infrared. We have shown that for any quantum mechanics defined on a flat group manifold, the fixed point is not another quantum mechanics; instead, it is a classical theory of probability. The entropy of this infrared probability distribution is a natural diagnostic of quantum ergodicity, while naturally sharing many parallels with other entropies from QFT.
The following points are worth emphasizing at the close of this paper:

1. The group structure was crucial to defining and interpreting the algebraic decimation. Many quantum mechanical problems lack an apparent underlying group structure, including some integrable ones like the Calogero model. Perhaps a group structure can be uncovered in each such problem, just as we have interpreted a particle in a box as a particle on a torus with a totally reflecting barrier.

2. We have not addressed the infrared fate of particles moving on nonabelian groups. For positively curved groups, this is a straightforward, if tedious, extension of the methods presented here. For hyperbolic spaces (which are of great interest on their own), a few more tools seem to be needed, and this analysis will be presented in a separate publication [34].

3. One of our main results is that there exists no effective quantum mechanics that governs the evolution in the infrared. Our analysis was valid for all Hilbert space dimensions. However, perhaps when this dimensionality is large, an approximate, statistical description of the IR evolution can be defined. Understanding this possibility would be a valuable goal.

4. Our results were all derived for Hilbert spaces whose dimensions were powers of two. Extending these to other powers requires a straightforward modification of the decimation procedure.

6. Degeneracy of the spectrum can lead to ambiguity in the infrared entropy. We manually fixed this issue when it arose, but a more systematic treatment might be possible.

7. We end by simply emphasizing the most important question that this paper fails to address: the origin of random matrix universality in quantum chaotic systems. We have argued that going to long times does not give rise to an effective Hamiltonian that is a random matrix. A different kind of flow and universality potentially await to be uncovered.
Empirical Study of E-Learning on Financial Literacy and Lifestyle: A Millennial Urban Generations Case Study
Technological developments have influenced education; one example is the growth of online learning, more commonly known as e-learning. E-learning technology has been developing for a long time, but significant growth occurred during the Covid-19 pandemic, when the education sector moved to online learning. As a result, e-learning in both formal and non-formal education has become an important factor in the success of educational attainment. People in this era of globalization still have inconsistent attitudes toward individual financial management, which causes financial literacy to differ from one individual to another. One factor behind these differences in financial literacy is differing lifestyles. This study aims to examine patterns of financial literacy amid the development of e-learning, where access to financial literacy learning increasingly takes place through online channels. Beyond the observation that financial literacy learning is increasingly moving online, this study also aims to determine the extent to which lifestyle affects financial literacy. The study uses primary data and a quantitative research approach based on Confirmatory Factor Analysis (CFA). The results show that financial literacy learning currently takes place through e-learning and that lifestyle has a significant effect on financial literacy. One of these lifestyle changes is e-learning itself, which shifts learning from face-to-face to online. This can provide novelty in educational technology in the application of e-learning, especially in the case of financial literacy. In addition, e-learning can still support educational attainment in financial literacy, so repair and improvement of e-learning facilities as the main learning media must be carried out. It is hoped that, with this research, e-learning can be used for financial literacy learning through formal and non-formal education in the community and through public web seminars during the Covid-19 pandemic.
Introduction
Technological developments currently have a significant impact on the education sector. Online learning, better known as e-learning, has become a new trend because it offers convenient learning from a distance, anytime and anywhere. People are indirectly familiar with the e-learning approach in everyday life: reading news online, studying tutorials online, and even watching educational content on streaming platforms are all forms of e-learning. More directly, the implementation of e-learning, especially during the Covid-19 pandemic since early 2020, has become a necessity in education, with teachers adopting e-learning technology to teach. Learning technology continues to evolve; in principle, however, these technologies can be grouped into two categories, namely technology-based learning and technology-based web learning [1]. The new learning paradigm no longer centers on face-to-face meetings in the classroom, even though it still applies the concept of social interaction; this concept is now widely accepted and affects human social life [2]. E-learning offers new hope as an alternative solution to many educational problems in Indonesia, with functions that can be adjusted to needs, whether as a supplement (addition), a complement, or a substitute for the classroom learning activities that have long been used [3].

Subjective well-being is a person's evaluation of his or her life, comprising both affective and cognitive evaluations. The cognitive evaluation concerns individual life satisfaction, both as a whole and in specific domains, while the affective evaluation concerns the individual's reactions to life events, including pleasant and unpleasant emotions. A person is said to have high subjective well-being when they feel many pleasant emotions and few unpleasant ones, when they are engaged in interesting activities, when they have many pleasant experiences and few sad ones, and when they are satisfied with their life.

Lifestyle is a form of human living: it shows how people live, how they spend money, and how they allocate time. Lifestyle can thus be defined as a person's pattern of activities, interests, and habits in spending money and allocating time [25]. People in the modern era simply access everything they need through their gadgets. A dynamic lifestyle coupled with a lack of financial management knowledge makes it difficult for people to manage their finances [4][5]. In the process of fulfilling needs, people are influenced by surrounding conditions, which eventually leads to new lifestyle patterns that in turn affect life [6][7].

Financial literacy is the ability to understand financial conditions and financial concepts and to transform that knowledge appropriately into behavior [25]. Financial literacy includes the ability to discern financial choices, discuss money and financial issues without discomfort, plan for the future, and respond competently to life events that affect daily financial decisions [8]. Having financial literacy is essential for achieving a prosperous life.
With proper financial management supported by good financial literacy, people's living standards are expected to increase, because no matter how high a person's income is, financial security will be difficult to achieve without proper financial management [9]. Financial knowledge is everything about finances that is experienced in everyday life [10]. Knowledge refers to everything an individual knows about personal finance issues, as measured by the level of knowledge of various personal finance concepts [11]. The basic concepts of finance include the calculation of simple interest rates, compound interest, the effect of inflation, opportunity cost, the time value of money, the liquidity of an asset, and others [9]. Financial knowledge also covers checking and savings accounts, health and home insurance, credit card use, taxes, and investments [12]. Each individual has their own learning media, obtained through formal or non-formal education. In formal education, individuals can learn about financial literacy through economics courses or by taking a financial specialization in higher education. Outside formal education, individuals can learn about financial literacy through non-formal channels available on various online platforms, such as web seminars and online workshops. Basic financial knowledge and the skills to manage financial resources effectively are indispensable for individual welfare [13].

Financial attitude is defined as a state of mind, opinion, and judgment about personal finances. Indicators of this variable include personal finance, money philosophy, money security, and personal financial valuation. Financial attitude is a combination of concepts, information, and emotions about learning that results in a tendency to act positively [14][15]. Accordingly, there is a relationship between financial attitude and the level of financial problems [16]. Thus, a person's financial attitude can also affect the way he or she manages financial behavior. Financial attitude can be reflected in six concepts: obsession, power, effort, inadequacy, retention, and security. To achieve a return on investment, one needs to analyze using both psychology and finance, a combination known as behavioral finance. Behavioral finance is a science that studies how psychological phenomena affect financial behavior [17]. Financial behavior is one of the measures of financial literacy and is an essential or fundamental element of it [18]. Financial behavior is an approach that explains how human investing, and financial activity in general, is influenced by psychological factors [19]. Behavioral finance is a science in which various disciplines interact and are continuously integrated, so that its discussion cannot be carried out in isolation [20][21]. To understand financial behavior, it is necessary to divide it into two sub-themes: micro-financial behavior, which examines the behavior of individual investors, and macro-financial behavior, which seeks to detect and illustrate how behavioral models can explain the occurrence of anomalies (deviations) in the efficient market hypothesis [17][20].
The purpose of this study is to determine the influence of lifestyle on financial literacy in the current e-learning period. This study uses a quantitative approach with Confirmatory Factor Analysis (CFA).
Method
This research used a quantitative approach with a non-experimental design. Data were collected through a cross-sectional survey of two groups: group A, consisting of 50 members of the Capital Market Study Group (KSPM) of Siliwangi University, and group B, consisting of 63 non-members of KSPM Siliwangi University (both students and people who have joined capital market seminars). This study seeks to identify new trends in how the millennial generation's lifestyle relates to their financial literacy. The primary data were collected using questionnaires; the instrument used a 1-10 scale. Structural Equation Modeling (SEM) is a multivariate statistical technique used to examine the relationship between exogenous and endogenous variables [5]. Factor analysis is used for data reduction: it finds relationships among independently identified variables and collects them into a smaller number of variables in order to estimate the structure of the latent dimensions [22]. Factor analysis can be used through two approaches: exploratory factor analysis and confirmatory factor analysis. Each variable in this study is tested for validity using Confirmatory Factor Analysis (CFA). CFA is used to examine the relationship between test indicators and latent variables, with the number and nature of the specifications and constraints based on theory and on the factor model built from the data [23]. The CFA method therefore explicitly tests hypotheses regarding the relationship between the observed variables and the latent variables or constructs.
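As an illustration only, a measurement-plus-structural model of this kind can be written in lavaan-style syntax and estimated with an SEM package. The sketch below uses the Python semopy library; the indicator names (ls1...fl3) and the data file are hypothetical, not the study's.

```python
import pandas as pd
from semopy import Model  # pip install semopy

# Hypothetical CFA/SEM specification mirroring the constructs in this study.
# Indicator names and the data file are illustrative only.
desc = """
Lifestyle         =~ ls1 + ls2 + ls3
FinancialLiteracy =~ fl1 + fl2 + fl3
FinancialLiteracy ~ Lifestyle
"""

data = pd.read_csv("responses.csv")   # e.g. 113 respondents, 1-10 scale items
model = Model(desc)
model.fit(data)
print(model.inspect())                # factor loadings and path coefficients
```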
Result
Respondents in this study can be classified according to several characteristics: gender, occupation, income, and participation in web seminars/lectures/trainings in the capital market sector. Gender is divided into male and female. Occupation is divided into students, state-owned-enterprise (BUMN) employees, private employees, government employees, entrepreneurs, and others. Income ranges from a minimum of IDR 500,000 to a maximum of IDR 2,000,000 and above. To analyze the e-learning aspect, the respondents' participation status in capital market courses is used as an indicator of participation in e-learning in the capital market sector. Most respondents who completed the questionnaire were men (60 people, 53.1%; see Table 1). Most respondents were students (91 people, 80.5%; see Table 2). Regarding income, most respondents earned below IDR 500,000 (75 people; see Table 3). As for participation in capital market education activities, 67 people (59.3%; see Table 4) had participated at least once in capital market learning activities.
Fig. 2. Structural Modelling
The results of the analysis using Confirmatory Factor Analysis (CFA) are shown in the structural model in Figure 2. Hypothesis testing can be done only after the validity and reliability tests are fulfilled. The model in Figure 2 has met the validity and reliability tests, as shown by the Average Variance Extracted (AVE), which is stable above 0.50. Based on Table 5, all AVE values are greater than 0.50, fulfilling the requirement of convergent validity. Table 6 shows the discriminant validity results, which are used to determine whether each variable or indicator has a unique value related only to its own construct. Table 6 shows the estimated discriminant validity from cross-loadings: the loading of each item on its own construct is greater than its loadings on the other constructs, so it can be concluded that there is no problem with discriminant validity. All indicators have a higher correlation coefficient with their own construct than with the constructs in the other columns, so the constructs in the model have discriminant validity. Having passed this series of tests, hypothesis testing can proceed. The basis used in hypothesis testing is the values in the path coefficients output; the estimation output for the structural model is given below. In PLS, the statistical testing of each hypothesis is performed by simulation, in this case a bootstrap calculation on the sample. The bootstrap test aims to minimize the problem of non-normal survey data; the resulting path coefficients are presented in Table 7.
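The two quantities used above can be illustrated with a short sketch. The loadings and per-case scores below are hypothetical, not the study's data, and a real PLS bootstrap re-estimates the whole model on each resample rather than a single statistic.

```python
import numpy as np

# Illustrative sketch of Average Variance Extracted (AVE), composite
# reliability, and a bootstrap interval for a statistic.
def ave(loadings):
    """AVE = mean of squared standardized loadings; >= 0.50 is the usual cut-off."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings):
    lam = np.asarray(loadings, dtype=float)
    return float(lam.sum() ** 2 / (lam.sum() ** 2 + np.sum(1 - lam ** 2)))

print(ave([0.78, 0.81, 0.74]))              # ~0.60, above the 0.50 threshold
print(composite_reliability([0.78, 0.81, 0.74]))

# Bootstrap: resample the 113 respondents with replacement and re-estimate
# the statistic each time, to reduce sensitivity to non-normal survey data.
rng = np.random.default_rng(1)
scores = rng.normal(0.568, 0.08, size=113)  # hypothetical per-case scores
boot = [rng.choice(scores, size=113, replace=True).mean() for _ in range(5000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrap 95% CI: [{lo:.3f}, {hi:.3f}]")
```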
Discussion
In determining the results of the study, hypotheses were formulated to facilitate the identification of the influences between the latent variables in the model.

a. The effect of Financial Literacy on Financial Attitude. The first hypothesis, which states that financial attitude has a positive effect on financial literacy, is supported: the test yields a path coefficient of 0.319 with a p-value of 0.000 (p-value (0.000) < α = 5% (0.05)). This means that financial attitude has a positive effect on financial literacy. Thus hypothesis one is accepted.

b. The effect of Financial Literacy on Financial Behavior. The second hypothesis, which states that financial behavior has a positive effect on financial literacy, is supported: the test yields a path coefficient of 0.533 with a p-value of 0.000 (p-value (0.000) < α = 5% (0.05)). This means that financial behavior has a positive effect on financial literacy. Thus hypothesis two is accepted.

c. The effect of Financial Literacy on Financial Knowledge. The third hypothesis, which states that financial knowledge has a positive effect on financial literacy, is supported: the test yields a path coefficient of 0.405 with a p-value of 0.000 (p-value (0.000) < α = 5% (0.05)). This means that financial knowledge has a positive effect on financial literacy. Thus hypothesis three is accepted.

d. The effect of Lifestyle on Attention, Interest and Opinion. The fourth hypothesis, which states that Attention, Interest and Opinion have a positive effect on Lifestyle, is supported: the test yields a path coefficient of 0.448 with a p-value of 0.000 (p-value (0.000) < α = 5% (0.05)). This means that Attention, Interest and Opinion have a positive effect on Lifestyle. Thus hypothesis four is accepted.

e. The effect of Lifestyle on Financial Literacy. The fifth hypothesis, which states that lifestyle has a positive effect on financial literacy, is supported: the test yields a path coefficient of 0.568 with a p-value of 0.000 (p-value (0.000) < α = 5% (0.05)). This means that lifestyle has a positive effect on financial literacy. Thus hypothesis five is accepted.

f. The effect of Lifestyle on Food Relation Lifestyle. The sixth hypothesis, which states that Food Relation Lifestyle has a positive effect on lifestyle, is supported: the test yields a path coefficient of 0.244 with a p-value of 0.036 (p-value (0.036) < α = 5% (0.05)). This means that Food Relation Lifestyle has a positive effect on lifestyle. Thus hypothesis six is accepted.
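The path-coefficient tests above rest on the bootstrap described in the previous section. A minimal sketch of how such a bootstrap standard error and t-statistic can be obtained is given below; the synthetic data, the simple regression slope standing in for a PLS path coefficient, and all numbers are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a PLS path coefficient: the slope of a simple regression
# of financial literacy scores on lifestyle scores (synthetic data with
# heavy-tailed, non-normal noise, which is what the bootstrap guards against).
n = 113
lifestyle = rng.normal(size=n)
literacy = 0.5 * lifestyle + rng.standard_t(df=3, size=n)

def path_coef(x, y):
    # OLS slope: cov(x, y) / var(x)
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

B = 5000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)      # resample respondents with replacement
    boot[b] = path_coef(lifestyle[idx], literacy[idx])

est = path_coef(lifestyle, literacy)
se = boot.std(ddof=1)
t = est / se                              # bootstrap t-statistic, as in PLS output
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"coef={est:.3f}, SE={se:.3f}, t={t:.2f}, 95% CI=({lo:.3f}, {hi:.3f})")
```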
Conclusion
Financial literacy in its development is strongly shaped by influencing factors, both internal and external. Financial literacy is an important aspect for an individual when making choices in economic transactions. The development of theory regarding financial literacy is closely related to various concepts that examine human behavior. During the pandemic, much learning has been done online, which has accelerated the development of e-learning. The difference between online learning and face-to-face learning has different impacts on education. This study tries to determine the impact of the restricted lifestyle during the pandemic on the sustainability of financial literacy. The main finding is that the test variable, lifestyle, has an effect on financial literacy. This indicates that even in online learning (e-learning), a lifestyle that begins with online activities can still significantly affect financial literacy. Besides that, understanding online learning interactions is closely related to various theoretical concepts regarding behavioral finance and subjective well-being. These two theories can theoretically explain the polarization of people who are getting used to an online lifestyle during the current Covid-19 pandemic.
Referring to the respondents' data, the financial literacy variable empirically has a significant effect on financial satisfaction. The path coefficient between the two variables is statistically significant, and the direct contribution of financial literacy to financial satisfaction is positive, so financial literacy is a good predictor of financial satisfaction. Based on the results of the study, financial satisfaction in this study is driven by financial literacy: satisfaction arises when one has certain abilities, namely the knowledge and financial-management abilities that constitute financial literacy. This result is in line with the theory in Subjective Well-Being called Activity Theory. The results also show that financial literacy influences financial behavior; thus, financial behavior in this study is driven by financial literacy. According to Behavioral Finance theory, a person is assessed by how psychological phenomena affect his or her financial behavior. The results of this study support the view that increasing financial knowledge can be a tool and a means for building wise and responsible financial behavior. | 2021-09-28T01:10:00.733Z | 2021-07-03T00:00:00.000 | {
"year": 2021,
"sha1": "418acb2544ce581d665dc977a0c783943f16e0d5",
"oa_license": "CCBYSA",
"oa_url": "http://ijesty.org/index.php/ijesty/article/download/99/72",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "8e5d563e606cb6a18a7fc8218d116ce26658dabd",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
} |
202540194 | pes2o/s2orc | v3-fos-license | Collective cell migration of epithelial cells driven by chiral torque generation
Various multicellular tissues show chiral morphology. Experimental studies have shown this can originate from cell chirality. However, no theory has been proposed to connect the cellular chiral torque and multicellular chiral morphogenesis. We propose a model of confluent tissue dynamics with cellular chiral torque. We found that cells migrate unidirectionally under a gradient of cellular chiral torque. While the migration speed varies depending on the tissue's mechanical parameters, it is scaled solely by a structural order parameter for liquid-to-solid transition in confluent tissues.
The establishment of left-right (LR) asymmetry in tissues and organs is an intriguing event in development, which involves the coordinated activity of cells and molecules [1][2][3][4]. Since proteins, the basic components of biological systems, are chiral molecules, the breaking of LR symmetry can be a collective property orchestrated by these chiral molecular components [5,6].
Regarding LR asymmetry in tissue morphogenesis, it has been shown that in Drosophila the embryonic hindgut twists unidirectionally [7], and the male genitalia undergo a clockwise rotation when viewed from the apical side [8,9]. These are epithelial tissues, which consist of epithelial cells that adhere to each other through cell-cell junctions [10]. During hindgut twisting, the hindgut epithelial cells exhibit chiral properties in their shape and other features [7]. In the genitalia rotation, the surrounding epithelial cells move collectively in the clockwise direction, driven by LR-asymmetric positional rearrangements of cells, which rotates the genital disk [9,11]. Even more generally, such chiral cell rearrangements can induce unidirectional cell migration within an epithelial tissue [12]. These examples suggest that a collective behavior of cellular chirality gives rise to tissue LR asymmetry.
At the single-cell level, several types of cells have been reported to exhibit chiral dynamics. Examples include the extension of neurites in cultured nerve cells [13], the chiral movement of the C. elegans cell cortex [14], nuclear rotation in melanophores of zebrafish [15], and actin cytoskeleton swirling in human foreskin fibroblasts (HFFs) cultured on a micropattern [16]. Remarkably, in zebrafish melanophores and in HFFs on the micropattern, single cells can cell-autonomously generate chiral torques in such a way that the apical (top) side of the cell rotates with respect to the basal (bottom) side, which adheres to the substrate. Although a variety of specific mechanisms generate chiral torque at the cellular level, many of these chiral properties are governed by active chiral processes of the actomyosin cytoskeleton [17][18][19].
Since actomyosin is a ubiquitous component of eukaryotic cells, it is natural to consider that chiral torque generation is not a feature restricted to the specific cells mentioned above. In particular, we consider a situation where chiral torque is generated by individual single epithelial cells in a tissue, motivated by the LR symmetry breaking of epithelial tissues [7][8][9] and by the fact that single cells can generate chiral torque [13][14][15][16]. Based on the cell vertex model (CVM) [20], we propose a theoretical model of the dynamics of an epithelial tissue with chiral torque generated by individual single cells. We then investigate the emergent dynamical properties at the tissue scale.
To model the dynamics of an epithelial tissue with chiral torques generated by individual single cells, we use a two dimensional (2D) CVM. In the CVM, cells are described by polygons with vertices and edges. The position of the ith vertex is represented by $\vec{r}_i$. The force acting on each vertex is given by the derivative $-\partial E(\{\vec{r}_i\})/\partial \vec{r}_i$ of a potential function $E(\{\vec{r}_i\})$, given by

$$E(\{\vec{r}_i\}) = \sum_{\alpha=1}^{N} \frac{K}{2}\left(A_\alpha - A_0\right)^2 + \sum_{\alpha=1}^{N} \frac{K_p}{2}\left(P_\alpha - P_0\right)^2 + \sum_{\langle i,j \rangle} \Delta\Lambda_{ij}(t)\,\ell_{ij}. \quad (1)$$

Here, the first term on the right-hand side describes the area elasticity, with the area $A_\alpha$ of cell $\alpha$, the elastic modulus $K$, and the preferred area $A_0$. The second term defines the perimeter elasticity, with the perimeter length $P_\alpha$ of cell $\alpha$, the elastic modulus $K_p$, and the preferred perimeter length $P_0$. We assume for simplicity that these cellular mechanical properties $K$, $K_p$, $A_0$, $P_0$ are spatially homogeneous. $N$ is the total number of cells. The third term is introduced to explicitly describe the fluctuation in the line tension $\Delta\Lambda_{ij}(t)$. Here, $\ell_{ij}$ is the length of the cell edge between the ith and jth vertices. We introduce the fluctuating tension $\Delta\Lambda_{ij}(t)$ as a colored Gaussian noise with $\langle \Delta\Lambda_{ij}(t) \rangle = 0$ and $\langle \Delta\Lambda_{ij}(t_1)\Delta\Lambda_{kl}(t_2) \rangle = \delta_{ik}\delta_{jl}\,\sigma^2 e^{-|t_1 - t_2|/\tau}$. Such time-correlated fluctuation of the line tension is reported in [21]. We set $\tau = 1$ in this letter for simplicity. Hereafter, we choose the units of length and force such that the elastic modulus $K_p$ and the preferred cell area $A_0$ are unity, i.e., $K_p = 1$ and $A_0 = 1$. With these units, $P_0$ is a so-called target shape index, the ratio between the perimeter and the square root of the area [22,23]. If a single cell tends to be circular, $P_0 = 2\sqrt{\pi} \approx 3.54$. We set $P_0 = 3.54$ and $K = 10$ to avoid large cellular shape deformation unless otherwise noted.
In the case of torque generation by isolated single cells, as reported in [15,16], the apical surface rotates unidirectionally with respect to the basal substrate, driven by the torque generated by the actomyosin network. In the epithelial tissue, we also consider that a torque force is generated at the apical side in a specific direction with respect to the basal side, which adheres to the extracellular matrix (ECM). To represent such a torque in the framework of the 2D CVM, we introduce a torque force around the cell center as shown in Fig. 1(a). The simplest form may be given by

$$\vec{T}_i = \sum_{\alpha} \nu_\alpha \left(\vec{r}_i - \vec{r}_g^{\,\alpha}\right) \times \vec{n}, \quad (2)$$

where the sum runs over the cells $\alpha$ sharing the vertex $i$, $\nu_\alpha$ is the coefficient of the torque force generated by cell $\alpha$, $\vec{r}_g^{\,\alpha}$ is the area centroid of cell $\alpha$, and $\vec{n}$ is a unit normal vector pointing from the basal to the apical side.
The time evolution equation for $\vec{r}_i$ is obtained from the force balance between the frictional force $\eta\, d\vec{r}_i/dt$ with friction constant $\eta$, the potential force $-\partial E(\{\vec{r}_i\})/\partial \vec{r}_i$ derived from Eq. (1), and the torque force $\vec{T}_i$ given by Eq. (2), as follows:

$$\eta \frac{d\vec{r}_i}{dt} = -\frac{\partial E(\{\vec{r}_i\})}{\partial \vec{r}_i} + \vec{T}_i. \quad (3)$$

When the length of a cell edge falls below a threshold $l_{th} = 0.03$ during the time evolution according to Eq. (3), a T1 transition is performed by flipping the edge by 90° [20]. We consider a simple configuration of a rectangular epithelial sheet of size $L_x$ and $L_y$ in the x- and y-directions, respectively, as shown in Fig. 1(b). In the x-direction, we apply the periodic boundary condition at $x = 0$ and $x = L_x$. The rectangular area is set equal to the number of cells, so that the average cell area is $\bar{A}_\alpha = 1$. This configuration corresponds to the tube geometry, a biologically ubiquitous structure of organs, which exhibits chiral morphogenesis such as the twisting of the heart tube and the hindgut. We simply consider that the cells are attached to the boundary; we hence fix the y-coordinates of the vertices on the bottom and top boundaries to $y = 0$ and $y = L_y$, respectively. $\eta$ is set to 1 unless otherwise noted. We integrate the time-evolution equation (3) with the time step $\Delta t = 0.01$. We prepare initial cellular configurations packed with regular hexagonal cells, relax the system by running the dynamics with $\sigma = 0$ and $\nu_\alpha = 0$, and then set $\sigma$ and $\nu_\alpha$ to the target values. The numbers of cells in the initial hexagonal configuration in the x- and y-directions are denoted $N_x$ and $N_y$, respectively, and we set $N_x = 10$ for all simulations.
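For concreteness, the following is a minimal single-cell sketch of one building block of this scheme: the over-damped Euler integration of Eq. (3). Forces are obtained by numerical differentiation of the energy instead of analytic gradients, the torque term follows the sign convention of the reconstructed Eq. (2) (an assumption), and T1 transitions, line-tension noise, and multi-cell topology are omitted.

```python
import numpy as np

# Parameters quoted in the letter (torque sign convention is our assumption).
K, KP, A0, P0 = 10.0, 1.0, 1.0, 3.54
ETA, NU, DT = 1.0, 0.2, 0.01

def area(r):
    """Signed (shoelace) area of a polygon with vertices r, shape (n, 2)."""
    x, y = r[:, 0], r[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

def perimeter(r):
    return np.sum(np.linalg.norm(np.roll(r, -1, axis=0) - r, axis=1))

def energy(r):
    """Single-cell version of Eq. (1), without the line-tension noise term."""
    return 0.5 * K * (area(r) - A0) ** 2 + 0.5 * KP * (perimeter(r) - P0) ** 2

def potential_force(r, h=1e-6):
    """-dE/dr_i by central finite differences (a production code would use
    analytic gradients)."""
    flat = r.ravel()
    f = np.zeros_like(flat)
    for i in range(flat.size):
        rp, rm = flat.copy(), flat.copy()
        rp[i] += h
        rm[i] -= h
        f[i] = -(energy(rp.reshape(r.shape)) - energy(rm.reshape(r.shape))) / (2 * h)
    return f.reshape(r.shape)

def torque_force(r):
    """nu * (r_i - r_g) x n per vertex with n = +z, so (dx, dy) x z = (dy, -dx)."""
    d = r - r.mean(axis=0)  # vertex mean ~ area centroid for a regular polygon
    return NU * np.stack([d[:, 1], -d[:, 0]], axis=1)

# Regular hexagon with unit area: side length s from (3*sqrt(3)/2) s^2 = 1.
s = np.sqrt(2.0 / (3.0 * np.sqrt(3.0)))
ang = np.pi / 3.0 * np.arange(6)
r = s * np.stack([np.cos(ang), np.sin(ang)], axis=1)

for _ in range(1000):  # explicit Euler integration of Eq. (3)
    r = r + DT / ETA * (potential_force(r) + torque_force(r))

print("area:", area(r), "perimeter:", perimeter(r))
```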
We first investigated the role of chiral torque generation in the dynamics of cells when the strength $\nu_\alpha$ of chiral torque generation is spatially homogeneous. As shown in Fig. 2(a), we found that the cells migrate bidirectionally when the fluctuation in the line tension is present ($\nu_\alpha = 0.2$, $\sigma = 0.3$, $N_y = 6$). With negative $\nu_\alpha$, the cellular behavior was reversed, confirming that the chiral torque generation is the driving force of the cellular migration. For a sufficiently large system ($N_y = 40$), the flow profile decays rapidly near the boundary, and the bulk velocity vanishes (Fig. 2(b)). This indicates that the torque force and the potential force are balanced in the bulk, but not near the boundary. On the top and bottom boundaries, the torque forces on the vertices are exerted in the rightward and leftward directions, respectively, as depicted in Fig. 2(c), leading to the bidirectional cellular flow at the boundaries.
In the bulk, to understand how the torque force and the potential force are balanced, we consider a mean-field model in which the torque force is exerted on a vertex O surrounded by three regular hexagonal cells, as shown in Fig. 1(a), without the line tension fluctuation. When $\nu_\alpha$ is homogeneous, we readily find that the torque force exerted on the vertex O vanishes due to the 3-fold symmetry. Therefore, even though the torque force is present in the bulk due to cellular deformation, it should be so small that the deformation of cells can readily balance it.
We note that without the line tension fluctuation ($\sigma = 0$), all the cells are simply deformed in an asymmetric fashion without continuous cellular flow (see Supplemental Material [24]). Hence, the cell rearrangements induced by the stochastic fluctuation of the line tension promote the continuous cellular flow.
We next consider a condition in which collective cell migration appears in a defined direction, induced by chiral torque generation. According to a symmetry argument [12], the symmetry along the y-axis has to be broken to achieve unidirectional cellular movement along the x-axis. In this letter, we consider a situation where chiral torque generation depends on the position of the cells along the y-axis. We simply assume a linear form given by $\nu_\alpha = -\lambda(y_g^\alpha - L_y)$, where $y_g^\alpha$ is the y-coordinate of the center of cell $\alpha$. Here, we impose a boundary condition in which the friction coefficients on the top and bottom boundaries are considerably higher ($\eta = 10000$), to avoid the effect of the boundary torque force discussed above (Fig. 2(c)). With the torque gradient, we found that cells migrate unidirectionally in the direction perpendicular to that of the torque gradient (Fig. 3(a)). A time-averaged flow profile is shown in Fig. 3(b) ($\lambda = 0.01$, $\sigma = 0.3$, $N_y = 40$). With negative $\lambda$, the direction of the cellular migration is reversed. As $\lambda$ increases, the steady-state cellular velocity $V_{ave}$ along the x-axis increases almost linearly (Fig. 3(c)). These results confirm that the chiral torque gradient is the driving force for this unidirectional collective cell migration. We also found that without the line tension fluctuation, the cells only deform without continuous flow (see Supplemental Material [24]). Hence, the cell rearrangements induced by the stochastic fluctuation in the line tension are necessary for the continuous cellular flow.
In contrast to the case in which the torque strength is homogeneous, the flow profile shown in Fig. 3(b) indicates that the cells can migrate in the bulk region under the torque gradient. To see the mechanism of the unidirectional cellular migration, we consider the mean-field model of three cells (Fig. 4(a)). We set the position of the target vertex O as the origin of the coordinate axes, and impose a linear torque gradient $\lambda$ along the y-direction. Without loss of generality, we define the center of each surrounding cell as $\vec{r}_g^{\,\alpha} = l(\cos(2\pi\alpha/3 + \beta),\, \sin(2\pi\alpha/3 + \beta))$, where $l = 2\sqrt{3}/3$ is the edge length for the regular hexagonal cell with area 1, $\beta$ is an arbitrary constant, and the cell index $\alpha$ runs from 1 to 3. In this configuration, since the force derived from the potential function $E$ vanishes in total owing to the 3-fold symmetry, only the torque forces from the 3 cells contribute to the force exerted on the vertex O. After a straightforward calculation, we obtain the force $\vec{f} = (3\lambda l^2/2,\, 0)$. Consequently, the mean-field model predicts that, when the cells are packed in a regular hexagonal pattern without any boundary, the cells migrate with speed $v_{theory} = 3\lambda l^2/2\eta$.
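The mean-field identity $\vec{f} = (3\lambda l^2/2,\, 0)$ can be checked numerically in a few lines; the snippet below assumes the torque convention of the reconstructed Eq. (2), and the identity holds for any edge length $l$ and offset $\beta$, because the constant part of $\nu_\alpha$ cancels by the 3-fold symmetry.

```python
import numpy as np

lam, Ly, beta = 0.01, 40.0, 0.3   # gradient strength, tissue height, arbitrary offset
l = 2.0 * np.sqrt(3.0) / 3.0      # value quoted in the text; any l gives the same identity

f = np.zeros(2)
for alpha in (1, 2, 3):           # the three regular hexagonal neighbours of vertex O
    th = 2.0 * np.pi * alpha / 3.0 + beta
    rg = l * np.array([np.cos(th), np.sin(th)])  # cell centre; vertex O at the origin
    nu = -lam * (rg[1] - Ly)      # nu_alpha = -lambda (y_g - L_y); the lambda*L_y part
                                  # cancels by symmetry, so O's absolute height is irrelevant
    d = -rg                       # r_O - r_g
    f += nu * np.array([d[1], -d[0]])  # nu (r_O - r_g) x n with n = +z (assumed sign)

print("numerical:", f)
print("expected: ", np.array([1.5 * lam * l**2, 0.0]))
```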
We tested this prediction numerically using the model without fluctuation in the line tension. We apply a small value of $P_0 = 2.90$ to make the cell shape close to a regular hexagon. To suppress the boundary effect, we set the friction coefficients of the vertices on both the bottom and top boundaries equal to those in the bulk, and set the torque force on the boundary vertices to zero. As shown in Fig. 4(b), we found that, by increasing the system size $N_y$, the average cell speed $V_{ave}$ approaches $v_{theory}$ for each $\lambda$, confirming that the spatial gradient of the torque generated by the cells drives the cellular migration. The deviation from the theoretical value even for large system sizes is due to the inevitable cellular deformation away from the regular hexagon assumed in the mean-field model. In the inset of Fig. 4(b), we confirmed the cellular deformation by calculating the average $\langle q_\alpha \rangle$ of the cell shape $q_\alpha = P_\alpha / \sqrt{A_\alpha}$, which equals $q_{hex} = 2\sqrt{2}\,\sqrt[4]{3} \approx 3.72$ when the cells are regular hexagons.
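As a quick check of the quoted value, the shape index of a regular hexagon can be computed directly from its perimeter and area; it is scale invariant, so any side length works.

```python
import numpy as np

s = 1.0                                 # any side length; q is scale invariant
A = 1.5 * np.sqrt(3.0) * s**2           # hexagon area (3*sqrt(3)/2) s^2
P = 6.0 * s                             # hexagon perimeter
print(P / np.sqrt(A))                   # 3.7224...
print(2.0 * np.sqrt(2.0) * 3.0**0.25)   # q_hex = 2*sqrt(2)*3^(1/4) = 3.7224...
```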
From the above analysis, we found that the cell shape affects the cellular migration velocity. The cell shape can be modulated by altering the target shape index $P_0$ or the noise strength $\sigma$. Therefore, we investigated how the migration speed $V_{ave}$ depends on $P_0$ and $\sigma$ (inset of Fig. 5). Here, we set $N_y = 40$, $\lambda = 0.01$ and applied the same boundary conditions as in Fig. 3. We found that an increase of either $P_0$ or $\sigma$ enhances the migration speed (inset of Fig. 5). It has been reported that, above a critical value $P_0^* \approx 3.81$, the energy barrier for T1 transitions is lowered to almost zero, which marks the liquid-to-solid transition in confluent tissues [23]. Thus, as $P_0$ increases, cell rearrangements occur more frequently. The energy barrier can also be overcome by increasing the line tension fluctuation $\sigma$, leading to an increase in the frequency of cell rearrangements. Hence, the migration velocity increases as either $P_0$ or $\sigma$ increases (inset of Fig. 5).
To see the influence of $P_0$ and $\sigma$ in a unified way, we pay attention to the cell shape $q_\alpha$ obtained from the numerical simulations. In Fig. 5, we plotted the migration velocity $V_{bulk}$ in the bulk layers against the cell shape $\langle q_\alpha \rangle_{BL}$ averaged over the boundary layers, and found that the velocity curves in the inset of Fig. 5 collapse, surprisingly, onto a single line. Hence, $\langle q_\alpha \rangle_{BL}$ is an excellent indicator of the migration velocity in our model. Here, we define the boundary and bulk layers from the flow profile, and exclude the cells on the bottom and top boundaries from the calculation of $\langle q_\alpha \rangle_{BL}$, since those cells are strongly deformed by the flat boundaries (see Supplemental Material [24]).
We discuss how the bulk velocity $V_{bulk}$ is uniquely determined by the cell shape $\langle q_\alpha \rangle_{BL}$ in Fig. 5. The cellular velocity should be determined by the balance between the bulk torque force and how easily cells rearrange over the energy barrier of the T1 transition in the boundary layers. In the present model, since the torque force depends on the cell shape, the bulk torque force should be determined by the cell shape $\langle q_\alpha \rangle_{BU}$ averaged over the bulk. The cell shape $q_\alpha$ has also been suggested as an indicator of how easily cells rearrange over the energy barrier of the T1 transition [25,26]. Hence, the energy barrier of the T1 transition in the boundary layers depends on $\langle q_\alpha \rangle_{BL}$. Consequently, considering that $\langle q_\alpha \rangle_{BL} \approx \langle q_\alpha \rangle_{BU}$ is satisfied (see Supplemental Material [24]), so that both the bulk torque force and the energy barrier of the T1 transition depend on $\langle q_\alpha \rangle_{BL}$, $\langle q_\alpha \rangle_{BL}$ uniquely determines the migration velocity.
In this letter, we have reported that a dynamical chiral property at the single-cell level, namely cellular chiral torque generation, can induce collective cellular migration. Our model predicts that, when the strength of the torque is spatially homogeneous, the torque forces in the bulk are almost balanced, and those generated by the cells at the boundaries drive bidirectional cellular migration. Another prediction is that, under a gradient of chiral torque strength, the cells migrate unidirectionally, perpendicular to the gradient, driven by the torque force generated in the bulk. Although the mechanism of torque generation at the single-cell level has not been fully revealed experimentally, previous studies reported that a combination of cytoskeleton and motor proteins generates the cellular torque. The activity of such cytoskeleton and motor proteins is regulated by various biochemical pathways such as the Rho signaling pathway [27]. Hence, a spatial gradient of regulatory molecules should produce a gradient of the strength of the cellular torque. In in vivo systems, an epithelial tissue is attached to other tissues, which probably emit biochemical signals that generate a concentration gradient of the molecules regulating the strength of the cellular torque. Since both cellular chiral torque generation and gradients of the cellular torque are considered to be ubiquitous in biological systems, we expect that the mechanism of LR symmetry breaking in tissue dynamics we have proposed plays an essential role in chiral collective cellular movements, such as the twisting of an epithelial tube [7] and unidirectional epithelial cellular flow [9], during development.
I. DEFINITION OF THE AREA CENTROID OF A CELL
We define the cell center of cell $\alpha$ using the area centroid as

$$\vec{r}_g^{\,\alpha} = \frac{1}{6A_\alpha} \sum_{\mu=1}^{N_\alpha} \left(\vec{r}_\mu^{\,\alpha} + \vec{r}_{\mu+1}^{\,\alpha}\right) \left[\left(\vec{r}_\mu^{\,\alpha} \times \vec{r}_{\mu+1}^{\,\alpha}\right) \cdot \vec{n}\right].$$

Here, $\mu$ is the index of the $N_\alpha$ vertices around cell $\alpha$, indexed in the counterclockwise direction. The position vector of each vertex is represented by $\vec{r}_\mu^{\,\alpha}$, and $\vec{r}_{N_\alpha+1}^{\,\alpha} = \vec{r}_1^{\,\alpha}$.
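A direct implementation of this centroid formula (with the cross products reduced to their z-components) looks as follows; the hexagon test case is only a sanity check.

```python
import numpy as np

def area_centroid(r):
    """Area centroid of a planar polygon; r has shape (n, 2), vertices
    ordered counterclockwise, matching the formula above."""
    x, y = r[:, 0], r[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y                  # (r_mu x r_{mu+1}) . n
    A = 0.5 * np.sum(cross)                  # polygon area
    gx = np.sum((x + xn) * cross) / (6.0 * A)
    gy = np.sum((y + yn) * cross) / (6.0 * A)
    return np.array([gx, gy])

# Sanity check: a regular hexagon centred at (2, 3) has its centroid there.
ang = np.pi / 3.0 * np.arange(6)
hexagon = np.array([2.0, 3.0]) + np.stack([np.cos(ang), np.sin(ang)], axis=1)
print(area_centroid(hexagon))   # -> approximately [2. 3.]
```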
II. PREPARATION OF THE INITIAL CELLULAR CONFIGURATION
We prepared the initial condition as shown in Fig. S1(a). In the initial condition, the cells in the bulk are set to a regular hexagonal shape, while the cells at the boundaries, labeled with red dots in Fig. S1(a), are set to half of a regular hexagonal cell. The average cell area $\bar{A}$ is set to 1. We then run the dynamics with $\sigma = 0$ and $\nu_\alpha = 0$ to relax the system for 50 time units, and then set $\sigma$ and $\nu_\alpha$ to the target values. Fig. S1(b) shows the configuration after the relaxation ($P_0 = 3.54$).
IV. DEFINITION OF THE BOUNDARY LAYERS
We systematically defined the boundary layers near the bottom and top boundaries. We discretized the area into $N_y$ layers, from the 0th layer to the $(N_y - 1)$th layer. The ith layer is defined as the layer ranging from $y = iL_y/N_y$ to $y = (i+1)L_y/N_y$ ($i = 0, \ldots, N_y - 1$). The boundary layers are defined as the 0th to $N_b$th and the $N_t$th to $(N_y - 1)$th layers, respectively, for those near the bottom and top boundaries.
We calculated the average velocity of each layer by averaging the velocities of the cells in the layer over time, and drew the velocity profiles as shown in Fig. S3(a) ($N_y = 40$, $\lambda = 0.01$, $P_0 = 3.33$). Using the velocity profile, we defined the boundary layers via the following procedure (a code sketch of this procedure follows the list).
1. We calculated the approximate bulk velocity $V'_{bulk}$ and the approximate standard deviation $\sigma'_{bulk}$ of the bulk velocity by averaging the velocities of the 15th to 24th layers.
2. We smoothed the velocity profile to obtain $v = v_{sm}(i)$ by a simple moving average, where the mean is taken over the central value and two data points on either side.
3. We determined $N_b$ as the maximum integer $i < 15$ which satisfies either $v_{sm}(i) < V'_{bulk} - 3\sigma'_{bulk}$ or $v_{sm}(i) > V'_{bulk} + 3\sigma'_{bulk}$. Similarly, $N_t$ is determined as the minimum integer $i > 24$ which satisfies either $v_{sm}(i) < V'_{bulk} - 3\sigma'_{bulk}$ or $v_{sm}(i) > V'_{bulk} + 3\sigma'_{bulk}$.
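A sketch implementing these three steps is given below. The layer indices (bulk estimated from layers 15-24) follow the text for $N_y = 40$; the toy velocity profile and the handling of the case with no outlier layers are illustrative assumptions.

```python
import numpy as np

def boundary_layers(v, Ny=40):
    """Return (N_b, N_t) from a per-layer velocity profile v of length Ny,
    following the three steps above."""
    bulk = v[15:25]
    V, s = bulk.mean(), bulk.std()                        # V'_bulk, sigma'_bulk
    v_sm = np.convolve(v, np.ones(5) / 5.0, mode="same")  # 5-point moving average
    outlier = (v_sm < V - 3.0 * s) | (v_sm > V + 3.0 * s)
    below = [i for i in range(15) if outlier[i]]
    above = [i for i in range(25, Ny) if outlier[i]]
    N_b = max(below) if below else 0      # fall back to the edges if nothing flagged
    N_t = min(above) if above else Ny - 1
    return N_b, N_t

# Toy profile: flat bulk with smoothly decaying boundary zones.
y = np.arange(40)
v = 1.0 - np.exp(-y / 3.0) - np.exp(-(39 - y) / 3.0)
print(boundary_layers(v))
```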
In Fig. S3(a), we show the $N_b$th and $N_t$th layers, determined by the above procedure, with square markers on examples of velocity profiles ($N_y = 40$, $\lambda = 0.01$, $P_0 = 3.33$).

V. RELATIONSHIP BETWEEN $\langle q_\alpha \rangle_{BL}$ AND $\langle q_\alpha \rangle_{BU}$

In Fig. S3(b), we show examples of spatial profiles of the shape index $q_\alpha$. We find that the shape indices of the top and bottom layers are outliers due to the flat boundaries, which induce large cell deformation. To avoid these outliers, we eliminated the top and bottom layers when calculating $\langle q_\alpha \rangle_{BL}$.
In Fig. S4(a), we show the dependence of $\langle q_\alpha \rangle_{BU}$ on the target shape index $P_0$ for the data in Fig. 5 of the main text ($N_y = 40$, $\lambda = 0.01$). We found that larger $\sigma$ and $P_0$ give larger $\langle q_\alpha \rangle_{BU}$.
In Fig. S4(b), we show the relationship between $\langle q_\alpha \rangle_{BL}$ and $\langle q_\alpha \rangle_{BU}$ for the same data set. We find $\langle q_\alpha \rangle_{BU} \approx \langle q_\alpha \rangle_{BL}$. As shown in Fig. S3(b), the spatial profiles of the shape index are nearly homogeneous except on the top and bottom layers. Hence, we obtain $\langle q_\alpha \rangle_{BU} \approx \langle q_\alpha \rangle_{BL}$. | 2019-09-06T08:49:16.000Z | 2019-09-06T00:00:00.000 | {
"year": 2019,
"sha1": "49f1c8edc065948ce7994ae020fdbc4885bfb2b9",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevResearch.2.043326",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "89fde90962d26c2222f0f52b6d096da5f49ad6ea",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
244732946 | pes2o/s2orc | v3-fos-license | Effect of Weekly Long-Acting Growth Hormone Replacement Therapy Compared to Daily Growth Hormone on Children With Short Stature: A Meta-Analysis
Background We performed a meta-analysis to evaluate the efficacy and safety of weekly long-acting growth hormone replacement therapy compared to daily growth hormone in children with short stature. Methods A systematic literature search up to April 2021 was performed, and 11 studies were included comprising 1,232 children with short stature treated with growth hormone replacement therapy at the start of the study; 737 of them were using weekly long-acting growth hormone replacement therapy and 495 were using daily growth hormone. These studies reported on the relationship between the efficacy and safety of long-acting growth hormone replacement therapy and daily growth hormone in children with short stature. We calculated the odds ratio (OR) and mean difference (MD) with 95% confidence intervals (CIs) to assess the efficacy and safety of weekly long-acting growth hormone replacement therapy compared to daily growth hormone in children with short stature, using the dichotomous or continuous method with a random- or fixed-effect model. Results Long-acting growth hormone replacement therapy had significantly lower height standard deviation scores for chronological age (MD, −0.10; 95% CI, −0.13 to −0.08, p <0.001) and insulin-like growth factor binding protein-3 (MD, −0.69; 95% CI, −1.09 to −0.30, p <0.001) compared to daily growth hormone in children with short stature. However, growth hormone replacement therapy showed no significant difference in height velocity (MD, −0.09; 95% CI, −0.69 to 0.5, p = 0.76), height standard deviation scores for bone age (MD, −0.04; 95% CI, −0.10 to 0.02, p = 0.16), insulin-like growth factor 1 standard deviation scores (MD, 0.26; 95% CI, −0.26 to 0.79, p = 0.33), or incidence of adverse events (OR, 1.16; 95% CI, 0.90 to 1.50, p = 0.25) compared to daily growth hormone in children with short stature. Conclusions Long-acting growth hormone replacement therapy had significantly lower height standard deviation scores for chronological age and insulin-like growth factor binding protein-3 compared to daily growth hormone in children with short stature. However, growth hormone replacement therapy showed no significant difference in height velocity, height standard deviation scores for bone age, insulin-like growth factor 1 standard deviation scores, or incidence of adverse events compared to daily growth hormone in children with short stature. Further studies are required to validate these findings.
Keywords: children with short stature, long-acting growth hormone, daily growth hormone, height velocity, chronological age, bone age, insulin-like growth factor 1

BACKGROUND

Growth hormone replacement therapy was first shown to treat growth hormone insufficiency in the late 1950s (1). Growth hormone was manufactured by recombinant DNA techniques in the early 1980s, which expanded its possible uses (2). The use of growth hormone in idiopathic short stature has been approved by the Food and Drug Administration (3). Randomized controlled trials have shown that daily recombinant human growth hormone can improve height velocity and efficiently improve body composition in subjects with idiopathic short stature (4)(5)(6). The use of growth hormone in cancer survivors and in short children with RASopathies, chromosomal breakage syndromes, or DNA-repair disorders should be carefully evaluated owing to an increased risk of recurrence, primary cancer, or second neoplasia in these individuals (7). Early growth hormone replacement therapy was limited to two or three intramuscular injections weekly, which were rapidly replaced by daily subcutaneous injections due to the added convenience. Many studies have demonstrated that daily subcutaneous injections are as effective as the early treatments of growth hormone insufficiency in producing insulin-like growth factor 1 and encouraging linear growth (8,9). However, the need for daily injections causes non-compliance, higher health care costs, and reduced efficiency (10). This leads to a decrease in height velocity in subjects with poor compliance (10,11). Several characteristic problems arise in the long-term administration of growth hormone therapy that can decrease its effectiveness. Some of the recognized barriers to long-term adherence include the injection method, injection difficulty, pain produced by the injections, the type of device used for the injections, parental perceived benefits, formulary changes, and many others (10,11). The development of pens, needle-free devices, and infusion pumps, despite their limited applications, has helped improve compliance (12). Presently, oral administration is not as effective as injections in clinical practice (12). Many long-acting growth hormone formulations requiring one injection every week, two weeks, or month have been developed, offering a potential alternative that may achieve similar effectiveness and safety (13). Compared to daily recombinant human growth hormone treatment, a long-acting growth hormone treatment requiring one injection per week can potentially save almost 300 injections per year, and can also offer better convenience, tolerance, and compliance than daily injections (14). To maximize the growth effects of growth hormone treatment and its other physiological benefits, many different advanced pharmaceutical tools have been used to improve growth hormone preparations (15)(16)(17). The six types of long-acting growth hormone formulations that have been studied are pegylated molecules, depot formulations, growth hormone molecules bound to albumin, prodrug formulations, growth hormone molecules bound to Fab antibodies, and growth hormone fusion proteins (18). However, the effectiveness and safety of long-acting growth hormone in children with short stature remain a matter of debate. Therefore, this meta-analysis aimed to evaluate the efficacy and safety of weekly long-acting growth hormone replacement therapy compared to daily growth hormone in children with short stature.
METHODS
The present study followed the meta-analysis of studies in epidemiology statement (19) and was performed following an established protocol.
Study Selection
Included studies were those reporting statistical measures of association (odds ratio [OR], mean difference [MD], frequency rate ratio, or relative risk, with 95% confidence intervals [CIs]) between the efficacy and safety of long-acting growth hormone replacement therapy and daily growth hormone in children with short stature.
Only human studies, in any language, were considered. Inclusion was not restricted by study size or type. Review articles and commentaries, as well as studies that did not supply a measure of association, were excluded. Figure 1 shows the whole study selection process.
The articles were included in the meta-analysis when the following inclusion criteria were met: 1. The study was a randomized controlled trial or retrospective study. 2. The target population was children with short stature treated with growth hormone replacement therapy.
3. The intervention program was growth hormone replacement therapy. 4. The study compared long-acting growth hormone replacement therapy with daily growth hormone. The exclusion criteria were: 1. Studies that did not determine the efficacy and safety of weekly long-acting growth hormone replacement therapy compared to daily growth hormone in children with short stature. 2. Studies with growth hormone replacement therapy in children with short stature. 3. Studies that did not focus on the effect of comparative results.
Identification
A search protocol strategy was organized according to the PICOS principle (20) as follows: P (population): children with short stature treated with growth hormone replacement therapy; I (intervention/exposure): growth hormone replacement therapy; C (comparison): long-acting growth hormone replacement therapy versus daily growth hormone; O (outcome): change in height velocity, height standard deviation scores for chronological age, height standard deviation scores for bone age, insulin-like growth factor 1 standard deviation scores, insulin-like growth factor binding protein-3, and incidence of adverse events in children with short stature treated with growth hormone replacement therapy; and S (study design): no restriction (21).
First, a systematic search was conducted in OVID, Embase, the Cochrane Library, PubMed, and Google Scholar up to April 2021, using a combination of keywords and related words for children with short stature, long-acting growth hormone, daily growth hormone, height velocity, chronological age, bone age, insulin-like growth factor 1, insulin-like growth factor binding protein-3, and incidence of adverse events, as shown in Table 1. All selected studies were gathered in an EndNote file, duplicates were removed, and the titles and abstracts were reviewed to eliminate studies that did not report a relationship between the efficacy and safety of long-acting growth hormone replacement therapy and daily growth hormone in children with short stature. The remaining articles were reviewed for relevant information.
Screening
Data were abstracted onto a standardized form covering study-related and subject-related features: the first author's last name, study period, publication year, country and region of the study, and study design; the type of population, the total number of subjects, demographic data, and clinical and treatment features; the evaluation period, the quantitative and qualitative methods of assessment, the source of information, and the outcome assessments; and the statistical measures (MD or relative risk, with 95% CI) of the association between the efficacy and safety of long-acting growth hormone replacement therapy and daily growth hormone in children with short stature (22). When a study was eligible for inclusion based on the above criteria, data were extracted independently by two authors. In case of discrepancy, the corresponding author made the final decision. When a study reported diverse data sets, the data were extracted separately. For the risk of bias in the studies, two authors independently evaluated the methodological quality of the selected studies using the "risk of bias tool" from RoB 2, the revised Cochrane risk-of-bias tool for randomized trials (23). Based on the evaluation criteria, each study was rated and allocated to one of three levels of risk of bias: low, if all quality criteria were met; unclear or moderate, if one or more of the quality criteria were partly met or unclear; or high, if one or more of the criteria were not met or not included. Any discrepancies were addressed by a reassessment of the original article.
Eligibility
The main outcome focused on measuring the efficacy and safety of weekly long-acting growth hormone replacement therapy compared to daily growth hormone in children with short stature. Assessments of the effects of long-acting growth hormone replacement therapy versus daily growth hormone in children with short stature were extracted to form a summary.
Inclusion
Sensitivity analyses were limited to studies reporting the relationship between the efficacy and safety of long-acting growth hormone replacement therapy and daily growth hormone in children with short stature. For subcategory and sensitivity analyses, we compared long-acting growth hormone replacement therapy to daily growth hormone treatment.
Statistical Analysis
The dichotomous or continuous method with random-effect or fixed-effect models was used to calculate the odds ratio (OR) or mean difference (MD) with 95% CI. We used the Chi-square test to assess heterogeneity between studies and calculated the I² index, which ranges from 0 to 100%; values of about 0, 25, 50, and 75% indicate no, low, moderate, and high heterogeneity, respectively (24). When I² was higher than 50%, we chose the random-effect model; when it was lower than 50%, we used the fixed-effect model. A subgroup analysis was performed by stratifying the original evaluation per outcome, as described above. In this analysis, a p-value for differences between subgroups of <0.05 was considered statistically significant. Publication bias was evaluated quantitatively using the Egger regression test (publication bias considered present if p < 0.05), and qualitatively by visual examination of funnel plots of the logarithm of ORs or MDs versus their standard errors (SE) (22). All p-values were two-tailed. All calculations and graphs were produced using Review Manager version 5.3 (The Nordic Cochrane Centre, The Cochrane Collaboration, Copenhagen, Denmark).
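To make the pooling procedure concrete, the sketch below computes a fixed-effect and a DerSimonian-Laird random-effects pooled mean difference together with the I² index, the standard machinery behind inverse-variance meta-analysis of the kind performed in Review Manager; the study-level numbers in the example are made up for illustration.

```python
import numpy as np

def pool_mean_difference(md, se):
    """Inverse-variance pooling of per-study mean differences `md` with
    standard errors `se`: fixed effect, DerSimonian-Laird random effects,
    and the I^2 heterogeneity index."""
    md, se = np.asarray(md, float), np.asarray(se, float)
    w = 1.0 / se**2                              # fixed-effect weights
    md_fixed = np.sum(w * md) / np.sum(w)
    Q = np.sum(w * (md - md_fixed) ** 2)         # Cochran's Q
    k = len(md)
    I2 = 100.0 * max(0.0, (Q - (k - 1)) / Q) if Q > 0 else 0.0
    # DerSimonian-Laird between-study variance tau^2
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    wr = 1.0 / (se**2 + tau2)                    # random-effects weights
    md_rand = np.sum(wr * md) / np.sum(wr)
    se_rand = np.sqrt(1.0 / np.sum(wr))
    ci = (md_rand - 1.96 * se_rand, md_rand + 1.96 * se_rand)
    return {"MD fixed": md_fixed, "MD random": md_rand,
            "95% CI (random)": ci, "I2 (%)": I2}

# Illustrative, made-up study-level mean differences and standard errors.
print(pool_mean_difference(md=[-0.12, -0.08, -0.11], se=[0.03, 0.05, 0.04]))
```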
The 11 studies included 1,232 children with short stature treated with growth hormone replacement therapy at the start of the study; 737 of them were using weekly long-acting growth hormone replacement therapy and 495 were using daily growth hormone. All studies evaluated the efficacy and safety of weekly long-acting growth hormone replacement therapy compared to daily growth hormone in children with short stature.
The study size ranged from 25 to 440 children with short stature treated with growth hormone replacement therapy at the start of the study. The details of the 11 studies are shown in Table 2. Ten studies reported data stratified by height velocity; seven studies reported data stratified by height standard deviation scores for chronological age; five studies reported data stratified by height standard deviation scores for bone age; six studies reported data stratified by insulin-like growth factor 1 standard deviation scores; two studies reported data stratified by insulin-like growth factor binding protein-3; and 10 studies reported data stratified by the incidence of adverse events. Several studies stratified their data based on the dose given, and such data were extracted separately. The comparison results are shown in Figures 2-7.
A stratified analysis adjusting for children's age and ethnicity was not performed, since no studies reported or adjusted for these factors.
Based on visual examination of the funnel plot and quantitative measurement by the Egger regression test, there was no indication of publication bias (p = 0.86). However, most of the included studies were judged to be of low methodological quality. No study showed selective reporting bias, and no articles had incomplete outcome data.
Long-acting growth hormone replacement therapy had significantly lower height standard deviation scores for chronological age and insulin-like growth factor binding protein-3 compared to daily growth hormone in children with short stature.
However, long-acting growth hormone replacement therapy showed no significant difference in height velocity, height standard deviation scores for bone age, insulin-like growth factor 1 standard deviation scores, or incidence of adverse events compared to daily growth hormone in children with short stature (25)(26)(27)(28)(29)(30)(31)(32)(33)(34)(35). However, the analysis of outcomes should be interpreted with caution because of the small sample size of most of the selected studies in our meta-analysis (eight studies out of 11 had fewer than 100 subjects), suggesting the need for more studies to confirm these results, since small samples may significantly affect confidence in the outcome assessment, e.g., the height standard deviation scores for bone age comparison with its relatively low p-value of 0.16.
The insulin-like growth factor 1 standard deviation scores in these trials were within the normal range. Earlier studies have also shown a rise in serum insulin-like growth factor 1 levels after long-acting growth hormone therapy, but the mean insulin-like growth factor 1 standard deviation scores were all preserved within +2 standard deviations (36)(37)(38). This result may be associated with the slow-release property of long-acting growth hormone, which encourages liver production of insulin-like growth factor 1. There was a significant rise in the free insulin-like growth factor 1 level, which represents its biological activity after growth hormone therapy. As a result, the level of free insulin-like growth factor 1 is a good predictor of the response to recombinant human growth hormone (39). Guidelines established by the Pediatric Endocrine Society in 2016 suggested that determining serum insulin-like growth factor 1 levels could be used to monitor patient compliance and the production of insulin-like growth factor 1 when the growth hormone dose is changed. The growth hormone dose ought to be reduced if serum insulin-like growth factor 1 levels exceed the laboratory-defined upper limit of the normal range for the age or pubertal stage of the subject (40). The Growth Hormone Research Society states that there is no evidence that the potentially supra-physiological and sustained insulin-like growth factor 1 levels after long-acting growth hormone injection are related to an increased risk of adverse side effects (15). However, men in the highest quartile of plasma insulin-like growth factor 1 levels showed a higher risk of prostate cancer compared with men in the lowest quartile (41). Also, insulin-like growth factor 1 levels in the highest tertile or quintile were related to a higher risk of breast cancer or colorectal cancer (42,43). Moreover, high insulin-like growth factor 1 standard deviation scores during growth hormone therapy are associated with a risk of diminished insulin sensitivity (44). Therefore, management with growth hormone requires close monitoring of insulin-like growth factor 1 levels. Long-acting growth hormone formulations differ in the induction kinetics of serum insulin-like growth factor 1. Studies need to consider the pharmacokinetics and pharmacodynamics of each formulation to determine the clinically ideal time point for measuring insulin-like growth factor 1 levels (15). Also, the ideal dose and administration frequency of long-acting growth hormones with diverse mechanisms should be determined based on insulin-like growth factor 1 levels.

FIGURE 5 | Forest plot of the change in insulin-like growth factor 1 standard deviation scores in long-acting growth hormone replacement therapy compared to daily growth hormone.
Long-acting growth hormone replacement therapy had no significant difference in height velocity, height standard deviation scores for bone age, insulin-like growth factor 1 standard deviation scores, or incidence of adverse events compared to daily growth hormone in children with short stature, suggesting that the two are equivalent. However, the significant differences found in height standard deviation scores for chronological age and insulin-like growth factor binding protein-3 should be studied further, since the number of studies used to determine these significant differences was small. This meta-analysis showed the relationship between the efficacy and safety of long-acting growth hormone replacement therapy and daily growth hormone in children with short stature. However, further studies are needed to validate these potential relationships and to deliver a clinically meaningful difference in the results; such studies must comprise larger and more homogeneous samples. This was also suggested before in a similar meta-analysis that showed a similar effect of weekly long-acting growth hormone replacement therapy compared to daily growth hormone in children with short stature treated with growth hormone replacement therapy (45). Well-conducted studies are also required to evaluate these factors together with subject-level data, children's age, and ethnicity, since our meta-analysis could not answer whether they are related to the outcomes. In summary, long-acting growth hormone replacement therapy had significantly lower height standard deviation scores for chronological age and insulin-like growth factor binding protein-3 compared to daily growth hormone in children with short stature. However, growth hormone replacement therapy had no significant difference in height velocity, height standard deviation scores for bone age, insulin-like growth factor 1 standard deviation scores, or incidence of adverse events compared to daily growth hormone in children with short stature. Further studies are required to validate these findings.

FIGURE 7 | Forest plot of the incidence of adverse events in long-acting growth hormone replacement therapy compared to daily growth hormone.
Limitations
There may be selection bias in this study because many identified studies were excluded from the meta-analysis; however, the excluded studies did not fulfill the inclusion criteria. Also, whether the results are associated with age and ethnicity could not be answered. The study, designed to evaluate the relationship between the efficacy and safety of weekly long-acting growth hormone replacement therapy compared to daily growth hormone in children with short stature, was based on data from previous studies, which might introduce bias due to incomplete details. The meta-analysis was based on a small number of studies (11 studies), and eight of them were small (≤100 subjects). Variables including children's age, ethnicity, and nutritional status were also possible bias-inducing factors. Unpublished articles and missing data might lead to bias in the pooled effect. Subjects were using different treatment schedules, dosages, and health care systems. Lastly, none of the selected studies reported data on thyroid function, so a meta-analysis of this outcome was not possible. The selected studies included patients with short stature, but they did not properly define which criteria were used to conceptualize short stature. The inclusion or exclusion criteria of the selected studies also did not state whether patients with genetic syndromes were included, whether systemic diseases were excluded, or whether patients with idiopathic short stature, familial short stature, or constitutional short stature were included; these criteria vary widely across the populations studied. It was not recorded whether patients with growth hormone deficiency were included, either isolated or associated with multiple pituitary deficiencies. The impact of puberty and of hormone replacement with glucocorticoids and sex steroids was also not assessed in most of the selected studies, to an extent that did not allow us to analyze its effect in our meta-analysis. Finally, final height was not evaluated properly in the selected studies, an outcome that should always be the consistent objective of studies involving growth.
CONCLUSIONS
Long-acting growth hormone replacement therapy had significantly lower height standard deviation scores for chronological age and insulin-like growth factor binding protein-3 compared to daily growth hormone in children with short stature. However, growth hormone replacement therapy had no significant difference in height velocity, height standard deviation scores for bone age, insulin-like growth factor 1 standard deviation scores, or incidence of adverse events compared to daily growth hormone in children with short stature. Further studies are required to validate these findings. However, the analysis of outcomes should be interpreted with caution because of the small sample size of most of the selected studies in our meta-analysis, suggesting the need for more studies to confirm these results, as small samples may significantly affect confidence in the outcome assessment.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.
AUTHOR CONTRIBUTIONS
Conception and design: XZ. Administrative support: All authors. Provision of study materials or subjects: All authors. Collection and assembly of data: JC, LL, WP, CH, and LL. Data analysis and interpretation: All authors. Manuscript writing: All authors. Final approval of manuscript: All authors. All authors contributed to the article and approved the submitted version. | 2021-12-01T14:30:06.271Z | 2021-11-29T00:00:00.000 | {
"year": 2021,
"sha1": "b48410f33d87d50cbc8aff0fde63df15c1da250b",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2021.726172/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9e5a1966904d0712020d19d8de654e3a3a77b2e2",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
259297124 | pes2o/s2orc | v3-fos-license | KR4SL: knowledge graph reasoning for explainable prediction of synthetic lethality
Abstract Motivation Synthetic lethality (SL) is a promising strategy for anticancer therapy, as inhibiting SL partners of genes with cancer-specific mutations can selectively kill the cancer cells without harming the normal cells. Wet-lab techniques for SL screening have issues like high cost and off-target effects. Computational methods can help address these issues. Previous machine learning methods leverage known SL pairs, and the use of knowledge graphs (KGs) can significantly enhance the prediction performance. However, the subgraph structures of KG have not been fully explored. Besides, most machine learning methods lack interpretability, which is an obstacle for wide applications of machine learning to SL identification. Results We present a model named KR4SL to predict SL partners for a given primary gene. It captures the structural semantics of a KG by efficiently constructing and learning from relational digraphs in the KG. To encode the semantic information of the relational digraphs, we fuse textual semantics of entities into propagated messages and enhance the sequential semantics of paths using a recurrent neural network. Moreover, we design an attentive aggregator to identify critical subgraph structures that contribute the most to the SL prediction as explanations. Extensive experiments under different settings show that KR4SL significantly outperforms all the baselines. The explanatory subgraphs for the predicted gene pairs can unveil prediction process and mechanisms underlying synthetic lethality. The improved predictive power and interpretability indicate that deep learning is practically useful for SL-based cancer drug target discovery. Availability and implementation The source code is freely available at https://github.com/JieZheng-ShanghaiTech/KR4SL.
Introduction
Cancer is a group of human diseases for which interactions among many genes play crucial roles (Huang et al. 2020; Jariyal et al. 2020). Therefore, identifying genetic interactions is important for the discovery of anticancer drug targets. Synthetic lethality (SL) is a type of genetic interaction between two genes such that the inhibition of either gene alone leaves cells viable, while the inhibition of both genes leads to cell death (Kaelin 2005). This offers a promising strategy of cancer treatment. By targeting a gene that is nonessential in normal cells but synthetic lethal with a gene with cancer-specific alterations, we can kill the cancer cells without harming the normal cells. Besides, while many cancer-driver genes are not directly druggable (e.g. with loss-of-function mutations in tumor cells), we can instead target their SL partner genes, thereby expanding the anticancer drug target space (Setton et al. 2021). Given the advantages of SL-based cancer therapy, extensive efforts have been made to identify SL interactions in cancer cells. Some wet-lab techniques have been developed for large-scale SL screening, such as RNA interference and genome editing with CRISPR/Cas9 (O'Neil et al. 2017). However, the lab-screening techniques face some problems, such as high cost and off-target effects (Jariyal et al. 2020). Additionally, so far only a few SL-based drug targets have been validated to be clinically relevant, partly because the mechanisms of many SL pairs identified in the screenings remain unclear (Huang et al. 2020). To address these issues and speed up SL-based drug target discovery, many bioinformatic methods for SL prediction and analysis have been developed over the past decade.
Computational methods for SL prediction can be categorized into three types: statistical inference, network-based methods, and supervised machine learning methods (Wang et al. 2022a). The statistical methods are based on pre-defined hypotheses or rules to mine SL pairs. For instance, Jerby-Arnon et al. (2014) developed a data mining pipeline for SL inference named DAISY, assuming that two genes with an SL relationship should be co-inactivated less than expected but co-expressed significantly more frequently in cancer cells. Yang et al. (2021a) proposed a model named SiLi to predict SL pairs in liver cancer, utilizing five inference strategies (i.e. functional similarity, differential gene expression, pairwise gene co-expression, pairwise survival, and rank aggregation). Network-based methods predict SL pairs by constructing biological networks and analyzing the topological features of genes in the networks. For example, Liu et al. (2018) simulated a human cancer signaling network and predicted SL pairs based on the distances between cancer genes and non-cancer genes in the network. Ku et al. (2020) used a clustering method to predict KRAS-related SL clusters from a protein-protein interaction (PPI) network and identified vital pathways for the clusters. Both types of methods have good interpretability, since the hypotheses or network topologies can shed light on biological mechanisms underlying SL. However, the manual selection of hypotheses or topological features can be subjective, and these methods cannot make use of the known SL pairs identified by wet-lab experiments.
The third type of method is supervised machine learning, which can be divided into traditional machine learning and deep learning methods. For instance, DiscoverSL uses multiomics data (copy number alteration, gene expression, mutation, etc.) to extract gene features and employs random forest to predict SL pairs. SL²MF (Liu et al. 2020b) is a matrix factorization (MF) model which encodes the SL graph as a matrix and factorizes the matrix to learn the representations of gene pairs. Most traditional machine learning methods need to select gene features manually, which relies on prior knowledge and may introduce biases. MF-based methods cannot fully explore the structures of an SL graph for predicting novel SL pairs. The deep learning methods mostly learn and combine the latent representations of two genes to predict whether they have an SL relationship. DDGCN (Cai et al. 2020) is the first graph neural network model for predicting SL pairs; it uses a graph convolution network (GCN) and a dual-dropout mechanism to address the issues of data sparsity and overfitting. GCATSL (Long et al. 2021) is a graph attention network incorporating known SL pairs and gene features derived from the PPI network and gene ontology (GO). Although these methods have achieved good prediction accuracy, they are mostly black boxes lacking interpretability, i.e. offering little insight into the biological mechanisms underlying the predicted SLs.
Incorporating prior knowledge into the above supervised learning models can improve their interpretability. A knowledge graph (KG) is a graph-based data model that integrates structured knowledge using a network of entities (nodes) and relations between entities (edges of various types), and is a type of heterogeneous graph. The nodes and edges in a KG usually have well-defined descriptions which are easy to understand (Ji et al. 2022). KGs (and heterogeneous graphs in general) have been popular in application domains such as recommender systems, precision medicine, and drug discovery (Rotmensch et al. 2017; Chen et al. 2021; Zeng et al. 2022). By exploring the graph structures of gene pairs in a KG, the semantic relationships between genes as nodes in the KG can be captured to predict SL interactions. KG4SL (Wang et al. 2021) is the first KG-based method for SL prediction. It uses a graph neural network (GNN) which samples the 1-hop neighbors of genes from a KG to capture the local graph structures around the genes. PiLSL (Liu et al. 2022) learns the representation for a pair of genes based on an enclosing subgraph extracted from a KG. SLGNN (Zhu et al. 2023) extracts shared factors (entity types) from a KG and designs a factor-based GNN model to learn gene representations.
However, the above KG-based methods could fail to find some structures or patterns in a KG that are important for prediction and interpretability, for the following reasons. First, KG4SL and PiLSL randomly sample the neighbors for a gene or a pair of genes, and thus some important semantic structures of the KG may be excluded when learning the gene representations. Second, these methods are based on gene embeddings, i.e. two genes with similar embeddings are likely to have an SL relationship. As such, KG structures are implicitly encoded into the latent embeddings, which makes it hard to identify important graph structures directly. Although PiLSL has used the learned attention weights to extract important subgraphs as explanations, the attention weights of different layers may lead to different explanations. SLGNN selects the important entity types as explanations, whereas such entity types cannot explain specific SL mechanisms. Therefore, there is an urgent need for an explainable AI model that can capture the crucial KG structures for SL prediction and interpretation.
Path-based reasoning on KGs has been widely used in recommendation systems, KG completion, and drug-disease prediction (Wang et al. 2019a; Liu et al. 2021; Geng et al. 2022). A "relational path" connects two nodes in a KG through sequential relations (i.e. triples), which can be regarded as a rule to infer the relationship between the two nodes (Lao et al. 2011). As such, the path-based connectivity between a pair of nodes derived from KGs empowers the model to perform predictions and explanations. Recently, Zhang and Yao (2022) introduced a graph structure named "relational directed graph" (relational digraph for short), composed of overlapping relational paths, and they proposed a GNN-based method named RED-GNN for KG reasoning. Using dynamic programming, RED-GNN can efficiently extract and learn from relational digraphs for KG reasoning. Inspired by this study, we harness relational digraphs from a biomedical KG focused on SL to learn the semantic representations for SL predictions.
Here, we propose Knowledge Graph Reasoning for Synthetic Lethality (KR4SL), an end-to-end framework for explainable prediction of SL interactions. First, we design an encoder which automatically constructs multiple relational digraphs from a KG and then performs reasoning within these graphs, starting from the primary gene. Second, during the reasoning process at each layer, we combine the structural information of the relational digraphs and the textual semantics of entities as propagated messages, and we further enhance the sequential semantics of the relational paths within each relational digraph. In this way, we obtain powerful semantic representations of gene pairs to support SL predictions. Third, an attentive aggregation is conducted to distinguish the importance of different edges, so that we can select the most important edges to form paths as explanations. Experimental results under two scenarios show that KR4SL achieves the best performance compared with all the baselines. By analyzing the explanations in two case studies, we find that our model can not only interpret the prediction process but also provide explanatory evidence for the predicted SLs consistent with biological prior knowledge. In summary, our contributions are as follows:

• We highlight the importance of deploying the structural information of relational digraphs from a heterogeneous graph for SL prediction.

• We present a novel KG reasoning-based model named KR4SL which learns the semantic representations of gene pairs by efficiently encoding the structural information of relational digraphs in the KG and the textual semantics of entities, and by enhancing the sequential semantics using a gated recurrent unit (GRU). In addition, it uses attention mechanisms to identify important subgraphs as explanations. In short, KR4SL integrates the strengths of path-based methods and GNN-based methods for KG reasoning and offers interpretability.

• We conduct experiments on a real dataset under two different settings. The experimental results demonstrate that KR4SL has superior performance for SL prediction compared with multiple state-of-the-art models. In addition, through case studies, we show that KR4SL can provide intuitive explanations by revealing the prediction process and providing evidence that sheds light on biological mechanisms of SL.
Methods
In this section, we first briefly present the preliminaries and the problem formulation, and then introduce the detailed implementation of our framework.
Problem statement
SL graph. We represent the observed SL interactions as a graph $\mathcal{G}_1$, defined as a set of SL pairs $\{(g_u, g_v) \mid g_u, g_v \in V \text{ and } u, v \le M\}$, where $M$ is the number of genes and $V = \{g_1, g_2, \ldots, g_M\}$ is the set of all genes in $\mathcal{G}_1$.
Knowledge graph. In addition to the SL interactions, we also have external knowledge about the functions of genes, such as pathways and biological processes. We represent this information as a KG $\mathcal{G}_2$, composed of a set of triples $\{(e_h, r, e_t) \mid e_h, e_t \in E,\ r \in R\}$, where $E$ is a set of entities and $R$ is a set of relations. $\mathcal{G}_2$ is a directed graph, and each triple in $\mathcal{G}_2$ represents a type of semantic meaning. For example, (TP53, involved_in, double-strand break repair) means that the gene TP53 is involved in the biological process of double-strand break repair.
Relational directed graph. Suppose that we have a heterogeneous graph $\mathcal{G}$, constructed by merging the known SL graph $\mathcal{G}_1$ and the KG $\mathcal{G}_2$. To do this, we directly map genes from $\mathcal{G}_1$ to entities in $\mathcal{G}_2$, and we add new edges for the corresponding gene pairs according to $\mathcal{G}_1$. The new edges are bidirectional and represent SL relationships; thus $\mathcal{G}$ is a directed graph. A "relational path" is defined as a sequence of relations in $\mathcal{G}$ connecting a source entity and a target entity. The concept of "relational directed graph" was introduced by Zhang and Yao (2022). A relational directed graph (relational digraph) $\mathcal{G}^K_{g_q, g_p}$ is a subgraph of $\mathcal{G}$ with a source entity $g_q$ and a target entity $g_p$. The graph has $K$ layers, and the nodes within the same layer are distinct. In this graph, any path from $g_q$ to $g_p$ is a relational path of length $K$, of the form $g_q \xrightarrow{r_1} e_1 \xrightarrow{r_2} \cdots \xrightarrow{r_K} g_p$, where $r_k$ is an edge pointing from a node in the $(k-1)$-th layer to a node in the $k$-th layer.
Problem statement. Given a primary gene $g_q$ and the heterogeneous graph $\mathcal{G}$, we assume that $V_q \subseteq V$ is the set of all known SL partners of $g_q$, and $V^u_q = V - V_q - \{g_q\}$ is the set of genes having unknown SL relationships with $g_q$. Our task is to infer the $N$ genes from $V^u_q$ that are most likely to be SL partners of $g_q$. To achieve this, we can cast all genes in $V$ as candidate partners of $g_q$ and predict scores for them. Then, from the predicted scores, we select the scores of genes in $V^u_q$ and rank them in descending order to determine the top-N candidate SL partners.
Overview of KR4SL
In Fig. 1a, the known SL graph and the KG are combined into a heterogeneous graph $\mathcal{G}$, which is a directed graph. We then augment the graph with reverse edges (having directions opposite to the original edges) and identity edges (connecting nodes to themselves). Our model is composed of an encoder-decoder framework. As shown in Fig. 1b, for a primary gene $g_q$, the designed encoder automatically finds relational paths and conducts reasoning along these paths. At each layer, the structural semantics of the KG and the textual semantics (textual descriptions of entities) are fused as the passed messages, and the sequential semantics of relational paths are enhanced through a GRU module (Cho et al. 2014). We regard the genes at the last layer as the candidate SL partners of $g_q$. In Fig. 1c, the semantic representations of these candidates are passed through a scoring decoder to generate scores. We rank all the candidates according to the generated scores in descending order and select the top-N candidates as the predicted partners for $g_q$.

Figure 1. The overall framework of our method. (a) A heterogeneous graph is constructed from a known SL graph and a KG, and our task is to predict novel SL partner genes (e.g. ATM and CDK1) for a primary gene (e.g. TP53). (b) The semantic information encoder of KR4SL. The top row is the union of 3-hop relational digraphs with the same source node TP53. Dashed arrows denote the reverse relations of the corresponding colors. At each layer, information is propagated from source entities to target entities. The bottom row details the encoding process at the k-th layer. For each triple, the embeddings of textual semantics (extracted from a pre-trained BERT model) and the embeddings of structural semantics are used to calculate the message; then attentive aggregation is performed among different triples with the same target. The sequential semantics is enhanced through a GRU, and finally we obtain the semantic embedding for a target at this layer. (c) In the scoring decoder, the semantic embedding at the preceding layer is passed through a feed-forward layer to generate the score, measuring the possibility of a gene being an SL partner of TP53. All candidates are ranked and the top-N candidates are selected as the predicted SL partners of TP53. (d) For a predicted gene pair, the explanation subgraph is extracted from the relational digraph by choosing the edges with high attention weights learned during attentive aggregation.
Semantic information representation encoder
Before diving into the details of our model, we introduce the overall reasoning procedure. For a primary gene $g_q$ (the initial source node) and one of its candidate genes $g_p$ (the target node), we repeat the following processes within $\mathcal{G}^K_{g_q, g_p}$ for $K$ times:

• For a source node, find all its neighbors as the target nodes;

• Propagate the message from the source node to its target nodes and only calculate the representations of the target nodes;

• Take the target nodes as the source nodes for the next layer.

2.3.1 Constructing relational digraphs for gene pairs with the same primary gene

Since the reasoning process is conducted within relational digraphs, we need to find the relational digraphs between a primary gene and all its candidate genes. A common approach is to extract subgraphs or paths for multiple gene pairs separately (Teru et al. 2020; Yang et al. 2021b; Liu et al. 2022). However, this may be time-consuming if there are many gene pairs in a large-scale graph. Here, we adopt an efficient method that can collectively derive the relational digraphs between a primary gene and all its candidate genes (Zhang and Yao 2022).
Suppose that $\mathcal{G}^K_{g_q, e_{p_1}}$ is the K-hop relational digraph for $(g_q, e_{p_1})$, in which all the relational paths have length $K$. Similarly, $\mathcal{G}^K_{g_q, e_{p_i}}$ is the K-hop relational digraph for the source node $g_q$ and an arbitrary entity $e_{p_i}$. We can find that different relational digraphs share overlapping triples (refer to the exemplar graphs in Fig. 1b); thereby we can represent the union of all K-hop relational digraphs with source entity $g_q$ as

$\mathcal{G}^K_{g_q} = \bigcup_{i=1}^{N} \mathcal{G}^K_{g_q, e_{p_i}}, \quad (1)$

where $\{e_{p_1}, e_{p_2}, \ldots, e_{p_N}\} = E^K_{g_q}$ denotes all the entities at the K-th layer in $\mathcal{G}^K_{g_q}$. Therefore, for the gene pairs with the same source gene $g_q$, we do not need to extract each relational digraph separately. Instead, we can recursively construct the union of all K-hop relational digraphs $\mathcal{G}^K_{g_q}$ for $g_q$, using the strategy of dynamic programming (Xu et al. 2020). Specifically, we take the primary gene $g_q$ as the initial source node. Based on the graph $\mathcal{G}^{k-1}_{g_q}$, we find all the neighbors at the k-th layer to obtain $\mathcal{G}^k_{g_q}$. This process is similar to the breadth-first-search algorithm. At the K-th layer, we obtain $\mathcal{G}^K_{g_q}$ with a tree-like structure. The entities at the K-th layer are the nodes reachable from $g_q$ within K steps, among which the genes are the candidate partners for $g_q$.
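As a concrete illustration of this layer-wise construction, the following minimal Python sketch builds the union of K-hop relational digraphs by breadth-first expansion over a triple list. It is not the authors' implementation; the data layout (a list of (head, relation, tail) tuples) and all names are illustrative assumptions.

```python
from collections import defaultdict

def build_relational_digraph_union(triples, g_q, K):
    """Layer-by-layer construction of the union of all K-hop relational
    digraphs with source entity g_q, similar to breadth-first search.

    triples: iterable of (head, relation, tail) tuples (the graph G).
    Returns a list of K edge lists; layer k holds the triples connecting
    layer k-1 nodes to layer k nodes."""
    out_edges = defaultdict(list)           # head -> [(relation, tail), ...]
    for h, r, t in triples:
        out_edges[h].append((r, t))

    layers = []
    frontier = {g_q}                        # source nodes of the current layer
    for _ in range(K):
        layer_edges, next_frontier = [], set()
        for src in frontier:
            for r, dst in out_edges[src]:
                layer_edges.append((src, r, dst))
                next_frontier.add(dst)
        layers.append(layer_edges)
        frontier = next_frontier            # targets become the next sources
    return layers

# Tiny usage example on a toy graph:
toy = [("TP53", "involved_in", "DNA_repair"),
       ("DNA_repair", "involved_in_inv", "ATM"),
       ("TP53", "SL", "CHEK2")]
print(build_relational_digraph_union(toy, "TP53", 2))
```

Because all candidate targets share the same per-layer frontier, the cost is paid once per primary gene rather than once per gene pair, which is the point of the dynamic-programming formulation.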
Message propagation along relational paths
Formally, starting from $g_q$, we first construct $\mathcal{G}^k_{g_q}$ based on $\mathcal{G}^{k-1}_{g_q}$; then the information is propagated from the $(k-1)$-th layer to the $k$-th layer. Let $h^{k-1}_{g_q, e_i} \in \mathbb{R}^d$ be the semantic representation propagated from $g_q$ to $e_i$ in $k-1$ steps, where $e_i \in E^{k-1}_{g_q}$ is an entity at the $(k-1)$-th layer and $d$ is the size of the latent dimension. For a triple $(e_i, r_{io}, e_o)$ connecting the $(k-1)$-th layer with the $k$-th layer, we denote by $h^k_{r_{io}} \in \mathbb{R}^d$ the representation of $r_{io}$ at the $k$-th layer. We then compute the semantic information passed from $e_i$ to $e_o$ as

$m^k_{(e_i, r_{io}, e_o)} = W^k_1 \left( h^{k-1}_{g_q, e_i} + h^k_{r_{io}} \right), \quad (2)$

where $W^k_1 \in \mathbb{R}^{d \times d}$ is a linear transformation matrix at the $k$-th layer.

Since the semantic information is mainly learned from the KG, when the structure of the KG is very sparse the reasoning process cannot be fully supported (Chen et al. 2020; Malaviya et al. 2020), and the model cannot provide informative interpretations. To enrich the semantics of the KG for reasoning, we integrate textual information into our model. Textual information is usually learned from large-scale unstructured corpora (e.g. articles), and it has been widely used in natural language processing (Liu et al. 2020a; Ju et al. 2022). Here, we take the entity descriptions in the KG as input and use a BERT model named CODER (Yuan et al. 2022) to generate a textual embedding for each entity. Since CODER is pre-trained on a biomedical corpus (UMLS, which includes biomedical terms such as GO and gene), the textual embedding contains rich biological semantics related to our task. Suppose $T_i \in \mathbb{R}^d$ is the textual embedding of $e_i$ and $T_o \in \mathbb{R}^d$ is the textual embedding of $e_o$; we modify Equation (2) as

$m^k_{(e_i, r_{io}, e_o)} = W^k_1 \left( h^{k-1}_{g_q, e_i} + T_i + h^k_{r_{io}} + T_o \right). \quad (3)$

In this way, the textual semantics and the structural semantics from the KG are fused into the reasoning process.
We then generate the semantic representation for $e_o$. Since there may be multiple source nodes connected with $e_o$, to distinguish the importance of different nodes and the corresponding relations, we employ an attention mechanism to aggregate information from all source nodes (Veličković et al. 2018):

$z^k_{g_q, e_o} = W^k_2 \sum_{(e_i, r_{io}, e_o)} \alpha^k_{e_i, r_{io}, e_o}\, m^k_{(e_i, r_{io}, e_o)}, \quad (4)$

where $W^k_2 \in \mathbb{R}^{d \times d}$ and $\alpha^k_{e_i, r_{io}, e_o}$ is the attention weight for the edge $(e_i, r_{io}, e_o)$, calculated by

$\alpha^k_{e_i, r_{io}, e_o} = \mathrm{softmax}\left( (W^k_{a1})^{\top} \mathrm{ReLU}\left( W^k_{a2}\, m^k_{(e_i, r_{io}, e_o)} \right) \right), \quad (5)$

where $W^k_{a1} \in \mathbb{R}^d$ and $W^k_{a2} \in \mathbb{R}^{d \times d}$ are two linear transformation matrices, and the softmax is taken over the triples sharing the same target $e_o$. $z^k_{g_q, e_o}$ is the intermediate representation encoding the information propagated from $g_q$ to $e_o$.
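The grouped-softmax aggregation can be sketched in a few lines of NumPy. This is a hedged illustration of the mechanism, not the paper's code: the precise attention score function may differ from the GAT-style form assumed here, and all array layouts are our own.

```python
import numpy as np

def attentive_aggregation(messages, targets, n_entities, W2, Wa1, Wa2):
    """Attention-weighted aggregation of edge messages per target entity.

    messages: (n_edges, d) array, one message per triple (e_i, r_io, e_o).
    targets:  (n_edges,) array of target-entity indices e_o."""
    logits = np.maximum(Wa2 @ messages.T, 0.0)       # (d, n_edges) after ReLU
    logits = Wa1 @ logits                            # (n_edges,) raw scores
    # softmax over the edges that share the same target entity
    alpha = np.exp(logits - logits.max())
    denom = np.zeros(n_entities)
    np.add.at(denom, targets, alpha)
    alpha = alpha / denom[targets]
    # weighted sum of messages per target, then a linear transform
    z = np.zeros((n_entities, messages.shape[1]))
    np.add.at(z, targets, alpha[:, None] * messages)
    return z @ W2.T                                  # (n_entities, d)

# Toy call with 6 edges, 5 entities, latent dimension 4:
d, E, n = 4, 6, 5
rng = np.random.default_rng(0)
z = attentive_aggregation(rng.normal(size=(E, d)),
                          np.array([0, 0, 1, 2, 2, 4]), n,
                          rng.normal(size=(d, d)), rng.normal(size=d),
                          rng.normal(size=(d, d)))
```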
Learning sequential semantics
Next, we further enhance the sequential information from the $(k-1)$-th layer to the $k$-th layer, which can also be seen as enhancing the sequential dependencies of the relational paths. We adopt a GRU (Cho et al. 2014) at the $k$-th layer to calculate the final semantic representation of $e_o$:

$r^k = \sigma\left( W_r z^k_{g_q, e_o} + U_r h^{k-1}_{g_q, e_o} \right),$
$f^k = \sigma\left( W_f z^k_{g_q, e_o} + U_f h^{k-1}_{g_q, e_o} \right),$
$n^k = \tanh\left( W_n z^k_{g_q, e_o} + U_n \left( r^k * h^{k-1}_{g_q, e_o} \right) \right),$
$h^k_{g_q, e_o} = \left( 1 - f^k \right) * n^k + f^k * h^{k-1}_{g_q, e_o}, \quad (6)$

where $r^k, f^k, n^k$ are the reset, update, and new gates at the $k$-th layer, respectively, $\sigma$ is the sigmoid function, and $*$ is the element-wise product. After $K$ steps, we obtain the representations of all entities at the $K$-th layer. From these entities we choose the genes to be the reachable candidate partners of $g_q$. For instance, $g_p$ is one of the candidate genes reachable from $g_q$ in $K$ steps, and $h^K_{g_q, g_p}$ represents the semantic information propagated from $g_q$ to $g_p$.
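In practice this step amounts to one GRU-cell update per layer, with the aggregated representation $z^k$ as the input and the previous layer's representation as the hidden state. A minimal PyTorch sketch, assuming the paper's latent dimension of 48 and toy tensors:

```python
import torch
import torch.nn as nn

d = 48                                   # latent dimension used in the paper
gru = nn.GRUCell(input_size=d, hidden_size=d)

# z_k: aggregated representations at layer k (one row per reached entity);
# h_prev: representations propagated up to layer k-1 for the same entities.
z_k = torch.randn(10, d)
h_prev = torch.randn(10, d)
h_k = gru(z_k, h_prev)                   # final semantic representation at layer k
print(h_k.shape)                         # torch.Size([10, 48])
```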
Scoring decoder
Given the representation for a primary gene $g_q$ and one of its candidates $g_p$, we calculate a score for $g_p$, indicating the possibility of $g_p$ being an SL partner of $g_q$. We leverage a feed-forward layer as the scoring function:

$s(g_q, g_p) = W^{\top}_{ff}\, h^K_{g_q, g_p} + b_{ff}, \quad (7)$

where $W_{ff} \in \mathbb{R}^d$ and $b_{ff}$ are learnable parameters. Note that for those candidate genes that do not appear at the last layer, we set their scores to zero. In this way, we can obtain the scores between $g_q$ and any genes in the SL graph.
Optimization
As mentioned before, since all the genes in the SL graph can be candidate SL partners for a primary gene, we regard each candidate partner as a class and use a multi-class loss function (i.e. cross-entropy) to train our model through stochastic gradient descent:

$\mathcal{L} = - \sum_{(g_q, g_p) \in C} \log \frac{e^{s(g_q, g_p)}}{\sum_{\forall g \in C_{g_q}} e^{s(g_q, g)}}, \quad (8)$

where $C$ is the set of gene pairs for training, and $C_{g_q}$ denotes all the training gene pairs with the same primary gene $g_q$.
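For a single primary gene, Equation (8) reduces to a log-softmax over the candidate scores. A minimal PyTorch sketch, with illustrative tensor names that are not taken from the released code:

```python
import torch
import torch.nn.functional as F

def kr4sl_loss(scores, positive_idx):
    """Multi-class cross-entropy over the candidate partners of one primary gene.

    scores:       (n_candidates,) predicted scores s(g_q, g) for all candidates.
    positive_idx: indices of the known SL partners g_p of g_q.
    Computes -sum_p log softmax(scores)[p], matching Equation (8)."""
    log_probs = F.log_softmax(scores, dim=0)
    return -log_probs[positive_idx].sum()

loss = kr4sl_loss(torch.randn(100), torch.tensor([3, 17]))
```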
3 Results

We performed experiments on the SynLethDB dataset under two scenarios, i.e., the transductive setting and the inductive setting (see more details in Section 3.1.2). Our empirical study aims to answer the following research questions:

• RQ1: How does KR4SL perform compared to the current state-of-the-art methods?

• RQ2: How does the modeling of semantic information affect KR4SL?

• RQ3: Can KR4SL provide explanations for predicted gene pairs, and are these explanations reasonable reflections of SL mechanisms?
3.1 Experimental setup
Dataset description
The SL gene pairs we used are derived from SynLethDB-v2.0 (Wang et al. 2022b). The gene pairs in SynLethDB-v2.0 are collected from multiple sources, including wet-lab screenings (e.g. shRNA, RNAi, and CRISPR screenings), statistical screening (DAISY), text mining, etc. Additionally, a biomedical KG named SynLethKG is included in the database. Here, we focused on explaining SL pairs from the perspective of gene functions. Thus, the knowledge about genes and their functions was extracted from the original SynLethKG. The resulting KG has five types of entities: "Gene", "Pathway", and three types of GO terms, i.e. biological processes (BP), molecular functions (MF), and cellular components (CC). Four types of relations are included to describe whether a gene is involved in a pathway or annotated with a GO term, namely "Participate_in_PW", "Participate_in_BP", "Participate_in_MF", and "Participate_in_CC". Hence, the triples in this KG are all between a gene and other entities. To further enrich the semantic information of the KG, we complemented the knowledge about gene-GO and GO-GO relations using the data from OntoProtein (Zhang et al. 2022). The final KG contains more types of relationships, and the overall statistics of the SL graph and the final KG are shown in Table 1. Details about the final KG are summarized in Supplementary Table S1.
Experimental settings
To fully evaluate the performance of our model, we conducted experiments under the following scenarios.
• Transductive reasoning: Given an SL graph and a KG, the model infers the missing SL partners (or SL interactions) for a gene to complement the SL graph. In this setting, we split the dataset by gene pairs, so the genes in the test set may also appear in the training set.

• Inductive reasoning: The model infers SL partners for genes unseen during training. In this setting, the training set and the test set have disjoint sets of genes. This setting further examines the generalization ability of the model.
Recall that our model has no constraint on the number of negative gene pairs. During inference, for a primary gene, the model outputs scores for all the candidate partner genes and ranks the scores in descending order. Therefore, we leverage ranking metrics to evaluate the model performance, namely NDCG@N (Normalized Discounted Cumulative Gain), Precision@N, and Recall@N (He et al. 2017; Wang et al. 2022c). Here, N is the number of candidate genes in the ranking list. The final metrics are averaged over all primary genes in the test set.
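For concreteness, one common way of computing these three metrics for a single primary gene is sketched below. Binary relevance is assumed; the paper's exact NDCG convention may differ slightly.

```python
import numpy as np

def ranking_metrics(ranked_genes, true_partners, N):
    """Precision@N, Recall@N and NDCG@N for one primary gene.

    ranked_genes:  candidate genes sorted by predicted score (descending).
    true_partners: set of held-out SL partners of the primary gene."""
    top = ranked_genes[:N]
    hits = np.array([g in true_partners for g in top], dtype=float)
    precision = hits.sum() / N
    recall = hits.sum() / max(len(true_partners), 1)
    dcg = (hits / np.log2(np.arange(2, N + 2))).sum()
    ideal = min(len(true_partners), N)          # best achievable ranking
    idcg = (1.0 / np.log2(np.arange(2, ideal + 2))).sum()
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return precision, recall, ndcg

print(ranking_metrics(["g1", "g2", "g3", "g4"], {"g2", "g9"}, N=4))
```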
For transductive reasoning, we randomly split the gene pairs into $T_{train}$, $T_{val}$, and $T_{test}$ with the ratio 7:1:2. 60% of $T_{train}$ are further sampled to be the known SL pairs. In this way, the KG and the known SL pairs are combined to construct the heterogeneous graph $\mathcal{G}$, which is used to extract paths for training and inference. Note that all the undirected SL pairs are treated as bidirectional relations in our experiments.
For inductive reasoning, we follow the general setting of link prediction tasks to split the datasets (Teru et al. 2020; Zhang and Yao 2022). The set of SL genes is first divided into training genes and testing genes with the ratio 6:4. Then, according to the two gene sets obtained, we sample the corresponding subgraphs from the original KG: a training sub-KG and a testing sub-KG. The two sub-KGs have disjoint sets of genes, so that the genes used for testing are unseen during the training process. Next, for the SL pairs composed of training genes, we randomly split them into $K_{train}$, $T_{train}$, and $T_{val}$ with the ratio 4:3:3. $K_{train}$ is taken as the known SL pairs. We combine the training sub-KG with $K_{train}$ to construct $\mathcal{G}_{train}$, which is used as the input for $T_{train}$ and $T_{val}$. Similarly, for the SL pairs composed of testing genes, we generate $K_{test}$ and $T_{test}$ with the ratio 4:6. $K_{test}$ and the testing sub-KG are used to construct $\mathcal{G}_{test}$, which is the input for $T_{test}$. As in the transductive setting, all the SL pairs are augmented with reverse relations.
Implementation details
In our model, we use three GNN layers, which means that all the relational paths have a length of three. The Adam algorithm (Kingma and Ba 2015) is used as the optimizer, and $L_2$ regularization is adopted. We tune the weight decay rate in $[10^{-5}, 10^{-3}]$ and the learning rate in $[10^{-4}, 10^{-3}]$, with a maximum of 50 epochs. The batch size is set to 50, the latent embedding size is 48, and the dropout rate is 0.5. We run each experiment five times to compute the mean and standard deviation of the results, using an Nvidia Tesla V100 GPU.
Baselines
We compared our proposed method with the following methods:

• DDGCN (Cai et al. 2020) designs a GCN model with dual dropouts to learn gene representations from the SL graph.

• SL²MF (Liu et al. 2020b) utilizes logistic MF to learn gene similarities for SL prediction, and the learning process is constrained by GO and PPI annotations.

• SLMGAE (Hao et al. 2021) integrates the known SL graph, GO and PPI and designs a multi-view graph autoencoder (GAE) to complete the SL interaction graph.

• GRSMF (Huang et al. 2019) reconstructs the SL interaction graph based on graph-regularized self-representative MF. PPI and GO are utilized to regularize the learning process.

• GCATSL (Long et al. 2021) takes the known SL graph, PPI and GO as input graphs and incorporates a dual attention mechanism to complete the SL graph.

• KG4SL (Wang et al. 2021) adopts a GCN to learn the structural information of genes from a KG and predict SL interactions.

Among the above baselines, for fair comparisons, we use the same GO and PPI annotations for SL²MF, SLMGAE, GRSMF, and GCATSL. Note that PiLSL (Liu et al. 2022) also uses a KG for SL prediction. Different from KG4SL and our model, it requires extracting an enclosing graph from the KG for each gene pair and then taking the enclosing graph as input. According to the setting of our task, all possible candidate genes of a primary gene are ranked according to their predicted scores. The extraction of the enclosing graphs for all possible gene pairs (~40 million) is thus very time-consuming. As such, we did not implement PiLSL as a baseline. In addition to the above baselines designed for SL prediction, we also implemented four methods designed for standard link prediction. ComplEx (Trouillon et al. 2016) and TransE (Bordes et al. 2013) are graph embedding methods, AnyBURL (Meilicke et al. 2019) is a rule-based method, and HAN (Wang et al. 2019b) is a GNN model that learns node embeddings from a heterogeneous graph based on meta-paths.
Performance comparison (RQ1)
We conducted experiments under the transductive and inductive settings. Tables 2 and 3 report the performance of various methods evaluated by seven ranking metrics. The complete results are reported in Supplementary Tables S2, S3, and S5. We categorized the baseline methods into two groups according to their input data. The first group of baselines (DDGCN, SL²MF, SLMGAE, GRSMF, and GCATSL) uses known SL pairs and gene features (GO and PPI) as input without using KGs, while the second group (KG4SL, SLGNN, and NSF4SL) takes a KG as input, in which SLGNN also makes use of known SL interactions.
For transductive reasoning, both GCATSL and NSF4SL have good performance, indicating that the known SL pairs and the KG help predict new SL pairs. For inductive reasoning, most baselines fail to predict SL pairs with unseen genes. Basically, these methods are based on gene embeddings learned by capturing the topology of the genes in the SL graph or KG. However, we cannot obtain accurate embeddings for genes unseen in the training data, as their topology in the graph is unknown. Meanwhile, NSF4SL does not rely on the graph topology to make predictions, and thereby it can maintain good prediction performance.
Compared with all the baselines, KR4SL consistently yields superior performance in the two scenarios. For example, under the transductive setting, KR4SL improves over the best baseline (NSF4SL) by 78.1%, 35.47%, and 34.54% in terms of NDCG@50, Precision@50, and Recall@50, respectively. This indicates that it is beneficial to consider multi-hop structures of the heterogeneous graph without sampling neighbors, and that integrating the information of known SL pairs and the semantics of the KG is helpful for SL reasoning. Moreover, since KR4SL infers SL partners by learning relation rules rather than gene embeddings, our model still performs very well under the inductive setting.
Model analysis for KR4SL (RQ2)
We conducted additional experiments to investigate the impact of semantic information on model performance. We first performed ablation studies with four types of model variants. Then we analysed how the size of the directed relational digraphs and the dimension of the latent embeddings affect the model performance.
Effect of semantic information
We derived seven variants from the original model. Instead of using all the neighbors at each layer, KR4SL-16n randomly samples 16 neighbors to be the target entities for each source entity, with similar notions for KR4SL-32n, KR4SL-64n, and KR4SL-128n. Here, the mean and median of the total number of neighbors per source entity are 33 and 12, respectively (both values rounded up to integers), and the standard deviation is 94.52. KR4SL w/o text is a variant without the textual embeddings, i.e. Equation (2) instead of Equation (3) is used to calculate the propagated message. KR4SL w/o att is designed by removing the attention weights when performing the aggregation in Equation (4). Additionally, we introduced a variant named KR4SL w/o gru by removing the GRU module in Equation (6), letting $h^k_{g_q, e_o} = z^k_{g_q, e_o}$ at the $k$-th layer. Table 4 shows the results evaluated by seven metrics under the transductive setting. The complete results are shown in Supplementary Table S4. We compared the results of these variants with the original KR4SL, and we summarize our observations as follows.
• Compared with KR4SL, the performance of the four neighbor-sampling variants decreases dramatically. These variants only use partial paths to propagate the information, so neither the structural semantics of the KG nor the textual semantics is fully leveraged. From KR4SL-16n to KR4SL-128n, the model performance increases with the number of sampled neighbors, demonstrating that more semantic information is exploited. KR4SL performs the best since it fully utilizes all the neighbors at each layer without sampling.

• KR4SL w/o text underperforms KR4SL in terms of all the metrics. This result demonstrates that integrating the textual semantics of entities is valuable when reasoning about SL pairs on the KG.

• KR4SL w/o att underperforms KR4SL, suggesting that the attention mechanism is necessary to capture the critical structures of the heterogeneous graph.

• The performance of KR4SL w/o gru drops significantly compared with KR4SL, indicating that the sequential semantics learned by the GRU is crucial for prediction. One reason could be that our model relies on the relational paths (acting as rules) to perform reasoning, so encoding the sequential dependencies of the relational paths plays an important role. Here, the GRU is able to capture sequential semantics by memorizing the historical information along the relational paths, thereby enhancing the relational knowledge.
Effect of the length of relational paths
To examine how the length of the relational paths (i.e. the model depth) affects the model performance, we selected the number of GNN layers in the range [1, 2, 3, 4]. KR4SL-1h means that the 1-hop neighboring genes of a primary gene are deemed candidate partner genes, with similar notions for the others. Figure 2a shows that increasing the model depth can boost the prediction performance. In particular, KR4SL-3h achieves a significant improvement compared with KR4SL-2h. These results suggest that a directed relational digraph (or its relational paths) should cover at least 3-hop neighbors, so as to provide informative semantics for reasoning.

Table 4. Effect of semantic information. N@N, P@N, and R@N are short for NDCG@N, Precision@N, and Recall@N, respectively. The best performance in each column is marked in bold.
Effect of the dimension of latent embeddings
To test how the dimension of the latent embeddings affects the model performance, we selected the embedding dimension in the range [16, 32, 48, 64]. KR4SL-16d means that the size of the latent semantic embeddings $d$ is set to 16, with similar notions for the others. From Figure 2b, we can see that there is no obvious difference among the models, except that the performance of KR4SL-16d is slightly lower than the others. This indicates that our model is relatively stable when changing the size of the latent embeddings. Meanwhile, if the embedding size is too small, the latent embeddings may not be informative enough to make good predictions.
Case study of explainability (RQ3)
Since KR4SL infers SL pairs along relational paths within relational digraphs, the whole reasoning process can be unraveled. Furthermore, benefiting from the attention mechanism, we can select the important paths (or subgraphs) from the original relational digraphs for the predicted SL pairs. Such paths (or subgraphs) provide the reasoning evidence and also reveal the corresponding SL mechanisms. Specifically, for a primary gene and one of its predicted partners, we first extracted the 3-hop relational digraph for the two genes. To find paths as complete as possible, for the first two layers we chose the edges with attention weights higher than a threshold (set to 0.9 according to the distribution of attention weights). We further selected the edges at the last layer with attention weights within the top five. In this way, we obtain the complete paths or a subgraph with multiple paths. Here we show two cases drawn from the learned KR4SL to demonstrate its explainability. We first analysed the gene pair of BRCA1 and USP1. As a tumor suppressor gene, BRCA1 has an established role in the DNA repair process. It can promote double-strand break repair through homologous recombination. USP1 is also crucial for DNA repair (Yoshida and Miki 2004). By controlling PCNA monoubiquitination, USP1 is able to regulate homologous recombination and repair broken DNA strands (Huang et al. 2006). A previous study has shown that inhibiting USP1 can reduce the viability of BRCA1/2-mutant cancer cells (e.g. ovarian cancer cells); thereby USP1 is an SL partner of BRCA1 and has been a promising drug target in clinical cancer therapy (Simoneau et al. 2023). In our model, USP1 is predicted to be the 25th SL partner among the top 50 candidates of BRCA1, and the top 50 candidates of BRCA1 are provided in Supplementary Table S6. In the explanation subgraph shown in Fig. 3a, we can see that there are multiple paths from BRCA1 to USP1, and these paths follow the same schema: BRCA1 →(SL) a known SL partner of BRCA1 →(involved_in) GO term →(involved_in$^{-1}$) USP1. The schema can be cast as a rule used to reason about the SL partners of BRCA1, and it shows that USP1 is a newly predicted partner for BRCA1 because USP1 and the known partners have GO terms in common. Specifically, two known SL partners (FANCA, PARP1) share the same biological processes (DNA repair and cellular response to DNA damage stimulus) with USP1. USP11 and USP1 are both involved in the ubiquitin-dependent protein catabolic process. The three highlighted GO terms are closely related to the process of DNA repair, as mentioned above. Therefore, the explanation provides reasonable evidence of why USP1 is responsible for the SL with BRCA1. The second case is ATM and ATR. ATR and ATM are signaling kinases that sense DNA damage and are major activators of the DNA damage response (Kantidze et al. 2018). For ATM-defective cancer cells (e.g. CLL cells), ATR becomes the main activator regulating DNA replication stress, and inhibition of ATR can induce unrepaired DNA damage and result in cell death. ATR is thus a very promising target for treating ATM-defective tumors (Kwok et al. 2016; Wilson et al. 2022). Here, ATR is predicted to be the 34th SL partner among the top 50 candidates of ATM, and the top 50 candidates of ATM are reported in Supplementary Table S7. From the explanation subgraph in Fig. 3b, we observe a schema similar to that of BRCA1 and USP1, i.e. ATM →(SL) a known SL partner of ATM →(involved_in) GO term →(involved_in$^{-1}$) ATR.
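The edge-selection rule just described (threshold 0.9 on the first two layers, top-5 at the last layer) can be sketched as follows; the data layout is an illustrative assumption, not the authors' code:

```python
def extract_explanation(edges_per_layer, threshold=0.9, top_k=5):
    """Select high-attention edges from a 3-hop relational digraph as an
    explanation subgraph, following the selection rule described above.

    edges_per_layer: list of 3 lists of (head, rel, tail, attention) tuples."""
    kept = []
    for layer, edges in enumerate(edges_per_layer):
        if layer < 2:   # first two layers: keep edges above the threshold
            kept.append([e for e in edges if e[3] > threshold])
        else:           # last layer: keep the top-k edges by attention weight
            kept.append(sorted(edges, key=lambda e: e[3], reverse=True)[:top_k])
    return kept
```

Chaining the kept edges layer by layer then yields either complete paths or a subgraph with multiple paths, as in Fig. 3.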
Here, five function nodes (three pathways and two biological processes) are shared by ATR and the known SL partners of ATM. These functions (especially "Activation of ATR in response to replication stress" and "negative regulation of DNA replication") are closely related to the SL mechanism mentioned above. Hence, they can serve as key evidence explaining why ATR and ATM have the SL relationship.
Conclusion and discussion
In this article, we proposed an interpretable deep learning pipeline named KR4SL to recommend novel SL partner genes for primary genes using KG reasoning. Using dynamic programming to explore the subgraph structures in the KG, our model efficiently extracts the relational digraphs for a primary gene and all its candidate partners from a heterogeneous graph composed of known SL gene pairs and a biomedical KG. Each path within a relational digraph consists of a sequence of relations (i.e. triples in the KG), representing a rule
Figure 3. Explanations drawn from the learned KR4SL, which unveil the reasoning paths and evidence for inferring SL interactions. In each graph, the primary gene (source node) and the predicted partner (target node at the last layer) are highlighted. (a) Explanation subgraph for BRCA1 and USP1 and (b) explanation subgraph for ATR and ATM.
Figure 2. Parameter analyses: (a) effect of the length of relational paths and (b) effect of the dimension of latent embeddings.
Table 1. The statistics of the SL graph and the knowledge graph.
Table 2. Performance comparison among various methods under the transductive setting. Each experiment is repeated five times to calculate the mean value and standard deviation. The best performance in each column is marked in bold.
Table 3. Performance comparison among various methods under the inductive setting. Each experiment is repeated five times to calculate the mean value and standard deviation. "-" means that the value is zero, indicating the method is not able to make predictions. The best performance in each column is marked in bold.
"year": 2023,
"sha1": "32738453dab6e67e2359736040dbf43f0583013b",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/bioinformatics/article-pdf/39/Supplement_1/i158/50741638/btad261.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f208925f84142091660556f9e3af8df6c1990920",
"s2fieldsofstudy": [
"Computer Science",
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16924903 | pes2o/s2orc | v3-fos-license | A NOVEL SHAPE BASED FEATURE EXTRACTION TECHNIQUE FOR DIAGNOSIS OF LUNG DISEASES USING EVOLUTIONARY APPROACH
Lung diseases are among the most common diseases affecting the human community worldwide. When left undiagnosed, they may lead to serious problems and may even be fatal. To assist the medical community, this study helps in detecting some of the lung diseases, specifically bronchitis and pneumonia, and in recognizing normal lung images. In this paper, feature extraction is done by the proposed shape-based methods, feature selection through the genetic algorithm, and the images are classified by classifiers such as MLP-NN, KNN, and Bayes net, whose performances are listed and compared. The shape features are extracted and selected from the input CT images using image processing techniques and fed to the classifiers for categorization. A total of 300 lung CT images were used, out of which 240 were used for training and 60 for testing. Experimental results show that MLP-NN has a classification accuracy of 86.75%, the KNN classifier 85.2%, and Bayes net 83.4%. The sensitivity, specificity, F-measure, and PPV values for the various classifiers are also computed. This shows that the MLP-NN outperforms all other classifiers.
INTRODUCTION
Lung diseases are among the major health problems in the world, referring to disorders that affect the lungs and prevent the body from getting sufficient oxygen. They can be caused by infections, workplace exposures, medications, and various other disorders. Some lung diseases are emphysema, asthma, chronic bronchitis, infections such as pneumonia, lung cancer, and sarcoidosis. Common causes across the types of lung disease include smoking, radon, asbestos, and air pollution. Lung disease presents different signs and symptoms, such as cough, pain while breathing, and trouble in breathing. Imaging techniques like X-rays and CT scans provide images of tissue types such as lung, bone, and soft tissue. A computed tomography (CT) scan, or computerized axial tomography, uses X-rays and a computer to create detailed images of the body's inner parts. For the diagnosis of lung diseases, computed tomography is used for investigation.
Bronchitis can be characterized by bronchial wall damage associated with a significant increase in airway diameter. A patient with symptomatic bronchitis has a chronic bacterial or fungal infection leading to inflammation of the airways. The manifestation of the disease is due to excessive mucus production causing impaired clearance. The symptoms of bronchitis are expectoration of abnormal mucus, cough, airflow obstruction, and respiratory tract infection. Pneumonia is inflammation of the parenchyma of the lung; Streptococcus pneumoniae is its most common cause. Organisms settle in small air sacs called alveoli, continue multiplying, and challenge the body's immune system. As white blood cells attack the infection, the sacs get filled with fluid and pus.
In this work, a computerized approach for classification of lung diseases such as bronchitis and pneumonia, together with normal lung images, using CT images is presented. The lung CT dataset was collected from the radiology department at Sri Manakula Vinayagar Medical College and Hospital, Madagadipet, Puducherry. The input images are lung CTs obtained from the Philips MX16 EVO, which uses exclusive technologies such as the EVO Eye algorithm and MAR technology, giving the MX16 EVO excellent image quality with reduced noise and delivering performance and productivity at the top of its class. The evaluation of the proposed automated diagnosis system for lung diseases is performed using a set of 300 lung CT images comprising bronchitis, pneumonia, and normal lung images. The original image is converted to a grayscale image. A preprocessing technique is applied to enhance image quality, where three filtering techniques are compared, namely the ordered, median, and Wiener filters. The PSNR results show that the median filter yields a higher PSNR than the other methods; therefore the median filter is applied for preprocessing. Feature extraction is done by the proposed multiscale filter-based technique, which extracts the filtered coefficient values. The extracted features are given as input for feature selection using the evolutionary algorithm technique, and classifiers, namely MLP-NN, KNN, and Bayes net, are used for classification of the images into three groups: bronchitis, pneumonia, and normal lung images. The block diagram of the proposed work is represented in Fig. 1.
The remainder of this paper is organized as follows. Section 2 describes the related work. Section 3 explains the preprocessing of images. Section 4 deals with the proposed feature extraction technique. Section 5 gives the details of feature selection using the evolutionary algorithm. Section 6 describes the classification of lung diseases using various classifiers. Section 7 explains the results and discussion. Section 8 gives the conclusion. Fig. 1 gives the block diagram of the proposed work.
RELATED WORK
During the recent period, there have been many studies on the diagnosis of lung diseases using several feature extraction and selection techniques. Model-based methods are based on learning the lung anatomy and developing a model of the lung. Based on this idea, Zrimec and Busayarat [1] first extracted the anatomical features and landmarks. Many modalities of diagnostic imaging (conventional X-ray, computed tomography, nuclear medicine, magnetic resonance imaging, ultrasound scans, etc.) are especially useful because they help to provide effective diagnoses in a noninvasive way. Besides, during the past several years there has been an important increase in the use of diagnostic medical imaging [2]. A computer-aided diagnosis system is very helpful for radiologists in detecting and diagnosing abnormalities earlier and faster [3]. Computer-aided diagnosis serves as a second opinion for radiologists before suggesting a biopsy test [4]. In the recent research literature, it is observed that principles of neural networks have been widely used for the detection of lung cancer in medical images [5]. The methodology developed by Xiaomin et al. [6] is divided into three stages. In the first stage, a 2D multi-scale filter is used. In stage 2, blob-shaped nodules and non-nodules are differentiated. In stage 3, the shape features of each region are extracted, and a classifier based on automated rules to reduce false positives is applied. The methodology developed by Sivakumar and Chandrasekar [7] aims to develop a lung nodule detection system by means of segmentation, using fuzzy clustering models and classification with SVM. This methodology uses three types of kernels (linear, polynomial, and radial basis function (RBF)) for the SVM. In recent years, artificial neural networks (ANNs), fuzzy logic, and genetic algorithms have been proposed as auxiliary tools in medicine. Fuzzy logic could increase the sensitivity and specificity of biomarkers for lung cancer [8, 9], whereas ANNs may play an important role in lung cancer, in morphologically differentiating malignant from benign cells [10] and in detection of pulmonary nodules from computed tomography chest images [11]. Lung nodule detection by ensemble classification, suggested by Kouzani et al. [12], achieved lung nodule detection through classification of nodule/non-nodule patterns. It was based on random forests, which are ensemble learners growing classification trees; each tree produces a classification decision, and an integrated output is calculated.
PREPROCESSING
Preprocessing is needed to improve the quality of the images by removing noise from them. Grayscale image conversion is the first step in preprocessing; it makes the feature extraction phase more reliable.
After the grayscale conversion, a filter is applied. The filters applicable to grayscale images are the median filter, the ordered filter, and the Wiener filter. Comparing the peak signal-to-noise ratio (PSNR) of the outputs of the various filters shows that the median filter achieves a higher value than the other filters. By this justification, and following the literature, the median filter is found to be the most appropriate preprocessing method to remove salt-and-pepper noise from the CT images.
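A small sketch of this comparison is given below, using SciPy's standard filters on a toy image; the stand-in data, mask size, noise level, and choice of rank for the ordered filter are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import order_filter, wiener

def psnr(reference, filtered, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(float) - filtered.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

img = np.random.randint(0, 256, (128, 128)).astype(float)  # stand-in for a CT slice
noisy = img.copy()
salt = np.random.rand(*img.shape) < 0.02                    # salt-and-pepper noise
noisy[salt] = 255

for name, out in [("median", median_filter(noisy, size=3)),
                  # rank-2 order statistic in a 3x3 window, one possible "ordered" filter
                  ("ordered", order_filter(noisy, np.ones((3, 3)), 2)),
                  ("wiener", wiener(noisy, (3, 3)))]:
    print(name, psnr(img, out))
```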
FEATURE EXTRACTION
Feature extraction recognizes and extracts salient features, a challenging task, in order to decrease the complexity of processing. It refers to deriving new features by combining the existing features. The purpose of feature extraction is to reduce the original data set by measuring certain features that distinguish one region of interest from another.
PROPOSED MULTISCALE FILTER BASED FEATURE EXTRACTION
Modifying the pixels in an image based on some function of a local neighborhood of the pixels is referred to as image filtering. A Gaussian filter removes high-frequency components from the image, retaining the low-frequency components; a larger σ removes more detail. Gaussian filtering has various properties, being commutative, associative, linear, and shift-invariant, and it commutes with differentiation. It is also possible to use second-order derivatives to extract features. A very popular second-order operator is the Laplacian. The Laplacian of a function $f(x, y)$, denoted by $\nabla^2 f(x, y)$, is defined by

$\nabla^2 f(x, y) = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}.$

Discrete difference approximations are used to estimate the derivatives and represent the Laplacian operator with a convolution mask.
Multiscale filters are filter banks created using Gaussian functions (any kernel of the form e raised to a negative quadratic is Gaussian). The goal here is to evaluate the capability of different filter banks based on Gaussian functions. They help to represent image features by giving the response of the image to a particular filter bank. In the proposed work, second-order Gaussian derivatives are computed, followed by Laplacian derivatives.
The second-order derivative of a Gaussian $G_\sigma$ along $x$ is given by

$\frac{\partial^2 G_\sigma}{\partial x^2}(x, y) = \frac{x^2 - \sigma^2}{\sigma^4}\, G_\sigma(x, y),$

and the Laplacian of Gaussian is given by

$\nabla^2 G_\sigma(x, y) = \frac{x^2 + y^2 - 2\sigma^2}{\sigma^4}\, G_\sigma(x, y),$

where $G_\sigma(x, y) = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/2\sigma^2}$. The proposed method is a combination of Gaussian second-order derivatives and the Laplacian. The second-order Gaussian derivatives (with aspect ratio equal to 0.25) are oriented at 0, 45, 90, and 135 degrees. All the filter banks contain 16 scales. The standard deviation used for the Gaussian-based filter banks is equal to a quarter of the filter-mask size.
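A minimal sketch of such an oriented second-derivative-of-Gaussian filter bank follows. Only four of the sixteen scales are shown, and the summary statistics taken as features (mean and standard deviation of each response) are an assumption, since the paper does not spell out how the filtered coefficients are condensed:

```python
import numpy as np
from scipy.signal import convolve2d

def oriented_gaussian_2nd_derivative(size, sigma, theta, aspect=0.25):
    """Second derivative of an anisotropic Gaussian, oriented at angle theta."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    # rotate coordinates, squash one axis by the aspect ratio
    u = x * np.cos(theta) + y * np.sin(theta)
    v = (-x * np.sin(theta) + y * np.cos(theta)) / aspect
    g = np.exp(-(u ** 2 + v ** 2) / (2 * sigma ** 2))
    return (u ** 2 - sigma ** 2) / sigma ** 4 * g    # d^2/du^2 of the Gaussian

def filter_bank_features(img, sizes=(9, 17, 25, 33)):
    feats = []
    for size in sizes:
        sigma = size / 4.0                           # quarter of the mask size
        for theta in np.deg2rad([0, 45, 90, 135]):
            k = oriented_gaussian_2nd_derivative(size, sigma, theta)
            resp = convolve2d(img, k, mode="same")
            feats += [resp.mean(), resp.std()]       # summary statistics as features
    return np.array(feats)

print(filter_bank_features(np.random.rand(64, 64)).shape)
```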
FEATURE SELECTION
Feature selection is the process of choosing a minimal set of applicable features, which improves the classification accuracy. Wrapper methods use the classifier as a black box, evaluating subsets of features by their predictive power. Insignificant features are removed from feature subsets by backward feature elimination, which eliminates the bottom-ranked feature at each step.
GENETIC ALGORITHM
The genetic algorithm (GA) is a population-based search method; it moves from one set of points (population) to another set of points in a single iteration, with likely improvement, using a set of control operators. Fitness improves in the course of evolution. A chromosome consists of a set of elements, called genes, which hold values of the optimization variables that provide a solution to a given problem. The individuals of the initial population are assessed through a fitness function, the optimization task being either minimization or maximization. Based on this fitness function, chromosomes are selected, and genetic operators like mutation and crossover are applied to the selected chromosomes. Offspring for the next generation are formed using the two main genetic operators, crossover and mutation. Crossover combines the features of two individuals to create two similar offspring by randomly selecting a point in the two selected parents' gene structures and exchanging the remaining segments. Mutation operates by randomly changing one or more components of a selected individual; this operator prevents any stagnation that might occur during the search process. Genetic algorithms have shown substantial improvement over a variety of random and local search methods. A GA initially starts with an unknown search space and biases the subsequent search into promising subspaces. The chromosomes evolve into better individuals until a global optimum solution is reached.
In the proposed genetic algorithm, the initial population size is set to 30, and individuals are combined via a crossover operator to produce offspring, using two-point crossover with a crossover rate of 0.7. The individuals in the population are then evaluated via a fitness function, and the process of crossover, evaluation, and selection is repeated for a predetermined number of generations or until a satisfactory solution has been found or the stopping condition fails. In the feature selection formulation of the genetic algorithm, individuals are composed of bit strings: a 1 in a bit position indicates that the corresponding feature should be selected, and a 0 indicates that it should not. The mutation rate is 0.01, and the stopping condition is an RMSE threshold of 0.001 or 500 generations. The fitness condition is based on RMSE, and the crossover operator is two-point. The top-ranking 30 features are selected and fed into the classifiers.
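The following sketch wires these stated parameters (population 30, two-point crossover at rate 0.7, bit-flip mutation at rate 0.01) into a generic bit-string GA. The selection scheme (two elites, parents drawn from the fitter half) and the toy fitness function are assumptions for illustration; in the study the fitness is an RMSE obtained from the classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_feature_selection(fitness, n_features, pop=30, cx_rate=0.7,
                         mut_rate=0.01, generations=500):
    """Binary-encoded GA: a 1 in bit i means feature i is selected.
    fitness(mask) should return an error to minimize (e.g. RMSE)."""
    P = rng.integers(0, 2, (pop, n_features))
    for _ in range(generations):
        err = np.array([fitness(ind) for ind in P])
        P = P[np.argsort(err)]                        # lower error = fitter
        children = []
        while len(children) < pop - 2:                # keep 2 elites
            a, b = P[rng.integers(0, pop // 2, 2)]    # parents from fitter half
            c1, c2 = a.copy(), b.copy()
            if rng.random() < cx_rate:                # two-point crossover
                i, j = sorted(rng.integers(0, n_features, 2))
                c1[i:j], c2[i:j] = b[i:j].copy(), a[i:j].copy()
            for c in (c1, c2):                        # bit-flip mutation
                flip = rng.random(n_features) < mut_rate
                c[flip] ^= 1
            children += [c1, c2]
        P = np.vstack([P[:2]] + children[:pop - 2])
    err = np.array([fitness(ind) for ind in P])
    return P[np.argmin(err)]                          # best mask found

# Toy fitness that simply prefers masks selecting about 30 features:
best = ga_feature_selection(lambda m: abs(int(m.sum()) - 30), 100, generations=50)
print(best.sum())
```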
CLASSIFIERS
A classifier assigns one class to each point of the input space. The input space is thus partitioned into disjoint subsets, called decision regions, each associated with a class.
MULTILAYER PERCEPTRON -NEURAL NETWORKS
A multilayer perceptron (MLP) is a feed-forward neural network with an input layer of source neurons, one or more hidden layers of computational neurons, and an output layer of computational neurons. The input signals are propagated in a forward direction on a layer-by-layer basis. A hidden layer "hides" its desired output, and each layer can contain from 10 to 1000 neurons. Neurons in the hidden layer cannot be observed through the input/output behaviour of the network. A training set of input patterns is presented to the network. The network computes its output pattern, and if there is an error, i.e. a difference between the actual and desired output patterns, the weights are adjusted to reduce this error. In a back-propagation neural network, the learning algorithm has two phases. First, a training input pattern is given to the network input layer; the network propagates the input pattern from layer to layer until the output pattern is generated by the output layer. If this pattern differs from the desired output, the error is propagated from the output layer back to the input layer, and the weights are modified as the error is propagated.
1) Initialisation: Initialise the weights and threshold levels of the network to uniformly distributed random numbers.

2) Activation: Activate the back-propagation neural network by applying the inputs $x_1(p), x_2(p), \ldots, x_n(p)$ and the desired outputs. Calculate the actual outputs of the neurons in the hidden layer:

$y_j(p) = \mathrm{sigmoid}\left( \sum_{i=1}^{n} x_i(p)\, w_{ij}(p) - \theta_j \right),$

where $n$ is the number of inputs of neuron $j$ in the hidden layer and sigmoid is the sigmoid activation function. Then calculate the actual outputs of the neurons in the output layer:

$y_k(p) = \mathrm{sigmoid}\left( \sum_{j=1}^{m} y_j(p)\, w_{jk}(p) - \theta_k \right),$

where $m$ is the number of inputs of neuron $k$ in the output layer.
3) Weight training: (a) Update the weights in the back-propagation network by propagating backward the errors associated with the output neurons; calculate the weight corrections and update the weights at the output neurons. (b) Calculate the error gradient and weight corrections, and update the weights at the hidden neurons.

4) Iteration: Increase iteration $p$ by one, go back to step 2, and repeat the process until the selected error criterion is satisfied.

In this work, the number of features equals the number of input neurons, the number of neurons in the hidden layer is one third of the number of neurons in the input layer, and the number of classes constitutes the number of neurons in the output layer. The sigmoidal activation function was used, with initial random weights between 0 and 1, and the bias value was set to zero. The output results in three class labels, namely bronchitis, pneumonia, and normal lung.
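A minimal sketch of this configuration using scikit-learn's MLPClassifier is shown below. The feature matrix is a random stand-in, and the solver is an assumption; the hidden-layer size of one third of the inputs and the logistic (sigmoidal) activation follow the description above:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(300, 30)          # stand-in for 30 GA-selected features per CT image
y = np.random.randint(0, 3, 300)     # 0 = bronchitis, 1 = pneumonia, 2 = normal

mlp = MLPClassifier(hidden_layer_sizes=(X.shape[1] // 3,),  # one third of the inputs
                    activation="logistic",                  # sigmoidal units
                    solver="sgd",                           # backprop via gradient descent
                    max_iter=2000, random_state=0)
scores = cross_val_score(mlp, X, y, cv=10)                  # tenfold cross-validation
print(scores.mean())
```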
BAYES NET
A Bayesian network $B = \langle N, A, \Theta \rangle$ is a directed acyclic graph (DAG) $\langle N, A \rangle$ with a conditional probability distribution for each node, collectively represented by $\Theta$. Each node $n \in N$ represents a domain variable, and each arc $a \in A$ between nodes represents a probabilistic dependency. The pair (G, CPD) encodes the joint distribution. The classifier used here combines the Bayes network with a decision-tree-based method. In general, a BN can be used to compute the conditional probability of one node given values assigned to the other nodes; hence, a BN can be used as a classifier that gives the posterior probability distribution.
RESULTS AND DISCUSSIONS
The input CT images are preprocessed, the features are extracted, feature selection is done by the evolutionary approach, and the selected features are fed into the classifiers. The classification accuracy is found for each classifier. The training and testing of the dataset are done with tenfold cross-validation.
True positive refers to correctly identified instances in a dataset. False positive refers to incorrectly identified instances in a dataset. True negative refers to correctly rejected instances in a dataset. False negative refers to incorrectly rejected instances in a dataset.
Classification accuracy refers to the proportion of accurately classified instances. The classification accuracies of the MLP-NN, KNN, and Bayes net classifiers are represented in Fig. 9.
Fig.9. Classification Accuracy
Sensitivity measures the proportion of actual positives which are identified as such. Specificity measures the proportion of negatives which are correctly identified as such. F-measure is a measure of a test's accuracy; it considers both the precision p and the recall r of the test to compute the score, where p is the number of correct results divided by the number of all returned results and r is the number of correct results divided by the number of results that should have been returned. Positive predictive value is the probability that subjects with a positive screening test truly have the disease. The results and discussion show the various performance measures obtained for the dataset.
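A small helper, under the definitions above, that computes these measures from raw TP/FP/TN/FN counts; the counts in the example call are placeholders, not results from this paper.

```python
# Performance measures derived from a binary confusion matrix.
def performance(tp, fp, tn, fn):
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)          # recall r
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)          # p, also the positive predictive value
    f_measure   = 2 * precision * sensitivity / (precision + sensitivity)
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, precision=precision,
                f_measure=f_measure)

print(performance(tp=80, fp=10, tn=85, fn=12))  # placeholder counts
```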
CONCLUSION
Lung diseases are among the major health issues worldwide. The research work "A novel shape based feature extraction technique for diagnosis of lung diseases using evolutionary approach" takes as input lung CT images of three datasets, namely bronchitis, pneumonia, and normal lung. Preprocessing to enhance the image is done with a median filter, the features are extracted by the proposed multiscale-filter-based technique, feature selection is evolved by the genetic algorithm, and the MLP-NN, KNN, and Bayes net classifiers are used to classify the dataset into the 3 classes. The results show that MLP-NN, KNN, and Bayes net achieve classification accuracies of 86.75%, 85.2%, and 83.4%, respectively; thus the MLP-NN classifier acts as the best classifier for classifying the lung diseases in this work. The proposed work can be enhanced by adding more feature extraction and selection techniques, and different classifiers can also be included. Hence this work assists doctors in the medical field in diagnosing lung diseases using soft computing and evolutionary computing techniques.
Fig. 1. Block diagram of the proposed work
K-NEAREST NEIGHBOR (KNN)
The nearest neighbor method was proposed by Fix and Hodges (1951). It uses a set of stored records, a distance metric to compute the distance between records, and a value k, the number of nearest neighbors to be retrieved. Distances are computed to classify an unknown record, and the class labels of its nearest neighbors determine the class label of the unknown record. Distance is measured in Euclidean space; classification is deferred until a new instance arrives and is done by comparing the feature vectors of the different points, and the target function may be discrete- or real-valued. The advantages are that it is simple, powerful, requires no training time, and has a nonparametric architecture. The KNN classifier works as follows: 1) Compute the distance between two points: in Cartesian coordinates, if a = (a_1, a_2, ..., a_n) and b = (b_1, b_2, ..., b_n) are two points in Euclidean n-space, then the distance from a to b is given by the Euclidean distance d(a, b) = sqrt(Σ_{i=1..n} (b_i − a_i)^2). 2) Find the class from the nearest neighbor list: take the k nearest neighbors and weigh each vote according to distance, with weight factor w = 1/d^2. 3) Choose the value of k: if k is too small, the classifier is sensitive to noise points; if k is too large, the neighborhood may include points from other classes. 4) Kd-tree construction algorithm: select the x or y dimension, partition the space into two, and repeat recursively in the two partitions as long as there are enough points. The 100 attributes and a class index of 125 are taken. The values of the columns are taken in the order 1 to 16, 17 to 32, 33 to 48, 49 to 64, 65 to 80, and 81 to 100, and the values of the class label are computed.
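A minimal sketch of the distance-weighted KNN procedure described above (Euclidean distance, the k nearest records, votes weighted by w = 1/d^2); the data layout and the small epsilon guard against zero distance are our own assumptions, not the authors' code.

```python
# Distance-weighted k-nearest-neighbor classification.
import numpy as np
from collections import defaultdict

def knn_predict(X_train, y_train, x, k=5):
    d = np.sqrt(((X_train - x) ** 2).sum(axis=1))   # Euclidean distances to all records
    nearest = np.argsort(d)[:k]                     # indices of the k nearest neighbors
    votes = defaultdict(float)
    for i in nearest:
        w = 1.0 / (d[i] ** 2 + 1e-12)               # weight factor w = 1/d^2
        votes[y_train[i]] += w
    return max(votes, key=votes.get)                # class with the largest weighted vote
```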
Fig. 10. Performance measures
The dataset of the three diseases is taken, and the classification of the diseases is represented in Table 1.
Table 1. PSNR value of the filter
The major advantage of the median filter is that it eliminates the effect of input noise values with exceptionally large magnitudes. The median filter formula is y(t) = median(x(t − T/2), x(t − T/2 + 1), …, x(t), …, x(t + T/2)), where T represents the size of the window. | 2015-12-07T17:26:25.782Z | 2014-07-01T00:00:00.000 | {
"year": 2014,
"sha1": "5a9428d8a12381293871dacb7b7f8742eb5331cc",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.21917/ijsc.2014.0115",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "c68ce1e53a6b83f6e822aecaf9c3d7f7f692a181",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
238857860 | pes2o/s2orc | v3-fos-license | Engineered Fibroblast Extracellular Vesicles Attenuate Pulmonary Inflammation and Fibrosis in Bleomycin-Induced Lung Injury
Pulmonary fibrosis is a progressive disease for which no curative treatment exists. We have previously engineered dermal fibroblasts to produce extracellular vesicles with tissue reparative properties dubbed activated specialized tissue effector extracellular vesicles (ASTEX). Here, we investigate the therapeutic utility of ASTEX in vitro and in a mouse model of bleomycin-induced lung injury. RNA sequencing demonstrates that ASTEX are enriched in micro-RNAs (miRs) cargo compared with EVs from untransduced dermal fibroblast EVs (DF-EVs). Treating primary macrophages with ASTEX reduced interleukin (IL)6 expression and increased IL10 expression compared with DF-EV-exposed macrophages. Furthermore, exposure of human lung fibroblasts or vascular endothelial cells to ASTEX reduced expression of smooth muscle actin, a hallmark of myofibroblast differentiation and endothelial-to-mesenchymal transition, respectively. In vivo, intratracheal administration of ASTEX in naïve healthy mice demonstrated a favorable safety profile with no changes in body weight, lung weight to body weight, fibrotic burden, or histological score 3 weeks postexposure. In an acute phase (short-term) bleomycin model of lung injury, ASTEX reduced lung weight to body weight, IL6 expression, and circulating monocytes. In a long-term setting, ASTEX improved survival and reduced fibrotic content in lung tissue. These results suggest potential immunomodulatory and antifibrotic properties of ASTEX in lung injury.
INTRODUCTION
Pulmonary fibrosis (PF) is a progressive lung disease characterized by alveolar epithelial damage and the accumulation of collagen in the pulmonary interstitium (Barratt et al., 2018). Globally, PF is on the rise with incidence and prevalence nearly doubling (Hutchinson et al., 2015;Strongman et al., 2018) with commensurate increases in resource use and cost (Diamantopoulos et al., 2018). Current strategies include antifibrotic compounds and tyrosine kinase inhibitors such as pirfenidone and nintedanib, respectively (Raghu et al., 2015). Despite these advances, options remain limited as current treatment strategies only function to decelerate disease progression.
Extracellular vesicles (EVs) are nanosized lipid bilayer particles secreted by nearly all cell types and function as a versatile mode of paracrine and endocrine signaling (Sahoo et al., 2021). EVs derived from therapeutic cell types are the principal signaling mediators of cell therapy in a variety of models (Marban, 2018b; Sahoo et al., 2021). Therapeutic EVs have the potential to repair tissue in ways conventional therapeutics cannot. This is due, in part, to the plethora of bioactive signals in their payload which affect multiple pathways (as opposed to a single target) (Marban, 2018a; Popowski et al., 2020). Our group has recently identified pathways involved in cell (and EV) potency in a therapeutic cell type, cardiosphere-derived cells (CDCs), a cardiac progenitor cell type with preclinically and clinically demonstrated tissue-reparative capacity (Smith et al., 2007; Makkar et al., 2012; Gallet et al., 2016). This finding led to engineering these therapeutic properties in an otherwise nontherapeutic cell type, skin fibroblasts. Constitutive activation of beta-catenin and gata4 in skin fibroblasts turned them into producers of therapeutic EVs.
We call these engineered fibroblasts and the EVs they produce activated specialized tissue effector cells (ASTECs) or extracellular vesicles (ASTEX). Preclinical data suggest that ASTECs and ASTEX recapitulate the therapeutic effects of CDCs and CDC-EVs. Previous work demonstrates that ASTECs and ASTEX trigger functional improvement in injured mouse hearts (Ibrahim et al., 2019). Furthermore, in a mouse model of Duchenne muscular dystrophy (DMD), ASTEX trigger exercise improvement, attenuation of fibrosis, and preservation of skeletal muscle myofibers (Ibrahim et al., 2019). Further investigation implicated enrichment of miR-92a in ASTEX and downstream signaling of the bone morphogenic protein among the pro-tissue-reparative pathways. Bone morphogenic protein (BMP) signaling plays an active role in lung tissue recovery. For instance, upregulation of BMP signaling through derepression of noggin was protective in a mouse model of bleomycin-induced lung injury (De Langhe et al., 2015). Furthermore, single-cell analysis of the injured lung epithelium identified BMP repression, and its rescue attenuated epithelial metaplasia and fibrosis (Cassandras et al., 2020). Finally, epigenetic analysis of lung tissue from pulmonary fibrosis patients identified significant methylation of the miR-17-92 cluster (of which miR-92a is a member) (Dakhlallah et al., 2013). Among the targets of miR-92a is WNT1-inducible-signaling pathway protein 1 (WISP1), which signals to the profibrotic transforming growth factor-beta (TGFβ) pathway in pulmonary fibrosis (Berschneider et al., 2014). Given this mechanistic rationale, we investigated the therapeutic utility of ASTEX in vitro and in a bleomycin mouse model of lung injury.
Engineered Fibroblasts Secrete Higher Quantities of EVs With Wider Size Distribution and Distinct Marker Expression From Primary Fibroblast EVs
Following transduction of dermal fibroblasts with beta-catenin and gata4, ASTECs exhibited features of cell immortalization including a faster growth rate, extended population doubling, and increased telomerase (Ibrahim et al., 2019). Nanosight tracking analysis demonstrated that ASTEX had a broader diversity of EV sizes compared with EVs from primary dermal fibroblasts (DF-EVs) (Figure 1A). Despite this, the average size difference between DF-EVs and ASTEX was nominal (Figure 1B). By quantity, ASTECs secrete nearly double the amount of EVs compared with primary fibroblasts (Figure 1C). This was expected as immortalization has been shown to increase EV secretion (Yu et al., 2006). Analysis of EV preparations confirmed the presence of conserved EV markers including CD9, CD81, HSP90, and TSG101 and the absence of the contaminating endoplasmic reticulum marker calnexin (Figure 1D).
The ASTEX Proteome Is Enriched in Transcription Factors and Deficient in FGF Signaling and Complement Activators Compared to Fibroblast EVs
To compare the protein content of DF-EVs and ASTEX, we performed proteomics on both EV populations. Using FunRich (Pathan et al., 2015), a software tool for functional enrichment and network analysis, we identified clear differences in protein content. ASTEX contained a wider diversity of proteins (Figure 2A). ASTEX had lower levels of extracellular matrix and complement factors and were enriched in proteins with enzymatic activity and chaperone proteins (Figure 2B). Among the most bioactive protein cargo in EVs are transcription factors as they can affect multiple gene targets in host cells. Analysis of transcription factors in both groups showed that ASTEX contained a wider diversity of transcription factors activated by beta-catenin and gata4 including MAFK (Katsuoka et al., 2000; Cao et al., 2019) and RUNX1 (Su and Gudas, 2008; Ugarte et al., 2015), which are involved in antioxidant response (Nguyen et al., 2000) and regulation of hematopoiesis (Bowers et al., 2010; Figure 2C). ASTEX were also deficient in homeobox proteins, most notably ALX1, which are downregulated by beta-catenin (Ettensohn et al., 2003). Analysis of the predicted biological activity of EV proteins demonstrated suppression of pathways downregulated by beta-catenin signaling, including vascular endothelial growth factor receptor (VEGFR) (Zhang et al., 2001) and fibroblast growth factor receptor (FGFR) signaling (Wang et al., 2017), in ASTEX compared with DF-EVs. The most prominent biological activity in ASTEX was relevant to mRNA transcription and translation (Figure 2D). Beta-catenin activation is associated with broad changes in the transcriptome with a corresponding increase in protein translation (Valenta et al., 2012). Therefore, ASTEX had reduced complement fixation capacity (and immune activation), less extracellular matrix burden, and increased catalytic activity and mRNA transcription. These results suggest fundamental proteomic changes in the beta-catenin/gata4-engineered fibroblasts and, by extension, their EVs.
Figure 1 legend (D): Western blot of conserved EV markers including tetraspanins (CD9 and CD81), heat-shock protein 90 (HSP90), tumor susceptibility gene 101 (TSG101), and absence of the ER protein calnexin (CANX). Statistical comparison was done using independent Student's t-test with 95% CI; *p < 0.05, **p < 0.01, ***p < 0.001.
ASTEX Exert Immunomodulatory and Anti-Fibrotic Effects in vitro
Macrophages including tissue-resident and monocyte-derived (circulating) macrophages play distinct roles in lung tissue homeostasis and disease (Grabiec and Hussell, 2016). Tissueresident macrophages are depleted in injury and are replaced by circulating monocytes that differentiate into macrophages with a spectrum of phenotypes (Morales-Nebreda et al., 2015). We decided to focus on macrophages derived from circulating monocytes as these drive pulmonary inflammation and fibrosis and persist in pulmonary tissue throughout the disease (Misharin et al., 2017). Therefore, we sought to assess the effect of ASTEX on bone marrow-derived macrophages (BMDMs). Exposure of BMDMs to ASTEX reduced expression of the proinflammatory cytokine IL6 and increased expression of the anti-inflammatory cytokine IL10 (Figures 4A,B). Indeed, reduction in IL6 alone is therapeutic in pulmonary fibrosis (Le et al., 2014). Another driver of pulmonary fibrosis is fibroblast activation followed by myofibroblast transdifferentiation (Kendall and Feghali-Bostwick, 2014). A primary signal for myofibroblast transdifferentiation is transforming growth factor β (TGFβ) (Chen et al., 2009). Treatment of human primary lung fibroblasts (HLF) with recombinant TGFβ induced expression of the myofibroblast marker smooth muscle actin (SMA) (Zhang et al., 1996). We decided to use SMA as a functional readout as it is necessary for focal adhesion maturation of myofibroblasts. Studies of smooth muscle actin inhibition demonstrate decreased myofibroblast development (Hinz et al., 2003;Sousa et al., 2007). Cotreatment of HLF with TGFβ and ASTEX reduces SMA expression (Figures 4C,D and Supplementary Figure 1A). Endothelial cells are another cell type that transdifferentiates into myofibroblasts upon exposure to insult; a process called endothelial to mesenchymal transition (EndMT). This pathogenic cell type is also characterized by SMA expression. To investigate the effect of ASTEX on EndMT, we exposed human umbilical vein endothelial cells (HUVECs) to IL1b and TGFβ. HUVECs exposed to these cytokines upregulated the expression of SMA (Figures 4E,F). This effect was attenuated in cells cotreated with ASTEX. Morphologically, HUVECs treated with IL1b and TGFβ had a hypertrophied tube-shaped structure typical of myofibroblasts which were attenuated in cells treated with ASTEX (Supplementary Figure 1B). Taken together, these findings demonstrate that ASTEX inhibit IL6, a major driver of inflammation in lung inflammation. Furthermore, ASTEX attenuate fibroblast activation and EndMT as seen by inhibition of smooth muscle actin. By targeting markers of inflammation and fibrosis, these findings suggest that ASTEX may exert therapeutic bioactivity in pulmonary injury.
ASTEX Exhibit a Favorable Safety Profile in Healthy Animals
Before evaluating the therapeutic efficacy of ASTEX in lung injury, we sought to evaluate the safety of intratracheal instillation of ASTEX into healthy C57BL/6 mice (Figure 5A). At 3 weeks post-ASTEX instillation, animals exposed to a range of doses showed no weight loss (Figure 5B). Furthermore, no changes were seen in lung weight to body weight ratio, indicating lack of inflammatory reaction (e.g., pulmonary edema; Figure 5C). Animals treated with ASTEX developed no fibrotic burden in the lung compared with naïve animals (Figure 5D). Finally, histological analysis including hematoxylin and eosin (H&E) and Masson's trichrome staining showed no remarkable signs of inflammation or fibrosis as evaluated by the Ashcroft score in a blinded manner (Figures 5E,F). These findings demonstrate a highly favorable safety profile of ASTEX.
ASTEX Exert Immunomodulatory and Antifibrotic Effects in a Mouse Model of Bleomycin-Induced Lung Injury
Having shown that ASTEX contain immunomodulatory and antifibrotic miRs, exert salutary effects in three relevant cell types in vitro, and exert no apparent toxic effects in vivo, we then investigated the therapeutic bioactivity of ASTEX in bleomycin-induced lung injury. Intratracheal (IT) instillation of bleomycin is a well-established model of lung injury. The hyperinflammatory period lasts until day 5 postexposure, with an active profibrotic and injury-resolution phase that continues for 3 weeks thereafter (Kolb et al., 2020). We sought to investigate the effect of ASTEX on both short-term inflammation and long-term fibrotic development. In the first experiment, animals were given intravenous injections (retro-orbital administration) of ASTEX, DF-EV, or vehicle at days 2 and 5 postbleomycin exposure and sacrificed on day 7 for analysis (Figure 6A). None of the treatments affected the weight loss seen with bleomycin exposure (Supplementary Figure 2). However, animals receiving ASTEX had attenuated lung inflammation as demonstrated by a lower lung weight to body weight ratio (Figure 6B). Complete blood count (CBC) measurements from blood smears at baseline (day 2) and endpoint (day 7) showed no significant difference in overall leukocyte count (though all bleomycin-treated groups trended toward lower leukocyte count; Supplementary Figure 3A). Similarly, no differences were observed in neutrophil count (Supplementary Figure 3B). However, ASTEX-treated animals had significantly lower levels of circulating monocytes compared with vehicle-treated animals at day 7, comparable with uninjured animals (Supplementary Figure 3C). As observed in the macrophage in vitro assay (Figure 4A), IL6 expression was reduced in lung tissue of ASTEX-treated animals compared with DF-EV- or vehicle-treated animals (Figure 6C). Inflammatory cytokine array analysis of lung tissue lysates identified no statistically significant differences between the groups. However, a trend toward lower proinflammatory cytokine levels including eotaxin 2, IL1a, and IL6 was observed in ASTEX-treated lung tissue compared with vehicle and DF-EV (Supplementary Figure 4). The lack of statistical significance may be due, in part, to the patchy nature of bleomycin injury in lung tissue and the fact that in some animals one lobe is preferentially affected over the other, giving rise to high variability. It may also be due to the later time point of lung collection (day 8), when inflammation is resolved and the differences are more subtle. In a second study, animals received a single IT administration of ASTEX or vehicle at the beginning of the fibrotic phase and were followed for 28 days postbleomycin exposure (Figure 6D). Animals receiving ASTEX had significantly better survival (Figure 6E) and lower fibrotic burden as shown by hydroxyproline content in lung tissue (Figure 6F). Taken together, this suggests an immunomodulatory and antifibrotic capacity of ASTEX in a bleomycin model of lung injury.
DISCUSSION
Pulmonary fibrosis is a progressive and deadly disease for which no curative strategy exists. EV therapy represents an advance in regenerative medicine as it is a cell-free therapeutic. ASTEX were developed based on previous investigation into the mechanism by which cells and their EVs exert therapeutic potency. Augmenting therapeutically inert fibroblasts with beta-catenin and gata4 alters the proteomic and RNA content, and the therapeutic utility, of the EVs. Here, we demonstrate in vitro and in vivo evidence of the therapeutic bioactivity of ASTEX in models relevant to lung injury. As discussed earlier, the miR-92a-BMP signaling axis has been implicated in the mechanism of action of ASTEX. BMP activation has been shown to drive the salutary effects observed in cardiac, skeletal, and pulmonary injury models where ASTEX were tested. BMP activation in macrophages inhibits inflammatory activation through, in part, IL6 inhibition (Xia et al., 2011; Talati et al., 2014; Wei et al., 2018). It has been proposed that the onset of familial pulmonary arterial hypertension involves a two-hit event comprising a BMP receptor loss-of-function mutation and dysregulation of IL6 (Hagen et al., 2007). These findings corroborate our observation of the inhibitory effect of ASTEX on IL6 in macrophages and SMA expression in TGFβ-activated fibroblasts. Of course, miR-92a operates in combination with other miRs and small RNAs, which might function additively or synergistically.
While the present findings and mechanistic rationale suggest a therapeutic utility of ASTEX in PF, this study remains preliminary. The proposed mechanism for ASTEX function (via miR-92a transmission and BMP activation) has yet to be validated. Furthermore, the full anti-inflammatory effect of ASTEX in the inflammatory response remains to be fully elucidated. Another limitation of this study is the limited scope of the investigation. For instance, we have not investigated the effect of ASTEX on resident (bronchioalveolar) macrophages. These findings will also need to be validated in vivo to ensure the generalizability of in vitro findings. Finally, while we have investigated the anti-inflammatory and antifibrotic properties of ASTEX, the effect of this therapeutic on the alveolar progenitor niche has not been investigated. Therefore, future studies will explore the current findings in-depth and broaden the scope of the effect of ASTEX on pulmonary tissue and, more specifically, therapeutically relevant pathways. Increasing understanding of how EVs like ASTEX function to repair pulmonary tissue offers an opportunity to develop cell-free therapeutics for this disease.
Animal Subjects
All procedures related to animals have been conducted under approved Institutional Animal Care and Use Committee protocols.
Activated Specialized Tissue Effector Extracellular Vesicles Preparation
To produce EVs from ASTECs (ASTEX), cells are expanded in a tri-layer flask (Thermo Fisher Scientific, Waltham, MA, United States) in complete media. When cells reach confluence, complete media are removed, and cells are washed with Iscove's modified Dulbecco's medium (IMDM) to remove FBS contaminants. Cells are then conditioned in IMDM for 15 days. Prior to ASTEX isolation, conditioned media are cleared of cellular debris through a 0.45-µm vacuum filter. ASTEX are isolated by ultrafiltration using a 100-kDa centrifugal filter.
Bleomycin Model
Fourteen-week-old male C57BL/6 mice were exposed to bleomycin (MilliporeSigma, Burlington, MA, United States) at 0.7 U/kg (retro-orbitally, for the short-term study) or 1.5 U/kg (intratracheally, for the long-term study). Animals were sacrificed at day 7 or 28 for the short- and long-term studies, respectively.
Lung Fibroblast Model
Primary adult human lung fibroblasts [American Type Culture Collection (ATCC)] were maintained in fibroblast basal medium (ATCC) and supplemented with fibroblast growth kit-low serum (ATCC). Cells were expanded for three passages before performing TGFβ exposure assay. Briefly, cells were exposed to 5 ng/ml of recombinant TGFβ (Thermo Fisher Scientific). Cells were also cotreated with ASTEX at a cell to ASTEX ratio of 1:100. Cells were incubated for 48 h prior to Western blot or flow cytometry analysis.
Human Umbilical Vein Endothelial Cell Endothelial to Mesenchymal Transition Assay
Human umbilical vein endothelial cells (HUVECs: ATCC) were maintained in endothelial cell medium supplemented with endothelial growth kit (ATCC). Cells were expanded for three passages before exposure to 5 ng/ml of recombinant TGFβ (Thermo Fisher Scientific) and 1 ng/ml of IL1b (Thermo Fisher Scientific) for 5 days. Cells were treated at day 0 with vehicle or ASTEX at a cell to ASTEX ratio of 1:100.
Macrophage Isolation Assay and Model
Macrophages were prepared by isolating bone marrow cells from 3-month-old female Wistar Kyoto rats. Monocytes were purified and differentiated into BMDM by culturing with recombinant M-CSF (20 ng/ml; Life Technologies, Carlsbad, CA, United States). Whole bone marrow cells were collected via aspiration with ice-cold phosphate-buffered saline (PBS), filtered using a 70-µm cell strainer, and centrifuged at 400 × g for 10 min. Red blood cells were lysed out using 10 ml ACK buffer (Gibco, Waltham, MA, United States) for 30 s followed by quenching with complete media (IMDM + 10% FBS + 20 ng/ml M-CSF). Cells were centrifuged, resuspended in complete media, and cultured for 5 days at 37 °C with 20% O2 and 5% CO2. Cells were then seeded into six-well plates at 6-8.0 × 10^5 cells/well and incubated. Fresh complete media were exchanged on day 5, and cells were monitored for confluence. EVs were administered once BMDM cultures reached ∼75% confluence. Serum concentration was reduced to 1% during assays to allow EV uptake. BMDMs were cultured with EVs overnight before RNA was isolated for analysis.
Sample Preparation
(1) Take 100 µg of protein per sample in an ultrafiltration tube. Transfer the solution into a Microcon YM-10 device (Millipore). The device was centrifuged at 12,000 rpm at 4 °C for 10 min. Subsequently, 200 µl of 50 mM ammonium bicarbonate was added to the concentrate, followed by centrifugation; this was repeated once.
(2) After reduction with 10 mM DTT at 56 °C for 1 h and alkylation with 20 mM IAA at room temperature in the dark for 1 h, the device was centrifuged at 12,000 rpm at 4 °C for 10 min and washed once with 50 mM ammonium bicarbonate. (3) Add 100 µl of 50 mM ammonium bicarbonate and free trypsin to the protein solution at a ratio of 1:50, and incubate the solution at 37 °C overnight. (4) Finally, the device was centrifuged at 12,000 rpm at 4 °C for 10 min; 100 µl of 50 mM ammonium bicarbonate was added into the device and centrifuged; this was repeated once. (5) Lyophilize the extracted peptides to near dryness.
Resuspend the peptides in 20 µl of 0.1% formic acid before liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) analysis.
Total flow rate: 250 nl/min. LC linear gradient: from 2 to 8% buffer B in 3 min, from 8 to 20% buffer B in 56 min, from 20 to 40% buffer B in 37 min, then from 40 to 90% buffer B in 4 min.
Mass Spectrometry
The full scan was performed between 300 and 1,650 m/z at a resolution of 60,000 at 200 m/z; the automatic gain control target for the full scan was set to 3e6. The MS/MS scan was operated in Top 20 mode using the following settings: resolution 15,000 at 200 m/z; automatic gain control target 1e5; maximum injection time 19 ms; normalized collision energy at 28%; isolation window of 1.4 Th; charge state exclusion: unassigned, 1, >6; dynamic exclusion 30 s.
Data Analysis
Six raw MS files were analyzed and searched against the human protein database, based on the species of the samples, using MaxQuant (1.6.2.14). The parameters were set as follows: the protein modifications were carbamidomethylation (C) (fixed) and oxidation (M) (variable); the enzyme specificity was set to trypsin; the maximum missed cleavages were set to 2; the precursor ion mass tolerance was set to 10 ppm; and the MS/MS tolerance was 0.6 Da.
EVs were lysed using RIPA buffer followed by acetone precipitation and reduction using 1 mM tris(2-carboxyethyl)phosphine (TCEP). Samples were then alkylated with 5 mM iodoacetamide and digested with ∼1:40 trypsin/Lys-C (Promega, Madison, WI, United States), desalted, and dried completely by SpeedVac. Liquid chromatography MS/MS was done using a Dionex Ultimate 3000 Nano-LC coupled with an Orbitrap Q Exactive™ mass spectrometer (Thermo Fisher Scientific, USA) with an ESI nanospray source. Mobile phase A comprised 0.1% aqueous formic acid and mobile phase B 0.1% formic acid in acetonitrile. Peptides were then loaded onto the analytical column [100 µm × 10 cm in-house-made column packed with a reversed-phase ReproSil-Pur C18-AQ resin (3 µm, 120 Å, Dr. Maisch GmbH, Germany)] at a flow rate of 600 nl/min using a linear AB gradient composed of 6-30% B for 38 min, 30-42% B for 10 min, 42-90% B for 6 min, and 90% B elution for 6 min. The temperature was set to 40 °C for both columns. The nanosource capillary temperature was set to 270 °C and the spray voltage to 2.2 kV. The Orbitrap Q Exactive™ operated in a data-dependent mode, where the five most intense precursors were selected for subsequent fragmentation. Resolution for the precursor scan (m/z 300-1,800) was set to 70,000 at m/z 400 with a target value of 1 × 10^6 ions. The MS/MS scans were also acquired in the orbitrap with a normalized collision energy setting of 40 for HCD.
Raw MS files were then analyzed and cross-referenced against the human protein database, based on the species of the samples, using MaxQuant (1.5.6.5). The parameters were set as follows: the protein modifications were carbamidomethylation (C) (fixed) and oxidation (M) (variable); the enzyme specificity was set to trypsin; the maximum missed cleavages were set to 2; the precursor ion mass tolerance was set to 10 ppm; and the MS/MS tolerance was 0.6 Da. Only high-confidence identified peptides were chosen for downstream protein identification analysis. The common secreted proteins identified were classified using Panther (Protein Analysis Through Evolutionary Relationships), and FunRich (Functional Enrichment Analysis Tool) was used to explore relevant biological processes, transcription factors, protein classification, molecular function, and pathways.
Flow Cytometry
Cells were harvested and counted (2 × 10^5 cells per condition), washed with 1% bovine serum albumin (BSA) in 1× PBS, and stained with the appropriate antibody (BD Pharmingen, San Diego, CA, United States) for 1 h at 4 °C. The cells were then washed again and resuspended in 1% BSA in 1× PBS. The BD Cytofix/Cytoperm™ kit was used for cell permeabilization before staining. Flow cytometry was performed using the Sony SA3800 Spectral Cell Analyzer.
Exosome Preparation and Collection
Exosomes were harvested from transduced fibroblasts at passage 16 under hypoxia conditioning. Briefly, cells were grown in 636 cm² CellStack chambers (Corning, Corning, NY, United States) until confluence. The flask was then washed three times with phenol-free IMDM and conditioned in the same medium for 15 days under 20% O2. Conditioned media were collected, pooled, filtered through a 0.45 µm filter to remove apoptotic bodies and cell debris, and frozen at −80 °C for later use. Extracellular vesicles were isolated using centrifuge-based ultrafiltration with a molecular weight cutoff of 100 kDa (MilliporeSigma). Isolated EVs were quantified using a Malvern Nanosight NS300 instrument (Malvern Instruments, Malvern, UK) with the following acquisition parameters: camera level of 15, detection level less than or equal to 5, number of videos taken 4, and video length of 30 s.
Hydroxyproline Assay
Hydroxyproline content in lung tissue was quantified chemically using a Hydroxyproline Assay Kit (Sigma-Aldrich) per manufacturer's instructions.
Mouse Inflammatory Cytokine Array
Expression of inflammatory markers in lung tissue lysates from animals exposed to bleomycin (including those also treated with ASTEX, DF-EVs, or vehicle) or sham control (day 8 postexposure) was measured using a Quantibody® Mouse Inflammation Array 1 Kit (RayBiotech, Peachtree Corners, GA, USA) per the manufacturer's instructions.
Western Blot
Membrane transfer was performed using the Turbo Transfer System (Bio-Rad, Hercules, CA, United States) after gel electrophoresis. The following antibody staining was then applied and detected by SuperSignal™ West Pico PLUS Chemiluminescent Substrate (Thermo Fisher Scientific). To probe for SMA, an anti-human/mouse/rat antibody (MAB1420-SP; R&D Systems, Minneapolis, MN, United States) was used.
Exosome RNA Sequencing
Library Preparation and Sequencing: miRNAs The miRNA sequencing library was prepared using the QIAseq™ miRNA Library Kit (Qiagen). Total RNA was used as the starting material. A preadenylated DNA adapter was ligated to the 3′ ends of miRNAs, followed by ligation of an RNA adapter to the 5′ end. A reverse-transcription primer containing an integrated Unique Molecular Index (UMI) was used to convert the 3′/5′-ligated miRNAs into cDNA. After cDNA cleanup, indexed sequencing libraries were generated via sample indexing during library amplification, followed by library cleanup. Libraries were sequenced on a NextSeq 500 (Illumina, San Diego, CA, United States) with a 1 × 75 bp read length and an average sequencing depth of ∼10 M reads/sample.
Data Analysis
The demultiplexed raw reads were uploaded to the GeneGlobe Data Analysis Center (Qiagen) at https://www.qiagen.com/us/resources/geneglobe/ for quality control, alignment, and expression quantification. Briefly, 3′ adapters and low-quality bases were first trimmed off the reads using cutadapt (version 1.13) with default settings; then reads with less than 16 bp of insert sequence or with less than 10 bp of UMI sequence were discarded (Martin, 2011). The remaining reads were collapsed to UMI counts and aligned to the miRBase (release v21) mature and hairpin databases sequentially using Bowtie v1.2 (Langmead et al., 2009). The UMI counts of each miRNA molecule were counted, and the expression of miRNAs was normalized based on total UMI counts for each sample.
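For illustration only, a sketch of the final normalization step described above (per-miRNA UMI counts scaled by each sample's total UMI count, expressed here as counts per million); this is our own toy rendering, not the GeneGlobe pipeline itself.

```python
# Normalize deduplicated UMI counts by per-sample library size.
import numpy as np

def normalize_umi(counts):
    """counts: (n_miRNAs, n_samples) array of collapsed UMI counts."""
    totals = counts.sum(axis=0, keepdims=True)   # total UMIs per sample
    return counts / totals * 1e6                 # UMI counts per million
```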
Total RNA Sequencing
Library construction was performed using the SMARTer® Stranded Total RNA-Seq Kit (Takara Bio USA, Inc., Mountain View, CA, United States). Briefly, total RNA samples were assessed for concentration using a Qubit fluorometer (Thermo Fisher Scientific) and for quality using the 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, United States). Total RNA was converted to cDNA, and adapters for Illumina sequencing (with specific barcodes) were added through PCR using only a limited number of cycles. The PCR products were purified, and then ribosomal cDNA was depleted. The cDNA fragments were further amplified with primers universal to all libraries. Lastly, the PCR products were purified once more to yield the final library. The concentration of the amplified library was measured with a Qubit fluorometer, and an aliquot of the library was resolved on the Bioanalyzer. Sample libraries were multiplexed and sequenced on a NextSeq 500 platform (Illumina) using 75 bp single-end sequencing. On average, 50 million reads were generated from each sample.
Data Analysis
Raw reads obtained from RNA-Seq were aligned to the transcriptome using STAR (version 2.5.0) (Dobin et al., 2013).
Statistical Analysis
Statistical comparisons between two groups were made using an independent two-tailed Student's t-test with a 95% CI. Comparisons between three or more groups were done using one-way ANOVA with Tukey's or Dunnett's test for multiple comparisons. Calculations were done using GraphPad Prism 6.0 software.
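A sketch of the same comparisons in SciPy (the authors used GraphPad Prism; the data arrays and group sizes here are placeholders):

```python
# Two-group t-test and three-group one-way ANOVA followed by Tukey's HSD.
from scipy import stats

a, b, c = [12, 14, 11, 15], [9, 8, 10, 9], [13, 12, 14, 15]  # placeholder data

# Two groups: independent two-tailed Student's t-test
t, p = stats.ttest_ind(a, b)

# Three or more groups: one-way ANOVA, then Tukey's HSD for pairwise comparisons
f, p_anova = stats.f_oneway(a, b, c)
tukey = stats.tukey_hsd(a, b, c)        # requires SciPy >= 1.8
print(p, p_anova, tukey.pvalue)
```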
DATA AVAILABILITY STATEMENT
The data presented in this study are deposited in Exocarta under accession ID ExoCarta_312.
ETHICS STATEMENT
All animal-related procedures were conducted under approved Institutional Animal Care and Use Committee (IACUC) protocols at Cedars-Sinai Medical Center.
AUTHOR CONTRIBUTIONS
AI and AC conceived the idea, performed the experiments, and wrote the manuscript. AA, CL, KP, AMo, and KJ-U performed the experiments and analyzed the data. AMa analyzed the data. EM conceived the idea and analyzed the data. AGI conceived the idea, performed the experiments, analyzed the data, and wrote the manuscript. All authors contributed to the article and approved the submitted version.
FUNDING
This work was supported by NIH R01124074 and NIH R01HL142579.
Supplementary Figure 4 | Inflammatory cytokine array of mouse lung tissue lysates. Lysates were collected from lung tissue of bleomycin and sham-treated groups 8 days postbleomycin exposure (n = 4 mice per group). Statistical analysis was done using one-way ANOVA with Tukey's multiple comparisons test. | 2021-10-15T05:16:11.512Z | 2021-09-23T00:00:00.000 | {
"year": 2021,
"sha1": "2b031bb87465afda6383ff49bc4ee22f19bee8c7",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcell.2021.733158/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2b031bb87465afda6383ff49bc4ee22f19bee8c7",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
11689188 | pes2o/s2orc | v3-fos-license | PET/CT for staging and monitoring non small cell lung cancer
Abstract Correct staging of non-small cell lung cancer (NSCLC) is vital for appropriate management. Initial staging is usually performed with computerised tomography (CT), but increasingly functional imaging using integrated positron emission tomography and CT (PET/CT) is being used to provide more accurate staging, guide biopsies, assess response to therapy and identify recurrent disease.
Introduction
Lung cancer is the third commonest cancer in the European Union with 386,300 new cases diagnosed in 2006. It is the most common cause of cancer death with an estimated 334,800 deaths in 2006. Computed tomography (CT) is usually used for the initial staging, and functional imaging using 2-[18F]fluorodeoxyglucose positron emission tomography (FDG-PET) is used to identify both mediastinal nodal involvement and distant metastases and also to assess response to therapy. The integration of PET and CT in PET/CT provides accurate anatomic localisation and improved staging over PET alone.
Staging
The majority of lung cancer (80%) is non-small cell lung cancer (NSCLC), and the treatment and prognosis depend on the anatomic extent of the disease at presentation and are defined using the TNM system. The International Association for the Study of Lung Cancer (IASLC) has recommended changes to the TNM staging, which will be incorporated into the 2009 edition [1]. In the revised classification, T1 tumours will be divided into T1a (≤2 cm) and T1b (>2-3 cm). T2 tumours less than 5 cm will be T2a and those between 5 and 7 cm, T2b. T2 tumours greater than 7 cm will be reclassified as T3 tumours. Tumour nodules in the same lobe (previously T4) will now be T3, and malignant pleural effusions and nodules and pericardial effusions will become M1 (previously T4). Malignant parenchymal nodules in an ipsilateral, but separate, lobe will be classified as T4 (previously M1), with nodules in the contralateral lung being M1a and distant metastases M1b. The nodal classification is unchanged.
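For illustration, the size-based part of the revised T classification described above can be encoded as a simple lookup; this toy function (ours, not from the IASLC documents) ignores the non-size criteria such as invasion, satellite nodules, and effusions.

```python
# Size-only T descriptor per the revised classification described above.
def t_descriptor(size_cm):
    if size_cm <= 2:
        return "T1a"
    if size_cm <= 3:
        return "T1b"
    if size_cm <= 5:
        return "T2a"
    if size_cm <= 7:
        return "T2b"
    return "T3"  # tumours > 7 cm are reclassified as T3

assert t_descriptor(4.2) == "T2a"
```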
Primary tumour (T status)
Twenty to thirty percent of patients present with a solitary pulmonary nodule (T1), and differentiation of benign from malignant disease may be difficult. If a standardised uptake value (SUV) of greater than 2.5 is used to indicate malignancy, the sensitivity, specificity and accuracy of FDG-PET for differentiating between benign and malignant nodules are 94%, 71% and 86%, with a positive predictive value (PPV) of 90% and negative predictive value (NPV) of 85% [2]. False positives will occur in tuberculosis, aspergillomas, rheumatoid nodules, Wegener's granulomatosis and amyloidosis. False negatives occur in small early stage adenocarcinomas and squamous cell carcinomas, bronchoalveolar cell carcinoma and some carcinoid tumours.
The exact extent of tumours, in terms of both chest wall and mediastinal invasion, is difficult to assess using CT. Gross invasion can readily be identified, but CT is inaccurate in differentiating contiguity from subtle invasion.
PET/CT has no real advantage over CT for chest wall or mediastinal invasion but is excellent at differentiating tumour from adjacent collapsed lung ( Fig. 1) and is helpful in planning radiotherapy portals and reducing toxicity by the sparing of normal tissue [3] . Overall PET/CT is more accurate than contrast enhanced CT for the T staging of tumours (86% versus 79%) but not significantly so [4] .
Nodal status (N)
The presence of regional node metastases significantly alters prognosis in NSCLC. When disease has progressed outside the ipsilateral hemithorax the outcome is poor, with less than 3% of patients with N3 disease surviving 5 years. In patients with N2 disease, the size, number and nodal levels involved influence survival. CT provides good anatomic definition but with significant over and under staging, with a reported accuracy of 62-88% (Fig. 2).
FDG-PET has been shown to be more accurate than CT for staging mediastinal nodes in multiple studies and is cost effective. Detection is dependent not on size but on metabolic activity, and an SUV max of greater than 2.5 is often used as the cut-off value, although increasing the SUV max to 5.3 gives an accuracy for malignancy of greater than 92% [5]. In a meta-analysis by Gould et al. [6], PET was more sensitive and specific than CT for large nodes (85% and 90% versus 61% and 79%, respectively), but with small nodes the specificity of PET decreased. PET is better than CT for N0, N2 and N3 disease but not N1 disease, and the accuracy for staging nodal uptake appears to be less for smokers compared to non-smokers, particularly for N2 disease (72% versus 96%) [7]. The results comparing CT and PET from recent studies are shown in Table 1 [4, 8-12].
False negatives are also a problem. Lee et al. [13] in a study of patients who were clinically stage 1 found the incidence of unsuspected N2 disease was 6.5% in T1 tumours and 8.7% in T2 tumours. The risk factors for unsuspected N2 disease included a high SUV in the primary tumour, adenocarcinoma histology and central rather than peripheral tumours. Meyers et al. [14] performed a cost analysis study in patients with stage 1 disease, and found 5.6% had unsuspected N2 disease, and that mediastinoscopy was not cost effective in this group. Many groups would therefore suggest that mediastinoscopy is not required in patients who are N0 on PET/CT. In patients who are N1 on PET/CT the incidence of unsuspected N2 disease in much higher (23.5%) and mediastinoscopy may be appropriate for this group.
Mediastinoscopy has been considered the gold standard for pre-operative staging but provides limited access to posterior mediastinal node groups. Endoscopic techniques, either via the oesophagus for posterior mediastinal groups (levels 7-9) or via the bronchus for access to superior mediastinal, paratracheal, subcarinal and hilar nodes (levels 1, 2, 4, 7, 10 and 11), are increasingly being used. Eloubeidi et al. [15] found endoscopic ultrasound (EUS)-guided fine needle aspiration (FNA)-positive N2 and N3 nodes in 37% of patients with negative mediastinoscopy, and it was more accurate than CT or PET (98% vs. 41.5% vs. 40%). Similarly, endobronchial ultrasound (EBUS) with guided biopsy is more accurate than CT or PET (98% vs. 60.8% vs. 72.5%) [16].
Metastatic disease (M status)
The commonest sites for metastatic disease in NSCLC at presentation are the brain, bone, liver and adrenals (in decreasing order). FDG-PET is a whole body imaging system that will identify unsuspected metastases, and PET/CT is significantly better than CT or PET alone for extrathoracic metastases [17], although it is limited in assessing brain metastases.
Metastases to the adrenals are not uncommon and FDG-PET can differentiate incidental adrenal adenomas, which have low uptake from adrenal metastases that exhibit increased uptake with a reported accuracy of 99%. PET/CT is more specific than PET alone for adrenal masses [18] .
Metastases to the central nervous system are common and detected in 18% of patients with M1 disease at presentation. FDG-PET may not be very useful as the brain always shows increased metabolic activity, and other isotopes such as [11C]methionine may be more sensitive.
FDG-PET is more sensitive and accurate than isotope bone scans for bone metastases (91% and 94% versus 75% and 85%), with a very high PPV of 98% if the findings are concordant on PET/CT although this decreases to 61% if the CT is negative [19] (Fig. 3).
Prognosis
There are numerous reports on the prognostic value of FDG-PET using the SUV of the primary tumour although the values used are very variable. Cerfolio et al. [20] found tumours with an SUV max of greater than 10 were more likely to be poorly differentiated and of advanced stage at diagnosis and the SUV was the best predictor of disease free survival. Goodgame et al. [21] using a median SUV of 5.5, found a recurrence rate of 14% in those with an SUV less than 5.5 compared to a 37% recurrence rate if the SUV was greater than 5.5.
Response to treatment
Patients with stage 1 disease are the most suitable candidates for surgery, with survival rates of 57-85%. Patients with stage IIIA NSCLC who have bulky N2 disease have a very poor prognosis; however, in patients with unexpected N2 disease the prognosis is much better, with 20-30% 5-year survival if complete surgical resection can be performed. This is the rationale for attempted down staging of bulky N2 disease with induction chemotherapy prior to resection. Lorent et al. [22] found that patients who either had a partial response or stable disease in mediastinal nodes after induction chemotherapy had a 5-year survival of 35% following surgery compared to 9.4% in the non-responders.
FDG-PET has been shown to be better than CT in identifying responders in other tumours and re-staging with PET/CT appears to offer some advantages over CT or mediastinoscopy.
De Leyn [23] compared PET/CT with CT and mediastinoscopy for pre-surgical re-staging and found PET/CT was more accurate than CT (83% versus 60%) and mediastinoscopy (83% versus 60%) with a low sensitivity for mediastinoscopy (29%) as 60% of patients had incomplete mediastinoscopy due to fibrosis and adhesions. Although FDG-PET appears to be good at assessing the response in both the primary and metastases, it is less accurate for the response in the mediastinal nodes with a 20% false negative and 25% false positive rate and FDG positive nodes should undergo biopsy prior to definitive surgery.
FDG-PET can be used to stratify patients into prognostic groups following therapy. Eschmann et al. [24] found the change in SUV in the primary tumour following chemotherapy was predictive for long-term survival with a decrease of more than 60% predicting a much longer survival compared to those whose SUV decreased by less than 60%.
Recurrent disease
Approximately 30-50% of patients who undergo surgery will develop recurrent disease, with most (90%) occurring in the first 2 years. The recurrence rate is highest for T4 or N2 tumours (70%). Recurrences may be loco-regional (20-40%), distant (66-74%) or both (9-14%) (Fig. 4). FDG-PET is both sensitive and specific for recurrent disease with a high NPV and will alter treatment plans in up to 63% of patients.
Conclusion
FDG-PET/CT is now an established method for staging lung cancer and provides additional information compared to conventional imaging. Positive nodes may require biopsy either via mediastinoscopy or increasingly, endoscopic guided biopsy. FDG-PET also provides prognostic information on both initial and recurrent tumours. | 2016-05-04T20:20:58.661Z | 2008-10-04T00:00:00.000 | {
"year": 2008,
"sha1": "9a7b19a1d12d1591f0e29c725383a14c9595edb1",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2582498/pdf/ci089006.pdf",
"oa_status": "BRONZE",
"pdf_src": "Anansi",
"pdf_hash": "9a7b19a1d12d1591f0e29c725383a14c9595edb1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266026303 | pes2o/s2orc | v3-fos-license | Effects of recombinant OVGP1 protein on in vitro bovine embryo development
Previously, our group demonstrated that recombinant porcine oviductin (pOVGP1) binds to the zona pellucida (ZP) of in vitro-matured (IVM) porcine oocytes with a positive effect on in vitro fertilization (IVF). The fact that pOVGP1 was detected inside IVM oocytes suggested that this protein had a biological role during embryo development. The aim of this study was to evaluate the effects of pOVGP1 on bovine in vitro embryo development. We applied 10 or 50 µg/ml of pOVGP1 during IVF, embryonic in vitro culture (IVC), or both, to evaluate cleavage and embryo development. Blastocyst quality was assessed by analyzing the expression of important developmental genes and the survival rates after vitrification/warming. pOVGP1 was detected in the ZP, perivitelline space, and plasma membrane of blastocysts. No significant differences (P > 0.05) were found in cleavage or blastocyst yield when 10 or 50 µg/ml of pOVGP1 was used during IVF or IVC. However, when 50 µg/ml pOVGP1 was used during IVF + IVC, the number of blastocysts obtained was half that obtained with the control and 10 µg/ml pOVGP1 groups. The survival rates after vitrification/warming of expanded blastocysts cultured with pOVGP1 showed no significant differences between groups (P > 0.05). The use of pOVGP1 during IVF, IVC, or both, increased the relative abundance of mRNA of DSC2, ATF4, AQP3, and DNMT3A, the marker-genes of embryo quality. In conclusion, the use of pOVGP1 during bovine embryo in vitro culture does not affect embryo developmental rates but produces embryos of better quality in terms of the relative abundance of specific genes.
The oviduct is the female organ in which fertilization and early embryo development take place. During the first 3-4 days of development, the embryo remains in the oviduct, where the conceptus is exposed to the oviductal fluid (OF). A 16-cell stage bovine embryo then enters the uterus. Although the embryo in its early stages of development does not need contact with the maternal tract to regulate its own cell division and differentiation, the reproductive tract and its secretions have important roles in providing the optimal environment for embryo development [1].
Preimplantation embryos are able to develop in vitro and to produce normal offspring after embryo transfer. However, the development of preimplantation mammalian embryos in vitro is compromised compared with those grown in vivo. Studies comparing bovine oocyte maturation, fertilization, and embryo culture in vivo versus in vitro have shown that the post-fertilization environment determines blastocyst quality, measured in terms of cryotolerance [2,3] and relative transcript abundance [4]. Deprivation of some in vivo-produced maternal factors could be responsible for impaired in vitro development and decreased viability [4], as well as some pathological alterations associated with in vitro-produced embryos [5,6]. Therefore, oviductal secretions are thought to be key factors for improving embryo quality during in vitro embryo production in the context of assisted reproductive techniques.
The OF is composed of simple and complex carbohydrates, ions, lipids, phospholipids, and proteins [7]. Individual oviductal secretions have an effect on oocyte and sperm function [8], and the possible role of proteins such as oviductin (presently termed OVGP1), osteopontin, glycodelins, and lactoferrin in gamete interaction has been described [9]. Proteomic analysis of the OF and gene expression analysis of oviductal cells pointed to OVGP1 and its codifying gene as the main protein/gene in the oviduct in quantitative terms, whose abundance increases at estrus [10,11]. OVGP1 has particularly been detected in the perivitelline space of fertilized eggs, where it is endocytosed by the blastomeres of 2-cell, 4-cell, and 8-cell embryos, as well as blastocysts, in hamster [12,13]. This suggests that in hamster, OVGP1 could play a role in supporting the growth and differentiation of the embryo. Indeed, the positive effect of recombinant OVGP1 on embryo development has been described in mammalian species such as goat [14] and domestic cat [15]. Furthermore, antibodies against the C-terminal peptide of rabbit OVGP1 inhibit early embryo development in mouse [16].
It is clear that a study of the oviductal environment, especially the effect of OVGP1, is crucial to further our understanding of the underlying regulatory mechanisms controlling embryo development. Most functional studies in ovine, porcine, and bovine species have used native OVGP1 purified from OF, or complete OF [17][18][19][20]. However, OF undergoes continuous renewal in the oviduct in vivo as the reproductive tract modifies its activity to provide the optimal environment for the gametes and embryos at each step of the reproductive process [1]. As a result, the amount of proteins is modified by the oviduct according to each embryo stage or the different phases of the estrous cycle, potentially hindering a successful transfer to in vitro conditions. Moreover, as native OVGP1 is difficult to obtain in sufficient amounts, recombinant OVGP1 represents a reasonable alternative.
Previously, we have shown that recombinant porcine OVGP1 (pOVGP1) has a positive effect on porcine fertilization efficiency when it is added to the IVM or IVF medium, and we attribute an order-specific role in modulating sperm binding, penetration, and fertilization to the C-terminus of OVGP1 [21]. This allows heterologous OVGP1 proteins (porcine and bovine) sharing the same conservative regions at their C-terminal regions (Supplementary Fig. 1: online only) [21] to exert the same positive effect on IVF [22]. However, to our knowledge, the effects of recombinant OVGP1 on bovine embryo development have not yet been studied. Based on the amino acid sequence, porcine (Q28990) and bovine (Q28042) OVGP1 are 78% homologous and, more importantly, both proteins share the same C-terminal regions, which are involved in the specific ZP-binding patterns that remodel the ZP matrix and allow protein endocytosis [21].
In vitro bovine embryo production is already well established and is used on a global scale for the cheap production of calves. The bovine species is an economically important livestock animal, and investigations into cow reproduction have recently generated renewed interest due to its importance as a model animal in transgenesis and stem cell research. Therefore, the main objective of the present study was to analyze in vitro the effects of purified recombinant pOVGP1 supplementation on bovine embryo development and embryo quality. For that purpose, we used two different concentrations of pOVGP1 (10 and 50 µg/ml) during the period when, in vivo, the embryo is still in the oviduct: IVF (from day 0 to day 1) (D0-D1), early IVC (from day 1 to day 3.5) (D1-D3.5), or both (from day 0 to day 3.5) (D0-D3.5).
Materials and Methods
Unless otherwise stated, all chemicals were purchased from Sigma-Aldrich Química S.A. (Madrid, Spain).
Recombinant OVGP1 protein production
Recombinant pOVGP1 was expressed in the Human Embryonic Kidney (HEK 293T) cell line and purified as described previously [21]. HEK 293T cells were grown in Corning® roller bottles (Sigma-Aldrich, St. Louis, MO, USA) (37°C, 5% CO2, and 95% humidity) for 48-72 h to 80-90% confluence in DMEM (Dulbecco's Modified Eagle Medium, Thermo Fisher Scientific, Rockford, IL, USA) supplemented with 10% fetal bovine serum and 4 mM glutamine (both from Gibco BRL-Life Technologies, Gaithersburg, MD, USA). Cells were transfected using polyethylenimine (PEI, Sigma-Aldrich) to express pOVGP1. Conditioned media were harvested 72 h after transfection, adjusted with buffer to a final composition of 10 mM imidazole, 20 mM HEPES, and 150 mM NaCl, pH 7.8, and incubated with Ni-NTA Superflow (Qiagen, Hilden, Germany) overnight at 4°C. The nickel beads were washed and the proteins were eluted with a 500 mM imidazole, 20 mM HEPES, 150 mM NaCl, pH 7.8 buffer. A buffer exchange was made by dialysis to obtain the protein in a 150 mM Tris-HCl, 200 mM NaCl, 10% glycerol buffer. The authenticity of pOVGP1 was verified as described previously [21].
Immunoblotting
Purified protein was separated by SDS-PAGE and transferred to PVDF membranes, which were probed with a Penta-His mouse monoclonal antibody (Qiagen) and visualized by chemiluminescence. Proteins were also stained with SimplyBlue™ SafeStain (Invitrogen, Carlsbad, CA) after SDS-PAGE.
Confocal microscopy
Bovine presumptive zygotes and embryos supplemented with pOVGP1 during IVF, IVC, or both processes, at two different concentrations of pOVGP1 (10 and 50 µg/ml), were analyzed by confocal microscopy to detect the presence of the protein.
After treatment, presumptive zygotes (D1) and embryos (D3.5 and D9) were washed and fixed with 2% paraformaldehyde (Electron Microscopy Sciences, Hatfield, PA) in PBS. Half of the D3.5 embryos, and all of the D1 presumptive zygotes and D9 embryos, were permeabilized using 1% Triton™ X-100 in PBS for 5 min (Sigma-Aldrich) to enable detection of pOVGP1. The other half of the D3.5 embryos were left unpermeabilized in order to check whether pOVGP1 had bound to the blastomere membranes. D3.5 embryos were stained for 15 min in 1% Hoechst in PBS. Permeabilized and non-permeabilized presumptive zygotes and embryos were stained with Penta-His mouse monoclonal antibody diluted 1:100 in PBS. Stained embryos were mounted on a microscope slide. Samples were analyzed with a DM IRE2 confocal microscope (True Confocal Scanner TCS-SP2, Leica Microsystems, Barcelona, Spain).
Oocyte collection and IVM
Immature cumulus-oocyte complexes (COCs) were obtained by aspirating follicles (2-8 mm) from ovaries of heifers collected at a commercial abattoir. COCs were matured for 24 h in 500 µl of TCM-199 supplemented with 10% (v/v) fetal calf serum (FCS) and 10 ng/ml epidermal growth factor in a four-well dish, in groups of 50 COCs per well at 38.5°C, under an atmosphere of 5% CO2 in air with maximum humidity.
Sperm preparation and IVF
Frozen semen from a single Asturian Valley bull was thawed at 37°C in a water bath for 1 min and centrifuged for 5 min at 290 × g through a gradient of 1 ml of 40% and 1 ml of 80% BoviPure, according to the manufacturer's specifications (Nidacon Laboratories AB, Gothenburg, Sweden). The sperm pellet was isolated and washed in 3 ml of BoviWash by centrifugation at 290 × g for 5 min. The pellet was resuspended in the remaining 300 µl of BoviWash. Sperm concentration was determined and adjusted to a final concentration of 1 × 10^6 sperm/ml for IVF. Gametes were co-incubated for 18-22 h in droplets of 100 µl of fertilization medium (Tyrode's medium with 25 mM bicarbonate, 22 mM Na lactate, 1 mM Na pyruvate, and 6 mg/ml fatty acid-free bovine serum albumin (BSA), supplemented with 10 mg/ml heparin sodium salt; Calbiochem, San Diego, CA, USA), in groups of 50 COCs per droplet, under an atmosphere of 5% CO2 in air with maximum humidity at 38.5°C. Depending on the experimental group (Fig. 1), the fertilization medium was supplemented with 10 or 50 µg/ml of recombinant pOVGP1.
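Adjusting the sperm suspension to the working concentration is a simple C1·V1 = C2·V2 dilution. A minimal Python sketch of that arithmetic is given below; the stock concentration in the example is hypothetical and not taken from the study.

```python
def stock_volume(c_stock, c_target, v_final):
    """Volume of sperm stock needed to reach the target concentration,
    from C1 x V1 = C2 x V2 (units cancel, so any consistent pair works)."""
    return c_target * v_final / c_stock

# Hypothetical count: stock at 120 x 10^6 sperm/ml, target 1 x 10^6
# sperm/ml in a 100-ul fertilization droplet.
v = stock_volume(120e6, 1e6, 100.0)  # ul of stock per droplet
print(f"{v:.2f} ul of stock + {100.0 - v:.2f} ul of medium")
```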
Assessment of embryo development and quality
Embryo development: Cleavage rate was recorded at Day 2 (48 h p.i.) and cumulative blastocyst yield was evaluated at Days 7, 8, and 9 p.i.
Embryo quality assessed in terms of cryotolerance: The ability of the produced embryos to survive after vitrification was used as an indicator of embryo quality. BSA was used as a control, as this implies a delay in embryo development compared with embryos cultured with FCS, resulting in a lower proportion of blastocysts on D7 and an increase on D8 (Tables 1 to 3). For that reason, D7 and D8 expanded blastocysts were vitrified following the two-step protocol described previously [2,24], using a Cryoloop device (Hampton Research, Aliso Viejo, CA, USA). Briefly, the blastocysts were first exposed to Holding Medium (HM; TCM-199 supplemented with 10% (v/v) FCS) with 7.5% ethylene glycol and 7.5% dimethyl sulfoxide (DMSO) for 3 min. In the second step, the embryos were exposed to HM with 16.5% ethylene glycol, 16.5% DMSO, and 0.5 M sucrose for 20-25 sec, directly transferred to the cryoloop, and plunged into liquid nitrogen. The blastocysts were then warmed in two steps, being first exposed to HM with 0.25 M sucrose for 5 min, followed by incubation in HM with 0.15 M sucrose for 5 min. Finally, the blastocysts were left in HM for 5 min before being placed in 25-µl droplets of SOF with 5% FCS. The definition of embryo survival was based on re-expansion of the blastocoel, its maintenance for 24, 48, and 72 h, and the ability to hatch from the ZP.
Embryo quality assessed in terms of gene expression pattern: As another indicator of embryo quality, genes related to glucose metabolism, epigenetics, cell-to-cell communication, endoplasmic reticulum homeostasis, and aquaporins were studied. Gene expression was analyzed in three pools of 10 expanded blastocysts recovered from D7-D8 for each experimental group.
Poly(A) RNA was extracted using a Dynabeads® mRNA DIRECT™ Micro Kit (Ambion®, Thermo Fisher Scientific, Oslo, Norway) following the manufacturer's instructions with minor modifications [25]. After a 10-min incubation in lysis buffer with Dynabeads, the poly(A) RNA attached to the Dynabeads was extracted with a magnet and washed twice with washing buffer A and washing buffer B. The RNA was then eluted with Tris-HCl. Immediately after extraction, the reverse transcription (RT) reaction was carried out following the manufacturer's instructions (Epicentre Technologies, Madison, WI, USA) using poly(T) primers, random primers, and MMLV High Performance Reverse Transcriptase in a total volume of 40 µl, to prime the RT reaction and produce cDNA. The tubes were heated at 70°C for 5 min to denature the secondary RNA structure, and the RT mix was then completed with the addition of 50 units of reverse transcriptase. The reactions were incubated at 25°C for 10 min to favor the annealing of random primers, followed by 37°C for 60 min to allow the RT of RNA, and finally 85°C for 5 min to denature the enzyme.
All primers were designed using Primer-BLAST software (www.ncbi.nlm.nih.gov/tools/primersblast/) to span exon-exon boundaries when possible. All qPCR reactions were carried out in duplicate on a Rotor-Gene™ 6000 Real Time Cycler (Corbett Research, Sydney, Australia) by adding a 2 µl aliquot of each sample to the PCR mix (GoTaq® qPCR Master Mix, Promega, Madison, WI, USA) containing the specific primers selected to amplify activating transcription factor 4 (ATF4), DNA-damage-inducible transcript 3 (DDIT3), DNA (cytosine-5-)-methyltransferase 3 alpha (DNMT3A), lysine (K)-specific demethylase 1A (KDM1A), desmocollin 2 (DSC2), gap junction protein alpha 1 (GJA1, formerly CX43), insulin-like growth factor 2 receptor (IGF2R), aquaporin 3 (AQP3), and solute carrier family 2 (facilitated glucose transporter) member 1 (SLC2A1, formerly GLUT1). Primer sequences and the approximate sizes of the amplified fragments of all transcripts are shown in Supplementary Table 1 (online only). Cycling conditions were 94°C for 3 min followed by 35 cycles of 94°C for 15 sec, 56°C for 30 sec, 72°C for 10 sec, and 10 sec of fluorescence acquisition. Each pair of primers was tested to achieve efficiencies close to 1, and the comparative cycle threshold (CT) method was used to quantify expression levels, as described by Schmittgen et al. [26]. To avoid primer dimer artifacts, fluorescence was acquired in each cycle at a temperature higher than the melting temperature of the primer dimers (specific for each product, 80 to 86°C). The threshold cycle, or the cycle during the log-linear phase of the reaction at which fluorescence increased above background, was determined for each sample. The ΔCT value was determined by subtracting the endogenous control CT value (an average of H2AZ and ACTB) for each sample from each gene CT value of the sample. Calculation of ΔΔCT involved using the highest sample ΔCT value (i.e., the sample with the lowest target expression) as a constant to subtract from all other ΔCT sample values. Fold-changes in the relative gene expression of the target were determined using the equation 2^-ΔΔCT.
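The comparative CT calculation described above reduces to a few lines of arithmetic. The following Python sketch (not part of the original study; the CT values are made up) shows the normalization to the mean of the two housekeeping genes and the calibration on the lowest-expression sample:

```python
import numpy as np

def fold_changes(ct_target, ct_h2az, ct_actb):
    """Relative expression by the comparative CT (2^-ddCT) method.
    Each argument is one CT value per sample (duplicate wells already
    averaged). The endogenous control is the average CT of H2AZ and
    ACTB, and the calibrator is the sample with the highest dCT,
    i.e. the lowest target expression, as described in the text."""
    ct_target = np.asarray(ct_target, dtype=float)
    ct_control = (np.asarray(ct_h2az, float) + np.asarray(ct_actb, float)) / 2.0
    d_ct = ct_target - ct_control        # dCT per sample
    dd_ct = d_ct - d_ct.max()            # calibrate on lowest-expression sample
    return 2.0 ** (-dd_ct)               # fold change; calibrator -> 1.0

# Hypothetical CTs for three pools of blastocysts:
print(fold_changes([24.1, 23.2, 25.0], [18.0, 18.1, 18.2], [17.5, 17.6, 17.4]))
```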
Experimental design
Based on previous studies [14,15], two different concentrations of pOVGP1 were used in vitro: 10 and 50 µg/ml. The developmental capacity and quality of the blastocysts obtained were assessed after supplementation with pOVGP1 at three different steps: during IVF (D0-D1), early IVC (D1-D3.5), or during IVF and early IVC (D0-D3.5) (Fig. 1). In all experiments, embryo development was assessed at D2 and D7-D9. D7 and D8 embryos were used for gene expression analysis. Three replicates were carried out, after which a further four replicates were carried out using only the group supplemented with pOVGP1 during early IVC (D1-D3.5). The embryos obtained at D7 and D8 were vitrified/warmed to evaluate their survival rate.
Bovine was chosen as the species in which to test recombinant porcine OVGP1 because its embryo production system is better optimized than that of the porcine species. In addition, based on the amino acid sequence, porcine (Q28990) and bovine (Q28042) OVGP1 are 78% homologous and, more importantly, both proteins share the same C-terminal regions (Supplementary Fig. 1), which are involved in the specific ZP-binding patterns that remodel the ZP matrix and allow protein endocytosis [21]. These data suggest that pOVGP1 would be able to bind to the bovine ZP and then be endocytosed.
Statistical analysis
All statistical analyses were carried out using the SigmaStat software package (Jandel Scientific, San Rafael, CA, USA). Cleavage rate, blastocyst yield, embryo survival after vitrification, and relative mRNA abundance were analyzed using one-way ANOVA, followed by Tukey's post-hoc test. Values were considered significantly different when P was lower than 0.05.
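For readers reproducing this analysis outside SigmaStat, a minimal sketch of the same one-way ANOVA plus Tukey post-hoc workflow in Python is shown below; the blastocyst rates are invented for illustration.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical blastocyst rates (%) for three replicates per group.
control    = np.array([25.1, 27.3, 24.8])
ivf_ivc_10 = np.array([26.0, 28.9, 26.7])
ivf_ivc_50 = np.array([13.0, 13.6, 13.3])

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(control, ivf_ivc_10, ivf_ivc_50)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

# Tukey's post-hoc test, used when the ANOVA is significant (P < 0.05).
values = np.concatenate([control, ivf_ivc_10, ivf_ivc_50])
groups = ["control"] * 3 + ["IVF+IVC 10"] * 3 + ["IVF+IVC 50"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```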
Recombinant pOVGP1 binds to the ZP of presumptive bovine zygotes and early embryos
As expected, bright immunofluorescence was detected through the entire thickness of the ZP when bovine mature oocytes were incubated in IVF medium containing purified pOVGP1 (Fig. 2A), displaying the highest intensity at 50 µg/ml (Fig. 2B). In contrast, the fluorescent ZP signal on D9 blastocysts was only detected when the IVC medium was supplemented with 50 µg/ml of pOVGP1 (Fig. 3). This observation confirms that OVGP1-ZP binding is reversible [22], but the protein remains attached to the ZP when IVC is supplemented with 50 µg/ml of pOVGP1, even though the medium lacks the recombinant protein from Day 3.5 post-insemination.
Bovine D3.5 embryos cultured in 50 µg/ml of pOVGP1 showed specific labeling in the perivitelline space and plasma membrane between blastomeres when they were fixed and unpermeabilized. When embryos were permeabilized, a strong signal was detected within the blastomere cytoplasm (Fig. 4).
The effects of pOVGP1 on in vitro embryo development
Supplementation with pOVGP1 during IVF did not have any effect on the cleavage rate or blastocyst yield at Days 7, 8, or 9. In addition, no dose-dependent effect was observed. The cleavage rate ranged from 77.89 to 82.45%, while the blastocyst rate at D9 ranged from 24.51 to 26.08% (Table 1; P > 0.05). When pOVGP1 was added during early IVC (D1-D3.5), the supplementation had no effect on the cleavage rate or blastocyst yield (Table 2). The cleavage and blastocyst rates were similar in all groups, ranging from 82.45 to 84.01% and from 21.32 to 29.53%, respectively (P > 0.05). These results were confirmed when four more replicates were carried out to evaluate the ability of the blastocysts to survive vitrification (data not included).
Supplementation with pOVGP1 during both fertilization and culture was detrimental to embryo development at the higher dose. Thus, although the cleavage rate was similar in the control, IVF+IVC 10, and IVF+IVC 50 groups, the blastocyst rate at D7, D8, and D9 was much lower in the IVF+IVC 50 group than in the control and IVF+IVC 10 groups (13.28 ± 0.34 vs. 25.72 ± 4.05 and 27.21 ± 1.73% at D9, respectively; P < 0.05) (Table 3).
pOVGP1 during early IVC does not affect embryo survival after vitrification
No significant differences were observed in survival or hatching rates when D7 or D8 expanded blastocysts were vitrified, so the results for both days are presented together in Table 4. (Table footnote: N = total number of presumptive zygotes placed in culture; different letters (a, b) in the same column indicate differences between groups (one-way ANOVA, P < 0.05).)
pOVGP1 affects specific genes related to embryo quality
Supplementation with pOVGP1 during IVF significantly increased the expression of AQP3, at both 10 and 50 µg/ml, compared with the control (P < 0.05). ATF4, a gene involved in endoplasmic reticulum homeostasis, was upregulated in the IVF 50 group (P < 0.05), while the effect in the IVF 10 group showed only borderline significance (P = 0.054) compared with the control. In the cell-to-cell communication category, GJA1 was downregulated in the IVF 10 group and upregulated in the IVF 50 group compared with the control (P < 0.05) (Fig. 5).
As with IVF supplementation, the use of pOVGP1 during IVC produced similar differences in the gene expression of AQP3. In this case, ATF4 was upregulated at both pOVGP1 concentrations compared with the control (P < 0.05). In addition, during IVC supplementation, DSC2, which belongs to the cell-to-cell communication category, was upregulated in both groups supplemented with pOVGP1 compared with the control (P < 0.05) (Fig. 6).
With regard to AQP3, the use of pOVGP1 during IVF+IVC produced a gene expression pattern similar to that observed after individual supplementation during IVF or IVC. DSC2 showed the same distribution as that observed after supplementation during IVC, suggesting that the strongest effect on this gene occurs when pOVGP1 is used during early embryo culture. In addition, GJA1 showed expression similar to the IVF supplementations, being downregulated in the IVF+IVC 10 group compared with the control but showing no differences in the IVF+IVC 50 group. ATF4 was also upregulated at both doses of pOVGP1 compared with the control group, as occurred with supplementation during IVC. Furthermore, one gene related to epigenetics, DNMT3A, was upregulated in the IVF+IVC 50 group compared with the other groups (P < 0.05; Fig. 7).
Discussion
It has been shown that co-culturing embryos with oviductal cells, or the in vivo culture of embryos in the oviducts of recipient animals, increases the success of development and mimics the pattern of transcript abundance of some genes observed in in vivo derived embryos [27,28]. Therefore, it seems reasonable to include secreted oviductal components in the in vitro culture medium. Many studies have tested the effect of OF on early development [19,20,24]. However, in such studies, the complete OF is usually recovered by aspiration from whole oviducts, so that all the components are mixed in the same fluid, even though a gradient has been observed in the biosynthetic activity of the functional oviductal segments [29]. In addition, there are also variations in the components of the OF during the estrous cycle [30]. Thus, the components of OF should be added (i) gradually to the in vitro system, to be able to study their involvement and influence in early development, and (ii) at the appropriate time. In the present study, pOVGP1 was added during IVF and early IVC, the time period when the embryo remains in the bovine oviduct in vivo. Recently, we showed that pOVGP1 added during IVF remodels the porcine ZP, leading to a significant increase in monospermy rates compared with the control, without affecting penetration rates, and resulting in a general increase in the final output of the IVF system [21]. In an attempt to elucidate the role of OVGP1 in early development, we describe here for the first time the application of recombinant pOVGP1 to an in vitro culture system to produce bovine embryos. Similarly to what we showed previously in porcine IVF [21], in the present study pOVGP1 was bound to the ZP of presumptive bovine zygotes when the protein was added to the IVF medium. However, the ZP signal disappeared after 9 days of incubation in OVGP1-free medium or medium with a low concentration of OVGP1 during IVC, suggesting reversible and dose-dependent OVGP1-ZP binding [22]. The presence of OVGP1 in multivesicular-like bodies in IVM oocytes [21] and in blastomeres of developing embryos [12] could explain the influence of OVGP1 in the early stages of development, as has been reported for many species [16-18, 31, 32]. In accordance with these data, we report that D3.5 embryos cultured with recombinant pOVGP1 showed specific labeling in the plasma membrane, perivitelline space, and inside blastomeres, confirming protein activity beyond the ZP of bovine embryos. Together, these data confirm that the C-terminal region of OVGP1 regulates its activity and accounts for the reproductive role of OVGP1 in different mammalian orders [21]. Therefore, OVGP1 proteins containing the same C-terminal region (Supplementary Fig. 1) could be used in heterologous systems.
A beneficial effect of purified native OVGP1 on the cleavage rate and the number of embryos reaching the blastocyst stage was demonstrated in goat when the protein was used at 10 µg/ml during IVF and IVC. However, higher concentrations of OVGP1 (50 or 100 µg/ml) had an inhibitory effect on embryo production [14]. Indeed, in the porcine system, treatment with OVGP1 (10 µg/ml) during preincubation/IVF increased the number of oocytes reaching the blastocyst stage, while OVGP1 added during both IVF (10 µg/ml) and IVC (50 or 100 µg/ml) tended to diminish this effect [18].
In the present study, the use of pOVGP1 during IVF or IVC in bovine in vitro embryo production maintained the number of oocytes reaching the blastocyst stage compared with the control, while pOVGP1 at the higher dose (50 µg/ml) added during both IVF and IVC had a detrimental effect on embryo production. Similarly, when the IVC medium was supplemented with high concentrations (5, 10, or 25%) of complete bovine OF, a negative effect on blastocyst development was observed [24]. In the present study, the use of pOVGP1 did not improve embryo production in any of the experimental groups (IVF, IVC, or IVF+IVC). This is consistent with a published study in which recombinant cat OVGP1 was used in IVF, with no significant effects on the cleavage or blastocyst rates [15]. Furthermore, in vivo studies using a null mutation of the OVGP1 glycoprotein have shown that OVGP1 is dispensable for the process of fertilization, at least in mice [33]. All these data reflect the importance of establishing the optimal protein dose in in vitro assays. Histochemical and immunocytochemical studies have demonstrated regional differences in the localization of materials and proteins in the oviductal epithelium [34]. Moreover, the differing presence of proteins within the three functional oviductal segments (infundibulum, ampulla, and isthmus) is probably related to their individual functions [29]. The segmented microenvironment in the oviduct is likely to ensure the optimal protein doses necessary for fertilization and early embryo development in vivo. Consequently, a detailed protein analysis of the oviductal content from the different segments is necessary to optimize the concentration of OVGP1 for culture conditions [35].
The culture environment during embryo development has an effect on embryo quality in terms of gene expression [2,36,37]. Our results show the overexpression of AQP3, a gene coding for aquaporin 3, in all treatments with pOVGP1. Aquaporins are proteins that form membrane channels which facilitate the rapid and passive movement of water [38,39], and their presence has been detected in mouse oocytes and human embryos [40]. They play a crucial role in cellular homeostasis and in the transport properties of cryopreserved oocytes and mouse blastocysts [41-43]. Accordingly, upregulation of this gene improves the survival of mouse oocytes and embryos following cryopreservation, as well as regulating apoptosis in the same embryos [44-47]. Downregulation of this gene is associated with low fertilization rates [48] and inhibits embryonic development in 2-cell mouse embryos [40], suggesting an important role in embryonic development. In mice, knockout of Aqp3 leads to a deterioration in the formation of epidermal cells [49]. Despite these reports, this was not confirmed under our experimental conditions. Regardless of AQP3 overexpression in blastocysts produced from mature oocytes and early embryos treated with either concentration of pOVGP1, their cryotolerance was not enhanced. However, it cannot be ruled out that pOVGP1 may be responsible for distinct embryo qualitative differences not associated with cryotolerance, but essential for implantation and/or fetal development.
Our results also point to a significant upregulation of the gene encoding activating transcription factor 4 (ATF4) when either concentration of pOVGP1 was used during IVC. This factor has been described as critical for normal cell proliferation, especially during peak proliferation, and is required in fetal hematopoiesis, with its absence resulting in severe fetal anemia [50]. In mouse embryos, Atf4 deficiency produces a severe defect in the formation of the lens [51].
Another gene showing differences after treatment with both doses of pOVGP1 during IVC is DSC2 (desmocollin 2), a member of the cadherin superfamily. DSC2 is a cell adhesion protein present in desmosomes [52], where it is involved in the formation of desmosomal junctions, which stabilize the expansion of the blastocyst [53,54].
Desmosomal junctions play an important role in tissue integrity and signaling activity [55]. DSC2 upregulation has been described at the morula and blastocyst stages [56], and it has been observed that the relative abundance of its mRNA is significantly greater in high-quality blastocysts than in those of low quality [57]. The expression of this gene thus reflects embryo quality and could be used as a marker in IVC. In our study, both 10 and 50 µg/ml of pOVGP1 used during IVC upregulated DSC2, indicating that supplementation of bovine embryos with pOVGP1 during early embryonic culture favors the production of high-quality embryos.
In the present study, the experimental group that produced the fewest embryos was IVF+IVC 50. The embryos obtained in this group overexpressed a gene related to epigenetic embryo markers, DNMT3A (DNA methyltransferase 3 alpha). The methyltransferase family (DNMTs) mediates the establishment and maintenance of dynamic patterns of DNA methylation in the genome. DNA methylation is an epigenetic event of great importance in gene regulation, and maintaining an optimal DNA methylation pattern during oocyte maturation is essential for the viability and development of embryos [58]. DNMT3A encodes a methyltransferase that has been identified as acting as a de novo DNA methyltransferase [59], establishing both maternal and paternal imprints [60,61]. The expression of DNMT3A is affected by in vitro culture [62], as well as by serum supplementation [24].
Our results showed that the expression pattern of gap junction protein alpha 1 (GJA1) was opposite for the two doses during IVF, i.e., 10 µg/ml of pOVGP1 downregulated and 50 µg/ml upregulated its expression. The use of recombinant feline OVGP1 during IVF increased the expression of GJA1 in cat blastocysts [15]. In the bovine system, GJA1 was expressed at a significantly higher level in in vivo-cultured embryos from D4 onward, compared with those cultured in vitro [63], suggesting its importance in embryo development.
In conclusion, the use of pOVGP1 at an appropriate dose during IVC of bovine embryos does not affect embryo development, but produces embryos of better quality in terms of the relative abundance of specific quality-related genes. In vitro bovine embryo production is already well established, but further study of embryo development is necessary in order to analyze the effects of assisted reproductive technologies (ART) in bovine and other species. Recently, it has been shown that DNA methylation and gene expression changes derived from ART can be reduced by using reproductive fluids [64], such as OF, highlighting the importance of searching for strategies to mitigate the adverse effects on the health of ART-derived offspring.
Fig. 4. pOVGP1 bound to the ZP and detected in the perivitelline space, plasma membrane, and cytoplasm of bovine D3.5 embryos. Upper panels: Bovine D3.5 embryos cultured in IVC medium containing 50 µg/ml of pOVGP1 were fixed, and half of them were permeabilized. The embryos were imaged by confocal fluorescence microscopy using a monoclonal antibody against the His tag (n = 56). A specific label was detected in the ZP, perivitelline space, and plasma membrane between blastomeres in unpermeabilized embryos. We also detected bright immunofluorescence within the blastomere cytoplasm when embryos were permeabilized. Lower panels: The blue signal corresponding to nucleic acid stained with Hoechst is shown merged with the immunofluorescent green signal and the bright-field image. Scale bars = 50 µm.
Fig. 3. pOVGP1 is detected in the ZP of D9 blastocysts when IVC medium is supplemented with 50 µg/ml of pOVGP1. On Day 9, blastocysts were recovered, fixed, and imaged by confocal fluorescence microscopy using a monoclonal antibody against the His tag (n = 21). Bright immunofluorescence was only detected in the ZP of the blastocysts when IVC medium was supplemented with 50 µg/ml of pOVGP1. No fluorescent signal was found in the groups supplemented with 10 µg/ml of pOVGP1, or in the untreated group. Scale bars = 50 µm.
Fig. 2. pOVGP1 expressed by mammalian cells binds to the ZP of bovine presumptive zygotes. A. pOVGP1 was expressed in HEK 293T cells, purified, separated by SDS-PAGE, and analyzed by immunoblot using a monoclonal antibody against the His tag and by Coomassie blue staining. Bands of ~90 kDa corresponding to reduced pOVGP1 were detected (lanes 1 and 3). Lanes 2 and 4 correspond to the negative control. B. Bovine presumptive zygotes incubated in IVF medium containing pOVGP1 at 10 or 50 µg/ml, or in IVF medium without pOVGP1, were fixed and imaged by confocal fluorescence microscopy using a monoclonal antibody against the His tag (n = 15). Bright immunofluorescence was detected throughout the entire thickness of the ZP. The intensity was higher in the group supplemented with 50 µg/ml of pOVGP1 than in the 10 µg/ml group. No fluorescent signal was found in the untreated group. Scale bars = 50 µm.
Fig. 7. Relative mRNA abundance (normalized against H2AZ and ACTB) of selected genes for the assessment of blastocyst quality when COCs and presumptive zygotes were cultured with pOVGP1 during IVF and early IVC, respectively (D0-D3.5). For each transcript, bars with different superscripts differ significantly (P < 0.05).
Table 4.
In vitro survival after vitrification and warming of D7 and D8 expanded blastocysts that were cultured with 10 or 50 µg/ml of pOVGP1 during early in vitro culture (D1-D3.5) (four replicates)
Table 1.
The effects of supplementation with 10 or 50 µg/ml of pOVGP1 during in vitro fertilization on embryo development (D0-D1) (three replicates)
Table 2.
The effects of supplementation with 10 or 50 µg/ml of pOVGP1 during early in vitro culture on embryo development
Table 3.
The effects of supplementation with 10 or 50 µg/ml of pOVGP1 during in vitro fertilization and early embryo culture on embryo development (D0-D3.5) (three replicates)
"year": 2018,
"sha1": "dd24e1944bda7133276ca417c9f26aec7d42932b",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/jrd/64/5/64_2018-058/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dd24e1944bda7133276ca417c9f26aec7d42932b",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Non-Specific Root Transport of Nutrient Gives Access to an Early Nutritional Indicator: The Case of Sulfate and Molybdate
Under sulfur (S) deficiency, crosstalk between nutrients induces the accumulation of other nutrients, particularly molybdenum (Mo). This disturbed balance between S and Mo could provide a way to detect S deficiency and therefore avoid losses in yield and seed quality in cultivated species. Under hydroponic conditions, S deprivation was applied to Brassica napus to determine the precise kinetics of S and Mo uptake and whether sulfate transporters were involved in Mo uptake. Leaf contents of S and Mo were also quantified in a field-grown, S-deficient oilseed rape crop under different S and N fertilization applications, to evaluate the [Mo]:[S] ratio as an indicator of S nutrition. To test the genericity of this indicator, the [Mo]:[S] ratio was also assessed in other cultivated species under different controlled conditions. During S deprivation, Mo uptake was strongly increased in B. napus. This accumulation was not a result of the induction of the molybdate transporters Mot1 and Asy, but could be a direct consequence of the induction of Sultr1.1 and Sultr1.2. However, analysis of single mutants of these transporters in Arabidopsis thaliana suggested that other sulfate-deficiency-responsive transporters may be involved. Under field conditions, Mo content in leaves was also increased by a reduction in S fertilization. The [Mo]:[S] ratio significantly discriminated between plots with different rates of S fertilization. Threshold values were estimated for the hierarchical clustering of commercial crops according to S status. The use of the [Mo]:[S] ratio was also reliable for detecting S deficiency in other cultivated species under controlled conditions. Analysis of the leaf [Mo]:[S] ratio therefore appears to be a practical indicator for the early detection of S deficiency under field conditions, and thus for improving S fertilization management.
Introduction
It is usually assumed that plants need to maintain a certain level of homeostasis between mineral nutrients. As a consequence, plants must constantly acquire nutrients from the soil solution through a complex network of root-specific transporters whose expression is usually up-regulated during deficiency (see [1] for N, [2] for K, [3] for S). However, some nutrients may compete for the active site of non-specific transporters [4]. For example, non-selective cation uptake systems that are able to transport both Na+ and K+ have been identified in plant root cells, such as HKT1 (a high-affinity K+ transporter), LCT1 (a low-affinity cation transporter) or NSC (non-selective cation channels) [5]. As a consequence, at higher levels of Na+, plants will take up Na+ instead of K+ [4]. Another case of interaction between competing ions present in the rhizosphere concerns sulfate (SO4^2-) and molybdate (MoO4^2-) [6,7]. Molybdate uptake has long been assumed to occur through the sulfate transporters [7,8]. Indeed, these two anions are chemical analogs: both possess a double negative charge, a tetrahedral structure, a similar size, and hydrogen-bonding properties [9,10], and may therefore compete for the binding site of the same transporters [11].
The sulfate transporter family consists of 12 genes in Arabidopsis that can be subdivided into four groups according to sequence similarities (for review see [3,12]). The members of the first group, showing a high affinity for sulfate, are assumed to be principally responsible for primary root uptake. The members of the second group are localized in root and shoot tissues and are considered low-affinity sulfate transporters responsible for the translocation of sulfate between roots and shoots. The transporters of the third group are localized in plastid envelopes and seem to be responsible for the import of sulfate into the organelles, but also have other specialized functions, e.g. during Rhizobia symbiosis in Lotus japonicus [13]. Transporters of group four are localized in the tonoplast and are assumed to be responsible for the efflux of sulfate from the vacuoles into the cytoplasm [12]. According to Kataoka et al. [14], AtSultr4.1 plays a major role in the vacuolar efflux of sulfate, while AtSultr4.2 could have a supplementary function. A fifth group comprising two isoforms in Arabidopsis (AtSultr5.1 and AtSultr5.2) was initially annotated as sulfate transporters, but their sequences are considerably shorter, around 450 amino acids, compared with nearly 650 amino acids for the other groups [3], and they do not include the sulfate transporter anti-sigma domain (STAS), which is critical for the transport of sulfate and the stability of the transporters [15]. Thus, this last group is different enough to be considered an independent family (less than 13% identity with the other groups) [15], and indeed another function has been proposed (see below). Most sulfate transporters (for example Sultr1.1, Sultr1.2, Sultr2.1, Sultr4.1 and Sultr4.2) are strongly up-regulated by S deficiency through increased gene expression, as demonstrated in numerous plant species [6,16,17,18,19]. Fitzpatrick et al. [20] found that uptake of molybdate in Stylosanthes hamata can be achieved by SHST1, a high-affinity sulfate transporter from group 1. Molybdate uptake was not reduced when challenged with a competitive anion such as sulfate, whereas sulfate uptake via SHST1 was reduced in the presence of molybdate [20]. The transporter from group 5 previously annotated SULTR5.2 was then identified as the first molybdate transporter in Chlamydomonas reinhardtii [21] and Arabidopsis [10,22,23] and was therefore renamed MOT1. However, the localization and the involvement of MOT1 in the uptake of molybdate remain unclear. Indeed, Tomatsu et al. [22], using BY2 cells of Nicotiana tabacum L. expressing MOT1 fused to GFP under the control of the cauliflower mosaic virus 35S RNA promoter, suggested that MOT1 is localized in plasma membranes and endomembranes, presumably in the secretory and/or endocytic pathways. Baxter et al. [23] established its localization in the mitochondria from the analysis of its N-terminal mitochondrial targeting sequence, further confirmed by the expression of GFP-tagged MOT1 in protoplasts and transgenic Arabidopsis.
A second molybdate transporter, MOT2 (previously named SULTR5.1), was then described in Arabidopsis as being involved in molybdate export from the vacuole to the cytosol, especially during the translocation of molybdate into maturing seeds [10,24]. Another molybdate transporter, also named CrMOT2, has been described in Chlamydomonas reinhardtii [25], but it is distinct from its namesake described previously. Knockdown of this CrMot2 gene leads to a reduction in molybdate uptake proportional to the decrease in its transcript level [25]. More recently, a homolog of CrMot2 (named Asy, for Abnormal Shoot in Youth) has been reported in higher plants [26]. Its localization in the plasma membrane, together with the severe phenotypes of asy mutants supplied with sufficient Mo, suggests that the ASY transporter is crucial for molybdate uptake [26].
Balik et al. [27] demonstrated an antagonistic relationship between sulfate and molybdate uptake by plants. Previous studies in Brassica napus [27] and Brassica juncea [7] showed an accumulation of Mo in response to S limitation. These authors suggested that molybdate binds to the active sites of the root sulfate transporters, which is favoured by the lack of competition with sulfate. Moreover, Shinmachi et al. [6] showed under field conditions that a significant accumulation of Mo in Triticum aestivum plants receiving low S fertilization was concomitant with the increased expression of root sulfate transporters. It has also been shown that mineral S application leads to a significant decrease in Mo concentration and uptake in B. juncea [7,28] and Brassica oleracea [29], and that the absence of sulfate in the nutrient medium causes a significant accumulation of Mo in Solanum lycopersicum [11]. Considering the effects of S deficiency on Mo accumulation described above, it can be hypothesized that the leaf [Mo]:[S] ratio might serve as an indicator of S nutrition with which to predict S deficiency. For S-demanding crops such as oilseed rape, S deficiency provokes multiple physiological changes leading to losses in yield and seed quality through a modified lipid and protein composition of the seeds [30,31,32,33,34]. It has already been suggested that the analysis of plants rather than soil, via total S, SO4^2- content or [SO4^2-]:[total S] ratios, could be used to detect plant S requirements (for review see [35,36]), but these parameters are affected by growth stage and growth rate, or require laboratory analyses, making them unreliable as diagnostic indicators [35,37]. It is therefore essential to find an accurate indicator of plant S status that is suitable as a reliable and relevant diagnostic tool under field conditions, in order to evaluate the S requirements of B. napus and other important cultivated species in the context of a high risk of S deficiency.
The main objectives of this work were to study the interaction effects between S and Mo uptake and accumulation in oilseed rape. The first objective was to confirm that Mo accumulation occurs during S deprivation in oilseed rape under controlled conditions, with a precise kinetic description of Mo uptake together with the transcript levels of genes encoding sulfate and molybdate transporters. Additionally, putative molybdate uptake by sulfate transporters was assessed in atsultr1;1 and atsultr1;2 Arabidopsis mutants grown under S deficiency. The second objective was then to investigate how the [Mo]:[S] ratio in leaves of oilseed rape grown under field conditions is affected by S and N fertilization, and consequently whether it constitutes a relevant early indicator to predict the risk of S deficiency at a larger scale. Finally, the last objective was to test the functionality of the [Mo]:[S] ratio as an indicator of S status in other cultivated species grown under controlled conditions.

Materials and Methods
Hydroponic experiments and tissue sampling
Seeds of B. napus L. var. Boheme were germinated on perlite over demineralized water for seven days in the dark and then five days under natural light. Just after first leaf emergence, seedlings were transferred to hydroponic conditions (18 seedlings per 20-L plastic tank) in a greenhouse, between October and December, with a thermoperiod of 20°C (day) and 15°C (night). Natural light was supplemented with high-pressure sodium lamps (Master Greenpower T400W, Philips, Amsterdam, Netherlands) (350 µmol m-2 s-1 of photosynthetically active radiation at canopy height) for 16 h per day. The nutrient solution, prepared with ultrapure water (Millipore, Darmstadt, Germany), was renewed regularly in order to maintain optimal nutrition conditions. After 4 weeks of growth, plants were separated into two batches supplied with a modified nutrient solution chosen to achieve S deficiency while maintaining the same concentrations of other nutrients: (i) control plants (+S) were grown with 508.7 µM SO4^2-; (ii) S-deprived plants (-S) were grown with 8.7 µM SO4^2- (S1 Table). These nutrient solutions were also renewed according to NO3- depletion, monitored via the NO3- level in the tank (by the end of the experiment, about every two days). Four independent samples, each consisting of four individual plants, were harvested at day 0 and after 1, 2, 3, 7, 13, 21 and 28 days of treatment. Leaves present at the beginning of the treatments (d0) were identified and marked, and these organs are referred to as "emerged leaves", while leaves appearing during the treatments were harvested separately as "new emerging leaves". At each harvest date, whole roots from control and S-deprived (-S) plants were also collected. Thereafter, leaves, petioles and roots were frozen in liquid nitrogen and stored at -80°C for further analysis. It must be pointed out that roots were freeze-dried without any prior rinsing; the root analyses therefore include whatever is bound to the outside of the tissue as well as the internal concentrations of elements. An aliquot of each tissue was freeze-dried for dry weight (DW) determination and ground to a fine powder, using an oscillating grinder (MM400, Retsch, Haan, Germany), for element analysis.
Arabidopsis plants (Col-0 as wild type, SALK_093256 as sultr1.1 and sel1-8 [38] as sultr1.2) were grown for 17 days on vertical agarose plates with modified Long Ashton nutrient solution containing full (0.75 mM) or limited (0.075 mM) sulfate, in a controlled-environment room under a long-day 16-h-light/8-h-dark cycle at a constant temperature of 22°C, 60% relative humidity, and a light intensity of 120 µE s-1 m-2. The sel1-8 mutant was isolated from a genetic screen for selenate tolerance using EMS-mutagenized Col-0 seeds and contains a point mutation in the coding sequence [38].

Field experiments and plant sampling

Study of different fertilization rates on an S deficient field. The experimental site was selected from a previous study [39] showing S deficiency in B. napus L. grown in 2009. This field of 11.3 ha was located at Ondefontaine, France (48°59'18.69" N, 00°41'56.70" W). A winter oilseed rape crop (B. napus L., 95% cv DK Exstorm and 5% cv Alicia) was sown at a density of 35-40 plants m-2 on 27 August 2013. The previous crop was spring barley (Hordeum vulgare L.). Chemical analyses of the loam soil gave an S content of 220 mg kg-1 and a pH value of 6.4. Before crop establishment, organic fertilizer (40 m3 ha-1 of bovine manure) was applied to the whole field, corresponding to 122 kg N ha-1 and 44 kg S ha-1. The field was separated into seven randomized plots fertilized with different doses of mineral N and S fertilizer. One plot was left unfertilized. The other plots were fertilized on 27 February 2014 with three different doses of S fertilizer (0, 12 and 36 kg S ha-1), supplied as a combined fertilizer comprising ammonium, nitrate and sulfuric anhydride (26% N and 36% SO3) and complemented with ammonium nitrate to reach 65 kg N ha-1 on each plot. Then, on 27 March 2014, a second N fertilization of 60 kg N ha-1 (liquid N containing 50% urea, 25% ammonium and 25% nitrate) was applied to three plots, which thus received 125 kg N ha-1 in total after the two fertilizer applications.
Five harvests were performed between January and July 2014: before fertilization (at the rosette stage and at the start of stem elongation (GS30); January 30th and February 12th, 2014), 15 days after S fertilization (start of the visible bud stage (GS50); March 14th, 2014), 47 days after S fertilization (silique formation stage (GS70); April 15th, 2014), and 60 days after S fertilization (silique formation stage (GS70); April 28th, 2014). Two batches of leaves comprising mature leaves (from the lower canopy) and young leaves (from the upper canopy), each consisting of three replicates of 20 leaves, were randomly collected from each plot. Leaves were freeze-dried for DW determination and ground to a fine powder, using an oscillating grinder (MM400, Retsch, Haan, Germany), for further analysis.
Study of 45 commercial crops before fertilization and flowering. Forty-five commercial crops of oilseed rape were selected in France according to different locations (given in S1 Fig), different agricultural practices (dose of fertilizers, previous crop, tillage), and contrasting soil and climate conditions that may affect SO4^2- leaching and hence SO4^2- availability. The farmers, identified with the help of DATAGRI (Lyon, France), collaborated in the present study. Crops were numbered arbitrarily, from 1' to 45'. Two batches of leaves comprising mature leaves (from the lower canopy) and young leaves (from the upper canopy), each consisting of 20 leaves, were randomly collected from each of the 45 fields in February 2014, just at the end of winter, corresponding to the start of stem elongation (GS30) and before fertilization. Leaves were freeze-dried before laboratory processing.
Multispecies experiment under controlled conditions
In order to assess whether data obtained with B. napus could be extrapolated to other species, experiments using B. oleracea, T. aestivum, Z. mays, S. lycopersicum and P. sativum were conducted under the different controlled culture conditions described in S1 File, with optimal and sub-optimal S supply. Leaves were harvested, frozen immediately in liquid nitrogen and stored at -80°C until freeze-drying for further analysis.
Element analysis by mass spectrometry
For the analysis of total N and S contents, an aliquot of around 4 mg DW of each plant organ sample was placed in tin capsules, so as to analyze between 60 and 80 µg N. The total N and S amounts in plant samples were determined with a continuous-flow isotope mass spectrometer (Nu Instruments, Wrexham, United Kingdom) linked to a C/N/S analyzer (EA3000, Euro Vector, Milan, Italy). The total N or S amount (Ntot or Stot) in a tissue "i" at a given time "t" was calculated as: Ntot(i,t) = (%N(i,t) × DW(i,t))/100, and similarly for Stot. B, Na, Mg, P, S, K, Ca, Mn, Fe, Ni, Cu, Zn and Mo in samples were quantified by High Resolution Inductively Coupled Plasma Mass Spectrometry (HR ICP-MS, Thermo Scientific, Element 2™, Bremen, Germany) after microwave acid digestion (Multiwave ECO, Anton Paar, Les Ulis, France) of 40 mg DW using 800 µL of concentrated HNO3 (Thermo Fisher, Illkirch, France), 200 µL of H2O2 (SCP SCIENCE, Quebec, Canada) and 1 mL of Milli-Q water, except for Arabidopsis, for which only 20 mg DW were used. For determination by HR ICP-MS, all samples were spiked with two internal-standard solutions, gallium and rhodium (SCP SCIENCE, Quebec, Canada), at final concentrations of 10 and 2 µg L-1, respectively, diluted to 50 ml with Milli-Q water to obtain solutions containing 2.0% (v/v) nitric acid, and then filtered through a 40 µm Teflon filtration system (Courtage Analyses Services, Mont-Saint-Aignan, France). Quantification of each element was performed using external standard calibration curves. The quality of mineralization and analysis was checked using a certified reference material of citrus leaves (CRM NCS ZC73018, Sylab, Metz, France). The amount of each element in each tissue was then calculated as explained above for N and S.
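The amount calculations above are simple products of concentration and biomass. A short Python sketch (illustrative only; the sample values are invented) makes the unit handling explicit:

```python
def n_total_mg(percent_n, dw_g):
    """Ntot(i,t) in mg, from an N concentration in % DW and the tissue
    dry weight in g: Ntot = %N x DW / 100 (g), converted to mg."""
    return percent_n * dw_g / 100.0 * 1000.0

def element_amount(conc_per_g_dw, dw_g):
    """Amount of an element from an ICP-MS concentration expressed per
    g DW (e.g. ug g-1 DW x g DW -> ug)."""
    return conc_per_g_dw * dw_g

# Hypothetical leaf sample: 4.5% N, 1.2 ug Mo g-1 DW, 1.8 g DW.
print(f"N: {n_total_mg(4.5, 1.8):.0f} mg, Mo: {element_amount(1.2, 1.8):.2f} ug")
```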
Reverse transcription (RT) and q-PCR analysis
Total RNA was extracted from 200 mg of root and leaf fresh matter as previously described [36]. For RT, 1 µg of total RNA was converted to cDNA with an iScript cDNA synthesis kit according to the manufacturer's protocol (Bio-Rad, Marne-la-Coquette, France). Q-PCR amplifications were performed using specific primers (Table 1) for each housekeeping gene (EF1-α and 18S rRNA) and target gene (BnaSultr1.1, BnaSultr1.2, BnaASY, BnaMot1). Q-PCRs were performed with 4 µl of 100× diluted cDNA, 500 nM of primers, and 1× SYBR Green PCR Master Mix (Bio-Rad, Marne-la-Coquette, France) in a real-time thermocycler (CFX96 Real Time System, Bio-Rad, Marne-la-Coquette, France). A three-step program composed of an activation step at 95°C for 3 min and 40 cycles of denaturation at 95°C for 15 s followed by annealing and extension at 60°C for 40 s was used for all pairs of primers (Table 1). For each pair of primers, a threshold value and the PCR efficiency were determined using a cDNA preparation diluted >10-fold. For all pairs of primers, PCR efficiency was around 100%. The specificity of PCR amplification was checked by monitoring the presence of a single peak in the melting curves after Q-PCR and by sequencing the Q-PCR product to confirm that the correct amplicon was produced from each pair of primers (Eurofins, Ebersberg, Germany). For each sample, Q-PCRs were performed in triplicate. The relative expression of the genes in each sample was compared with the control sample (corresponding to control plants at d0) and was determined with the delta delta Ct (ΔΔCt) method using the following equation: relative expression = 2^-ΔΔCt, with ΔΔCt = ΔCt(sample) - ΔCt(control) and ΔCt = Ct(target gene) - Ct(housekeeping genes) (for the calculations, we used the geometric mean of the Cts of the housekeeping genes), where Ct refers to the threshold cycle determined for each gene in the exponential phase of PCR amplification. With this method of analysis, the relative expression of the target gene in the control sample is equal to 1 [40], and the relative expression in the other treatments is then compared with the control.

Table 1. Accession numbers and Q-PCR primer sets. EF1-α and 18S rRNA were the housekeeping genes used for relative gene expression analysis by Q-PCR.
Expression analysis of Arabidopsis was performed as described previously [41] with RNA isolated from root material, using ubiquitin UBQ10 for normalization (see Table 1 for primer sequences).
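The ΔΔCt calculation described above is easy to script. A minimal Python sketch is shown below (not part of the original analysis; the CT values are invented), using the geometric mean of the two housekeeping genes and the d0 control sample as calibrator:

```python
import numpy as np

def relative_expression(ct_target, ct_ef1a, ct_18s, control_index=0):
    """Fold change by 2^-ddCt: each sample is normalized to the
    geometric mean of the housekeeping CTs (EF1-alpha, 18S rRNA) and
    calibrated on the control sample, whose value is therefore 1."""
    ct_target = np.asarray(ct_target, dtype=float)
    ct_ref = np.sqrt(np.asarray(ct_ef1a, float) * np.asarray(ct_18s, float))
    d_ct = ct_target - ct_ref                  # dCt per sample
    dd_ct = d_ct - d_ct[control_index]         # ddCt vs. control (d0)
    return 2.0 ** (-dd_ct)

# Hypothetical CTs for control (d0) and S-deprived roots:
print(relative_expression([26.0, 21.5], [19.0, 19.1], [10.0, 10.1]))
```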
Statistical analysis
For each measurement, hydroponic experiments on B. napus were conducted with four independent biological replicates, each consisting of four individual plants, except for Q-PCR analysis, which was conducted with three biological replicates. Data are given as mean ± SE for n = 4 (or n = 3 for Q-PCR data). All data from the hydroponic experiment were analyzed by Student's t test (Excel software) and marked by one or several asterisks when significantly different between control and S-deprived plants (*P<0.05, **P<0.01, ***P<0.001). Under field conditions, for the study of different fertilization rates on an S-deficient field, three replicates, each corresponding to a pool of 20 leaves from 20 independent plants, were collected. Data are given as mean ± SE for n = 3 and were analyzed by Student's t test (Excel software) and marked by different letters when significantly different between treatments at a given date, at P<0.05. All experiments on different species under controlled conditions were conducted with four independent biological replicates, except for the P. sativum culture, with three replicates. Data are reported as mean ± SE for n = 4 (or n = 3 for P. sativum data). All data from the multispecies experiment were analyzed by Student's t test (Excel software) and marked by one or several asterisks when significantly different between control and S-deprived plants (*P<0.05, **P<0.01, ***P<0.001).
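For readers reproducing the control vs. -S comparisons outside Excel, a minimal Python sketch of the same Student's t test is given below (the Mo contents are invented for illustration):

```python
from scipy import stats

# Hypothetical Mo contents (ug g-1 DW) for four biological replicates.
control    = [1.10, 1.25, 1.05, 1.18]
s_deprived = [3.40, 3.10, 3.65, 3.30]

t_stat, p = stats.ttest_ind(control, s_deprived)
stars = "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else "ns"
print(f"t = {t_stat:.2f}, P = {p:.2g} ({stars})")
```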
Results
S deprivation reduced S, N, K, Ca, B and Na uptake but strongly increased Mo uptake

B. napus plant growth rates were similar for control and S-deprived plants until the 7th day of S deprivation (Fig 1) and were then significantly reduced. For example, after 21 days, the biomass of S-deprived plants was decreased by 29% compared with the biomass of control plants. To account for this biomass reduction, a theoretical uptake by S-deprived plants was calculated for each nutrient (Fig 2) using the DW of control plants and the nutrient concentration measured in S-deprived plants. This theoretical uptake reflects the uptake expected in S-deprived plants if DW production had been identical to that of control plants. As expected, after 21 days of S deprivation, no significant uptake of S occurred. The uptake of some nutrients was reduced in line with the reduction in growth rate, or more strongly: N (-8.7 ± 1.06%), K (-20.2 ± 0.88%), Ca (-22.0 ± 2.41%), Na (-23.4 ± 1.85%) and B (-52.9 ± 2.82%). Conversely, S deprivation strongly increased the uptake of Mo, by 197.0 ± 10.73%. The uptake of P, Mg, Fe, Cu, Zn and Mn was not significantly affected by S deprivation.
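The growth-corrected comparison above amounts to two small formulas. The Python sketch below restates them with invented numbers (chosen only so that the output lands near the reported ~+197% for Mo):

```python
def theoretical_uptake(conc_minus_s, dw_control):
    """Uptake expected in S-deprived plants if their growth had matched
    the controls: concentration measured in -S plants x control DW
    (units follow the inputs)."""
    return conc_minus_s * dw_control

def uptake_change_pct(theoretical, control_uptake):
    """Percent change of the growth-corrected uptake vs. the control."""
    return 100.0 * (theoretical - control_uptake) / control_uptake

# Hypothetical Mo values after 21 d: 0.75 ug Mo g-1 DW in -S plants,
# 12 g DW per control plant, 3.0 ug Mo taken up per control plant.
print(f"{uptake_change_pct(theoretical_uptake(0.75, 12.0), 3.0):+.0f} %")  # +200 %
```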
S deprivation strongly and immediately increased uptake of Mo without induction of molybdate transporter genes
In S-deprived B. napus, plant S content (Fig 3A) remained at a nearly steady-state level, revealing a lack of significant S uptake from potential trace amounts in the nutrient solution. In control plants, the plant S content steadily increased with time and was significantly higher than in S-deprived plants from 3 days of S deprivation onward. Plant Mo content (Fig 3B) increased with time in S-deprived plants, as in control plants, but at a significantly greater rate, reaching 15.43 and 6.93 mg Mo plant-1 after 28 days in S-deprived and control plants, respectively. From the first day, the Mo content was significantly increased, by 28%, in S-deprived plants relative to the control, i.e. before any significant difference in S content, which occurred only after 3 days. As a consequence, the [Mo]:[S] ratio at the whole-plant level (Fig 3C) was significantly increased as early as one day after S deprivation, rising from 0.0014 (day 0) to 0.45 after 28 days of S deprivation. In control plants, the [Mo]:[S] ratio remained constant as a function of time.
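Since the indicator itself is just a quotient of two measured contents, a one-line helper suffices; a Python sketch with invented leaf values follows (the ×10^4 scaling matches the display convention used for the field data later in the paper):

```python
def mo_s_ratio(mo_content, s_content, scale=1.0):
    """[Mo]:[S] ratio from contents expressed in the same unit
    (e.g. mg g-1 DW or mg plant-1); scale=1e4 reproduces the
    x10^4 display convention used for the field data."""
    return mo_content / s_content * scale

# Hypothetical mature leaf: 1.2 ug Mo g-1 DW (0.0012 mg g-1), 2.0 mg S g-1 DW.
print(mo_s_ratio(0.0012, 2.0, scale=1e4))   # -> 6.0
```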
The relative expression of genes encoding different molybdate (BnaMot1 and BnaAsy) and sulfate (BnaSultr1.1 and BnaSultr1.2) transporters was quantified in roots during the 28 days of treatment (Fig 3D). A very strong accumulation of BnaSultr1.1 transcripts occurred during the first 7 days, and the level then remained steady between 7 and 21 days. The BnaSultr1.2 transcript level was also significantly up-regulated in response to S deprivation from the first day of treatment, but to a lesser extent than BnaSultr1.1. In contrast, the expression of the genes encoding root molybdate transporters (BnaMOT1 and BnaASY) (Fig 3D) was not affected by S deprivation, despite the strong increase in Mo uptake by S-deprived plants (Fig 3B). The relative expression of BnaSultr1.1 appeared to be correlated with Mo uptake (second-order polynomial, R = 0.98) during the 21 days of S deprivation (insert in Fig 3D).
To test whether the increase in [Mo] is indeed caused by increased activity of sulfate transporters, we analyzed Arabidopsis thaliana mutants of Sultr1.1 and Sultr1.2. When submitted to S deficiency, wild-type Arabidopsis (Col-0) strongly increased the transcript levels of AtSultr1.1 and AtSultr1.2, as well as those of the sulfate-deficiency marker genes AtSDI1 and AtAPR3, which encode a sulfur-responsive protein and an adenosine 5′-phosphosulfate reductase, respectively (Table 2). At the same time, leaf S content was reduced by 55% and leaf Mo content was increased by 571%. Under control conditions, the sultr1.1 mutant showed leaf S and Mo contents identical to the wild type. Under S-deficient conditions, S content was decreased (-56%) to the same degree as in Col-0. The increase in Mo (+433%) led to a Mo concentration similar to that in the wild type (Table 2). Interestingly, AtSultr1.2 was not up-regulated by S deficiency in this mutant, despite similar S concentrations. In the sultr1.2 mutant (Table 2), leaf S content was lower than in the Columbia ecotype under ample S supply and was further decreased by S deficiency (-27%), but only to the levels found in the wild type. Molybdenum was lower than in the wild type under control conditions, as for S, and accumulated (+729%) under S deficiency, but reached a lower concentration than in the wild type or the sultr1.1 KO mutant. In the sultr1.2 mutant, the transcript levels of the four genes AtSultr1.1, AtSultr1.2, AtSDI1 and AtAPR3 were higher than in the wild type and sultr1.1, and were up-regulated by S deficiency (Table 2). Interestingly, the [Mo]:[S] ratio was identical in all three genotypes under control conditions, despite the differences in S content, but under S-limiting conditions the ratio was higher in the wild type than in either mutant. This suggests that both transporters contribute to Mo accumulation but are not the only and/or main drivers.
In leaves of S-deprived plants, Mo and S contents changed in opposite ways
Under hydroponic conditions, no matter which leaves were assayed (emerged leaves or newly emerging leaves), the S content steadily increased with time in control plants (Fig 4A and 4B). Predictably, and in accordance with the deprivation, the S content in emerged leaves of S-deprived plants significantly decreased as early as 2 days after S deprivation (Fig 4A). The S content decreased with the same pattern in newly emerging leaves (Fig 4B). Molybdenum content steadily increased with time in the two groups of leaves of control plants, but its accumulation was significantly and rapidly (from 1 day) increased by S deprivation, and remained so until the end of the experiment (Fig 4C and 4D). Compared to control plants, the Mo content in emerged leaves (Fig 4C) was increased by 197% and 84% after 7 and 28 days of S deprivation, respectively. The increase in Mo content was even more pronounced in newly emerging leaves (Fig 4D), where it increased by 637% and 253% after 7 and 28 days of S deprivation, respectively. In these two groups of leaves, the [Mo]:[S] ratio was consequently increased by S deprivation.

For the field experiment, to simplify reading, only the data recorded after 47 days are presented, because the results obtained at 60 days showed similar trends. Sulfur content (Fig 5A-5C) was significantly decreased in mature leaves by a reduction in S fertilization (from 36 and 12 to 0 kg S ha⁻¹), whatever the level of N fertilization (65 or 125 kg N ha⁻¹). The highest Mo content (Fig 5D and 5F) was found in mature leaves of plants receiving N fertilization with no or low S fertilization (0 or 12 kg S ha⁻¹). The [Mo]:[S] ratio (Fig 5G-5I) was accordingly highest in these plots. Interactions between N and S fertilization were also found. It can be assumed that the growth rate was mostly dependent on the N fertilization rate. For example, 15 days after S fertilization, the biomass of mature leaves was 5.03 ± 0.38 g DW leaf⁻¹ in the unfertilized plot (N0S0), whereas in the plot with 65 kg N ha⁻¹ and 0 kg S ha⁻¹ (N65S0) the biomass was 6.70 ± 0.01 g DW leaf⁻¹ (P<0.01). The supply of mineral N fertilization had a direct negative effect on the S content in leaves as a result of the increased growth rate. Indeed, without S supply, N fertilization significantly reduced the leaf S content after 15 days (Fig 5A, N0S0 versus N65S0) or after 47 days (Fig 5C, N0S0 versus N125S0). Consequently, the Mo content in leaves was increased by N fertilization of 65 (Fig 5E, N0S0 versus N65S0 or N65S12) or 125 kg N ha⁻¹ (Fig 5F, N0S0 versus N125S0). As a result, the [Mo]:[S] ratio was significantly lower (Fig 5G, 5H or 5I) in mature leaves in the absence of mineral N fertilization (N0S0) than in N-fertilized plots with no S (N65S0 in Fig 5H and N125S0 in Fig 5I), therefore reflecting the lower S requirements of plants without N fertilization.

Fig 5. (A, B, C) S content (mg g⁻¹ DW), (D, E, F) Mo content (μg g⁻¹ DW) and (G, H, I) the [Mo]:[S] × 10⁴ ratio (to simplify reading, the [Mo]:[S] ratio is presented with a multiplier factor of 10⁴) in mature leaves of B. napus grown under field conditions after (A, D, G) 15 and (B, C, E, F, H, I) 47 days of fertilization. Plants received no mineral fertilization (hatched bars, 0 kg S ha⁻¹, 0 kg N ha⁻¹), or 0 kg S ha⁻¹ (white bar), 12 kg S ha⁻¹ (gray bar) or 36 kg S ha⁻¹ (black bar) with 65 kg N ha⁻¹ (dashed bars) or 125 kg N ha⁻¹ (full bars). Within the same graph, different letters between fertilization treatments indicate significant differences at P<0.05. doi:10.1371/journal.pone.0166910.g005
In the same way, plant growth was stimulated by a second N fertilization of 60 kg N ha⁻¹ (corresponding to the plots with 125 kg N ha⁻¹ in total), and the relative S content in mature leaves was reduced. For example, when comparing Fig 5B and 5C, the S content in mature leaves of plants receiving 125 kg N ha⁻¹ was significantly lower (P<0.05) than in plants receiving 65 kg N ha⁻¹, for a given level of S fertilization (0, 12 or 36 kg S ha⁻¹). In contrast, the N fertilization rate had no significant effect on the Mo content of mature leaves (Fig 5E compared to Fig 5F). As a result, when comparing mature leaves of plants receiving 125 kg N ha⁻¹ (Fig 5I) to those receiving 65 kg N ha⁻¹ (Fig 5H), the [Mo]:[S] ratio was higher in leaves of plants with the higher N fertilization, regardless of the rate of S fertilization, which also reflects the higher S requirement of N-fertilized plants.
Threshold values of the [Mo]:[S] ratio were then determined 15 days after S fertilization, corresponding to the developmental stage of mature leaves of oilseed rape before flowering, to conclude whether or not S fertilization was required. Threshold values of the [Mo]:[S] ratio (adjusted by a factor of 10⁴) were calculated as the sum or difference between the means of the data presented in Fig 5G and their 95% confidence intervals, for S-sufficient plots (36 kg S ha⁻¹, 1.63) and S-deficient plots (0 kg S ha⁻¹, 2.83), respectively. Such thresholds allow clustering of the plots into three groups: S sufficient (ratio <1.61), at risk of S deficiency (ratio between 1.61 and 2.83) and S deficient (ratio >2.83). This classification into three groups was tested on 45 commercial oilseed rape crops from different locations in France, for which mature leaves were harvested before flowering and before S fertilization (Fig 6). The use of these threshold values suggested that 18, 24 and 58% of the crops were classified as S sufficient, at risk of S deficiency and S deficient, respectively. Because it was previously found that S deprivation strongly decreased N, K, Ca, B and Na uptake under hydroponic conditions (Fig 2), a principal component analysis was performed using the mineral composition of old leaves of these 45 commercial oilseed rape crops (S3 Fig). While an inverse relationship between Mo and S contents in old leaves was retrieved, other relationships between S and other minerals found under hydroponic conditions were partly (S and N) or totally (S and Ca) hidden, due to multiple interactions occurring under field conditions.
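As a concrete illustration of how such a leaf-based test could be applied, the sketch below classifies a sample into the three S-status groups using the threshold values reported above. The function name, the conversion of both contents to a common mass basis, and the ×10⁴ scaling convention taken from the Fig 5 legend are our assumptions, not a published implementation.

```python
def classify_s_status(mo_ug_per_g, s_mg_per_g, low=1.61, high=2.83):
    """Classify a mature-leaf sample by its [Mo]:[S] x 10^4 ratio.

    mo_ug_per_g : leaf Mo content (ug g-1 DW)
    s_mg_per_g  : leaf S content (mg g-1 DW)
    """
    # Bring both contents to a common mass basis (g g-1 DW) before
    # taking the ratio, then apply the 10^4 multiplier used in Fig 5.
    ratio = (mo_ug_per_g * 1e-6) / (s_mg_per_g * 1e-3) * 1e4
    if ratio < low:
        return "S sufficient"
    if ratio <= high:
        return "at risk of S deficiency"
    return "S deficient"

print(classify_s_status(mo_ug_per_g=2.0, s_mg_per_g=8.0))  # ratio = 2.5
```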
Test of the genericity of the [Mo]:[S] ratio as an S-deficiency indicator
The [Mo]:[S] ratio was calculated in leaves of other cultivated species grown under controlled conditions: B. oleracea, T. aestivum, Z. mays, P. sativum and S. lycopersicum. In all species, S content was significantly reduced and Mo content significantly increased by S deficiency (Table 3). For example, the Mo content increased as early as 1 day after deprivation in B. napus, as early as 2 days in T. aestivum, and as early as 5 days in Z. mays, whereas the S content was reduced later. Consequently, the [Mo]:[S] ratio was highly and significantly increased by S deficiency after 1 day of treatment in B. napus, 2 days in T. aestivum, and 5 days in Z. mays.
Perturbations of Mo homeostasis by non-specific transport of molybdate and sulfate
It is usually assumed that plant nutrient contents are tightly regulated through the balanced activities of membrane transporters that mediate the uptake and distribution of nutrients, so that a relative compositional homeostasis is maintained. However, under deficiencies, crosstalk induces unavoidable accumulation of some nutrients, which upsets this balance and modifies the ionomic composition of plant tissues [23]. In our study, among all mineral nutrients, only Mo uptake was strongly increased by S deficiency in B. napus (Fig 2), leading to its accumulation in leaves (Fig 3B). From the first day of S deprivation, the Mo content was dramatically increased, by 128%, in S-deprived plants relative to plants continuously supplied with S. This accumulation of Mo in response to S deficiency has previously been reported in Brassicaceae species. Schiavon et al. [7] reported that S-deficient B. juncea accumulated significantly more Mo than control plants. Conversely, the supply of S significantly decreased the uptake of Mo by B. oleracea [29], B. juncea [28] and B. napus [27].
Two root membrane molybdate transporters have so far been reported as responsible for Mo uptake in plants: MOT1 [22] and ASY [26]. In our study, under S deprivation, the expression of BnaMot1 and BnaAsy remained similar to control plants (Fig 3D); consequently, it may be assumed that the increase in Mo uptake did not result from an induction of these specific Mo transporters. This is consistent with previous work reporting that root TaeMot1 expression in T. aestivum was not significantly affected by S deficiency despite an increased concentration of Mo in plant tissues [6]. Indeed, the involvement of MOT1 in plant Mo uptake remains unclear: although Arabidopsis mot1 mutants and accessions with lower expression of MOT1 accumulate less Mo, recent results established the localization of MOT1 in mitochondria [23], and not in plasma membranes and vesicle membranes as previously proposed [22]. Alternatively, some authors have suggested that the accumulation of Mo could be due to non-specific transport by sulfate carriers [7]. Indeed, molybdate and sulfate, through their similar chemical properties (charge, size, structure, bonding properties), could compete for the same binding sites [9,10]. In our study, other genes (BnaSultr1.1 and BnaSultr1.2) encoding high-affinity sulfate transporters of group 1 were therefore studied in order to clarify their ability to also transport molybdate. The increased uptake of Mo in S-deprived plants was strongly correlated with an increase in BnaSultr1.1 expression, at least during the first 7 days (Fig 3D). Similarly, in T. aestivum under S deficiency, the expression of genes encoding the sulfate transporters TaeSultr1.1 and TaeSultr4.1 increased and coincided with a 3.7-fold increase in Mo content in grain [6]. Considering the relationship between Mo uptake and the expression of BnaSultr1.1 (Fig 3D), single mutants of Arabidopsis thaliana were used to determine the role of AtSultr1.1 and AtSultr1.2 in the stimulated uptake of Mo during S deficiency (Table 2). Since both single mutants still accumulated Mo under S deficiency, other root transporters responding to S deficiency may be involved, or the activity of root sulfate transporters may also be strongly regulated at the post-transcriptional level. Data on a double KO mutant for atsultr1;1 and atsultr1;2 can be found in the Purdue ionomics information management system (see [42] and http://www.ionomicshub.org/arabidopsis/piims/showIndex.action) as well as in [43]. In this mutant, S and Mo contents are reduced compared to the wild type, suggesting that these two transporters could be involved in the uptake of both S and Mo. Therefore, the overall results suggest that the very early increase in Mo uptake following S deprivation is, at least partly, a consequence of the strong and early increased expression of AtSultr1.1 and AtSultr1.2 encoding root sulfate transporters.
A potential indicator derived from early molecular events
Under field conditions, the Mo and S contents also changed in opposite ways in leaves of plants submitted to different S fertilization rates. The lowest Mo contents were found in leaves of plants with the highest S fertilization, which had the highest S content. Reciprocally, the highest Mo and lowest S contents were found in leaves of plants receiving the lowest S fertilization (Fig 5A-5F and S2 Fig). Similarly low Mo contents in leaves of S-fertilized plants, within a comparable range of leaf concentrations (i.e. between 1 and 3 μg g⁻¹ DW), have previously been reported in oilseed rape [27] and cabbage [29] under field conditions. However, the leaf Mo contents obtained in our field experiment were lower than those found under hydroponic conditions. This can easily be explained by the fact that the Mo concentration in the nutrient solution was far higher than in the soil solution; nevertheless, the trend of Mo accumulation during S deficiency was similar. Because of the opposing variations in leaf S and Mo contents, the [Mo]:[S] ratio was lower in plants with an optimal S fertilization than in plants with reduced S fertilization (Fig 5G-5I and S2 Fig). In our study, the [Mo]:[S] ratio therefore discriminated between different rates of S fertilization and revealed the S status of the plants.
It was also found that the increase in plant growth rate triggered by N fertilization increased plant S requirements, resulting in a decrease in the S content of oilseed rape leaves under field conditions (Fig 5A-5C and S2 Fig), as previously reported [44]. A second N fertilization (60 kg ha⁻¹) had no major impact on the Mo content of leaves, while the leaf Mo content was lower in N-unfertilized plants (Fig 5E compared to Fig 5F and S2 Fig). This lower Mo accumulation in unfertilized plants could be explained by a reduced growth rate and thus by lower S requirements. As a consequence, N fertilization can interfere with the evolution of the [Mo]:[S] ratio. Indeed, supplementary N fertilization increased the [Mo]:[S] ratio, while the total lack of N fertilization decreased it (Fig 5H and 5I and S2 Fig). Nevertheless, this ratio still allowed differentiation between plots with different rates of S fertilization.
To allow the effective use of the [Mo]:[S] ratio under field conditions, threshold values are required to estimate the S status of oilseed rape crops, and a first estimation has been calculated from a field experiment using different levels of S fertilization. According to these threshold values, three S status groups were defined: S deficient, at risk of S deficiency and S sufficient, with the additional intermediate class taking into account the area of uncertainty. The study of 45 commercial oilseed rape crops in France suggested that more than 50% were potentially S deficient (Fig 6), highlighting the need for optimizing S fertilization practices. However, more accurate determination of [Mo]:[S] ratio threshold values would require a larger set of experiments during which S fertilization would need to be modulated and the effects on seed yield and quality monitored.
Because oilseed rape is well known for its high S requirements, other cultivated species, namely B. oleracea, T. aestivum, Z. mays, P. sativum and S. lycopersicum, were also tested for their response to S deficiency (Table 3). In all of them, the leaf [Mo]:[S] ratio was significantly increased by reduced S availability, as a result of a decrease in leaf S content coupled with an accumulation of Mo. The same was true for all three Arabidopsis genotypes (Table 2). This general response pattern is also supported by results from the literature on the same or different species, for example globe amaranth (Gomphrena globosa) [45], cabbage [29], wheat [6] and tomato [11]. An increase in the [Mo]:[S] ratio resulting from an inadequate S supply therefore seems to be a general response of higher plants, whatever their intrinsic requirements for S.
Overall, the analysis of the leaf [Mo]:[S] ratio is a practical alternative for early detection of S deficiency under field conditions, given that molecular indicators require laboratory analysis and the use of control plants, which are not easily implemented in the field. Alternatively, new breakthrough technologies using X-ray fluorescence or laser-based metal analysis, which are also available as portable field tools, would allow easy quantification of the [Mo]:[S] ratio. | 2018-04-03T05:35:47.291Z | 2016-11-21T00:00:00.000 | {
"year": 2016,
"sha1": "63ffca0ceca7b5cdf5bcd05fda0615d8e9f0250e",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0166910&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "63ffca0ceca7b5cdf5bcd05fda0615d8e9f0250e",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
202734171 | pes2o/s2orc | v3-fos-license | PAGAN: Portfolio Analysis with Generative Adversarial Networks
For decades, the data science community has tried to propose prediction models for financial time series. Yet, driven by the rapid development of information technology and machine intelligence, the velocity of today's information leads to high market efficiency. Sound financial theories demonstrate that in an efficient marketplace all information available today, including expectations about future events, is represented in today's prices, whereas the future price trend is driven by uncertainty. This jeopardizes the efforts put into designing prediction models. To deal with the unpredictability of financial systems, today's portfolio management is largely based on the Markowitz framework, which puts more emphasis on the analysis of market uncertainty and less on price prediction. The limitation of the Markowitz framework lies in its very strong idealized assumptions about the probability distribution of future returns. To address this situation we propose PAGAN, a pioneering methodology based on deep generative models. The goal is to model the market uncertainty that ultimately is the main factor driving future trends. The generative model learns the joint probability distribution of price trends for a set of financial assets to match the probability distribution of the real market. Once the model is trained, a portfolio is optimized by deciding the best diversification to minimize the risk and maximize the expected returns observed over the execution of several simulations. Applying the model to analyze possible futures is as simple as executing a Monte Carlo simulation, a technique very familiar to finance experts. The experimental results on two portfolios representing different geopolitical areas and industrial segments, constructed using real-world public data sets, are promising.
I. INTRODUCTION
Financial markets play an important role in the economic and social organization of modern society. Triggered by recent advances in machine intelligence, the investment industry is experiencing a revolution. Despite this, today's financial portfolio management is still largely based on linear models and the Markowitz framework [1], known as Modern Portfolio Theory (MPT). MPT aims to achieve portfolio diversification while minimizing specific risks and determining the risk-return trade-offs for each asset [2]. Despite its significant and long-lasting impact, MPT has been criticized for its idealized assumptions about the financial system and data. MPT heavily relies on accurate estimates of the future returns and volatilities of each asset and of their correlations. However, market price forecasting is one of the main challenges in the time-series literature due to its highly noisy, stochastic and chaotic nature [3]. More importantly, the velocity of today's information systems enables traders and investors to take decisions based on real-time market updates. This fact leads to high market efficiency, i.e. the price agreed between the different parties involved in a trade accounts for all available information about the present as well as all forecasts about the future that are inferable today. In a highly efficient market, as soon as a trusted long-term forecast is provided, this forecast is consumed by traders in the short term and already has a direct impact on the current price, whereas future price variations remain uncertain [4]. With the continuous advancement of financial transactions and information systems, the market becomes increasingly efficient, and so grows the challenge of forecasting market prices.
To address these challenges in portfolio management, in this paper we present a pioneering study of portfolio analysis with generative adversarial networks (PAGAN). PAGAN directly models the market uncertainty, the main factor driving future price trends, in its complex multidimensional form, such that the non-linear interactions between different assets can be embedded. We propose an optimization methodology that utilizes the probability distribution of real market trends learnt by PAGAN to determine the best portfolio diversification, minimizing the risk and maximizing the expected returns observed over the execution of multiple simulations.
The main contributions of the paper are summarized below.
• Problem Setting. Different from existing work and practice, in this paper we aim to model the market uncertainty conditioned on the most recent past, allowing the model to automatically learn non-linear market behavior and non-linear dependencies between different financial assets, and to generate many realistic future trends given today's situation;
• Optimization Methodology. We propose a generative model on real time series named PAGAN and an optimization methodology that uses the PAGAN model for solving the portfolio diversification problem. The probability distribution learnt by PAGAN enables us to play with different diversification options to trade off risk for expected returns. The final diversification to be implemented is the one realizing the sweet spot on the predicted risk-return efficient frontier, given a target user-selected risk level;
The rest of the paper is organized as follows. After introducing the MPT in Section 2, we brief review of the related work in Section 3. We introduce the proposed PAGAN methodology in Section 4, including market uncertainty modeling, the generative network architecture, as well as the optimization approach for determining portfolio diversification. Section 5 provides some experimental results demonstrating the effective of the PAGAN on real-world finance assets and data. Finally, we conclude the paper in Section 6.
II. THE MARKOWITZ FRAMEWORK
Let us consider a fully invested, long-only portfolio consisting of a set of financial assets, and define a portfolio diversification strategy as a vector x, where x_i is the amount of capital we invest in the i-th asset. We optimize the portfolio diversification x to maximize the expected portfolio returns and minimize the portfolio risk by taking an educated guess on the probability distribution of future asset returns. According to the Markowitz framework [2], the educated guess on the probability distribution of asset returns r is given by the following assumption. Assumption 1. r ∼ N(µ, Σ), where r is the returns vector (r_i is the return of asset i), µ is the expected mean returns vector, and Σ is the covariance matrix. The expected mean returns µ and the returns covariance matrix Σ are estimated from past observations and assumed constant in the future.
Given a portfolio diversification x, the portfolio future returns r_p(x) originate from the linear combination of the returns of the individual assets. Given Assumption 1:

r_p(x) = x^T r ∼ N(x^T µ, x^T Σ x)   (1)

µ_p(x) = x^T µ   (2)

σ²_p(x) = x^T Σ x   (3)

Σ_i x_i = 1, x_i ≥ 0 ∀i   (4)

Equation 2 defines the portfolio return expectation µ_p, to be maximized by selecting x. Equation 3 defines the portfolio risk factor σ²_p, to be minimized. Equation 4 defines the optimization constraints of a fully invested, long-only portfolio. This optimization problem can be solved in closed form [5]. Figure 1 shows with a continuous line an example of the efficient frontier that can be identified in the risk-return space. Whereas Markowitz's identification of the efficient frontier has a very solid mathematical background, Assumption 1 hardly applies in the real world for the following reasons: a) returns of an individual asset are not normally distributed, b) interactions between different assets may be non-linear, whereas the covariance Σ captures only linear dependencies, and c) the future probability distribution of asset returns may deviate from the past (Figure 2). In this work, we use generative models to solve all three issues in one go with the proposed PAGAN model: a) PAGAN does not rely on any preliminary assumption about the probability distribution of individual asset returns, b) the non-linearities of the neural network implicitly embed non-linear interactions between different assets, and c) PAGAN is explicitly designed to take as input the current market situation and to model the future probability distribution of returns.
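For illustration, the numpy sketch below evaluates Equations 2 and 3 for a candidate diversification, estimating µ and Σ from past observations exactly as Assumption 1 prescribes. The toy return matrix is synthetic and merely stands in for real historical data.

```python
import numpy as np

def portfolio_stats(x, mu, cov):
    """Expected portfolio return (Eq 2) and variance (Eq 3)."""
    x = np.asarray(x)
    return x @ mu, x @ cov @ x

rng = np.random.default_rng(0)
history = rng.normal(0.001, 0.02, size=(250, 4))    # toy T x k daily returns
mu, cov = history.mean(axis=0), np.cov(history, rowvar=False)

x = np.full(4, 0.25)                 # fully invested, long-only (Eq 4)
mu_p, var_p = portfolio_stats(x, mu, cov)
```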
III. RELATED WORK

A. Portfolio Management
To address the limitations of MPT, a main breakthrough was the introduction of conditional volatility models that allow return volatility to vary through time [6]. These sophisticated statistical models assume that shifts in relationships will eventually return to normal, and therefore tend to fail when the shifts in returns or correlations are more permanent [7]. Recently, the application of machine learning to finance has drawn interest from both investors and researchers. However, current work has focused on adapting reinforcement learning (RL) frameworks to trading strategies [8]-[10]. Specifically, the market action of the RL agent defines the portfolio weights: an asset with an increased target weight is bought in additional amounts, and one with a decreased weight is sold. These works focus on intra-day decisions, while we tackle the challenge of medium- to long-term portfolio diversification. In addition, all these RL algorithms assume a single agent and an ideal trading environment, i.e., each trade is carried out immediately and has no influence on the market. In fact, the trading environment is essentially a multiplayer game with thousands of agents acting simultaneously and impacting the market in complicated ways. Considering the complexity of financial markets, combining machine learning with portfolio management remains relatively unexplored.
B. GANs on Time Series
Recently, generative adversarial networks (GANs) have become an active research area in learning generative models. GANs were introduced by Goodfellow et al. [11], where image patches are generated from random noise using two networks trained simultaneously. The discriminative net D learns to distinguish whether a given data instance is real or not, while the generative net G learns to confuse D by generating high-quality data. Powered by the learning capabilities of deep neural networks, GANs have achieved remarkable success in computer vision and natural language processing. However, to date there has been limited work adopting the GAN framework for continuous time-series data. One of the preliminary works in the literature produces polyphonic music with recurrent neural networks as both generator and discriminator [12], and another uses a conditional version of a recurrent GAN to generate real-valued medical time series [13]. In these methods, multiple sequences are treated as i.i.d. and fed to a uniform GAN framework. Stock prices are largely driven by the fundamental performance of individual companies, and the dynamic interactions of different stocks are embedded in their prices. Thus the i.i.d. assumption on multiple sequences is too restrictive for portfolio analysis. Li et al. [14] adopt Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) in both the generator and discriminator of a GAN to capture the temporal dependency of time series. Zhou et al. [15] apply the superposition of an adversarial loss and a prediction loss to improve a traditional LSTM-based forecast network. Yet, in Zhou et al. [15] there is no actual generative model in the sense that, once trained, their prediction model returns a single deterministic forecast given an input price sequence, whereas the proposed PAGAN model provides the capability of simulating possible future situations by sampling future prices from a posterior probability distribution learnt through the adversarial training process.
C. Stock Market Forecast
Stock market forecasting is one of the most challenging issues in time-series forecasting [3] due to the chaotic dynamics of the markets. Traditionally, statistical methods such as autoregressive models, moving-average models, and their combinations have been widely used. However, these methods rely on restrictive assumptions with respect to the noise terms and loss functions. During the past decades, machine learning models such as artificial neural networks [16], [17] and support vector regression [18], [19] have been applied to predict future stock prices and price movement direction. More recently, a deep convolutional neural network has been applied to predict the influence of events on stock movements [20]. Kuremoto et al. [21] present a deep belief network with restricted Boltzmann machines, and Bao et al. [22] investigate autoencoders and LSTMs for short-term stock price forecasting. However, with the continuous advancement of financial transactions and information systems, financial markets become increasingly efficient. This leads to increased market uncertainty and a greater challenge in forecasting market prices. Instead of predicting the market trend, we, for the first time, learn to model the uncertainty of the marketplace in its sophisticated multidimensional form for portfolio management.
IV. THE PAGAN METHODOLOGY

A. Overview
In this work we deal with deep-learning networks for time-series data. As observed by other authors [23], 1D convolutional networks are an effective tool for processing time series and can outperform traditional recurrent networks in terms of both result quality and performance. For this reason we process time series by means of convolutional networks and represent the asset-price trends as a matrix M with k rows (financial assets) and w columns (days), M ∈ R^{k×w}. The deep-learning networks process the time information by convolving along the time dimension.
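As a minimal illustration of this representation, the PyTorch snippet below treats the k assets as the input channels of a 1D convolution whose kernel slides along the w-day time axis. The channel count and kernel size are illustrative choices, not the parameters of Appendix A.

```python
import torch
import torch.nn as nn

k, w = 6, 40                        # assets x days, M in R^{k x w}
M = torch.randn(1, k, w)            # a batch of one price-trend matrix

# Assets map to input channels; the kernel slides along the time axis only.
conv = nn.Conv1d(in_channels=k, out_channels=32, kernel_size=5, padding=2)
features = conv(M)                  # shape (1, 32, w)
```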
Our aim in this work is to model the probability distribution of the asset-price trends for the future f days given the current market situation represented by the latest observed b days. We consider the matrix M to span the whole analysis length, w = b + f. Thus, M is composed of two parts: a) the known past M_b of length b, and b) the unknown future M_f of length f. We apply a generative deep neural network G to learn the probability distribution of future price trends M_f within the target future horizon f, given the known recent past M_b and a prior distribution of a random latent vector.
The generator synthesizes future trends as M̂_f = G(M_b, λ), where λ is the latent vector sampled from a prior distribution. In practice, λ represents the unknown future events and phenomena impacting the marketplace. The known past M_b is used to condition the probability distribution of the future M̂_f on the most up-to-date market situation. The generator G is a generative network whose weights are learnt so that M̂_f matches the probability distribution of M_f given the past M_b on a set of training data. The generator G is trained in adversarial mode against a discriminator D with the goal of minimizing the Wasserstein distance between the synthetic data M̂_f and the real data M_f, based on historical observations. The training process aims to approximate the real posterior probability distribution P(M_f | M_b), given the prior distribution of λ. In this work we use the normal distribution for the prior λ.
To implement the adversarial training process, we consider a discriminator network D that takes as input the overall price matrix M, i.e., the concatenation of the conditioning M_b and either the synthetic data M̂_f or the real data M_f. The discriminator output is a critic value c = D(M). The discriminator is trained to minimize c for real data and maximize it for synthetic data, whereas the training goal of the generator G is to minimize c for the synthetic data. In this work we apply the WGAN-GP methodology [24].
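The sketch below shows the WGAN-GP critic objective in PyTorch, with the gradient penalty computed on interpolates between real and synthetic samples as in [24]. The stand-in linear critic is ours, and the sign convention (low critic values for real data, as stated above) follows the text; the actual PAGAN networks are those of Appendix A.

```python
import torch
import torch.nn as nn

k, w = 6, 40
D = nn.Sequential(nn.Flatten(), nn.Linear(k * w, 1))   # stand-in critic

def critic_loss(real, fake, gp_weight=10.0):
    """WGAN-GP critic loss: push c low for real data, high for synthetic
    data, plus a gradient penalty on interpolates [24]."""
    eps = torch.rand(real.size(0), 1, 1)
    inter = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(D(inter).sum(), inter, create_graph=True)[0]
    penalty = ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
    return D(real).mean() - D(fake).mean() + gp_weight * penalty

loss = critic_loss(torch.randn(8, k, w), torch.randn(8, k, w))
```

The generator, in turn, would be trained to minimize D(fake).mean(), matching the stated goal of minimizing the critic value for synthetic data.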
B. Deep-learning architecture
Data normalization. We consider the adjusted close price p of each financial asset. During training, given a time window of w = b + f days, we normalize the prices p of each asset to fit in the range [−1, 1] over the initial b days. The normalization output is the daily asset price variation p(t) − p(t−1) computed in this normalized scale. Whereas normalizing to the range [−1, 1] enables us to expose to the neural networks values limited within a reasonable range, the normalization removes from the data information about the price variability within the given window w. Since the normalized values of M_b always range within [−1, 1], the presence of long tails in the data is normalized out. To feed the neural network information about the observed price variability and the possible presence of long tails, during the normalization procedure we also compute an analysis value a for each asset as a function of p_max, p_min, and p_mean, which are respectively the maximum, minimum and mean values of the price p in M_b for that asset. Let us define the analysis vector A as the vector collecting a over the multiple assets. Figure 4 shows the architecture of the PAGAN generator and discriminator and explicitly shows the presence of this non-traditional normalization process (Norm). Generator. The generator G takes as input the price sequence M_b and the latent vector λ (Figure 4a). G is composed of two consecutive parts: a) a conditioning network to compute an inner representation of the past price sequence M_b, and b) a simulator network to generate the simulation of future price trends.
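A minimal numpy sketch of this normalization follows; it rescales the first b days of a price series to [−1, 1] and returns the daily variations in that scale. The exact formula combining p_max, p_min and p_mean into the analysis value a is not recoverable from the text, so the sketch deliberately omits it.

```python
import numpy as np

def normalize_window(p, b):
    """Rescale one asset's prices so the first b days span [-1, 1], then
    return the day-to-day variations p(t) - p(t-1) in that scale."""
    p = np.asarray(p, dtype=float)
    lo, hi = p[:b].min(), p[:b].max()
    p_norm = 2.0 * (p - lo) / (hi - lo) - 1.0
    # The analysis value `a` built from p_max, p_min and p_mean of the
    # first b days is part of the method, but its exact formula is not
    # given here, so it is omitted from this sketch.
    return np.diff(p_norm)
```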
The conditioning input is the most recent market trend M_b. After the normalization, we apply a set of 1D convolution and dense layers, as described in detail in Appendix A.
The conditioning output depends only on M_b and is used to condition the probability distribution of the synthetic data M̂_f. The simulator network takes as input the conditioning output and the latent vector λ, and generates as output a simulation of future market prices M̂_f. The conditioning and simulator networks together implement the generator G and are trained at once against the discriminator D with the traditional adversarial approach.
Discriminator. The discriminator takes as input either a) the real data M, the concatenation of M_b and M_f, or b) the synthetic data M̂, the concatenation of M_b and M̂_f. The discriminator output is computed by the network shown in Figure 4b.
Architectural parameters. Appendix A in the supplementary material describes all parameters and other details of the generator and discriminator networks.
C. Portfolio optimization
Once the training process is completed, the generator G is able to synthesize realistic future trends M̂_f = G(M_b, λ). We use these synthetic simulations to numerically estimate the expected risks and returns of different portfolio diversification options x. We thus execute a portfolio optimization on the estimated posterior probability distribution P(M_f | M_b), given the known prior distribution of λ and the conditioning M_b. For a given conditioning M_b, let us consider a set S of n simulations M̂_f ∈ S sampled from P(M_f | M_b) by evaluating the generative model G(M_b, λ) on different extractions of λ. Let us also define the return vector function r(M̂_f), where the i-th element r_i is the return obtained by the i-th asset at the end of the simulation horizon for one simulation M̂_f, given by r_i = e_i/s_i − 1, where e_i is the asset price at the end of the simulation M̂_f and s_i is the price at the beginning of the simulation. Since the constant in the definition of r_i does not impact the optimization results, in this work we use r_i = e_i/s_i (Equation 8). The portfolio return achieved with the diversification x for a given simulation M̂_f is r_p(x, M̂_f) = x^T r(M̂_f). The simulations M̂_f ∈ S, sampled from the predicted probability distribution, are used to estimate the optimization objectives numerically. The portfolio optimization problem is defined as in the traditional Markowitz optimization approach (Section II), yet it is executed on the predicted future probability distribution, which is non-normal and includes non-linear interactions between the different assets. The optimization goal is to identify the configurations of x that maximize the expected returns and minimize a risk function θ(M_b, x). Both θ and µ_p are estimated on the basis of the simulation samples M̂_f ∈ S. In this framework, the risk function θ can be any metric, such as the value at risk [25] or the volatility. Without loss of generality, we use the estimated volatility (variance), which enables us to evaluate the approach directly against the traditional Markowitz methodology. The optimization problem is thus formalized as:

max_x µ_p(x) = E[r_p(x, M̂_f)]   (10)

min_x θ(M_b, x) = Var[r_p(x, M̂_f)]   (11)

s.t. Σ_i x_i = 1   (12)

x_i ≥ 0 ∀i   (13)

where Equations 10 and 11 are the target objectives and Equations 12 and 13 the feasibility constraints. We solve the optimization problem by means of a multi-objective genetic algorithm, NSGA-II [26], to provide a trade-off between expected returns and risk. The output of NSGA-II is a set X(S) that depends on the simulations S. The elements x ∈ X are Pareto-optimal diversifications trading off returns and risk. The decision of which diversification strategy x ∈ X to use is left to the end user depending on their own goals.
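The numpy sketch below shows how the two objectives of Equations 10 and 11 can be estimated from a set of simulations. The random candidate sweep merely illustrates the search space that NSGA-II would explore, and the simulation values are synthetic placeholders.

```python
import numpy as np

def estimate_objectives(sim_returns, x):
    """Monte Carlo estimate of expected return (Eq 10) and risk (Eq 11).

    sim_returns : (n, k) array with r_i = e_i / s_i per simulated future
    x           : (k,) diversification with sum(x) == 1 and x >= 0
    """
    rp = sim_returns @ x            # portfolio return of each simulation
    return rp.mean(), rp.var()

rng = np.random.default_rng(1)
sims = 1.0 + rng.normal(0.01, 0.05, size=(250, 5))    # toy simulation set S
candidates = rng.dirichlet(np.ones(5), size=1000)     # feasible x samples
objectives = np.array([estimate_objectives(sims, x) for x in candidates])
```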
V. EXPERIMENTAL RESULTS

A. Portfolios setup
We use publicly available data from Yahoo Finance [27] to back-test the proposed PAGAN approach. Price matrices M representing trends over windows of length w are drawn from the period 2006-05-01 to 2015-01-01 for training, and from 2015-01-01 to 2018-06-30 for testing. The proposed PAGAN methodology learns the generative model G from the training data and applies it to optimize portfolios over the test data. In this work, we investigate two different portfolios representing different geopolitical areas and industrial segments. To select the assets to be included in the portfolios, we mainly looked at the following three criteria. a) Data availability: we only include assets for which data are available from at least 2006 in the Yahoo Finance data source. b) Currency homogeneity: whereas we consider different portfolios with different currencies, within a single portfolio we only include assets traded in one currency. This is not an actual limitation, yet it facilitates our evaluation process. c) Data correctness: we identified some erroneous data in Yahoo Finance, e.g. NaN values or price fluctuations of 10× lasting a single day. Whereas these errors are rare, we systematically discard the associated assets.
In the considered portfolios we include a set of lower-risk securities (e.g. the overall market index). This is a common approach in portfolio optimization, and it gives us a wide range of options to trade off between low-risk securities and high-return ones. The considered portfolios are detailed in Table I.
B. Benchmarks
Markowitz. We benchmark the proposed PAGAN approach against the Markowitz modern portfolio optimization [5]. Since the genetic algorithm in PAGAN returns a discrete set of optimal diversifications x ∈ X, whereas the Markowitz methodology solves the optimization problem in a continuous form, to apply a simple and fair comparison we define a set of discrete risk levels ζ ∈ {1, .., Z}, with Z an arbitrary integer; in this work, Z = 25. We define the target return r̄(ζ) for risk level ζ as follows. The lowest risk level has a return goal of r̄(1) = 1; given Equation 8, this goal defensively aims not to lose. We set a relatively large value for the maximum return level r̄(Z); in this work we use r̄(Z) = 2r_max − 1, where r_max is the maximum return observed for any asset in the portfolio along the training period. The factor 2 enables us to search for higher returns than those observed during training, if these are considered possible by the generator G.
The constant −1 is introduced to compensate for the fact that r = 1 is a non-losing, non-winning policy (Equation 8). Target returns for the other risk levels r̄(ζ) are uniformly spread between r̄(1) and r̄(Z). Once the efficient solutions X are found by the genetic algorithm, we select an optimal diversification x_ζ for each risk level ζ according to its target return r̄(ζ).

Default. Given the continuously changing market situations, Assumption 1 used by the Markowitz methodology hardly applies in general. Yet, there are few other well-established approaches to predict the probability distribution of future market returns, which makes the Markowitz approach a widely used and well-accepted method. The proposed PAGAN methodology is explicitly meant to cope with this lack of sound alternative methodologies. Though, in recent years, markets have performed very differently from the past because of the ending of QE, geopolitical situations, trade wars, and scandals such as the Volkswagen emission scandal in September 2015. The return probability distribution differs significantly between the train and test periods. For this reason we include as a second benchmark (default) a simple random optimization to verify how well one could perform by investing in randomly selected assets. Every day we sample Z diversifications x at random. We assign a risk level ζ to each of these diversifications by sorting and enumerating them according to their expected returns as defined in Equation 2.
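A small sketch of this risk-level construction is given below. The linear spacing of targets follows the "uniformly spread" description, while the nearest-expected-return selection rule is our assumption, since the original selection equation is not reproduced in the text.

```python
import numpy as np

Z = 25
r_max = 1.30                       # best asset return seen during training (toy)
targets = np.linspace(1.0, 2.0 * r_max - 1.0, Z)   # r_bar(1)=1 .. r_bar(Z)

def pick_diversification(pareto_mu, zeta):
    """Index of the Pareto solution whose expected return is closest to
    the target of risk level zeta (assumed selection rule)."""
    return int(np.argmin(np.abs(np.asarray(pareto_mu) - targets[zeta - 1])))
```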
C. Diversification results
The outputs of the generative model are financial market simulations, such as the ones in Figure 5. Each simulation is carried out for all assets at once and represents a possible behavior of the marketplace. The goal is to draw possible scenarios and to understand how the prices of different assets interact, i.e. what may happen to one asset when a given situation presents itself for another. PAGAN aims to capture non-linear dependencies between different trends and samples simulations from the resulting multidimensional probability distribution. For example, Sim 4 in Figure 5 (thick dashed line) shows a possible situation where both market indices (ˆGDAXI, ˆFCHI) have a sudden loss in the first half of April, followed by a quick rebound. At the same time, UG.PA and FCA.MI accumulate significant losses by the end of the simulation (around the end of April). In this case (Sim 4) the best choice would have been to buy VOW3.DE. Yet, each simulation is just one possible realization of the probability distribution learnt by PAGAN. The goal of PAGAN is to enable us to investigate many different simulations automatically in order to organize a portfolio diversification strategy. The solutions of the optimization problem defined in Equations 10-13 generate a trade-off in the expected risk-return objective space, given the probability distribution modeled by a PAGAN-generated simulation set S. In this work, we set the number of simulations in S to 250.
We evaluate how good the proposed approach is at diversifying the portfolio for different time horizons f. In particular, we address horizons of one week (5 days), two weeks (10 days), and one month (20 days). Figure 6 shows the return-risk trade-off achieved during the test period by PAGAN and the reference benchmarks (Markowitz, default).
Since the market situation during the test period (starting in 2015-01) significantly diverges from the training period (ending in 2014-12), the returns for Markowitz significantly decrease when aiming at high-risk solutions. This happens because Markowitz assumes that the future mean returns of a given asset equal the past ones (Assumption 1). This lets Markowitz buy risky assets that were profitable during the training period but are losing during the test period. PAGAN does not assume the future probability distribution to equal the past one, because it learns to forecast the probability distribution given the most recent market situation. This significantly improves PAGAN's results, which generally provide higher returns when accepting a higher risk. Note that, given the very different market situation of the last few years, the default approach of randomly buying assets provides not-too-bad solutions. Yet default is not capable of systematically trading off risk for returns, leading to an unstructured cloud of solutions in the risk-return objective space. Default solutions are all drawn purely at random and their final results do not differ much from one another. Figure 7 shows the diversifications proposed by the PAGAN and Markowitz approaches for different risk levels, given a one-month horizon f = 20. These diversifications are averaged along the whole test period. PAGAN presents a smoother behavior. In fact, given the short-term conditioning window M_b, PAGAN models the future probability distribution P(M_f | M_b), enabling it to quickly adapt its diversifications to continuously and quickly changing market conditions. As an example, around the Volkswagen scandal (Figure 8a), PAGAN allocates significantly less capital to VOW3.DE than the overall average allocated along the whole test period (Figure 7c). In October, after the shock brought by the scandal to the automotive industry, PAGAN shows a defensive strategy by always allocating more than 50% of the overall capital to low-volatility indices (ˆFCHI, ˆGDAXI) to avoid losing capital on higher-risk assets.
D. Realized performance
Let us analyse the financial performance of the considered optimization strategies by tracking the value of the corresponding portfolios along the test period. We consider three settings for both the Markowitz and PAGAN approaches: a defensive setting with risk level ζ = 5, a balanced setting with ζ = 13, and an aggressive setting with ζ = 21. Over the considered Z = 25 risk levels, these settings account respectively for 20%, 50%, and 80% of the risk range. Since the performance of the default approach does not vary much with the risk level (Figure 6), we only consider the balanced version of the default approach. We also include in the analysis portfolio-specific benchmarks such as a) the Dow Jones Industrial Average ETF tracker (IYY) and the US Treasury Bond ETF tracker (SHY) for usgen, and b) the German and French market indices (ˆGDAXI, ˆFCHI) for eucar. We initialize each portfolio with a unitary value. We consider an optimization horizon of 20 days to decide our diversification strategy, yet we allow trading every day to keep the diversification at the suggested level. In fact, a sudden variation in the price of an asset changes the ratio of capital invested in this asset with respect to the others if the number of shares is not adjusted accordingly. Figure 9 shows the simulation results along the test period. PAGAN dominates the other approaches in terms of final portfolio value for the aggressive and balanced settings. In the defensive setting, the goal is not to maximize the portfolio returns but rather to reduce the risk, that is, the portfolio volatility (standard deviation of returns). Figure 10 depicts the financial performance of the PAGAN and Markowitz approaches for the different settings: aggressive (A), balanced (B), and defensive (D). PAGAN-A and PAGAN-B clearly dominate in terms of monthly returns. This comes, however, at the cost of a higher risk (volatility). For a fairer comparison of the two approaches, we examine the annualized Sharpe ratio ρ, defined as ρ = (r − r_f)/σ, where r is the average annual return, r_f is the risk-free return, and σ is the volatility (standard deviation of annual returns). In terms of Sharpe ratio (the higher, the better), PAGAN surpasses the reference approach for all settings (A, B, D) for usgen, and for the A and B settings for eucar.
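For reference, a one-function numpy sketch of the annualized Sharpe ratio as defined above:

```python
import numpy as np

def sharpe_ratio(annual_returns, risk_free=0.0):
    """Annualized Sharpe ratio: (mean annual return - risk-free) / volatility."""
    r = np.asarray(annual_returns, dtype=float)
    return (r.mean() - risk_free) / r.std(ddof=1)
```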
E. Repeatability considerations
GANs are known to be unstable; adversarial training often does not converge towards an equilibrium because of the non-linear dynamics introduced by the differential equations implementing the learning algorithm [28]. This lets the PAGAN model change during training. Furthermore, PAGAN results depend on the randomness introduced by the weight initialization. To analyze the repeatability of the presented results, we train several PAGAN models and compare them with Markowitz. This reference approach is deterministic and, given the training data, it always produces the same results. We proceed as follows.
Given a trained PAGAN model, we analyze the return-risk of the PAGAN diversifications along the test period and compare their results with the diversifications suggested by Markowitz. Let σ^M_ζ and σ^P_ζ be the volatilities observed for the Markowitz and PAGAN diversifications at risk level ζ, and let r^M_ζ and r^P_ζ be the related returns. We consider that PAGAN dominates Markowitz at risk level ζ if PAGAN's results are better for at least one of the two metrics and not worse for the other, i.e., r^P_ζ ≥ r^M_ζ and σ^P_ζ ≤ σ^M_ζ, with at least one of the two inequalities strict. For every trained PAGAN model, we compute the percentage of risk levels ζ for which PAGAN dominates Markowitz (PAGAN2M), and the opposite (M2PAGAN). Figure 11 shows the distribution of these metrics as box plots for the two portfolios when considering different optimization horizons. In general, PAGAN2M increases for longer horizons, demonstrating the difficulties of Markowitz in modeling long-term situations and the advantage of using the proposed PAGAN model. PAGAN systematically surpasses Markowitz on the usgen portfolio, with values of PAGAN2M much higher than M2PAGAN. On the eucar portfolio, PAGAN2M and M2PAGAN partially overlap. For eucar, Markowitz sometimes achieves performance comparable to PAGAN. Yet, when aiming at a horizon of 20 days, PAGAN on average outperforms Markowitz. For a fair comparison, all results presented in the previous sections of this paper were gathered starting from the PAGAN model that achieved the median result for the PAGAN2M metric at a horizon of 20 days. Additional results can be found in Appendix A.
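The dominance test and the PAGAN2M percentage can be computed as in the sketch below, which formalizes the "better on at least one metric, not worse on the other" criterion; the function name is ours.

```python
import numpy as np

def pagan2m(r_p, s_p, r_m, s_m):
    """Percentage of risk levels where PAGAN dominates Markowitz:
    returns no lower AND volatility no higher, strictly better on one."""
    r_p, s_p, r_m, s_m = map(np.asarray, (r_p, s_p, r_m, s_m))
    no_worse = (r_p >= r_m) & (s_p <= s_m)
    strictly_better = (r_p > r_m) | (s_p < s_m)
    return 100.0 * np.mean(no_worse & strictly_better)
```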
VI. CONCLUSION
In this work we presented a pioneering study of portfolio analysis with generative adversarial networks (PAGAN). A key novelty is that the proposed approach is explicitly intended to address the increasing challenges of a highly efficient market, i.e. when assuming that all information available today is already represented in current asset prices, leaving limited room for medium- to long-term price trend forecasts. We introduced the Markowitz framework to demonstrate how, under these conditions, it is still possible to take an educated guess on the probability distribution of future returns and use this guess to decide on a diversification strategy that optimizes a portfolio by minimizing its risk and maximizing its expected returns. Relying on these considerations, we proposed to apply a generative network to learn to model the market uncertainty in its complex multidimensional form, so as to let the deep-learning system embed non-linear interactions between different assets. The results demonstrate a clear advantage with respect to the state of the art in portfolio optimization theory. In particular, the proposed approach exposes to the final user the possibility of selecting a target risk level and suggests a specific diversification given the current market situation. Compared to the Markowitz modern portfolio optimization approach, we systematically achieve better performance in terms of both expected return maximization and risk minimization.
APPENDIX
In this section we discuss the details of the PAGAN networks and optimizer parameters. Detailed architectural parameters for the generator G and the discriminator D are listed in Tables II and III. Optimizer. The PAGAN generator and discriminator (Figure 4) are trained with the Adam optimizer with learning rate 2×10⁻⁵ and β₁ = 0.5. PAGAN models have been trained for 15,000 epochs.
Generator. The conditioning network in the generator G (Figure 4a) is composed of 4 consecutive convolutional layers with a constant number of channels. We use a convolution stride of two to iteratively compress the inner representation of the backward sequence M_b by halving the time resolution at each layer. In generative models, compressing the resolution by means of strided convolutions is very common [29]. The output of these convolutional layers is then processed by a dense layer and concatenated with the analysis vector A and the latent vector λ.
The simulator network (Figure 4a) includes a first dense layer followed by a sequence of transpose-convolution layers. As in traditional GANs [29], the PAGAN generator applies transpose convolutions with strides of two to iteratively compress the number of channels and expand the time resolution. We apply two transpose-convolution layers. The output of the last transpose convolution has a number of channels equal to the number k of assets in the target portfolio, and a time resolution equal to the simulation horizon f. At every layer of the generative transpose convolutions we halve the number of channels and double the time resolution. Thus the output of the dense layer right before the first transpose convolution has by construction 2^l × f/2^l × k = f × k outputs, where l is the number of transpose-convolution layers.
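The channel/resolution arithmetic of the simulator can be sketched in PyTorch as follows. The latent-plus-conditioning width of 128, the kernel size, and the ReLU activations are illustrative assumptions; the actual parameters are listed in Tables II and III.

```python
import torch
import torch.nn as nn

k, f, l = 6, 20, 2                      # assets, horizon, transpose-conv layers
c0, t0 = (2 ** l) * k, f // (2 ** l)    # so that c0 * t0 = f * k

dense = nn.Linear(128, c0 * t0)         # conditioning+latent width assumed 128
simulator = nn.Sequential(              # each layer halves channels, doubles time
    nn.ConvTranspose1d(c0, c0 // 2, kernel_size=4, stride=2, padding=1),
    nn.ReLU(),
    nn.ConvTranspose1d(c0 // 2, k, kernel_size=4, stride=2, padding=1),
)
z = torch.randn(1, 128)                 # stand-in for conditioning || lambda
m_hat_f = simulator(dense(z).view(1, c0, t0))   # shape (1, k, f)
```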
Discriminator. The discriminator (Figure 4b) is implemented with a traditional sequence of convolutional layers that iteratively halves the time resolution and doubles the number of channels [29]. The last dense layer has a single output, a critic value as proposed in WGAN-GP [24]. We apply spectral normalization [30] to the kernel weights of the discriminator, since this procedure improves the convergence and stability of GANs [31], [32].
Adversarial training defines a non-linear dynamic system where the generator continuously adapts to the discriminator and vice versa. This leads to a series of convergence problems, and results may change depending on the random initialization. For this reason we have retrained several PAGAN models to make sure that the results are meaningful. The results presented in Section V refer to the PAGAN models that produced the median value of the PAGAN2M metric (Figure 11); thus they are fair and representative. We here introduce some additional results obtained from other random initializations under the same experimental settings. Figure 12 shows the risk-return trade-off for the PAGAN models returning the 80th, 50th, and 20th percentiles of the PAGAN2M metric for the two portfolios (the 50th percentile is the median, as in Figures 6c, 6f). PAGAN's results at the 80th percentile of the PAGAN2M metric are fairly similar to the ones at the 50th percentile. Yet, not all PAGAN models return good results; for example, Figure 12c shows a situation where PAGAN is not able to provide good solutions and only suggests low-risk diversifications without exposing a good risk-return trade-off (PAGAN solutions have been highlighted in red). In this case, the problem is strictly related to the adversarial training: the PAGAN generator has mode collapsed and keeps on repeating the same simulation M̂_f, which is not representative of the actual probability distribution of M_f.

Fig. 12: Return-risk trade-off measured on the test period by varying the risk levels (different points), considering a 20-day optimization horizon. The PAGAN models generating these results correspond to the 80th, 50th, and 20th percentiles of the PAGAN2M metric (Figure 11).

Figure 13 depicts example simulations obtained with the PAGAN models related to the results in Figure 12.
Mode collapse happens rarely with our experimental settings (Appendix A) and only in the lower tail of the PAGAN2M metric (Figure 11), i.e. for the least performing PAGAN models. Mode collapse can be easily identified, e.g. with a graphical inspection of the simulations (Figure 13c). Removing the collapsed models from our analysis would further improve the results in favour of the PAGAN approach. Yet, for a fair evaluation, all the PAGAN models, including the collapsed ones, are considered in this work, and their results are captured in the plots of Figure 11. Mode collapse is a well-known problem in GANs [11], [30]. To the best of our knowledge, there is not yet a generally accepted and well-established solution.
"year": 2019,
"sha1": "c2aa0847251865be4d956be7e4a41580f74c3f1e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1909.10578",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c2aa0847251865be4d956be7e4a41580f74c3f1e",
"s2fieldsofstudy": [
"Computer Science",
"Business",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Economics"
]
} |
221619270 | pes2o/s2orc | v3-fos-license | Advances in optical gastrointestinal endoscopy: a technical review
Optical endoscopy is the primary diagnostic and therapeutic tool for management of gastrointestinal (GI) malignancies. Most GI neoplasms arise from precancerous lesions; thus, technical innovations to improve detection and diagnosis of precancerous lesions and early cancers play a pivotal role in improving outcomes. Over the last few decades, the field of GI endoscopy has witnessed enormous and focused efforts to develop and translate accurate, user‐friendly, and minimally invasive optical imaging modalities. From a technical point of view, a wide range of novel optical techniques is now available to probe different aspects of light–tissue interaction at macroscopic and microscopic scales, complementing white light endoscopy. Most of these new modalities have been successfully validated and translated to routine clinical practice. Herein, we provide a technical review of the current status of existing and promising new optical endoscopic imaging technologies for GI cancer screening and surveillance. We summarize the underlying principles of light–tissue interaction, the imaging performance at different scales, and highlight what is known about clinical applicability and effectiveness. Furthermore, we discuss recent discovery and translation of novel molecular probes that have shown promise to augment endoscopists' ability to diagnose GI lesions with high specificity. We also review and discuss the role and potential clinical integration of artificial intelligence‐based algorithms to provide decision support in real time. Finally, we provide perspectives on future technology development and its potential to transform endoscopic GI cancer detection and diagnosis.
Introduction
Cancers in the gastrointestinal (GI) tract, including the esophagus, stomach, small intestine, and colon, are among the 10 most common cancers worldwide, imposing a significant healthcare burden globally [1]. Carcinogenesis in the GI tract typically involves a cascade of molecular dysregulation and architectural alterations; thus, a significant proportion of the resulting morbidity, mortality, and healthcare cost can potentially be prevented through detection and treatment of precursor lesions and early cancers. To detect cancer at early stages, a combination of screening and surveillance programs, including both endoscopic and nonendoscopic tests, has been developed and implemented. Recommended screening and surveillance programs vary by population risk and geography.
Current strategies for early detection of esophageal neoplasia depend on the type of esophageal cancer that is most prevalent. Globally, 90% of esophageal cancers are categorized as esophageal squamous cell carcinoma (ESCC); ESCC is most prevalent in certain geographical regions such as East Asia, Iran, and Africa [1,2]. In these regions, screening endoscopy in conjunction with Lugol's chromoendoscopy (CE) is advocated as the gold standard modality based on its high sensitivity [3,4], although limited specificity remains a challenge. In Western countries, esophageal adenocarcinoma (EAC) is the predominant histologic subtype. Population-based screening endoscopy is not currently recommended due to the low incidence. However, the last few decades have witnessed dramatically increasing rates of incidence of EAC [5], and guidelines in the United States and Europe recommend white light endoscopic surveillance of patients with Barrett's esophagus (BE), the only known precursor to EAC [6,7]. Nonetheless, conventional white light endoscopy (WLE) is found to frequently miss early cancers in BE [8]. In addition to standard endoscopy, more affordable and less invasive endoscopic and tissue sampling approaches, such as ultrathin endoscopy and Cytosponge combined with biomarkers [9], are also under evaluation as alternative methods for esophageal cancer screening.
For gastric cancer, countries with a high incidence such as Japan and South Korea have implemented national screening programs [1,10,11]. While less invasive nonendoscopic screening methods such as the barium upper GI series can be used, endoscopy remains the primary tool in high-incidence countries, owing to higher cancer detection rates and biopsy capability [12,13]. In regions with a low or intermediate incidence, endoscopy is only recommended for individuals at an increased risk for gastric cancer [14,15].
Colorectal cancer is the third most common cancer globally and ranks second in mortality [1]. Adenomatous polyps are the most important precursor lesions for colorectal cancer, and their detection and removal through polypectomy are associated with reduced cancer incidence and morbidity [16,17]. Large-scale screening is commonly practiced in North American and European countries [18], offering a range of nonendoscopic and endoscopic screening tests, including the guaiac-based fecal occult blood test, fecal immunochemical test, sigmoidoscopy, and colonoscopy. The clinical adoption of these modalities varies in different countries [19], but colonoscopy is the only gold standard screening tool that offers direct visualization, same-session detection, and removal of polyps across the entire colon. With recent advances in optical endoscopy, there is a significant interest in optical diagnosis of diminutive (≤ 5 mm) colorectal polyps that represent a vast majority of all polyps, yet are only linked with minimal risks of malignant progression [20].
Currently, standard endoscopes remain the primary diagnostic and therapeutic tool for GI cancer screening and surveillance (Fig. 1). Standard endoscopy, in which either the upper or lower GI tract of a sedated patient is thoroughly examined with a white light endoscope by a trained endoscopist, allows for imaging, biopsies, and treatment in a single endoscopic session. However, despite its central role, many studies report that standard endoscopy frequently misses GI lesions at early stages [8,21,22], primarily due to the inability to visualize subtle architectural changes under conventional white light illumination. The development of novel endoscopic technologies with improved accuracy, enhanced sampling, and minimal invasiveness could play a pivotal role in advancing early detection and clinical management of GI lesions. Figure 1 highlights existing and novel endoscopic technologies that offer multimodal imaging of GI lesions to improve early detection at macroscopic and microscopic scales. Overall, macroscopic modalities can interrogate a large field of view (FOV) and serve as "red-flag" techniques to rapidly survey the entire lumen and identify suspicious lesions. In a two-step protocol, modalities with microscopic resolution can be used to further examine suspicious lesions with cellular or subcellular detail (10 µm or better). As a minimally invasive macroscopic modality, capsule endoscopy offers an appealing option for "red-flag" imaging. Alternatively, high-resolution endoscopic modalities such as confocal laser endomicroscopy (CLE) and endocytoscopy can provide histologic information in real time.
Together, this wide range of technologies in Fig. 1 provides powerful tools for endoscopists to examine the GI tract both macroscopically and microscopically, forming the basis for the next generation of GI endoscopy. As summarized in Table 1, to help guide translation of promising new technologies, the American Society for GI Endoscopy (ASGE) created the Preservation and Incorporation of Valuable Endoscopic Innovations (PIVI) performance thresholds that new technologies for assessment of Barrett's esophagus and colorectal polyps should meet prior to adoption [23,24].
In this technical review, we provide an overview of the current status of existing and emerging optical imaging modalities for GI endoscopy. Previous reviews of endoscopic imaging techniques in the GI tract have been provided by the ASGE technology committee, the European Society of GI Endoscopy (ESGE) research committee, and other authors [18,24-28], with a strong focus on the performance of commercially available and commonly studied modalities such as CE and CLE, while providing detailed descriptions of available evidence of their clinical performance. In this review, we focus on the technical aspects and clinical applicability of optical endoscopy, highlighting recent advances in device and optical design, molecular probes, and machine-learning algorithms for image interpretation. In addition to commercially available platforms, we also discuss emerging technologies at earlier stages of clinical translation, summarizing how they exploit different dimensions of light-tissue interaction to identify lesions at early stages of GI cancer progression. For each modality, we discuss the imaging capabilities while highlighting the fundamental principles of light-tissue interactions and their implications for clinical usefulness. We review imaging systems with various novel form factors (examples in Fig. 2), including capsules, balloon catheters, and probes, and discuss how they can contribute to multimodal and less invasive imaging of the GI tract with enhanced contrast and resolution. We also review the clinical translation and integration of novel molecular probes and machine-learning algorithms.

[Fig. 1: Optical endoscopic techniques for macroscopic and microscopic imaging of the GI mucosa. Existing macroscopic modalities include high-definition endoscopy, ultrathin endoscopy, and capsule endoscopy. Microscopic resolution can be achieved using OCT, endocytoscopy, CLE, and HRME. Reproduced from [66,81,136] with permission from Elsevier (white light and chromoendoscope, capsule endoscope, and endocytoscopy, respectively), from [55] with permission from John Wiley and Sons (ultrathin endoscope), from [137] by permission from Springer Nature (OCT), and from [69] with permission from © Georg Thieme Verlag KG (CLE).]

In Barrett's esophagus, for example, CE is recommended as an adjunct to WLE [29]. Similarly, CE using Lugol's staining is generally practiced for ESCC screening in many Asian countries. In screening colonoscopy, the use of high-definition endoscopy with advanced modalities such as virtual CE has also been recommended by the ESGE [18].
Macroscopic imaging systems
Over the last few decades, the technical performance of WLE has benefited from improvements in imaging sensors and optics that offer greater pixel density and higher magnification. Since the late 1990s, high-definition endoscopes have become widely available, enabling more meticulous examination of mucosal patterns and replacing standard-definition endoscopes as the current modality of choice. Standard high-definition endoscopy has been further complemented by less invasive ultrathin endoscopes and capsule endoscopes, and the detailed imaging features and specifications are compared in Table 2. Recent commercial systems are also augmented by advanced imaging features, including magnification endoscopes with up to 150-fold optical magnification and various techniques to enhance mucosal features such as CE. While data comparing high-definition with standard-definition WLE are relatively scarce, high-definition endoscopes have been used in numerous clinical studies, especially in tandem with virtual or dye-based CE.
Future development of high-definition WLE can benefit from further improvements in the optical FOV and resolution, as well as 3D imaging capability. As shown in Table 2, current high-definition endoscopes can support up to 2K video acquisition with an angular FOV of 140-170°. To better survey the GI anatomy, wide FOV endoscopes are being developed to provide an extra wide angular or near-panoramic view (245-330°), and preliminary studies have shown their utility for better colon polyp detection and improved visualization of occult regions in the upper GI tract [30][31][32][33]. In addition, endoscopes with UHD resolution (4k or 8k vs. current 2k) and 3D imaging capabilities, although mostly demonstrated in laparoscopic applications or animal models [34][35][36], also have the potential to enable more detailed mucosal examination and enhance surgical maneuverability in the GI tract.
Virtual chromoendoscopy
In contrast to WLE based on the entire visible spectrum, virtual CE exploits spectral variations in the interaction of light with tissue to highlight mucosal features, such as blood vessels or changes in light scattering. Because spectral imaging can be achieved by either optical filtering or postprocessing without modifying the imaging optics, virtual CE is seamlessly integrated in most current endoscopic systems. At the touch of a button, it provides a convenient means for endoscopists to investigate lesions with enhanced contrast. First-generation virtual CE modalities include narrowband imaging (NBI; Olympus, Tokyo, Japan), Fuji Intelligent Chromo Endoscopy (FICE; Fujinon, Tokyo, Japan), and iScan (Pentax, Tokyo, Japan). Newer modalities, such as blue laser imaging, have also been recently introduced. As summarized in Table 3, NBI enhances the contrast of mucosal features and microvascular networks by illuminating the GI surface with blue (415 nm) and green (540 nm) light; FICE and iScan, in comparison, are based on postprocessing algorithms to improve vessel visualization and tissue type differentiation. Among these modalities, NBI is the most commonly studied virtual CE modality; thanks to its wide availability, as well as established interpretation criteria with substantial interobserver agreement [37], the modality is well accepted, especially among experts.

[Figure 2B-D reproduced from [97,138,139] with permission from Elsevier.]
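The vendors' postprocessing pipelines are proprietary, but the spectral-estimation idea behind FICE/iScan-style virtual CE can be sketched as a fixed linear map from RGB to pseudo-narrowband channels; the matrix below contains illustrative placeholder coefficients, not the actual vendor values.

```python
import numpy as np

# Illustrative (not vendor) spectral-estimation matrix: each row maps an
# RGB pixel to a pseudo-narrowband channel. Weighting toward blue and
# green mimics the 415/540 nm bands where hemoglobin absorbs strongly,
# so microvasculature appears darker in the enhanced image.
M = np.array([[0.1, 0.3, 1.2],   # pseudo-415 nm channel (blue-weighted)
              [0.2, 1.1, 0.2],   # pseudo-540 nm channel (green-weighted)
              [0.8, 0.3, 0.1]])  # residual channel

def virtual_ce(rgb):
    """Apply the linear transform to an H x W x 3 image in [0, 1]."""
    return np.clip(rgb @ M.T, 0.0, 1.0)
```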
In Barrett's esophagus, CE was shown to improve diagnostic yield for detection of dysplasia/cancer by 34% in a meta-analysis of 14 studies involving 843 patients [38]. As a widely studied modality, NBI was shown to meet the PIVI thresholds for BE surveillance (Table 1), and its use in conjunction with standard WLE is recommended [25,29]. For gastric precancerous histology, NBI is recommended by the ESGE due to its capability to significantly improve intestinal metaplasia detection compared to high-definition endoscopy alone [15]. In screening colonoscopy, a meta-analysis by the ASGE reported a 91% negative predictive value (NPV) using NBI for detection of adenomas in academic centers, surpassing the PIVI threshold of 90% to support a "diagnose-and-leave" strategy for diminutive polyps [24]. However, it should be noted that subpar results that fell short of meeting the same criteria were reported in community settings [39,40]. Per ESGE recommendations, the use of virtual CE to provide optical diagnosis is only suggested when endoscopy is adequately documented and performed by experts [18].
In settings where the most recent platforms are available, virtual CE has shown great promise to better visualize GI lesions without incurring additional cost or applying exogenous dyes. As most high-definition endoscope systems now offer CE compatibility, the technique is gaining momentum and popularity, especially among experts. To endorse its routine use, however, universal classification criteria need to be established and externally validated, especially among novice users outside of research centers. In addition, associated learning curves and interobserver reliability need to be studied. In the meantime, the importance of technical advances should be recognized: a very recent meta-analysis of 11 randomized controlled trials (RCTs) revealed that the second-generation, brighter NBI significantly increased the adenoma detection rate (ADR) compared to WLE, as well as to first-generation NBI [41].
As an extension to commercially available virtual CE that exploits tissue response in specific spectral bands, multispectral or hyperspectral imaging has also shown value for GI lesion characterization with increased spectral dimensions [42,43]. Recently, real-time hyperspectral imaging has been enabled in endoscopic systems and initial clinical evaluation in the GI tract has been reported [44][45][46]. Taken together, it is expected that GI lesion detection will be further improved by virtual CE as the technology continues to improve and its clinical use becomes standardized.
Dye-based chromoendoscopy
Unlike optical or computational filtering in virtual CE, dye-based CE takes advantage of exogenous dyes to enhance contrast of mucosal features, especially in lesions that may appear subtle, flat, or depressed under WLE. In general, two types of dyes are clinically used: absorptive dyes such as methylene blue and Lugol's iodine, and contrast stains such as indigo carmine (Table 3). Among these dyes, Lugol's iodine selectively binds to glycogen, which is more abundantly stored in normal than in dysplastic squamous epithelium. Methylene blue is preferentially absorbed by epithelial cells of small intestine or colon types, and indigo carmine highlights mucosal topology by filling crevices, pits, and ridges. The source of imaging contrast, and thus the targeted mucosal features, depends on the staining or uptake mechanisms of specific dyes. Thanks to its high uptake by glycogen-abundant cells, Lugol's iodine is routinely used as a highly sensitive (92-100% sensitivity), yet inexpensive dye for ESCC screening [2]. For surveillance of Barrett's esophagus, systematic reviews showed improved diagnostic yield and accuracy using acetic acid or methylene blue; therefore, CE is recommended as an adjunctive tool in addition to WLE [29]. For the choice of dyes, acetic acid is inexpensive and meets the ASGE threshold for BE surveillance (Table 1), with early evidence showing that it is more cost-effective than random biopsies in a high-risk population [29,47]. Methylene blue also provides substantial contrast enhancement, but concerns have been raised due to its link to DNA damage [48]. In the stomach, a meta-analysis of 10 studies using different dyes also demonstrated that dye-based CE can better detect early gastric cancers and precursors (pooled sensitivity and specificity of 0.90 and 0.82 in 699 patients) than WLE [49].
In settings that lack access to up-to-date endoscopic platforms with virtual CE, dye-based CE offers a safe and relatively inexpensive alternative to enhance mucosal contrast, even though it requires a more cumbersome procedure and relies on the quality and uniformity of the dye spraying. As with virtual CE, user expertise is critical to make the best use of dye-based CE.
In addition, a consensus on validated interpretation criteria is yet to be established, which can be particularly challenging given the wide range of dyes and their different uptake or staining patterns.
Ultrathin endoscopy
Compared to standard endoscopes, ultrathin endoscopes are designed to access smaller luminal organs so that endoscopy can be performed during unsedated, outpatient procedures, even though expertise is still required for image interpretation. Thanks to a smaller diameter (6 mm or less, compared to up to 13 mm in standard endoscopes), patients report higher acceptance and face lower risks of complications and shorter recovery times [50]. The trade-off includes decreased pixel resolution and reduced mechanical maneuverability (Table 2); a pediatric biopsy forceps can be passed through the accessory channel to obtain biopsies, but the ability to perform more complicated surgical procedures is limited. To further facilitate clinical adoption of the technology, less costly, portable, and disposable versions of ultrathin endoscopes have also been developed, but biopsy capability is not supported [51].
The capability of ultrathin endoscopy to investigate upper GI lesions is reported in several pilot studies. As a less costly outpatient procedure, transnasal endoscopy using ultrathin endoscopes is considered a potential alternative for BE screening [7], even though its role in BE surveillance is limited by the relatively low imaging quality. In a pilot randomized cross-over study involving 82 patients, transnasal endoscopy was found to have high diagnostic performance for BE detection (98% sensitivity and 100% specificity) comparable to standard WLE [52]; nonetheless, since an enriched surveillance population was examined in a tertiary-care center, the generalizability of this research needs to be further studied. Ultrathin endoscopy was also evaluated for gastric neoplasia detection in 57 patients with superficial gastric neoplasia or undergoing follow-up endoscopy after endoscopic submucosal dissection (ESD), reporting significantly worse performance characteristics than high-definition WLE [53]. This can be ascribed to the increased difficulty in scope manipulation and the relatively poor imaging quality. As the latest ultrathin models are equipped with better illumination and NBI capability [54,55], their potential role in the upper GI tract should be further evaluated.
Capsule endoscopy
First approved by the FDA in 2001, capsule endoscopy provides easy and safe access to the GI tract (Fig. 2A). In the last two decades, capsule endoscopy has revolutionized the management of small bowel disease [56]. Most commercially available capsule endoscopes consist of a miniaturized imaging sensor, illumination and imaging optics, and an internal power supply, all encased in a disposable enclosure. Data transmission is usually achieved wirelessly via radio telemetry or electric-field propagation [57], and wired data retrieval has also been reported in tethered capsules [58]. Typical imaging specifications of commercially available capsule endoscopes are shown in Table 2.
Given the successful application of capsule endoscopy in the small bowel, there is an increasing interest to investigate its role in other parts of the GI tract. In the upper GI tract, capsule endoscopy is challenging due to the unique anatomy. The average capsule transit time through the esophagus can be as low as about 30 s [59], mandating a fast frame rate to capture high-quality images; once it enters the stomach, the uncontrolled capsule movement further complicates image acquisition. Using the Pillcam UGI capsule that captures 35 frames per second, capsule endoscopy has been shown to visualize important clinical landmarks of the esophagus; in addition, feasibility to view the entire stomach was demonstrated with patients positioned at different planes and angles on an examination bed in a nurse-led protocol [59]. Owing to its moderate accuracy (sensitivity 78% and specificity 73% for BE detection in a meta-analysis of nine studies involving 618 patients by Bhardwaj et al.), however, it is not recommended for BE screening [7,60]. In the lower GI tract, capsule endoscopy has also been found to be useful for detecting additional polyps in patients with incomplete colonoscopy [61].
A major drawback of capsule endoscopy is its lack of active locomotion, which compromises its imaging quality and limits its use in luminal organs. Recent technical efforts have been made to overcome this barrier with internal stabilizing or external control mechanisms [62,63]. Coupled with an external magnetic steering system, capsule endoscopy has been used to survey more capacious parts of the GI tract [64]. A recent study reported its safe use in 3182 asymptomatic participants to visualize focal lesions in the stomach, suggesting its potential for gastric cancer screening [65]. While still at an early stage of development, very recent clinical evidence using an updated magnetically controlled capsule system has shown improved imaging quality and maneuverability [66,67], and future studies to assess its diagnostic value are warranted.
Microscopic imaging systems
While macroscopic imaging modalities form the fundamentals of GI endoscopy, the gold standard for cancer diagnosis remains microscopic examination. In routine practice, this is achieved by acquisition of biopsies followed by standard histology procedures. This process can lead to unnecessary biopsies and related healthcare costs, delay of diagnosis, and loss to follow-up. As a result, there has been a long-standing interest in developing in vivo endoscopic techniques to facilitate lesion characterization with microscopic or near-microscopic resolution. In this section, we will review endomicroscopic imaging techniques (Table 4), including commercially available systems such as CLE and volumetric laser endomicroscopy (VLE), and investigative research systems such as the high-resolution microendoscope (HRME) and capsules based on optical coherence tomography (OCT).
Confocal laser endomicroscopy
By illuminating stained tissue with a scanning low-power laser, CLE (Fig. 2C) can generate fluorescence images of the mucosal layer with micron-level resolution, providing histologic information similar to standard pathology (Table 4). To image deep epithelial layers with high contrast, confocal scanning is implemented for optical sectioning. Two types of fluorescent contrast agents can be used, including fluorescein, which is intravenously administered, and topically applied vital dyes such as acriflavine, tetracycline, or cresyl violet. Fluorescein enhances the contrast of extracellular matrix such as mucosal crypts and villi, as well as vascular structures. Topically applied dyes stain nuclei and thus visualize nuclear morphometry. In clinical practice, fluorescein is FDA-approved and most widely used without adverse effects [68], and consensus classification criteria referred to as the Miami Classification have been established [69].
Overall, high-performance characteristics have been achieved using CLE for detection of BE-related dysplasia (pooled sensitivity and specificity of 89% and 83% in a meta-analysis of 789 patients) [70], gastric neoplasia (pooled sensitivity and specificity of 81% and 98% in 657 patients) [71], and CRC neoplasms (pooled sensitivity and specificity of 83% and 90% in 376 patients) [72]. Nonetheless, concerns regarding the heterogeneity in performance characteristics and diagnostic yield were raised in recent meta-analyses [25,29,70], indicating the importance of expertise and the need for comprehensive training. While providing favorable diagnostic performance in many studies conducted in academic research centers, the routine use of CLE is largely hindered by the significant up-front investment. In addition, its small FOV can be prone to sampling error during targeted biopsies, especially when novice users attempt to probe small lesions while maintaining a stable FOV. Mosaicking in the GI tract to image regions with a larger surface area has been demonstrated in preliminary feasibility studies, but the clinical utility is still limited [73,74]. Overall, CLE, like many other emerging microscopic modalities, is mostly evaluated in tertiary centers. Without a better understanding of its learning curve and a thorough cost-utility analysis, the practicality for community use is still unknown.
Endocytoscopy
Endocytoscopy is in principle similar to magnification WLE, which offers reflectance imaging with up to 150-fold optical magnification, except that the optical magnification is further improved for imaging at the cellular level (Table 4). Commercially available systems are either endoscope-based or probe-based [75], and they support optical magnification of approximately 500-fold (endoscope-based) to more than 1000-fold (probe-based). To enhance mucosal surface contrast under white light illumination, methylene blue or its combination with crystal violet has been found useful for visualizing nuclear and glandular patterns [76].
Commercial endocytoscopy systems have been shown to allow for cellular-level characterization of GI lesions [77][78][79]. Recent studies investigated the effectiveness of endocytoscopy to tackle challenging tasks such as diagnosing adenomatous diminutive polyps and low-grade adenoma in the colon, reporting high diagnostic performance (96.8% accuracy in 39 patients and 86.4% accuracy in 573 patients, respectively) [80,81]. To facilitate its broader application, efforts have been made to develop classification systems for standardized clinical interpretation [82]. Different from fluorescence-based CLE, endocytoscopy provides a reflectance modality to investigate suspicious lesions at ultrahigh resolution. In addition to exogenous dyes, it offers compatibility with virtual CE such as NBI. While clinical evidence from large-scale RCTs is still unavailable, its further evaluation is supported by initial studies reporting performance characteristics comparable to CLE.
Optical coherence tomography
Using a low-coherence light source to probe tissue reflectance at varied transverse locations, OCT generates depth-resolved, near-microscopic images with axial resolution of about 10 µm and lateral resolution of approximately 30 µm (Table 4). Since OCT images are acquired via a single-mode fiber, it is inherently compatible with endoscopic imaging of luminal organs, including the GI tract and cardiovascular systems. To access lesions in the GI tract, OCT systems of different form factors have been developed, including probe- and balloon-based OCT that can pass through an endoscope working channel, and a capsule-based design that can be operated in the primary care setting (Table 4). Using OCT in different forms, depth-resolved and high-resolution imaging has been reported in Barrett's esophagus, stomach, small intestine, and colon polyps.
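To illustrate where the depth resolution comes from, the sketch below reconstructs a single A-scan under the assumption of a spectral-domain OCT system: after subtracting the reference background, an inverse FFT of the wavenumber-sampled interferogram yields the depth-resolved reflectivity profile (dispersion compensation and wavenumber calibration are omitted for brevity).

```python
import numpy as np

def reconstruct_ascan(spectrum, background):
    """One A-scan from a spectral-domain OCT interferogram.

    spectrum   -- interference spectrum, sampled uniformly in wavenumber k
    background -- reference-arm-only spectrum with the same sampling
    Returns the magnitude of the depth-resolved reflectivity profile.
    """
    fringes = np.asarray(spectrum, dtype=float) - np.asarray(background, dtype=float)
    fringes = fringes * np.hanning(len(fringes))  # window to suppress sidelobes
    profile = np.fft.ifft(fringes)
    return np.abs(profile[: len(profile) // 2])   # real input: keep one side

# A cross-sectional B-scan is a stack of A-scans acquired while the beam
# (or the capsule/balloon probe) is scanned; a second scan axis yields the
# volumetric data sets used by VLE.
```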
In 2013, an OCT-based imaging system became commercially available for esophageal imaging, known as the VLE (NvisionVLE, NinePoint Medical, Bedford, MA, USA; Fig. 2B). In the VLE, a balloon catheter is used to center the OCT probe, enabling stable circumferential scanning of a 6 cm segment of esophagus with a 3 mm imaging depth in 90 seconds. Since its introduction, studies have been conducted to establish image interpretation criteria, reporting a diagnostic accuracy of 87% for BE-related dysplasia in a pilot study of 27 patients [83]. Moreover, artificial intelligence (AI)-based algorithms have been developed and clinical evaluation is underway in a multicenter RCT [84]. Another important extension of the VLE technology is the introduction of laser marking in 2016, enabling a targeted biopsy protocol shown to lead to a higher yield of dysplasia in BE compared with the standard Seattle protocol (33.7% in 106 patients vs. 19.6% in 95 patients) [85]. While the majority of OCT studies focused on the esophagus, successful imaging of the colon has been reported in case studies or case series [86,87]. Unlike the tubular esophagus, good tissue contact with the balloon catheter to ensure VLE imaging quality can be more difficult to achieve in the colon.
Thanks to the scanning and pullback mechanism, OCT systems can generate depth-resolved and near-microscopic maps of a long esophageal segment, and cross-sectional visualization of multiple histologic layers can provide important information for detecting disease progression beneath the superficial surface. Combined with a laser marking system, VLE is uniquely poised to reduce sampling errors, a major limitation for other endomicroscopic modalities. Nonetheless, commercial VLEs are costly and clinical interpretation of reflectance-based volumetric images remains a challenging process, even though computer-aided systems are under evaluation. In addition, as OCT captures tissue reflectance, it precludes the use of specific contrast agents and molecular probes.
Many recent technical innovations in endoscopic OCT have embraced a capsule enclosure, allowing for miniature optomechanical components to be included at the distal end of the probe and improving imaging quality and contrast. Spatial resolution, for example, can be improved with a miniaturized actuation system to minimize rotational distortion [88], or by using a shorter wavelength to enable ultrahigh-resolution OCT [89]. OCT-based endoscopy has also benefited from increased scanning speeds, allowing for ultra-fast OCT angiography to visualize subsurface microvasculature in the GI tract [90]; briefly, circumferential OCT images are acquired at 400 frames per second, and decorrelation between sequential frames is calculated to highlight circulating erythrocytes and thus the 3D microvasculature. In addition to circumferential imaging of luminal organs, a piezoelectric probe was also developed to enable forward-viewing OCT imaging of colorectal polyps [91]. Enabled by different scanning mechanisms, such as microelectromechanical systems and piezoelectric transducers, to form a 2D sampling pattern, forward-viewing OCT probes can generate high-resolution and label-free images with an intuitive view similar to standard endoscopy.
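The inter-frame decorrelation step mentioned above can be sketched with a simple amplitude-decorrelation formula (in the spirit of published OCT angiography algorithms; bulk-motion correction, spectral splitting, and thresholding are omitted): pixels whose amplitude changes between repeated frames, such as those containing flowing erythrocytes, receive values near 1, while static tissue stays near 0.

```python
import numpy as np

def pixel_decorrelation(a1, a2, eps=1e-9):
    """Pixelwise amplitude decorrelation between two repeated OCT frames:
    D = 1 - a1*a2 / (0.5*(a1**2 + a2**2)); 0 for identical amplitudes."""
    return 1.0 - (a1 * a2) / (0.5 * (a1 ** 2 + a2 ** 2) + eps)

def angiogram(frames):
    """Average decorrelation over consecutive frame pairs to reduce noise.

    frames -- (N, H, W) stack of intensity frames from the same position
    """
    f = np.asarray(frames, dtype=float)
    return np.mean([pixel_decorrelation(f[i], f[i + 1])
                    for i in range(len(f) - 1)], axis=0)
```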
High-resolution microendoscope
While offering superior resolution compared to conventional endoscopes, the high cost of the above-mentioned endomicroscopic modalities limits their clinical use to tertiary medical centers. As an alternative shown in Fig. 2D, the HRME is a low-cost (approximately $1500) fiber-optic fluorescence microscope that can image nuclear morphology at subcellular resolution [92]. In pilot clinical trials, the clinical utility of HRME was demonstrated for accurate colon polyp characterization using proflavine as an inexpensive and topically applied dye for vital staining [93,94]; in a prospective single-center trial, HRME was shown to have a high accuracy and specificity of 94% and 95% for detecting adenomatous and neoplastic polyps during routine screening or surveillance colonoscopy of 94 patients [95]. In the upper GI tract, HRME has also shown value for detection of BE-related dysplasia and ESCC [96,97]. In a prospective trial of 147 high-risk patients, adjunctive HRME following Lugol's CE was shown to increase the specificity from 48% to 88% for ESCC detection while achieving a sensitivity of 91% [98].
To further facilitate dissemination of this high-resolution yet inexpensive technology, quantitative and automated diagnostic algorithms have been developed and prospectively validated [97,99,100]. Recent development has focused on further improvement of its axial resolution with optical sectioning techniques [101,102]. Compared with commercial CLE, HRME offers a low-cost alternative to visualize important histologic features in community and resource-constrained settings.
Other in vivo microscopy technologies
Various emerging technologies at earlier stages of development have also been reported in animal models or case studies. Multiphoton endomicroscopy, for example, is a novel technology that has gained attention as a high-resolution and label-free imaging modality. Taking advantage of near-infrared laser excitation and endogenous autofluorescence, recent studies in rodent animal models reported deeper penetration (120 µm) than CLE without administration of contrast agent [103,104].
Spectrally encoded confocal endomicroscopy (SECM) is another fiber-optic imaging modality that can image tissue surface reflectance using a single-mode fiber coupled with a diffraction grating that encodes tissue locations with distinct wavelengths. Integrated with a rotary junction and a pullback mechanism, high-speed line scanning of large areas can be achieved [105]. Like OCT-based capsules, a tethered SECM capsule was shown to be feasible for imaging of the esophagus [106].
In addition to imaging modalities that offer direct visualization, spectroscopic probes have also been developed to study the subcellular architecture and molecular constituents of GI lesions. Briefly, endoscopic spectroscopy measures the spectral composition of backscattered light from tissue under varied illumination wavelengths and geometries, exploiting three major types of light-tissue interactions: elastic scattering, absorption, and inelastic scattering. Elastic scattering, as the dominant type of scattering, occurs without wavelength change and is sensitive to alterations in cell and tissue architecture. In addition, tissue absorption, mostly from hemoglobin, lipids, and water, also contributes to elastic scattering spectra in the visible and near-infrared spectral regions. In comparison, inelastic (or Raman) scattering with wavelength shifts is relatively weaker, but can be employed to characterize chemical fingerprints of specific molecular components. A broad range of spectroscopic techniques, including diffuse reflectance spectroscopy and Raman spectroscopy, have been clinically evaluated with varied results in the GI tract [107][108][109][110]. Compared to imaging modalities, spectroscopic techniques provide label-free, quantitative, and objective measurements of the GI mucosa. Nonetheless, their clinical interpretability is limited by the complexity of high-dimensional data acquired from a point-by-point FOV. With recent studies reporting technical advances such as novel fiber designs and nanoparticles [111,112], further evaluation in larger clinical trials is warranted for this broad range of technologies.
Molecular imaging
Most existing modalities detect early cancer lesions based on the presence of macroscopic or microscopic morphological changes. By studying the altered biomolecular cascades that may precede architectural changes, molecular imaging is an emerging field in modern oncology that has the potential to detect GI lesions at earlier stages with higher specificity than morphologic alterations. Driven by genetic and molecular profiling of tumors, specific biomarkers can be targeted using a wide range of molecular probes, including antibodies, peptides, nanoparticles, aptamers, and affibodies [113]. Since molecular probes are typically fluorescently labeled, they are mostly used in conjunction with fluorescence imaging modalities such as CLE or widefield fluorescence endoscopes. Ex vivo near-infrared imaging has also been employed to suppress confounding tissue autofluorescence.
While most studies are focused on proof of concept in rodent animal models, recent studies have demonstrated potential for ex vivo and in vivo applications in human trials. In Barrett's esophagus, lectin-based probes were found useful for differentiating dysplastic from benign lesions in ex vivo human specimens [114,115]. Sturm et al. [116] developed a peptide that binds specifically to BE-related dysplasia and demonstrated its safe in vivo application using CLE. A similar study was conducted for CLE-based targeted imaging of EGFR in the colon [117]. Thanks to their specific binding, molecular probes can also be used with widefield fluorescence endoscopes for detection of flat colon lesions [118,119].
Molecular probes in combination with fluorescence endoscopic imaging provide novel routes to targeted imaging with high specificity, addressing a key limitation of many widefield endoscopic modalities. Another advantage of this approach, as demonstrated in animal models, is the safe use for longitudinal studies without tissue removal [120]. Recent successful translation of this approach into human trials shows promising results for functional characterization of GI lesions, thereby holding potential for personalized treatment recommendations. Future studies in this intrinsically multidisciplinary field will benefit from collective efforts to develop novel probes and imaging devices, while strictly monitoring safety during clinical use.
Machine learning for computeraided detection and diagnosis
With the advent of various novel endoscopic techniques, objective and confident interpretation of multiscale and multidimensional clinical data is a crucial, yet cumbersome and challenging task. In the meantime, the abundance of large-volume imaging data presents numerous opportunities to employ machine learning to assist novice endoscopists and experts alike. From an algorithm development perspective, two approaches have been used to facilitate clinical decision making. First, in classical machine-learning methods, as illustrated in Fig. 3A, a broad range of features can be explored and extracted, such as shape, texture, color, and clinically inspired features [121]. After feature extraction, a classifier can be developed for computer-aided diagnosis (CADx). Second, recent implementation of deep learning algorithms has been shown to contribute to more sophisticated examination of the feature space, while enabling accelerated processing of large datasets in real time (Fig. 3B). Compared with classical machine-learning approaches, deep learning scales more effectively with large datasets and thus is well suited for processing series of images with high dimensions (e.g. multimodal images); in addition, it also alleviates the burden of complicated feature engineering. Nonetheless, a large and standardized dataset is usually required for effective training of deep learning algorithms, and their clinical interpretability is relatively poor. In this section, we discuss recent advances in machine-learning approaches to perform specific clinical tasks.
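As a minimal sketch of the classical pipeline in Fig. 3A (the feature set here is a toy stand-in for the validated, clinically inspired descriptors used in practice), handcrafted shape/texture/color features are extracted from each image and passed to a conventional classifier:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def handcrafted_features(img):
    """Toy color/contrast/texture descriptors from an RGB image in [0, 1]."""
    gray = img.mean(axis=2)
    gx, gy = np.gradient(gray)
    return np.array([
        img[..., 0].mean(), img[..., 1].mean(), img[..., 2].mean(),  # color
        gray.std(),                              # global contrast
        np.hypot(gx, gy).mean(),                 # edge density (texture)
    ])

# Usage sketch (train_imgs and labels are assumed to exist):
# X = np.stack([handcrafted_features(im) for im in train_imgs])
# clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
# prediction = clf.predict(handcrafted_features(new_img)[None, :])
```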
In terms of clinical output, algorithms are generally trained to perform either computer-aided detection (CADe), such as lesion detection using macroscopic modalities, or CADx based on microscopic or high-definition widefield imaging. To date, CADe algorithms have been most extensively applied to improve automated colorectal polyp detection [122]. Initial studies in the field focused on polyp detection rate as the primary performance measure, and recent advancement in computing hardware and algorithms has significantly accelerated image processing for real-time and video-rate applications [123,124]. Very recently, in a prospective RCT involving 1057 patients, Wang et al. reported significantly increased ADR in WLE images with an AI-based system than with standard colonoscopy (29.1% vs. 20.3%) [125]. Of note, the ADR improvement is attributed to detection of more diminutive polyps. Diagnostic algorithms (CADx) to differentiate adenomatous from benign colorectal polyps were also developed in retrospective studies [126], with a recent study reporting a high accuracy of 94% for diagnosing adenomatous diminutive polyps in unaltered NBI videos [127]. Similarly, excellent accuracies were reported for diagnosing neoplastic lesions in the esophagus and stomach in widefield modalities, including WLE [128], NBI [129], or their combination [130]. When integrated with standard WLE or advanced widefield modalities such as NBI, these diagnostic algorithms can facilitate more quantitative and objective interpretation of clinical data; for novice users, such algorithms can provide critical decision support that can reduce the learning curve and improve diagnostic accuracy. Microscopic imaging modalities, including CLE, HRME, endocytoscopy, and VLE, have also benefited from automated algorithms [99,100,[131][132][133][134][135]].
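For the data-driven alternative (Fig. 3B), a deliberately small convolutional network for frame-level classification is sketched below in PyTorch; deployed CADe/CADx systems use far larger backbones, localize lesions, and run with low latency on video streams, none of which is shown here.

```python
import torch
import torch.nn as nn

class PolypNet(nn.Module):
    """A tiny CNN for frame-level polyp classification, illustrating the
    data-driven approach in Fig. 3B (a sketch, not a clinical system)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):          # x: (batch, 3, H, W) endoscopy frames
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = PolypNet()
logits = model(torch.randn(4, 3, 224, 224))  # 4 dummy frames
probs = logits.softmax(dim=1)                # per-frame class probabilities
```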
It is evident that machine learning is playing an increasingly significant role in the present field of GI endoscopy. While most algorithms are developed and evaluated retrospectively, prospective studies with real-time and low-latency algorithms are emerging. With the integration of AI systems in recently launched commercial systems, including EndoBrain in endocytoscopy systems (Olympus) and intelligent real-time image segmentation in the NvisionVLE imaging system (NinePoint Medical), it is hoped that machine learning will contribute to palpable changes in routine practice in the coming years. As machine learning in GI endoscopy is experiencing rapid progress, there remain several barriers to clinical implementation. First, it is of crucial importance to validate the generalizability of computer-assisted algorithms, especially when images are acquired by users in varied settings. Second, for users to make the best use of computer-aided decision support, interpretability and training are also important factors when designing the algorithm architecture. Finally, the intricate nature of computational algorithms can raise new ethical and regulatory concerns, and their role during practical use will ultimately depend on acceptance by the healthcare community.
Conclusion and outlook
The past few decades have witnessed substantial evolution of optical endoscopy techniques and their continuous clinical translation to enable in vivo characterization of the GI mucosa with unprecedented imaging contrast, resolution, depth, and speed. The multidimensional, volumetric, and real-time imaging data open new opportunities for improved detection of GI pathology, potentially leading to important shifts in diagnostic and therapeutic algorithms. Moreover, novel molecular probes and machine-learning methods are under clinical evaluation, showing great promise to improve detection accuracy and reliability. Taken together, the constantly evolving landscape of modern GI endoscopy presents numerous opportunities and demands a multidisciplinary and collaborative effort in the academic and healthcare community. In the meantime, with academic-industrial partnerships playing an increasingly important role in prototype development and commercialization, close integration of academia and industry is also called for to accelerate technology translation, standardization, and dissemination.
The rapid advent of novel techniques also presents new challenges. While offering direct visualization of GI lesions, acquisition and interpretation of clinical data using optical imaging techniques require adequate training and expertise. Since a large proportion of these clinical studies are conducted in research or academic centers, it is of paramount importance to standardize their clinical use and validate diagnostic performance in varied clinical settings. Once validated in large-scale studies, there are challenges to balance increased cost, learning curves, and user resistance to new technology. Therefore, in addition to technical performance metrics, clinical barriers for technology adoption should be assessed when placing novel devices into the hands of practitioners. Nonetheless, clinical adoption of novel technologies has taken place rapidly; for example, the use of capsule endoscopy for small bowel disease and NBI for colorectal polyp characterization shows that wide-scale adoption is possible in short time frames. With an ever-increasing amount of technical innovations and clinical data, we expect that novel optical imaging technologies will lead to significant changes in the clinical workflow in the coming decade, thus improving outcomes for a large at-risk population through more accurate and less invasive endoscopic imaging.

[Fig. 3: Examples of computer-aided algorithms for detection and diagnosis of colorectal polyps. (A) In microscopic images of polyps collected with endocytoscopy, clinically inspired nuclear morphology features were extracted and quantified for diagnosing advanced histology. (B) A data-driven algorithm is trained using a convolutional neural network to highlight adenomatous CRC polyps in WLE images. Figure 3A reproduced from [134] with permission from Elsevier. Figure 3B reproduced from [125] with permission from BMJ Publishing Group Ltd.] | 2020-09-12T13:06:51.614Z | 2020-09-11T00:00:00.000 | {
"year": 2020,
"sha1": "797cd242232acc8865bebb1a0fd443afa79cb153",
"oa_license": "CCBY",
"oa_url": "https://febs.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/1878-0261.12792",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "166d797baee2424d26f3a5d907b1ffc76a5a233d",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
232242056 | pes2o/s2orc | v3-fos-license | Predisplacement Abuse and Postdisplacement Factors Associated With Mental Health Symptoms After Forced Migration Among Rohingya Refugees in Bangladesh
Key Points
Question: What is the prevalence of severe mental health symptoms among Rohingya refugees in Bangladesh?
Findings: This cross-sectional study including 1184 Rohingya refugees found that a high prevalence of severe mental health symptoms exists in the Rohingya population even after 2 years of displacement. In Rohingya camp settings, low prevalence of traumatic distress was associated with employment opportunities and adequate humanitarian support; severe mental health symptoms were commonly found in adults who had experienced physical abuse or both physical and sexual abuse in Myanmar.
Meaning: The risk of mental illness may be high for the Rohingya population; health care networks in Rohingya camp settings should ensure adequate mental health services to protect Rohingya adults from developing mental illness.
Introduction
The Rohingya population of Myanmar is one of the world's most oppressed minority groups. 1 The Citizenship Act of 1982 removed the Rohingya from the list of officially recognized ethnic minority groups. It denied them many fundamental rights, including citizenship, freedom of movement, access to health care and education, marital registration, and the ability to vote, making them the world's largest stateless group. 2 In August 2017, in continuation of past persecution events, the Myanmar army began a massive clearance operation in Rakhine, which was then home to approximately 1.2 million Rohingya individuals. 3 The operation saw an estimated 7800 Rohingya killed. 4 It drove the Rohingya out of Myanmar, leading to a massive exodus of about 750 000 Rohingya refugees to Bangladesh's Cox's Bazar district. 5 Several studies have recorded the experience of abuses faced by Rohingya refugees before displacement. [6][7][8][9][10][11] Rohingya refugees were refused medical care for physical or sexual assault in Myanmar. [7][8][9][10][11][12] Emotional and verbal abuse was also a major component of these violent incidents.
Rohingya group leaders echoed and corroborated these descriptions in different reports. 4,12 These occurrences of physical, sexual, and emotional abuse were reported to have a lasting effect on the survivors' mental health and to be associated with the potential diagnosis of psychological conditions, such as posttraumatic stress disorder (PTSD). 7 Posttraumatic stress is the symptom complex of PTSD, characterized by intrusive thoughts, nightmares, and memories of past traumatic events; avoidance of trauma reminders; hypervigilance; and sleep disturbance. 13 Such aspects are associated with extreme psychological, occupational, and interpersonal dysfunction. 14 Patients with PTSD experience pronounced cognitive, affective, and behavioral reactions to stimuli, which may result in hallucinations, extreme anxiety, and fleeing or combative behavior. These symptoms may result in emotional numbness and reduced involvement in daily activities and, in the extreme, may result in alienation from others. Among patients with PTSD, depressive disorders, anxiety disorders, and drug misuse are 2 to 4 times more prevalent than among patients without PTSD. 15 Moreover, PTSD may increase the risk of suicide attempts. 16 We used the Impact of Event Scale-Revised (IES-R) for the evaluation of posttraumatic stress symptoms (PTSSs). 13,14 This self-report instrument was designed to include all 3 groups of symptoms of PTSD (ie, intrusion, avoidance, and hyperarousal) associated with a particular life-threatening incident.
During the predisplacement and postdisplacement periods, refugees faced multiple stressors. 17 Individuals with an experience of abuse were at risk of increasing mental health problems. 18,19 The challenges facing Rohingya refugees while living in Bangladesh have been reported in many studies. 20,21 They live in small, overcrowded temporary shelters in refugee camps without adequate food, clean water, or toilets. Moreover, their lives are on hold, and they are unsure about their future.
A 2017 study analyzed the daily environmental stressors among 148 Rohingya adults and found worse mental health outcomes for refugees. 21 Compared with 2 years ago, the Rohingya refugees' basic needs and health care have mostly improved. 22 However, postdisplacement factors are still important for improving the mental health of Rohingya refugees.
Screening is effective only when combined with high-quality services for mental well-being.
One of the challenges to ensuring appropriate services for Rohingya refugees in Bangladesh is the lack of statistical data on the group's mental health status. In this report, we aimed to identify the prevalence of PTSSs and the associated predisplacement and postdisplacement factors among Rohingya adults living in Bangladesh after the massive clearance operation.
Study Design and Participants
From September 17, 2019, to January 11, 2020, we conducted a cross-sectional survey among Rohingya refugees residing in Cox's Bazar, Bangladesh. More than 1 million Rohingya refugees are now staying in Bangladesh. Most are clustered in Ukhia and Teknaf, the district's 2 upazilas (administrative regions). The largest refugee settlement in the world, Kutupalong, centered in Ukhia, is home to more than 600 000 refugees alone. 23 The prospective participants were recruited from Kutupalong via a 3-stage sampling technique. First, with the assumption of equal population size in each camp, we chose 8 camps randomly from the Kutupalong refugee camp and expansion areas, which consist of 23 camps. Second, we selected at least 160 households from each selected camp, with a target of 1280 households in the study. We targeted more households than the required sample size, which was calculated at 80% power and a 95% confidence level (z = 1.96) with a 5% margin of error, with an assumption that 40% of the population had a mental illness and a design effect of 2. We applied a systematic sampling technique, and the first household was randomly chosen from the approximate geographical center of the camp. Data collectors proceeded to the next closest household until 160 households were sampled. Third, we interviewed a single respondent per household, preferably the head of the household. The female head of the household or another available adult member of the household was surveyed when the male head of the household was not available. Household members were defined as those who lived for at least 1 month under the same roof and shared cooking and eating facilities from the same source. We also ensured that each participant had lived in the camp for at least 2 years after the displacement. The details of the sampling allocation are given in eTable 1 in the Supplement. Participants provided verbal consent for the study because they were reluctant to sign their names or provide their fingerprints on any piece of paper; in addition, most participants were illiterate. They were reassured that all the information collected would be kept strictly confidential and would not be used for anything other than research purposes. However, they were provided with a consent paper with detailed contact information of the research investigators for any future query. The institutional review board at North South University, Bangladesh, approved the study. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.
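The household target can be checked against the standard cluster-survey sample-size formula. The sketch below reads the stated parameters as a 95% confidence level (z = 1.96) and a 5% margin of error (our assumption), together with the stated 40% prevalence and design effect of 2; the basic formula does not incorporate the stated 80% power, which would inflate the target further.

```python
import math

def cluster_sample_size(p=0.40, z=1.96, margin=0.05, deff=2.0):
    """Households needed for a prevalence survey with cluster sampling:
    n = deff * z^2 * p * (1 - p) / margin^2, rounded up."""
    return math.ceil(deff * (z ** 2) * p * (1 - p) / margin ** 2)

print(cluster_sample_size())  # -> 738; the study targeted 1280 households
```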
Recruitment and Training
The data were obtained and cleaned by a team of 4 enumerators consisting of 2 men and 2 women from the Health Management BD (HMBD) Foundation. The HMBD Foundation, a local nongovernmental organization (NGO) that has been working in Rohingya refugee camps since the influx, chose the local community leaders (called a "maji") from each of the selected 8 camps. The local leader from each of the camps was then informed about the study's purpose and ethical procedures. They assisted the HMBD Foundation in recruiting 8 local male residents, one from each camp, who could communicate in both the Bengali (Bangla) and Rohingya languages. Data collection teams of 2 persons each were then formed (ie, one person from the camp and one from the HMBD Foundation). The interview was held in both the Rohingya and Bangla languages: the camp data collector asked questions in the Rohingya language, and then the HMBD Foundation member checked the answer by asking the same question in Bangla. Our 2 research investigators from North South University (A.H. and T.A.K.) arranged a 1-day practical training session about ethics and data collection. The enumerators and data collectors were also briefed about the study objectives, methods, and questionnaire. The researchers also taught the data collectors the techniques of rapport building and preserving neutrality, as well as information on ethical problems, privacy concerns, cultural awareness, and risk management for mental health. After the training, a pilot study was arranged, in which each of the 8 study teams was evaluated as a single unit. The aim was to observe their capacity to comprehend the relevant techniques and to handle troublesome situations while interviewing.
Data Collection
It was made clear to the respondents that participation in the study was entirely voluntary. The face-to-face interviews took place 1 person at a time to ensure privacy. The respondents were given no monetary or food-item incentives. The questions were read aloud to the respondents 1 question at a time during the interview, and the respondents were asked which of the scale choices was acceptable. The coinvestigators reviewed the data collection sheets for completeness, accuracy, and internal consistency, which were confirmed by the principal investigator.
Sociodemographic Variables
The first section of the questionnaire assessed sociodemographic characteristics, including age (in years), sex (male or female), height (in inches), weight (in kilograms), marital status (never married, widowed or divorced, and married), and the number of family members in a household. We later categorized the number of family members as 2 or fewer, 3 to 4, and more than 4 members. Educational level was subdivided into 2 groups: respondents who had attended school and those who could not read or write.
Predisplacement Abuse
We use the word "abuse" here to mean violence intended to impose control or dominance over the Rohingya people and to cause rage, harm, resentment, humiliation, coercion, or helplessness. We collected data on the types of abuse that refugees were exposed to before the forced displacement to Bangladesh. The categories were indirect abuse or no exposure to any abuse, verbal or emotional abuse, physical abuse, and both physical and sexual abuse. Verbal or emotional abuse was defined as a nonphysical act that impairs an individual's psychological integrity and may take the form of coercion, defamation, verbal abuse, or harassment. 24,25 It might also include being forced into labor and separation from or witnessing abuse against family or community members.
Physical abuse was regarded as any force applied to any part of the body, such as shaking, burning or scalding, choking, hair pulling, hitting, slapping, kicking, or threatening and/or attacking with a knife, gun, or other type of weapon. It also might have included restraining, tying, or locking up against the individual's will. 26 Sexual abuse was defined as sexual humiliation (forced masturbation or nudity), sexual slavery, rape (vaginal, oral, anal, or attempted), genital abuse (beatings, electric shock, or mutilation), castration, penis amputation, sterilization, or forced marriage, cohabitation, or sexual activity (with a stranger, family member, or corpse). 26,27 To differentiate between physical and sexual abuse, the respondents were deliberately and cautiously asked whether the specific event involved the sexual organs or not. Because Rohingya women were reluctant to respond to a question about sexual abuse as an individual option, we added the combined option of both physical and sexual abuse. Another response option was that the refugee did not experience any direct abuse but was exposed to grief from losing family members or property or from experiencing separation, anxiety, or other trauma.
Postdisplacement Factors
Many of the Rohingya refugees worked in the camp, especially as day laborers who performed critical infrastructure work. In addition to assisting with outreach and coordination in the sprawling camps, this paid volunteer work was essential to initiatives to build roads, prevent landslides, and clear sewage. We recorded employment status as either employed (paid work) or unemployed. Based on each individual's perception, refugees were also assessed on whether they were receiving sufficient humanitarian aid for the family. In addition, we obtained data on the presence of any existing physical disability (eg, deafness, blindness, or amputation) in each refugee.
Outcome Measurement: PTSSs
The study applied the psychometric properties of the IES-R criteria to assess the severity of PTSSs within the study population. This scale is a short, 22-item self-report questionnaire; it is not a diagnostic tool for PTSD but an appropriate instrument to measure the subjective response to a specific traumatic event experienced by an adult. 13,28 Three clusters of symptoms are included in the IES-R: the hyperarousal, intrusion, and avoidance subscales. Respondents were asked to describe a particular event and then indicate how much that event had upset or disturbed them during the past 7 days. We calculated the total subjective stress IES-R score (range, 0-88) from the 3 subscales. The Cronbach α for the total scale was 0.87 (95% CI, 0.86-0.88), indicating a high degree of reliability. Posttraumatic stress symptoms were categorized into 4 categories: no symptoms (total IES-R score, ≤23), mild symptoms (score, 24-32), moderate symptoms (score, 33-38), and severe symptoms (score, ≥39) of PTSD. The categorization and interpretation of the IES-R score were described in a report from the Hartford Institute for Geriatric Nursing. 29 In addition, we categorized the total IES-R score into 2 groups following the recommendation of Creamer et al 13 : less than 33, which identified refugees with no or minimal symptoms of PTSD, and 33 or above, which identified refugees with moderate or severe symptoms of PTSD.
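The cut points described above are simple to encode; the following minimal helper (ours, not part of the study) reproduces both the 4-level categorization and the binary Creamer et al cutoff.

```python
# Minimal sketch reproducing the IES-R cut points quoted in the text.
def iesr_category(total_score: int) -> str:
    """Map a total IES-R score (0-88) to the 4 symptom categories."""
    if not 0 <= total_score <= 88:
        raise ValueError("IES-R total score must lie between 0 and 88")
    if total_score <= 23:
        return "no symptoms"
    if total_score <= 32:
        return "mild"
    if total_score <= 38:
        return "moderate"
    return "severe"

def moderate_or_severe(total_score: int) -> bool:
    """Binary classification following Creamer et al: score of 33 or above."""
    return total_score >= 33

print(iesr_category(35), moderate_or_severe(35))  # moderate True
```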
Statistical Analysis
Data were analyzed using R, version 3.6.2 (R Project for Statistical Computing). The questionnaire, R scripts, and data are available online. 30 All categorical variables (presented as frequencies and percentages) were assessed using descriptive statistics. We estimated prevalence ratios (PRs) and their 95% CIs using a multivariable logistic model after adjusting for potential confounders. The PR is the ratio of the prevalence of an outcome in the exposed group to the prevalence of that outcome in the unexposed group. 31 We obtained the SEs for the PRs using the delta method. In the adjusted model, we controlled for demographic factors, such as age, sex, educational level, marital status, history of predisplacement abuse, current paid employment status in the refugee camps, number of family members, and self-reported sufficient humanitarian aid for the family. We also obtained variance inflation factors in the logistic regression model to evaluate potential multicollinearity in the model (eTable 2 in the Supplement).
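Although the analysis was done in R, the logic of deriving a PR from a logistic model can be sketched in a few lines of Python via marginal standardization: fit the logistic model, then take the ratio of average predicted risks with the exposure switched on versus off. This is a simplified stand-in for the authors' approach (their SEs came from the delta method; a bootstrap over this function is a cruder alternative), and all column names below are hypothetical.

```python
# Illustrative prevalence-ratio estimation via marginal standardization from a
# fitted logistic model; column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

def prevalence_ratio(df: pd.DataFrame, outcome: str, exposure: str,
                     covariates: list) -> float:
    X = sm.add_constant(df[[exposure] + covariates].astype(float))
    fit = sm.Logit(df[outcome].astype(float), X).fit(disp=0)
    X_on, X_off = X.copy(), X.copy()
    X_on[exposure], X_off[exposure] = 1.0, 0.0
    # Ratio of the average predicted risk with exposure present vs absent
    return fit.predict(X_on).mean() / fit.predict(X_off).mean()

# Example call (hypothetical data frame and columns):
# pr = prevalence_ratio(data, "ptsd_moderate_or_severe", "physical_abuse",
#                       ["age", "sex", "educational_level", "marital_status"])
```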
The pattern of missing data in the study sample is presented in eTable 3 in the Supplement. The proportion of missing data ranged from less than 0.1% (1 of 1184) for family size to 12.5% (148 of 1184) for body mass index (BMI; calculated as weight in kilograms divided by height in meters squared). We also excluded 32 participants with missing covariate data from the multivariable analysis.
Response Rate
Of the 1280 households sampled, 17 were excluded because the head of the household did not consent to participate. An additional 57 households were excluded because no eligible persons were available during the study period. This left 1184 households in our analysis, for a 92.5% response rate. A thorough calculation of the response rate is given in the eAppendix in the Supplement. We also investigated the association between postdisplacement factors and demographic variables (eFigure in the Supplement). Rohingya women did not receive the same opportunities for paid employment that men received. People younger than 45 years were more often engaged in paid employment. Also, adults younger than 35 years reported experiencing more physical and sexual abuse relative to older age groups. Rohingya adults who had more than 4 family members reported a lack of relief from humanitarian agencies.
Multivariable Analysis: Adjusted PRs
We estimated adjusted prevalence ratios (aPRs) using a multivariable logistic regression model adjusted for potential confounders. This study provides a detailed view of the symptoms of traumatic distress encountered by Rohingya refugees in Bangladesh. The prevalence of mental illness was 54% among the war-affected Ugandan population, of which two-thirds had been displaced for more than 5 years. 34 Also, 48.8% of Afghan refugees who fled their country nearly 20 years ago and have now lived in Australia for 1 to 5 years met the criteria for PTSD. 35 One study of Syrian refugees living in Germany found that only 13% had mental health disorders. 36 However, a small study conducted in 2017 among camp-based Rohingya refugees who fled to Bangladesh found that 36% of adults had PTSSs and that 89% had symptoms of depression. 21 There are currently insufficient resources for mental health services in Rohingya refugee settings, and the number of mental health professionals is too low to cover the entire Rohingya population in need. 37 The high prevalence of PTSSs suggests that a scale-up of mental health care is needed, which could be met by increasing medical workers' capacity in refugee health facilities to diagnose and treat patients with mental disorders.
In Bangladesh, refugees are not legally entitled to work. The inability to survive without jobs has led many refugees, especially men, to illegally seek paid employment. Many of the Rohingya refugees work in the refugee camps, especially as paid day laborers who help construct roads, prevent landslides, and clear sewage. Humanitarian aid from the government or NGOs plays an essential role in Rohingya refugees' lives. The refugees are solely dependent on humanitarian assistance. Our findings show that sufficient humanitarian aid to a family is associated with a lower risk of developing mental health symptoms compared with insufficient assistance. Similarly, other studies found that a lack of social support was associated with poorer mental health. [39][40][41] Before the mass exodus to Bangladesh, sex-based abuse toward refugees was well documented in many reports. [4][5][6] We found that physical abuse was more frequent among male refugees than female refugees and that both physical and sexual abuse were equally prevalent among male and female refugees. Our findings indicate that, among Rohingya refugees subjected to physical and sexual abuse, the prevalence of severe PTSSs was high. Many studies have found that women have a higher incidence of mental health disorders after rape or sexual assault. 38,42 Approximately half the respondents in our sample were male. The study findings show that symptoms of traumatic distress were more prevalent among male refugees than female refugees.
Several studies have shown that male refugees have a higher prevalence of poor mental health. 38,41,42 A qualitative study among refugees from Afghanistan suggested that the lack of job opportunities plays a crucial role in mental distress among men after years of forced migration. 43 Rohingya women do not get enough paid jobs in conservative Rohingya society, making it difficult for them to earn a living. This lack of paid employment may be associated with increased depression among women.
Our research also indicates that refugees with physical disabilities were at a much greater risk of elevated PTSD symptoms than those without disabilities. This finding is consistent with previous studies that found that disability was associated with a high prevalence of mental illness. 44 In our sample, the risk of traumatic distress increased with increasing age regardless of sex. These findings are consistent with studies conducted among Syrian and Sudanese refugees. 33,45 Our research also found that refugees who were unable to read or write had a higher prevalence of severe mental health symptoms than those who had schooling. Currently, adult refugees do not have educational opportunities in the refugee camps, and job placement is not based on education; thus, education confers little practical advantage for adults in the camps. This finding is comparable to previous research showing that poor education was correlated with higher rates of PTSD. 46,47 Also, married respondents were more likely to have PTSSs than those who were never married.
This finding is comparable to findings in previous research. 48 Our study further found a high risk of PTSSs for refugees with family sizes of more than 4 members. The Rohingya population believes that family planning methods clash with their faith. 49 Thus, married people are more likely to have a large family and to be unable to afford enough food or fulfill other needs; this may explain why marital status and family size have a positive association with traumatic distress.
Strengths and Limitations
Our research has several strengths. First, a local NGO that has worked with Rohingya refugees for more than 3 years helped us collect data from the 8 camps. Involving interviewers from the Rohingya refugee camps reduced information bias. Second, we recruited households through a random sampling technique. Third, to assess symptoms of mental well-being, we collected data on a variety of variables that are not captured in current registries or population-scale monitoring efforts.
Several limitations should be taken into account when interpreting our results. First, our research used a scale to measure PTSSs that has not been validated for use in the Rohingya community. Second, the short duration of the cross-sectional survey may fail to capture the transient nature of the population. Third, we did not assess mental health treatment availability and use during the last 2 years of the postmigration period. This limitation could be significant because accessing or using mental health services may mitigate the prevalence of PTSSs among respondents.
Fourth, we did not gather details on the number of assaults experienced by the respondents or the number of family members who were involved in those events. We had a question concerning the loss of family members during the attack; unfortunately, some of the respondents became too emotional to respond, and several respondents misunderstood the question about the concept of immediate family members. We therefore stopped asking these questions later in the study. Similarly, several respondents were worried about their health conditions but were not certain about chronic diseases because they had not visited any health facilities; many respondents were thus unable to answer this question. Finally, we did not include several postmigration stressors, such as the refugee camp climate, access to health care or education, variety of food, and limited access to certain services. These variables may have confounding effects on mental health.
Conclusions
The high prevalence of self-reported mental health symptoms among Rohingya refugees in this study suggests that the trauma of displacement and the violent consequences of military clearance operations are still present. Different support services, such as access to education or training in stress management for coping with violent memories, may reduce the burden of severe mental health symptoms. Our findings indicate the importance of ensuring that every household receives sufficient humanitarian assistance. Employment opportunities in Rohingya refugee camp settings hold promise as a potential intervention to reduce the burden of mental health symptoms. | 2021-03-17T06:17:28.331Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "341c4a1e1f82b5b5c7c5a6c55d12eb0c0d0f4438",
"oa_license": "CCBY",
"oa_url": "https://jamanetwork.com/journals/jamanetworkopen/articlepdf/2777507/hossain_2021_oi_210081_1615314318.31688.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ed9d075caa869b955f7833e1f0758ce8a8866cc2",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
139122863 | pes2o/s2orc | v3-fos-license | Surface-Treated Poly(dimethylsiloxane) as a Gate Dielectric in Solution-Processed Organic Field-Effect Transistors
Poly(dimethylsiloxane) (PDMS) is a transparent and flexible elastomer with a myriad of applications in various fields, including organic electronics. However, the inherent hydrophobic nature and low surface energy of PDMS prevent its direct use in many applications, and it is seldom utilized as a gate dielectric in solution-processed organic field-effect transistors (OFETs). In this work, we demonstrate a simple method, extended ultraviolet–ozone (UVO) treatment, to modify the PDMS surface and effectively employ it in solution-processed OFETs as a gate dielectric material. The modified PDMS surface shows enhanced wettability and adherence to both polar and nonpolar liquids, in contrast to the purely hydrophilic character generally observed for UVO-treated PDMS surfaces owing to the creation of polar functional groups. The morphological changes occurring on the PDMS surface as a result of extended UVO treatment play a major role in making the surface suitable for all types of solvents discussed here; contact angle measurements provide qualitative evidence for this observation. The modified PDMS is then used as a gate dielectric in solution-processed n- and p-channel OFETs using [6,6]-phenyl-C61-butyric acid methyl ester (PC60BM) and regioregular poly(3-hexylthiophene) (rr-P3HT) semiconductors, respectively.
INTRODUCTION
Poly(dimethylsiloxane) (PDMS) is a highly flexible, chemically inert, and inherently hydrophobic silicone elastomer. 1−3 Its high optical transparency, flexibility, and solution processability make this material attractive for many technological applications such as soft lithography, microcontact printing, microfluidics, and medical and optical devices. 4−9 Because of its reversible stretchability and flexibility, PDMS is often employed in flexible electronic devices such as organic field-effect transistors (OFETs), 10,11 organic light-emitting diodes, 12 ac thin-film electroluminescent devices, 13 and solar cells. 14 For the fabrication of flexible devices such as OFETs, solution-processing methods such as spin-coating, dip-coating, and ink-jet printing are preferred over expensive techniques such as physical vapor deposition because of their ease of processability, cost-effectiveness, and ability to cover a large area. 15,16 However, direct deposition of organic materials on PDMS by solution-processing methods is considered a challenging task because organic semiconductors (OSCs) dissolved in nonpolar solvents show poor wettability and adherence on the PDMS surface in spite of its inherent hydrophobic nature. 17 This has created major obstacles in making solution-processed multilayer devices on PDMS. 2,18 Surface treatments are often used to improve the wetting characteristics of polymers prior to their use in different applications. The commonly adopted techniques for PDMS surface modification are ultraviolet−ozone (UVO) treatment, oxygen plasma treatment, corona discharge, and other chemical treatments. 19−22 Apart from its long processing time, UVO treatment is a milder and less expensive technique than the other surface treatment methods. 19−22 It is reported that UVO treatment renders the PDMS surface hydrophilic through the creation of polar functional groups on its surface and hence improves the wetting of polar solvents. 19,23 This, however, does not solve the wettability and adherence issue of nonpolar solvents on the PDMS surface. In the present work, we describe a simple method, called extended UVO treatment, to improve the wettability and adherence of both nonpolar and polar solvents on the PDMS surface.
PDMS is commonly used as a supporting layer in flexible organic devices. 24,25 Because of its good dielectric properties, PDMS is also a suitable gate dielectric material for OFETs. 5,26 In previous reports where PDMS was used as a gate dielectric, the active materials were either thermally evaporated or their single crystals were physically placed across the channel. 5,10,18,27 To the best of our knowledge, PDMS has not been used as a gate dielectric in solution-processed OFETs. Here, we employed the surface-treated PDMS as a gate dielectric in n- and p-channel OFETs by direct solution-casting of the active materials, dissolved in nonpolar solvents, onto PDMS. The n- and p-type semiconducting materials used in the study are [6,6]-phenyl-C61-butyric acid methyl ester (PC 60 BM) and regioregular poly(3-hexylthiophene) (rr-P3HT), respectively. The OFETs thus fabricated show good electrical characteristics. The enhanced wettability of organic solutions on the PDMS surface achieved with a simple UVO treatment method can have a strong impact on fields such as organic electronics and microfluidics, where high flexibility, transparency, and solution processability are desirable.
RESULTS AND DISCUSSION
2.1. Effect of UVO Treatment on Wettability and Adhesion. The wetting behavior of untreated PDMS toward polar and nonpolar solvents was examined using contact angle (CA) measurements (Figure 1). The hydrophobicity 1 of PDMS is evident from the large water CA (∼115°), as shown in Figure 1a. The nonpolar solvents, which are used to dissolve the semiconducting polymer/molecule in this study, are expected to wet the PDMS surface before any surface treatment, as the CAs of these solutions are small (∼20°) (Figure 1c,e). This is in accordance with the theory of wetting, whereby a hydrophobic surface is naturally wetted by nonpolar solvents; 28,29 however, we observed that during spin-coating the nonpolar organic solutions spilled off the PDMS surface and failed to form a thin layer on it. This nonadhesive behavior of nonpolar solutions on the PDMS surface can be attributed mainly to the insufficient surface energy of PDMS. 17,20,21,30,31 We carried out UVO treatment of the PDMS surface for different exposure times and found that an extended UVO treatment of about 60 min resulted in a large reduction in the CA values (a measure of the surface energy) 32 for both polar and nonpolar liquids. The water CA was reduced from 115° to 34°, and the CAs of chlorobenzene and chloroform were reduced from 20° to 3° and 7°, respectively (Figure 1d,f). The organic solutions spin-coated on the UVO-treated PDMS layer showed perfect wetting and adhesion and formed uniform thin layers (a schematic representation is given in Figure 2). The high wettability of polar and nonpolar solvents on the extended UVO-treated PDMS surface indicates that the PDMS surface has undergone a noticeable modification in addition to the creation of polar functionalities on its surface. Such improved wetting and adhesion can be explained on the basis of the chemical entities present on the PDMS surface as well as its surface morphology. 19,33 2.2. Chemical Modification of the PDMS Surface. Attenuated total reflectance−Fourier transform infrared spectroscopy (ATR−FTIR) measurements can be used to identify the different chemical functionalities present on the PDMS surface before and after UVO treatment. In Figure 3a−e, ATR−FTIR spectra of PDMS at different UVO exposure times are given. PDMS has a series of characteristic infrared bands. 19 Among these, the most intense are those associated with asymmetric −CH 3 stretching in Si−CH 3 (2950−2970 cm −1 ), symmetric −CH 3 deformation in Si−CH 3 (1245−1270 cm −1 ), stretching vibration of Si−O−Si bonds (1000−1100 cm −1 ), and −CH 3 rocking and Si−C stretching vibrations in Si−CH 3 (785−815 cm −1 ). 19,34,35 These bands are common to untreated and UVO-treated PDMS. However, we observed that with increasing UVO exposure time, a broad region appeared around 3050−3700 cm −1 , as shown in Figure 3f, which corresponds to the polar hydroxyl (−OH) functional group. Also, bands corresponding to Si−O stretching vibrations in silanol (Si−OH) appeared with increasing UVO exposure time in the regions between 825−865 and 875−920 cm −1 (Figure S3). Simultaneously, the band corresponding to the Si−C vibration decreased (Figure S4). The reactive oxygen radicals formed during UVO exposure are responsible for the conversion of methyl (−CH 3 ) groups to silanol (Si−OH) groups. 35 The increase of −OH functional
groups on the PDMS surface makes it more and more hydrophilic as the exposure time increases. 19−23 2.3. Morphological Changes on the PDMS Surface. The role of morphological changes in determining the wetting characteristics of PDMS was examined using scanning electron microscopy (SEM) and atomic force microscopy (AFM) images of the untreated PDMS and the 60 min UVO-treated PDMS surfaces (Figure 4a−f). The images show that UVO treatment completely altered the surface morphology, featuring the formation of wrinkles with hills and valleys. From the AFM images, the root mean square (rms) roughness of the annealed PDMS surface was found to be ∼10 nm. Upon extended UVO treatment, the roughness almost doubled to ∼20 nm (roughness data are given in Table 1). The mechanism of wrinkle formation can be explained by the high viscoelasticity of PDMS, which results in mechanical stress in spin-coated PDMS thin films. 10,36 Thermal curing and subsequent cooling release this stress, resulting in the formation of wrinkles on the surface. 10 Prolonged UVO irradiation further increases the surface wrinkling, resulting in high surface roughness. 34,36 An increase in the surface roughness can also be caused by the evaporation of volatile fragments from the PDMS surface as the siloxane component is converted to silicon oxides. 34
This is similar to the nanostructuring observed in oxygen plasma-treated PDMS. 22 The high degree of wetting by nonpolar liquids on the modified PDMS surface can be attributed to the increased surface roughness, 28,39 which effectively increases the surface area. A liquid of low viscosity can easily penetrate into the deep narrow valleys, causing a reduction in the CA and allowing complete spreading, 39 which then forms a thin film during spin-coating. 2.4. Dielectric−Semiconductor Interface Morphology. The dielectric−semiconductor interface of an OFET plays a crucial role in determining the channel formed between the source and drain electrodes by the applied electric field. 31,40 Any defect at the interface can cause the formation of trap states, degrading the device's operating characteristics. 31 The solvents used for dissolving the OSCs can cause swelling of the underlying dielectric, which may reduce the interface quality. The swelling abilities of the solvents used in this study on PDMS are in the order chloroform > cyclohexane > chlorobenzene. 41 We observed that, because of the moderate solubility of PDMS in chlorobenzene, the PDMS layer lying beneath the P3HT layer was negligibly affected. Though PDMS is highly soluble in chloroform, 41 chloroform did not affect the underlying layer in the device because of its high volatility. In a bottom-gate device, the surface properties of the dielectric can influence the morphology and molecular organization of the OSC coated on top of it. 31 The surface morphology of PDMS coated with the OSC solutions was evaluated using AFM and SEM images (Figure 5); the roughness data of the films are given in Table 1. From the images, it is clear that the OSC layers follow the underlying wrinkled morphology of PDMS. The wrinkles are more visible in the case of the rr-P3HT layer because it is thinner than the PC 60 BM layer. Nevertheless, the images show that the continuity of the active layer is not lost.
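The roughness-driven wetting improvement argued for in section 2.3 above is the classic Wenzel picture, in which the apparent contact angle θ* on a rough surface obeys cos θ* = r cos θ, with r ≥ 1 the ratio of the true to the projected surface area. The snippet below is only an illustration of this textbook relation with assumed numbers; the roughness ratio r of the UVO-treated films is not reported in this work.

```python
# Illustrative Wenzel-model calculation: roughness amplifies the intrinsic
# wetting tendency. The roughness ratio r used here is an assumed value.
import math

def wenzel_angle(theta_deg: float, r: float) -> float:
    """Apparent contact angle on a rough surface: cos(theta*) = r*cos(theta)."""
    c = max(-1.0, min(1.0, r * math.cos(math.radians(theta_deg))))
    return math.degrees(math.acos(c))

# A liquid with an intrinsic CA of 20 deg spreads completely once r*cos(theta)
# reaches 1, consistent with the near-zero CAs measured after UVO treatment.
print(wenzel_angle(20.0, 1.5))  # -> 0.0
```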
2.5. X-ray Diffraction (XRD) Analysis. Besides morphology, the surface properties of the gate dielectric can influence the crystallinity of the OSC layer. 42 We carried out XRD analysis of the surface-treated PDMS thin film and of the organic active layers coated on the modified PDMS film. The results are compared with those of the unmodified PDMS thin film and of the organic active layers spin-coated on glass substrates (Figure 6). The XRD patterns of untreated PDMS and surface-treated PDMS show similar characteristics. We observed a characteristic broad peak at a 2θ value of 12.2° in the XRD patterns of the PDMS thin films before and after extended UVO treatment (Figure 6a), indicating the amorphous nature of PDMS. 43 A noticeable variation in the ordering is also not observed between the organic layers coated on glass and those coated on the UVO-treated PDMS layer. The XRD pattern of rr-P3HT shows a crystalline nature with characteristic peaks around 5.6° and 11.2° corresponding to the crystal planes (100) and (200), respectively (Figure 6b), indicating a stacked structure. 44,45 The sharp peaks at 2θ values of 7.6° and 10.7° and the small distinct peaks around 20° in the XRD pattern of PC 60 BM indicate its crystalline nature. 46 From the above observations, we conclude that the UVO-treated PDMS layer, despite its pronounced surface irregularities, does not alter the basic crystallinity of the
OSC materials coated on top of it. A broad region (from 15° to 35°) seen in all the XRD data is the effect of the glass substrate on which the layers are coated. When comparing the XRD patterns with and without the surface-treated PDMS layer, the peak intensity is not considered an indication of improved crystallinity. This is because the wrinkled structure of PDMS can cause the OSC to accumulate in the valleys and form only a thin layer on the hills, so the peak intensity can vary depending on the area of focus. 47 2.6. Capacitance−Voltage Measurements. The basic requirements of a gate dielectric material are a high dielectric constant (ε r ), a high geometric capacitance (C i ), and the ability to produce a good-quality interface with the semiconductor to ensure a strong field effect for proper transistor operation. Given the favorable interface properties of PDMS/OSC obtained earlier, we evaluated ε r and C i of PDMS for its application in OFETs using capacitance−voltage (C−V) measurements. The C−V characteristics of a dielectric material reveal its charge accumulation and discharge capabilities and hence verify its suitability as a transistor dielectric. We used metal−insulator−metal (MIM) (Al/PDMS/Al) and metal−insulator−semiconductor (MIS) (Al/PDMS/rr-P3HT) configurations to perform the quasi-static C−V measurements, and the plots are shown in Figure S5. From the C−V measurements of the MIM capacitor, the dielectric constant of the material was obtained as ∼2.32 (at 1 kHz frequency), which is in agreement with the literature values (ε r = 2.3−2.8 from 1 MHz to 100 Hz). 26 The capacitance per unit area of the device (C i ) was obtained as ∼1.03 nF/cm 2 . The MIS capacitor showed typical p-type behavior with inversion−depletion and accumulation regions as the gate voltage was swept from +30 to −30 V. The output characteristics of the p-channel OFETs (Figure 7a) do not show clear current saturation. This behavior is expected because of the high level of oxygen doping in rr-P3HT, which causes a significant increase in conductivity and thereby increases the channel current. When the channel current is high enough to dominate the field effect, the channel fails to pinch off. 48,49 This effect is similar to the short-channel effect, which results from a dielectric layer that is thick compared to the channel length. 50 The perfect linearity at low drain voltages indicates Ohmic contacts between the source−drain electrodes and the active material. The best linear mobility of the p-channel OFET is found to be 6.81 × 10 −3 cm 2 V −1 s −1 , and the current on/off ratio is 0.21 × 10 3 (at V DS = −60 V) when the measurements are done in air. The linear mobility and current on/off ratio in vacuum are 3.38 × 10 −3 cm 2 V −1 s −1 and 0.73 × 10 2 (at V DS = −60 V), respectively. The threshold voltages observed in vacuum and air are −10 and 10 V, respectively. The observed change in polarity is again attributed to the increased conductivity of rr-P3HT in air because of unintentional oxygen doping. 51 The positive threshold voltage and the high off current for devices characterized in air imply that the conductivity of rr-P3HT is not negligible even in a reverse-biased condition. Oxygen doping at the exposed regions of rr-P3HT increases the conductivity of the channel, thereby increasing the off current, decreasing the on/off ratio, and also resulting in overestimated mobility values. 51,52 The n-channel OFETs show good FET output characteristics in both air and vacuum.
The I DS −V DS curves are linear at low drain voltages and saturate at high drain voltages. The threshold voltage of the device is ∼30 V, and the drain current is of the order of 10 −6 A for a voltage range from 0 to 100 V, when characterized in vacuum. The mobility in the saturation regime is obtained as 0.81 × 10 −2 cm 2 V −1 s −1 and that in the linear regime at a low drain voltage (V DS = 8 V) is obtained as
6.33 × 10 −3 cm 2 V −1 s −1 in vacuum. The current on/off ratio is of the order of 10 2 (at V DS = 90 V) (Figure 7c,d). Unlike rr-P3HT, the PC 60 BM semiconductor degrades in an ambient atmosphere, and the OFET device performance is affected accordingly. The output characteristics of the p- and n-channel OFETs in vacuum and air, respectively, are shown in Figure S6. The characteristic values of both the n- and p-channel OFETs are listed in Table S1 of the Supporting Information.
The transfer characteristics of both OFET devices show a small hysteresis, as shown in Figure S7. In OFETs, hysteresis can occur for several reasons, such as charge-carrier trapping close to the channel, the presence of mobile ions in the dielectric, or slow polarization of the dielectric. 53 As discussed earlier, the increased roughness of the PDMS layer leads to the creation of wrinkles in the channel region, which may act as charge-carrier traps. 31 We obtained hysteresis-free characteristics with slow sweeping rates (Figure S7). This observation suggests the presence of trapped majority or minority charges in the channel near the semiconductor/dielectric interface. Even though roughness in the dielectric is detrimental to the dielectric/semiconductor interface quality, the transistor performance is not limited by this factor. The mobility values obtained from the OFET devices are comparable to the values reported for rr-P3HT and PC 60 BM semiconductors. The reasonable device performance, even with the wrinkled morphology of the PDMS dielectric, can be attributed to the presence of a strong localized electric field at the valleys of the wrinkles. 54
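As a practical illustration of how the mobility values quoted above follow from measured transfer curves, the sketch below implements the standard linear- and saturation-regime expressions (given as eqs 1 and 2 in the Experimental Methods) with the reported device geometry; the current-voltage arrays themselves are placeholders, not measured data.

```python
# Illustrative mobility extraction from OFET transfer data using the standard
# gradual-channel expressions (eqs 1 and 2 in the Experimental Methods).
# Geometry and C_i are the values reported in this work; I-V data are assumed.
import numpy as np

W = 2e-3        # channel width, m (2 mm)
L = 50e-6       # channel length, m (50 um)
C_i = 1.03e-5   # gate capacitance per unit area, F/m^2 (~1.03 nF/cm^2)

def mu_linear(V_GS, I_DS, V_DS):
    """mu_lin from the slope of I_DS vs V_GS at a fixed small V_DS."""
    slope = np.polyfit(V_GS, I_DS, 1)[0]            # dI_DS/dV_GS
    return slope * L / (W * C_i * V_DS)

def mu_saturation(V_GS, I_DS):
    """mu_sat from the slope of sqrt(I_DS) vs V_GS in saturation."""
    slope = np.polyfit(V_GS, np.sqrt(np.abs(I_DS)), 1)[0]
    return 2.0 * L * slope**2 / (W * C_i)
```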
CONCLUSIONS
In summary, we modified the surface of PDMS using extended UVO treatment and enabled the adhesion of both polar and nonpolar solvents on its surface. The presence of hydroxyl groups and the wrinkled morphology together contributed to the increase in its surface energy and thereby improved the wetting and adhesion. The modified PDMS was then utilized as a gate dielectric in OFETs with solution-processed active layers. The n- and p-channel OFETs using the solution-processed OSCs PC 60 BM and rr-P3HT showed characteristics typical of the corresponding OFETs. The present study demonstrates the potential of PDMS as a gate dielectric in solution-processed FETs and suggests its applicability in solution-processed multilayer flexible devices.
EXPERIMENTAL METHODS
4.1. Materials. PDMS (Sylgard 184), along with the curing agent, was purchased from Dow Corning. The semiconductors rr-P3HT and PC 60 BM were procured from American Dye Source Inc.
4.2. PDMS Surface Modification and Surface Property Studies. PDMS, mixed with the curing agent in a 10:1 ratio and diluted in cyclohexane (PDMS/cyclohexane = 1:2), was spin-coated onto cleaned glass substrates at a speed of 6000 rpm. The thickness of the layer was found to be ∼2 μm (measured using an Alpha-Step D-600 Stylus Profiler, KLA-Tencor). After thermal annealing for 30 min at 150°C, the layers were treated with a UVO cleaner (UVOCS, USA) for different time durations. A low-pressure quartz−mercury vapor lamp with an output power of 9 mW/cm 2 , which generates UV emission at 254 and 185 nm, was used as the UV source.
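A quick parallel-plate consistency check ties the numbers in this work together: the ∼2 μm film thickness just quoted, combined with ε r ≈ 2.32 from the C−V measurements, reproduces the measured geometric capacitance. The snippet below assumes the device dielectric has the same ∼2 μm thickness as the characterization films, which is not stated explicitly in the text.

```python
# Parallel-plate estimate C_i = eps0 * eps_r / d, assuming the device PDMS
# layer has the ~2 um thickness quoted for the characterization films.
EPS0 = 8.854e-12       # vacuum permittivity, F/m
eps_r = 2.32           # from the MIM C-V measurement at 1 kHz
d = 2.0e-6             # assumed PDMS film thickness, m

c_i = EPS0 * eps_r / d             # F/m^2
print(c_i * 1e5)                   # -> ~1.03 nF/cm^2, matching the reported C_i
```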
CA measurements (CA Meter, Holmarc) were performed with water and with organic solutions of rr-P3HT and PC 60 BM prepared in chlorobenzene and chloroform, respectively. The method used to qualitatively determine the wetting nature and surface energy from CA measurements is given in the Supporting Information (Figure S1). The FTIR−ATR spectra of untreated PDMS and of UVO-treated PDMS at different exposure times were measured using a Bruker Alpha-E FTIR spectrometer equipped with a ZnSe crystal. The surface morphologies of untreated PDMS (only annealed at 150°C), UVO-treated PDMS, and OSCs coated on PDMS were investigated using AFM (Bruker Multimode 8 AFM) and SEM (Nova NanoSEM 450, FEI) techniques. The effect of UVO treatment on the PDMS surface, and thereby on the ordering of the OSC layer spin-coated on PDMS, was investigated using XRD (Empyrean, PANalytical XRD instrument with Cu Kα radiation) measurements. OFET devices were fabricated (device schematic in Figure S2) on cleaned glass substrates with rr-P3HT and PC 60 BM as active layers and extended UVO-treated PDMS as the gate dielectric. Thermally evaporated aluminum of thickness ∼80 nm served as the gate electrode (thickness monitored using an INFICON quartz thickness monitor). The PDMS film spin-coated on the Al gate electrode was then cured at 150°C for 30 min and UVO-treated for about 60 min before semiconductor coating. The optimized values of concentration, spin speed, and annealing temperature were used for active layer coating. The OSCs rr-P3HT and PC 60 BM were spin-coated on PDMS from solutions of 10 mg/mL in chlorobenzene and 20 mg/mL in chloroform, respectively, at a spin speed of ∼800 rpm. The rr-P3HT layer was annealed at 110°C for 30 min, and PC 60 BM was annealed at 60°C for 10 min. The thicknesses of the rr-P3HT and PC 60 BM layers were 60 and 250 nm, respectively. Gold source−drain electrodes were evaporated onto the OSC layers, leaving a channel length (L) of 50 μm between the source and drain electrodes and keeping the channel width (W) at 2 mm.
The OFET devices were fabricated in a nitrogen atmosphere. The characterization was done in air as well as under vacuum conditions using a Keithley SCS 4200 semiconductor parameter analyzer. We calculated the field-effect mobilities (μ lin and μ sat ) of the OFETs in the linear and saturation regimes of operation using the following eqs 1 and 2, respectively. Current in the linear regime: I DS = (W/L) μ lin C i (V GS − V Th ) V DS (1). Current in the saturation regime: I DS = (W/2L) μ sat C i (V GS − V Th ) 2 (2). Here, W (2 mm) and L (50 μm) are the transistor channel width and length, respectively, V Th is the threshold voltage, and C i is the capacitance per unit area of the PDMS gate dielectric. | 2019-04-30T13:08:14.205Z | 2018-09-17T00:00:00.000 | {
"year": 2018,
"sha1": "055ca0e62f5dfa09714b7458db4e5c98c3027484",
"oa_license": "acs-specific: authorchoice/editors choice usage agreement",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.8b01629",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "255677212925c6ef409ca7ceda7cb7231c821d77",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
46910503 | pes2o/s2orc | v3-fos-license | An Etiological, Clinical and Histological Study of Oral Lichen Planus with Comparative Evaluation between Various Therapies
1 Sonal B Dudhia, 2 Bhavin B Dudhia, 3 Jigna S Shah. 1 Assistant Professor, Department of Oral Diagnosis, Oral Medicine and Dental Radiology, College of Dental Science, Ahmedabad, Gujarat, India; 2 Associate Professor, Department of Oral Diagnosis, Oral Medicine and Dental Radiology, Ahmedabad Dental College and Hospital, Gandhinagar, Gujarat, India; 3 Professor and Head, Department of Oral Diagnosis, Oral Medicine and Dental Radiology, Government Dental College and Hospital, Ahmedabad, Gujarat, India
INTRODUCTION AND REVIEW OF LITERATURE
Lichen planus is a complex chronic mucocutaneous disease reportedly affecting 0.5 to 2.0% of the population, with a mild predilection for females and a mean age at onset in the fourth to fifth decade of life. 1 The name 'lichen planus' (LP) refers to the superficial similarity of the lesions of reticular lichen planus to the lace-like pattern produced by symbiotic algal and fungal colonies on the surface of rocks in nature (lichens). 2 The disease was first described in 1869 by Erasmus Wilson as 'leichen planus', 'an eruption of pimples remarkable for their color, their figure, their structure, their habits of isolated and aggregated development, their habitat, their local and chronic character and for the melasmic stains which they leave behind them when they disappear'. 3 As the true cause of lichen planus remains poorly defined, it has been associated with many local and systemic factors. 1,4 A number of publications strongly favor an immune pathogenesis for OLP, ranging from basal cell alterations to metabolic dysfunctions that interfere with tonofilament production. There is extensive evidence to suggest that cellular immune system dysfunctions play a key role in the onset and perpetuation of this disorder. 5-10 Other etiological factors include rheumatic collagen diseases, HLA predispositions, mouth irritation and smoking. 1,11 The fundamental lesion of lichen planus is a small papule. The appearance of lichen planus lesions depends upon the various arrangements of the minute papules. 3,7 The grayish-white lines and dots representing the arrangement of papules were described as a pathognomonic sign by Wickham in 1895. Gougerot described reticular, annular and plaque lesions as well as unrelated dotted and sclerotic forms. 3,9,12,13 Tongue, gingivae, palate and lips are the next most frequent sites. 1,3,4,6,7,9 The lesions are almost always to some extent bilateral, 11,16,17 and they can occur separately or simultaneously. 1,3,6,7 Reticular, papular and plaque lesions seldom present with any symptom. 6 On the other hand, atrophic, erosive and bullous lesions cause the patient to come to the office complaining of a sore, burning mouth. 1,3,6,7,18,19 Atrophic lichen planus involving the gingivae is often referred to as desquamative gingivitis. 3,7,20 In approximately 40 to 50% of cases of lichen planus, oral lesions appear concomitantly with dermal lesions. 6 The skin eruptions of lichen planus take the form of flat, violaceous papules with a fine scaling on the surface. 6,14 The papules are usually 2 to 4 mm wide and may coalesce to form annular lesions or plaques. 6,7,14 The most common symptom is pruritus. 6 The lesions are most often found in the lumbar region, around the genitalia, on the ankles, on the anterior surface of the lower legs and on the flexor surfaces of the wrists. 3,6,17 Over the years, there has been considerable controversy concerning the possible relationship between squamous cell carcinoma and chronic lichen planus. 6,8,21 The occurrence of squamous cell carcinoma in most series ranged from 0.2 to 10%. 1,13,21,22 It has been reported that atrophic, erosive and ulcerative lesions of oral lichen planus and those showing erythroplakic components are potentially more dangerous.
4,8,9,20,21,23 Many and widely varied therapeutic approaches have been offered for the control and/or cure of LP. These therapies range from symptomatic treatment with analgesics to wide excisional biopsies. Various corticosteroids have been used in gel, cream, oral solution or aerosol formulations. Systemic and topically administered tretinoin, systemic etretinate, and systemic and topical isotretinoin are all effective. 24 Intralesional injections of triamcinolone (10 mg/ml) in lidocaine are generally reserved for more refractory, persistent cases. Systemic prednisolone or azathioprine has been administered for the control of chronic lichen planus with some success. Recent studies have shown that topical human fibroblast interferon may result in the temporary remission of lichen planus. 1 OLP responds most favorably to treatment with topical or systemic steroids. 3,9,13,16,18,20,25,26 Topically applied corticosteroid preparations, such as betamethasone valerate (0.01%) and fluocinonide (0.1%), continue to enjoy the most widespread use and success in providing remissions. 1,6,26 Fluocinonide 0.025 to 0.05% has been shown to be a safe and effective drug to reduce the signs and symptoms of OLP. 9,12,13,16 Steroids have no effect on hyperkeratotic lesions, while the atrophic and erosive types respond to local and/or systemic steroids with quick relief of symptoms. 1,27 The systemic retinoids etretinate, isotretinoin and tretinoin (all-trans retinoic acid) have produced clinically significant results in patients with oral lichen planus. 16,19,28 Topical tretinoin has produced generally good results in patients with OLP without the complications of systemic retinoids. 6,16,18,28 A related retinoid, isotretinoin, has been made available in a 0.1% formulation and showed considerable efficacy in the treatment of OLP. 28 Atrophic and erosive lesions responded less dramatically than reticular lesions. 18,28 The purpose of the present study was to achieve a better understanding of the clinical, etiological and histopathological characteristics of OLP and also to evaluate the curative response of various forms of OLP to different treatment modalities.
MATERIALS AND METHODS
The present study was conducted among 71 patients selected from the Department of Oral Diagnosis, Government Dental College and Hospital, Ahmedabad, Gujarat, India. The patients were selected randomly, irrespective of age, sex and marital status.
All patients with lesions morphologically showing white elevated papules/lines (Wickham's striae) were included in the study, irrespective of their complaint. After selection, all the patients were interviewed for a thorough medical and dental history. Clinical examination included extraoral and intraoral examination. Histopathologic investigation in the form of punch biopsy was carried out in 59 cases.
Each patient was treated with supportive and specific therapeutic measures. 1. Supportive therapy: Vitamin B complex and iron tablets were given to patients as a once-daily dose for up to 2 months; local factors were eliminated, such as by correction of sharp cusps/teeth or occlusal disturbances; and oral hygiene was maintained by scaling and prophylaxis.
2. Specific therapy:
A. Patients with OLP without any skin lesions were treated by topical therapy and categorized into the following groups:
a. Topical application of steroid (0.025%)
b. Topical application of vitamin A (0.025%)
c. Topical application of vitamin A (0.05%)
A total of 8 weeks of follow-up was done, and the size of the lesion, recorded as a score, was noted after every 2 weeks. Curative response was noted as score 1 and score 0, which indicated discrete papules and normal mucosa, respectively. In the erosive/atrophic forms of lesions, a change from score 5 or 4 to score 4, 3 or 2 was considered a partial response, and a change from score 5 or 4 to score 1 or 0 was considered a complete response. In the reticular/annular/linear/plaque and papular forms of lesions, a change from score 3 to score 2 was considered a partial response, and a change from score 3 or 2 to score 1 or 0 was considered a complete response.
B. Patients with skin lesions were treated with systemic steroids.
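The response rule above is purely algorithmic, so it can be encoded directly; the short helper below is our own illustrative formalization of the rule as stated, not part of the original study.

```python
# Minimal encoding of the curative-response rule described above. Scores are
# the lesion-size grades recorded at follow-up; "erosive" covers the
# erosive/atrophic forms, and all other clinical forms follow the second rule.
def curative_response(initial: int, final: int, erosive: bool) -> str:
    if final in (0, 1):
        return "complete"
    if erosive and initial in (4, 5) and final in (2, 3, 4) and final < initial:
        return "partial"
    if not erosive and initial == 3 and final == 2:
        return "partial"
    return "no response"

print(curative_response(5, 1, erosive=True))   # complete
print(curative_response(3, 2, erosive=False))  # partial
```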
RESULTS
Results are discussed in Tables 1 to 5.
DISCUSSION
In the present study, most patients fell in the 26 to 50 years age group, 9,12 and lichen planus was almost equally distributed among males and females. 12 This may be because the reticular/linear/annular lesions were more commonly associated with pan/tobacco/betel nut/gutkha chewing, which is more common among males than females, whereas the erosive form was more commonly associated with stress and hormonal disturbances, which are most frequently seen in females. Mixed lesions showed an equal sex distribution, which may be because they include erosive and reticular types of patients in equal numbers.
The most common symptom noted in OLP patients was burning sensation, which was more common in the erosive form. According to the literature, reticular/linear/annular, papular and plaque lesions seldom present with any symptom, whereas atrophic, erosive and bullous lesions cause patients to come to the clinic complaining of a sore, burning mouth. 1,3,6,7,19 In the present study, another common complaint was pruritus in skin lesions, which is in agreement with the literature. 6 Sharp cusps of teeth and the habit of pan/tobacco/betel nut/gutkha chewing were the commonest associated local etiological factors found in the different clinical forms of OLP. Sharp cusps of teeth and the habit of chewing pan/tobacco/betel nut/gutkha cause continuous irritation at a particular site and may act as predisposing factors for OLP. According to various reports, mouth irritation and traumatism 29 might be contributory etiological factors.
Psychogenic stress was found to be the commonest systemic factor associated with OLP lesions. 8-10 Other systemic factors noted were allergy to drugs, nutritional disturbances, hypertension, skin lesions, diabetes and hormonal disturbances. Drug allergy included hypersensitivity to drugs such as nonsteroidal anti-inflammatory drugs, penicillins, sulfonamides and iodides. According to the literature, drugs have also been causally linked to the so-called lichenoid eruptions of the oral mucosa. 1,3,4,6,18,24,29 Out of 71 patients, nine had skin lesions (Fig. 1). According to the literature, among large series of cases studied, at least one-third of patients with oral lesions also demonstrate cutaneous lesions. 1 Compared with the literature, the incidence of skin involvement was low in the present study. This may be because most of the patients came to our dental OPD with a chief complaint relating to oral lesions.
The most common clinical form observed was the reticular/linear/annular form, which is in agreement with the literature. 6 Considering the various clinical forms of OLP in the present study, reticular/linear/annular and erosive/atrophic forms were more commonly found on the buccal mucosa/vestibule (Fig. 2), and papular/plaque forms of lesions were more commonly found on the tongue. This is in agreement with other studies. 3,7 In the present study, hyperparakeratosis was a remarkable histopathologic feature (Fig. 3), whereas orthokeratosis was noted in few patients. It was pointed out by Andreasen (1968) that hyperparakeratosis is a much more common finding than hyperorthokeratosis. 30 Considering the different clinical forms of OLP, hyperparakeratosis was the prominent histopathological feature in the reticular/linear/annular forms. In the erosive/atrophic forms, the most commonly associated changes were hyperparakeratosis and atrophy.
In the present study, the other epithelial findings noted were hyperplasia, acanthosis and atrophy. According to the literature, acanthosis is the most common change in the stratum spinosum, and normal-appearing or atrophic epithelium is not uncommon. 17 Liquefaction degeneration of basal cells was noted in the majority of cases. 24 The most prominent histopathologic finding in the connective tissue was infiltration of chronic inflammatory cells (Fig. 3). The deeper layers of the connective tissue were relatively free of chronic inflammatory cells. 17 Epithelial dysplasia was noted in 10 cases. Dysplastic features included increased nuclear/cytoplasmic (N/C) ratio and hyperchromatism, cellular pleomorphism, lack of cellular cohesion, increased or abnormal mitoses, and teardrop-shaped rete pegs (Fig. 8). According to the literature, 31 the presence of any two or more of these histopathologic features mandates the separate designation of lichenoid dysplasia. It has also been reported that atrophic, erosive and ulcerative lesions of oral lichen planus are potentially more dangerous, 4,8,9,20,21,23 which is similar to the findings in the present study.
In all clinical forms of OLP, fluocinolone acetonide 0.025% therapy gave a better response than any other therapy. According to the literature, 9,12 fluocinolone acetonide 0.025 to 0.05% is a safe and effective drug for OLP. The benefit of corticosteroids in the management of lichen planus appears to be related to both their immunosuppressive and anti-inflammatory properties. 1,16,27 With retinoic acid 0.025% therapy, the number of completely cured lesions was lower in all forms. With retinoic acid 0.05% therapy, the curative response was good in the reticular/linear/annular form of OLP. According to the literature, topical retinoic acid has produced generally good results in patients with OLP, 6,16,18,28 although atrophic and erosive lesions responded less dramatically than reticular lesions. 18,28 The effect of retinoic acid seems to be due to its antikeratinizing and immunomodulating effects. 16,19,28 With systemic steroid therapy, there was no curative response in terms of a decrease in lesion size after completion of therapy, although burning sensation and pruritus were relieved in most of the patients. 25
SUMMARY AND CONCLUSION
The following conclusions were drawn from the present study: Lichen planus mostly affects persons in the age group of 26 to 50 years. Males and females are equally affected. The reticular/linear/annular form is the most commonly seen in OLP patients. The most common chief complaint of patients with OLP is burning sensation. Considering the sites of involvement, the buccal mucosa/vestibule is the commonest site for the reticular/linear/annular as well as the erosive and atrophic forms of lesions. The tongue is the commonest site for the plaque and papular forms of lesions.
The main systemic factor associated with all clinical forms of OLP is psychogenic stress. Skin lesions are noted in very few patients with OLP. Considering the local etiological factors in OLP, sharp cusps of teeth and the habit of pan/tobacco/betel nut/gutkha chewing are the commonest factors in all forms of OLP.
Common histopathological changes in OLP are hyperparakeratosis, liquefaction degeneration of basal cells, atrophy and a saw-tooth appearance in the epithelium, and infiltration of lymphocytes in the connective tissue. Epithelial dysplasia is noted in a small number of patients and is more common in the erosive/atrophic forms of OLP.
Topical application of fluocinolone (0.025%) is an effective and safe treatment for all forms of OLP. Topical application of retinoic acid (0.05%) is effective in the reticular forms of OLP. Topical application of retinoic acid (0.025%) is less effective than topical fluocinolone (0.025%) and topical retinoic acid (0.05%) in all forms of OLP. Systemic steroid therapy gives no curative response in any form of OLP; there is only symptomatic relief of burning sensation and pruritus. Larger doses of systemic steroids combined with topical steroids, with long-term follow-up, are required in patients with both oral and dermal LP.
Fig. 1: Photograph showing skin lesions of LP on the flexor surface of the hands
Table 1: Distribution of various signs and symptoms among different clinical forms of OLP. Note: Shows burning sensation in the oral cavity as the most common symptom among OLP patients
Table 3: Association of various systemic conditions among different clinical forms of OLP patients. Note: Shows psychogenic stress as the commonest systemic factor associated with different clinical forms of OLP
Table 4: Comparative analysis of various histopathologic features. Note: Shows hyperparakeratosis and liquefaction degeneration of basal cells as the most common epithelial features and infiltration of chronic inflammatory cells as the most prominent connective tissue feature
Table 2: Association of various local factors among different clinical forms of OLP patients. Note: Shows sharp cusps of teeth and the habit of pan/tobacco/betel nut/gutkha chewing as the commonest local factors associated with different clinical forms of OLP
Table 5: Response of various clinical forms of OLP patients to various therapeutic measures | 2018-03-30T06:33:14.590Z | 2011-10-01T00:00:00.000 | {
"year": 2011,
"sha1": "627c35a40fbc3da0658120e9272bb585faad5e0e",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.5005/jp-journals-10011-1211",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "627c35a40fbc3da0658120e9272bb585faad5e0e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118909424 | pes2o/s2orc | v3-fos-license | Semiclassical theory of cavity-assisted atom cooling
We present a systematic semiclassical model for the simulation of the dynamics of a single two-level atom strongly coupled to a driven high-finesse optical cavity. From the Fokker-Planck equation of the combined atom-field Wigner function we derive stochastic differential equations for the atomic motion and the cavity field. The corresponding noise sources exhibit strong correlations between the atomic momentum fluctuations and the noise in the phase quadrature of the cavity field. The model provides an effective tool to investigate localisation effects as well as cooling and trapping times. In addition, we can continuously study the transition from a few photon quantum field to the classical limit of a large coherent field amplitude.
I. INTRODUCTION
The center-of-mass motion of atoms strongly coupled to the electromagnetic field in a microresonator has become a central problem in current research in cavity quantum electrodynamics (QED) [1]. Very recently, trapping of a single atom by a single photon in a high-Q optical cavity has been predicted [2] and demonstrated [3,4]. These pioneering experiments open exciting new avenues for manipulating the motion of neutral atoms, with prospects for effective sub-Doppler cooling schemes [5], for controlled cavity-mediated long-range interactions between atoms [6,7], and for quantum information processing [8,9].
The motion of a two-level atom in a free laser field is quite well understood by now [10]. In a high-Q cavity, the confined radiation field has additional mechanical effects on the atomic center-of-mass (CM) dynamics. The reason for this is the back-action of the atom on the field, which becomes important in the regime of strong coupling between the atomic dipole and the field. Depending on its momentary position the atomic dipole can drastically modify the radiation field which, in turn, exerts forces on the moving atom. This results in a complex coupled dynamics of the atomic CM and the field.
There is a fully quantum theory describing the cavity-induced mechanical effects for intensities well below the single-photon level [5]. Analytical expressions have been derived for the linear friction force and for momentum diffusion. For a large range of parameters one finds strongly enhanced diffusion compared to free space, which could be partly attributed to the quantum nature of the cavity field. Simulations based on these expressions are in good agreement with experimental observations [6].
However, this theory is difficult to extend to higher intensities, where one could expect better cooling and trapping due to reduced quantum diffusion. A possible extension to higher photon numbers, based on the standard approach for laser cooling [11] adapted to the situation in cavities, was recently presented by Doherty et al. [12,13]. The computational effort, however, explodes with increasing photon number, and the approach is limited to calculating the dynamics in few-photon fields. Hence there is a high demand for a model that allows for exploring the intermediate regime between the quantum domain of subphotonic fields and classical laser fields with large intensity. The interest is at least twofold. Firstly, trapping and cooling of atoms in fields with intensities higher than the single-photon level can lead to new phenomena. The forces get larger, hence the potential wells involved are deeper, which is certainly interesting for further applications. The ability to model and efficiently simulate such systems is one essential goal. Secondly, a simple model is needed for elucidating fundamental features of the cavity-induced forces, such as the enhanced or suppressed momentum diffusion in comparison to free space.
In Sec. II we develop a semiclassical model based on the Wigner representation of both the motional atomic degrees of freedom and the cavity photon field, including spontaneous atomic and cavity decay. Systematic truncation of the corresponding time evolution equation at second order derivatives yields a Fokker-Planck type equation which can be simulated by classical stochastic differential equations. The essential feature of this approach is the consistent treatment of the quantum noise of the atom and cavity dynamics. The atomic momentum noise and the cavity phase noise turn out to be correlated, which is the specific ingredient of the atomic diffusion in a strongly coupled cavity. In Sec. III we use our simulation method to study the effects of atomic localisation on the steady state properties of the system. We also check the validity of our model by comparing the results with the analytic ones in the weak excitation limit. In Sec. IV we address the question of the atomic trapping in the potential wells. We calculate the characteristic trapping time, an important quantity in experimental realisations. Finally, we summarise our results in Sec. V.
II. SEMICLASSICAL MODEL
Let us consider a two-level atom interacting with a single mode of the electromagnetic field in a weakly driven cavity. The driving field is supposed to be monochromatic with frequency ω p with a given detuning from the mode frequency ω C and with fixed effective amplitude η. Dissipation in the system is due to cavity decay (κ) and spontaneous emission from the atom into other than the cavity mode (γ). The atomic dipole is strongly coupled to the field mode, i.e., the coupling constant g is at least of the same order of magnitude as the relaxation rates κ and γ. The atom is allowed to move in the cavity under the effect of field forces. The intracavity field is given by the mode function f (x). For the sake of simplicity this study will be restricted to one dimension but the generalization of the model to three dimensions is straightforward.
We derive the semiclassical model by systematic approximations from the full quantum master equation

$$\dot{\rho} = -\frac{i}{\hbar}\,[H,\rho] + \mathcal{L}\rho. \qquad (1)$$

The coherent atom-field dynamics is considered in the electric dipole and in the rotating wave approximation. In a frame rotating with the pump frequency ω_p, the Hamiltonian H reads

$$H = \frac{p^2}{2m} - \hbar\Delta_C\, a^\dagger a - \hbar\Delta_A\, \sigma^+\sigma^- - i\hbar g f(x)\,(\sigma^+ a - a^\dagger\sigma^-) - i\hbar\eta\,(a - a^\dagger), \qquad (2)$$

where ∆_C = ω_p − ω_C and ∆_A = ω_p − ω_A. The operators p and x are associated with the atomic momentum and position. The field and the atomic dipole are described by the annihilation and creation operators a, a†, and by the lowering and raising operators σ−, σ+, respectively. The damping terms are given by

$$\mathcal{L}\rho = \kappa\,(2a\rho a^\dagger - a^\dagger a\rho - \rho a^\dagger a) + \gamma\,(2\sigma^-\tilde{\rho}\,\sigma^+ - \sigma^+\sigma^-\rho - \rho\,\sigma^+\sigma^-), \qquad (3)$$

where the last term includes the momentum recoil due to spontaneous emission through $\tilde{\rho} = \int_{-1}^{1} du\, N(u)\, e^{-ikux}\,\rho\, e^{ikux}$, with N(u) the distribution of the projection u of the emission direction onto the cavity axis. It is possible to introduce an effective Hamiltonian of much simpler form when the internal atomic dynamics can be adiabatically eliminated. This is the case if the internal atomic variables σ± evolve on a much more rapid time scale than the other variables due to the large detuning ∆_A or due to the large damping rate γ. In either case the population in the excited atomic state is negligible (low saturation regime), and the atomic dipole moment σ− can formally be replaced by

$$\sigma^- \longrightarrow \frac{g f(x)}{i\Delta_A - \gamma}\, a. \qquad (4)$$

The dispersive and absorptive effect of the atom can be described by the parameters

$$U_0 = \frac{g^2\,\Delta_A}{\Delta_A^2 + \gamma^2}, \qquad \Gamma_0 = \frac{g^2\,\gamma}{\Delta_A^2 + \gamma^2}, \qquad (5)$$

respectively. The remaining atomic CM and field dynamics is governed by the adiabatic Hamiltonian

$$H_{\mathrm{ad}} = \frac{p^2}{2m} - \hbar\left(\Delta_C - U_0 f^2(x)\right) a^\dagger a - i\hbar\eta\,(a - a^\dagger) \qquad (6)$$

and the damping terms

$$\mathcal{L}_{\mathrm{ad}}\rho = \kappa\,(2a\rho a^\dagger - a^\dagger a\rho - \rho a^\dagger a) + \Gamma_0\left(2 f(x)\,a\,\tilde{\rho}\,a^\dagger f(x) - f^2(x)\,a^\dagger a\,\rho - \rho\, a^\dagger a\, f^2(x)\right). \qquad (7)$$

Eqs. (6) and (7) define an effective quantum master equation that accounts for the coupled atomic CM motion and field dynamics, and also for the environment-induced losses. The usual phase-space methods can be invoked to derive semiclassical equations of motion [14], a technique which has been successfully used for the study of laser cooling in free space, see e.g. Refs. [15-17]. We will omit the presentation of the detailed calculation here and concentrate on the main concepts only. The key to deriving semiclassical equations from the quantum model is the use of the combined Wigner function W(x, p, α, α*) that represents the atomic motion and the state of the cavity light field in the total phase space. The Wigner transform of the quantum master equation leads to a partial differential equation which is truncated at second-order derivatives to obtain a Fokker-Planck-type equation (FPE). The validity of this approach relies on two conditions. First, the photon momentum ℏk must be small compared to the momentum width Δp of the Wigner distribution, i.e., a single-photon absorption or emission will not change the momentum distribution considerably. Accordingly, the powers of the small parameter ℏk/Δp ≪ 1 introduce a hierarchy in the different orders, which justifies the truncation procedure. Besides this condition, which is well known from semiclassical treatments of free-space laser cooling, there is a second one concerning the quantized field state. That is, the second-order derivative ∂²/∂α∂α* must be negligible compared to |α|², in order to drop a term containing third-order derivatives in momentum and field variables. A sufficient condition is to consider fields close to coherent states with mean photon numbers higher than one. This step makes the model "semiclassical" in the description of the cavity field as well.
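To make the structure of this effective model concrete, the following sketch integrates the deterministic (drift) part of the coupled atom-field dynamics implied by Eqs. (6) and (7): the field evolves with a position-dependent detuning ∆_C − U_0 f²(x) and damping κ + Γ_0 f²(x), while the atom feels the dipole force −ℏU_0|α|² ∂_x f²(x). This is a hedged illustration only, not the simulation code used for the results below: units are scaled (ℏ = m = 1), the parameter values are hypothetical, and all noise terms are deferred to the Fokker-Planck discussion that follows.

```python
import numpy as np

# Scaled units: hbar = m = 1; rates in units of gamma (values are illustrative)
hbar, m, k = 1.0, 1.0, 1.0
U0, G0 = 0.312, 0.05     # dispersive / absorptive atom-field parameters
kappa = 0.3              # cavity decay rate
DeltaC = U0              # pump-cavity detuning
eta = 3.0 * kappa        # pump amplitude

def f(x):
    return np.cos(k * x)             # standing-wave mode function

def df2(x):
    return -k * np.sin(2.0 * k * x)  # d/dx of f(x)^2

def drift(x, p, alpha):
    """Deterministic part of the adiabatic atom-field equations."""
    adot = (1j * (DeltaC - U0 * f(x)**2) - kappa - G0 * f(x)**2) * alpha + eta
    xdot = p / m
    pdot = -hbar * U0 * abs(alpha)**2 * df2(x)   # dipole force only
    return xdot, pdot, adot

# Simple Euler integration of a single noise-free trajectory
x, p, alpha = 0.1, 0.0, 0.0 + 0.0j
dt = 1e-3
for _ in range(100_000):
    xdot, pdot, adot = drift(x, p, alpha)
    x, p, alpha = x + dt * xdot, p + dt * pdot, alpha + dt * adot
```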
The FPE for the combined atom-field Wigner function, Eq. (8), consists of five terms: the first term is the coherent evolution of the cavity field; the second term is the decay and diffusion of the cavity field due to cavity decay and scattering of photons out of the cavity by the atom; the third term is the conservative atomic motion; and the fourth term is the momentum diffusion due to scattering of photons by the atom. Note that this latter term can become negative, which reflects the break-down of the semiclassical model at very weak intracavity field intensities. Additionally, we find a rather unintuitive fifth term which gives rise to correlated momentum and cavity field noise. We will discuss this in more detail later. In Eq. (8) we introduced the abbreviation $\bar{u}^2 = \int N(u)\,u^2\,du$, which we will set to 0.4 in the following, valid for circularly polarised light.
In the case of positive Wigner functions W, the FPE can be solved by Monte-Carlo simulations of the corresponding stochastic differential equations (SDE), Eqs. (9) [18], where we decompose the cavity field into its real and imaginary parts, α = α_r + iα_i. We thus have reduced the solution of the FPE to a set of four SDE which is easily tractable by numerical integration. Another major advantage of this approach as compared to previously used methods [2,12] is that the numerical effort is independent of the photon number. Thus our method is suitable for exploring the transition from quantum fields with very low photon numbers all the way to the high-intensity classical regime [19]. Note, however, that due to the adiabatic elimination of the atomic excited state the method presented here only works for small atomic saturation, which is not guaranteed for some experiments even in the low-photon regime. The most intricate part in a numerical simulation of the SDE is the correct treatment of the noise terms dP, dA_r, and dA_i, since these are correlated, as already mentioned above. For a discussion of the correlated diffusion, it is convenient to introduce the field amplitude noise dA_∥ and the field phase noise dA_⊥ by a rotation of dA_r and dA_i, Eq. (10). The diffusion matrix D in this basis, Eqs. (11) and (12), shows that the amplitude noise is in fact independent, but the phase noise and the momentum noise are correlated. We interpret this in the following way. In a free-space description it is well known [10] that a two-level atom gives rise to photon redistribution between the two counterpropagating travelling waves which form the standing wave. The incoherent part of this redistribution is proportional to Γ_0 and accounts for one part of the atomic momentum diffusion. In standard weak-field Doppler cooling [10], the backaction of this scattering process on the light field is neglected, as the field is assumed to be fixed by the driving laser. However, in the strong-coupling limit of an optical cavity this backaction must be taken into account. Since such a spontaneous backscattering process does not change the photon number in the standing-wave mode, there is no corresponding intensity fluctuation but, depending on the atomic position, the phase of the cavity field is changed. This leads to the correlated momentum and phase noise represented by the nondiagonal elements of the diffusion matrix D.
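Numerically, such correlated Gaussian increments are conveniently drawn through the Cholesky factor of the diffusion matrix: if D = LLᵀ, then dW = L ξ √dt with independent unit normals ξ has covariance D dt. The sketch below uses a hypothetical symmetric positive-definite matrix with placeholder entries d₁, d₂, d₃ and a momentum-phase coupling c; in an actual simulation the matrix elements of Eqs. (11) and (12) would be evaluated from the instantaneous atom-field state.

```python
import numpy as np

def correlated_increments(D, dt, rng):
    """Draw (dP, dA_amplitude, dA_phase) with covariance D * dt.

    D must be symmetric positive definite; its Cholesky factor L maps
    independent unit normals onto the correlated noise increments.
    """
    L = np.linalg.cholesky(D)
    xi = rng.standard_normal(D.shape[0])
    return L @ xi * np.sqrt(dt)

rng = np.random.default_rng(0)
# Placeholder diffusion matrix: amplitude noise (d1) independent,
# momentum (d2) and phase (d3) noises coupled by the off-diagonal c.
d1, d2, d3, c = 0.05, 0.4, 0.05, 0.1
D = np.array([[d2,  0.0, c  ],
              [0.0, d1,  0.0],
              [c,   0.0, d3 ]])
dP, dA_amp, dA_phase = correlated_increments(D, dt=1e-3, rng=rng)
```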
Note that the cavity field noise d_1, Eqs. (11) and (12), is independent of the field amplitude α. Hence, the noise terms dA_r,i in the SDE (9) play an important role in the system dynamics for small α. For large field amplitudes, d_1 is small compared to the diffusion matrix elements d_2 and d_3 and can be neglected. In this limit we recover atomic motion in a dynamically varying classical light field. If additionally the atomic saturation is reduced by increasing the detuning ∆_A, the atomic momentum diffusion dP also becomes negligible compared to the coherent dynamics. We then obtain the classical equations of motion of a dipole in a high-finesse cavity [2,5]. This limit has recently been suggested as a possible scheme for laser cooling of molecules, where one tries to avoid spontaneous atomic transitions to uncoupled molecular states [19].
Let us now compare our method with previous semiclassical simulations used e.g. by Doherty et al. [12,13]. In that approach, the full master equation is numerically solved up to first order in velocity for any position in space, which gives the local dipole and friction forces. Additionally, the momentum diffusion coefficient is obtained by calculating the force autocorrelation function using the quantum regression theorem. Hence, no semiclassical assumption for the cavity field is imposed, and arbitrary atomic saturation is taken into account. In this treatment the numerical effort increases drastically with increasing photon number and becomes exceedingly high in the case of several atoms or cavity modes, as for instance in a ring cavity. In contrast, our semiclassical method is easy to implement also in these more complex situations. In addition, the explicit dynamical evolution of the cavity field in our case allows for arbitrary atomic velocities and provides new insight into the coupled atom-cavity dynamics, such as the correlated noise sources discussed above.
III. TEMPERATURES AND LOCALISATION EFFECTS
We will now turn to the discussion of some specific numerical examples and compare these with previous analytic expressions in the weak excitation limit [2,5]. The validity of the latter is restricted to parameter regimes with mean intensity well below the single-photon level. Nevertheless, the analytic solution is expected to be a good approximation for higher photon numbers as long as the atomic saturation is very low. As an example, we will consider a standing wave with mode function f (x) = cos(kx). First, we will study the cavity-induced effects on the "temperature" (the time-averaged kinetic energy) of a single atom in the one-dimensional optical lattice. Next, the spatial distribution of the atomic position is considered, and the effect of localisation on the temperature will be demonstrated.
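Since the "temperature" here is a time-averaged kinetic energy, it can be read off directly from simulated momentum trajectories. A minimal sketch, assuming a one-dimensional trajectory and the equipartition relation ⟨p²/2m⟩ = k_BT/2 (the burn-in fraction is an illustrative choice):

```python
import numpy as np

def temperature(p_traj, m=1.0, burn_in=0.5):
    """Estimate k_B*T from a 1D momentum trajectory.

    The initial transient (a burn_in fraction of the samples) is discarded,
    and <p^2>/m is returned, i.e. equipartition for one degree of freedom.
    """
    p = np.asarray(p_traj)
    p = p[int(burn_in * len(p)):]
    return np.mean(p**2) / m
```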
The steady-state temperature as a function of the cavity decay rate is plotted for the two parameter regimes where the cavity-mediated force on the atom is known to give rise to cooling, that is, for ∆_A > 0, ∆_C = 0 in Fig. 1(a) and for ∆_A < 0, ∆_C = U_0 in Fig. 1(b). For the numerical simulations we keep the ratio of pump strength to cavity decay rate constant, η/κ = 3, such that the cavity contains on average approximately nine photons for all parameter configurations. The variation of the temperature can then be associated with the effect of the cavity. The cavity-induced forces are expected to be important in the range κ ≈ U_0 = 0.312γ, while the limit κ, η → ∞ corresponds to a free-space radiation field. For reference, the analytical results of Ref. [5] have been used to obtain two curves for the temperature, based on the total friction force and, respectively, on the cavity-induced friction force only. The difference between these analytic results is thus due to the standard Doppler force, which is heating for ∆_A > 0 and cooling for ∆_A < 0. As expected, the numerical results are closer to the analytic curves without the Doppler force, because the adiabatic elimination of the atomic excited state excludes the Doppler force from our semiclassical model. This is a good approximation since the Doppler shift kv is much smaller than the atomic detuning ∆_A. However, for increasing cavity linewidth, κ ≫ U_0, the cavity-induced friction force as well as the diffusion vanish; it is no longer justified to neglect the Doppler force in this limit, as indicated by the bifurcation point of the two analytic curves. Nevertheless, even within the limit of validity we find a systematic deviation between the numerical and the analytic results: the simulated temperatures are always higher. In the following we show that the difference can be attributed to atomic localisation in the potential wells, which is treated correctly within the semiclassical model but not contained in the weak-driving limit of the analytic formulas. In order to study the effects of localisation in more detail, we show in Fig. 2(a) the dependence of the steady-state temperature on the pump strength when all other parameters are kept constant. We see that for decreasing pump strength (decreasing photon number) the temperature approaches the analytic result of the weak-excitation limit. Note that for very small photon numbers of 1 or 2 our semiclassical model is no longer valid and the numerical results exhibit a residual deviation from the analytical ones. (One of the numerical signatures of the limited validity of our model in this parameter regime is the occurrence of a negative eigenvalue of the diffusion matrix D, Eq. (11), in single-particle trajectories.) Figure 2(a) also shows the ratio of the steady-state temperature to the time-averaged optical potential depth, given by the optical potential times the average photon number, U_0⟨n⟩ with ⟨n⟩ = ⟨a†a⟩. In the weak driving limit, this ratio is much larger than one, indicating a nearly flat atomic distribution in position space. For stronger pumping, the ratio continuously decreases and drops well below unity, corresponding to strong localisation. This indicates a strong local cooling force, that is, even a particle which is trapped within a single potential well experiences a friction force.
Hence, in order to achieve good trapping of particles in the cavity QED domain, one should go to higher photon numbers than those which are currently used in experiments, despite the corresponding higher steady-state temperatures. We will return to the discussion of trapping times in the following section.
Finally, we show in Fig. 2(b) the averaged spatial distribution of a particle for two different values of the pump strength. This again shows the stronger localisation for increasing η. Moreover, the dashed lines give the spatial distribution obtained from a thermal ensemble with the same mean temperature in a sinusoidal potential of the same mean depth, that is, ρ_th(x) ∝ exp(−ℏU_0⟨n⟩cos²(kx)/k_B T). Note that the numerically obtained distribution strongly deviates from a thermal one due to dynamical effects and the corresponding correlations between the atomic degrees of freedom and the cavity field.
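A short sketch of such a thermal reference curve, assuming a Boltzmann factor in a mean potential of the stated form (the sign convention for U_0 and the normalisation grid are our choices):

```python
import numpy as np

def thermal_reference(x, kBT, n_mean, U0, k=1.0, hbar=1.0):
    """Boltzmann position distribution in the mean optical potential.

    Assumes U(x) = hbar * n_mean * U0 * cos(kx)^2 and normalises
    numerically on the supplied grid.
    """
    U = hbar * n_mean * U0 * np.cos(k * x)**2
    w = np.exp(-U / kBT)
    return w / np.trapz(w, x)

x = np.linspace(-np.pi, np.pi, 1001)
rho_th = thermal_reference(x, kBT=0.5, n_mean=9.0, U0=0.312)
```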
IV. TRAPPING TIMES
For cavity QED experiments, especially in view of potential applications in quantum information processing, it is an important issue how long the particles can be trapped in the potential wells. Depending on the initial condition, there are different ways to discuss the trapping time of an atom in a well.
Fig. 3: Probability distribution P(τ) for the times at which an atom initially in the ground state of a potential well leaves this well. The solid line is a histogram obtained numerically and the dashed line is an exponential fit. The parameters are those of Fig. 2(b) and η = 2κ.
At time t = 0 the atom can be assumed in the ground state of the potential well. The potential is approximately harmonic so the Wigner function of the ground state of a harmonic oscillator can be used as initial distribution in the semiclassical Monte-Carlo simulations. For a large statistical ensemble we then record the times τ when the atoms leave the initial potential well. A typical example for the resulting distribution P (τ ) is presented in figure 3. We find a low probability that an atom leaves the potential well at short times since it takes a while until the particle is sufficiently heated by momentum diffusion. For larger times the distribution becomes exponentially decaying P (τ ) ≈ exp(−τ /T trap ), e.g., for the given parameters and τ > 100µs a fit yields the trapping time T trap = 88.3µs. Note that this initial state is in fact realised in recent experiments [4] where atoms from a magneto-optical trap are velocity selected through the narrow cavity entrance slits before they interact with the optical field, and thus the atoms are effectively prepared in their motional ground state as an initial condition. Alternatively one could consider a thermalised atom as initial condition. In this case we simulate a single atom trajectory for a very long period and record the times τ which the atom spends within a potential well. In the following we will call τ a "flight time" in order to distinguish it from the trapping time T trap as defined before. We then obtain a distribution for short and long flight times as presented in Figure 4.
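The exponential fit quoted above can be reproduced with a few lines of code. A sketch, assuming an array of simulated escape times; the tail cutoff t_min and the bin count are illustrative choices, and the slope of log(counts) versus flight time gives −1/T_trap:

```python
import numpy as np

def trapping_time(escape_times, t_min=100.0, bins=50):
    """Fit P(tau) ~ exp(-tau / T_trap) to the tail of the escape times."""
    tau = np.asarray(escape_times)
    tau = tau[tau > t_min]                     # keep the exponential tail only
    counts, edges = np.histogram(tau, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    mask = counts > 0                          # avoid log(0) in empty bins
    slope, _ = np.polyfit(centres[mask], np.log(counts[mask]), 1)
    return -1.0 / slope
```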
As a help for the interpretation of this curve, let us first discuss the corresponding distribution obtained for a thermal ensemble with the same temperature in a static (conservative) potential of the same depth ⟨n⟩U_0. In this case there is no momentum diffusion, and hence an atom which is trapped initially will stay within the same potential well for arbitrarily long times. Therefore only hot atoms with a total energy larger than the optical potential depth contribute to the distribution function of flight times, which is shown by the dashed curve in Fig. 4. This distribution function shows no flight times τ < 1 µs, which would correspond to very fast atoms, a single maximum at τ ≈ 2.3 µs, and a continuously decreasing probability for longer flight times.
Taking the full cavity dynamics into account has two main effects. First, because of momentum diffusion every atom undergoes a random walk in momentum space and thus will leave a given potential well after a finite time. Second, if an atom enters a potential well with a not too high velocity, it will be slowed down due to the cavity-mediated friction force and thus can be trapped for a longer time. This effect manifests itself in the decrease of the distribution function for times 2µs < τ < 6µs compared to the conservative motion limit. However, in general a particle which just has been trapped will have an energy only slightly below the potential maximum and therefore a relatively high probability of being ejected from the well within a short time. This leads to the second maximum of the distribution function at τ ≈ 7µs.
Finally, for long times the exponential distribution of the trapping times is detected again, with T trap = 80.5µs, in reasonable agreement with the previous result in Fig. 3. Note, however, that the two definitions for the trapping time T trap are qualitatively different: in Fig. 3 the distribution is obtained from an ensemble of statistically independent atoms, whereas in Fig. 4 only a single atom is followed. Hence in the latter case the distribution exhibits a certain memory of the particle history. For example, a very fast atom will travel over many potential wells and thus will produce many entries in the distribution at short flight times until it gets slowed down and trapped. Because of this the large maximum at times 1µs < τ < 5µs in Fig. 4 only corresponds to about 22% of the total time for which the atom was observed. This also agrees well with the fraction of 20.4% of atoms with energies above the potential barrier obtained from a thermal ensemble, which further supports our interpretation of the distribution function P (τ ) in terms of trapped and untrapped atoms.
The above outlined method for calculating the characteristic trapping time T_trap has been carried out for different parameter settings. The trapping time as a function of κ and of the pump strength η is shown in Figures 5 and 6. Figure 5 corresponds to the parameters of Fig. 1(a), where the photon number is kept approximately constant by a fixed ratio of η/κ = 3, and the cavity-induced effects are modified by varying the cavity decay rate κ. Depending on the value of κ we find three very distinct regimes for the behaviour of the trapping time. First, the trapping time increases steeply with κ at very small decay rates. This is due to the strong coupling of the photon number to the atomic position in this limit: already a small displacement of an atom from the field node is sufficient to shift the cavity out of resonance and to drastically reduce the photon number. Thus, the atom dynamically reduces the trapping potential and subsequently can easily leave the potential well, and hence the trapping times are short in this limit. For κ > U_0 the photon number can at most be changed by a factor of two by the atom, and the behaviour of the trapping time is no longer dominated by this dynamical effect. In this regime the photon number, and consequently the potential depth, are approximately constant, but the temperature of the atom increases with increasing κ, as we have seen in Fig. 1(a). Thus the atomic localisation decreases, which in turn gives rise to a decrease of the trapping time. Finally, we find again a slight increase of the trapping time when κ is so large that the temperature is of the order of or larger than the optical potential depth. An increase of κ then still increases the temperature and thus the fraction of untrapped particles. However, our definition of the trapping time as the exponential decay of the flight time distribution for very long times (Fig. 4) relies on the trapped particles only and does not depend on the fraction of untrapped atoms.
The trapping time is then dominated by the time scale of the friction force and of the momentum diffusion, which becomes longer with increasing κ. This is due to approaching the classical potential limit for κ → ∞ with a fixed ratio η/κ. This explains the final increase of the trapping time in Fig. 5.
In Figure 6 the photon number is varied via η, while the other parameters are kept constant. Although the temperature gets higher with increasing photon number (see Fig. 2), the resulting trapping times exhibit a considerable increase owing to the strong deepening of the potential wells. This result clearly suggests the benefit of working in an intermediate photon-number regime for implementing coherent manipulation of neutral atoms in optical lattices.
V. CONCLUSIONS
We have derived semiclassical stochastic differential equations to simulate the coupled dynamics of an atom moving in a single cavity mode in the limit of strong nonresonant interaction between the field and the atomic dipole. Starting from the basic quantum equations we performed systematic approximations such that the model accounts consistently for the quantum noise properties of the system. A nontrivial correlation between the field phase noise and the momentum noise has been revealed and can be interpreted in terms of simple physical processes.
As an example for our method we have studied the atomic motion in a one-dimensional optical lattice sustained by the strongly coupled cavity field. In the appropriate limits the calculated steady-state temperatures are in good agreement with the results of previous calculations. Deviations can be attributed to atomic localisation within the potential wells, which is consistently described in our semiclassical simulation. We found that for higher intracavity photon numbers the characteristic trapping times can be significantly increased, up to milliseconds. In this case the potential wells get deeper, and diffusion and friction are enhanced in such a way that the temperature only slowly increases. Hence the steady-state temperature can get significantly smaller than the well depth. Alternatively, by simultaneously increasing the atom-field detuning, we can keep the effective potential the same but reduce diffusion to achieve longer storage times. As the calculational effort only weakly depends on the photon number, we can readily study the transition from a quantum to a classical field.
Let us mention here that the model can easily be generalised to 3D, more cavity modes and several atoms simultaneously in the cavity. Hence, this semiclassical approach offers a tool for a tractable numerical simulation of the dynamics of these complex systems. These simulations could then be used to check the performance of atom cavity systems for quantum computing and quantum information processing.
"year": 2000,
"sha1": "0708c1f0e144054ec009a6ba187aa73cd936ee8e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/quant-ph/0010061",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "72de14c4e09405b43552d67e09607519fcf69c3f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Deep Red Photoluminescence from Cr3+ in Fluorine-Doped Lithium Aluminate Host Material
Deep red phosphors have attracted much attention for their applications in lighting, medical diagnosis, health monitoring, agriculture, etc. A new phosphor host material based on fluorine-doped lithium aluminate (ALFO) was proposed, and deep red emission from Cr3+ in this host material was demonstrated. Cr3+ in ALFO was excited by blue (~410 nm) and green (~570 nm) light, and its emission covered the deep red to near-infrared region from 650 nm to 900 nm, with peaks around 700 nm. ALFO was a fluorine-doped form of the spinel-type compound LiAl5O8 with slightly Li-richer compositions. The composition depended on the preparation conditions, and the contents of Li and F tended to decrease with preparation temperature, such as Al4.69Li1.31F0.28O7.55 at 1100 °C, Al4.73Li1.27F0.17O7.65 at 1200 °C, and Al4.83Li1.17F0.10O7.78 at 1300 °C. The Rietveld analysis revealed that ALFO and LiAl5O8 were isostructural with respect to the spinel-type lattice and in a disorder–order relationship in the arrangement of Li+ and Al3+. The emission peak of Cr3+ in LiAl5O8 resided at 716 nm, while Cr3+ in ALFO showed a rather broad doublet peak with maxima at 708 nm and 716 nm when prepared at 1200 °C. The broad emission peak indicated that the local environment around Cr3+ in ALFO was distorted, which was also supported by electron spin resonance spectra, suggesting that the local environment around Cr3+ in ALFO was more inhomogeneous than expected from the diffraction-based structural analysis. It was demonstrated that even a small amount of dopant (in this case fluorine) can affect the local environment around luminescent centers, and thus the luminescence properties.
Introduction
Inorganic phosphors have been widely used in display and lighting applications such as cathode-ray tubes and fluorescent lamps. Conventional phosphors used in these applications are typically excited by high-energy radiation such as X-rays and ultraviolet (UV) rays, or by electron beams. Recent light-emitting diode (LED) lighting achieves a pseudowhite color by combining blue light emitted from LED chips with the emission of phosphors excited by that blue light, which has lower energy than X-rays and UV rays.
"Red" is an important color in lighting applications because it has a significant emotional effect on humans, as people unconsciously perceive the warmth of an atmosphere, the state of a person's health from their complexion, or the freshness of red meat from the color red.Typical rare-earth ions that emit red light are Eu 3+ and Eu 2+ .The former shows aluminum source was aluminum triisopropoxide (>99.9%,Kanto Chem.Co., Inc.); aluminum hydroxide was precipitated at pH 10 in isopropanol solution by NH 3 aq.Using Cr(NO 3 ) 3 •9H 2 O (98.0-103.0%,Kanto Chem.Co., Inc.) as the starting material, Cr 3+ was coprecipitated with aluminum hydroxide in the isopropanol solution.The coprecipitated hydroxide mixture was rinsed with deionized water, heated in a platinum crucible to 1000 • C at a ramp rate of 10 • C/min, and cooled in the furnace to obtain γ-Al 2 O 3 :Cr 3+ .The concentration of Cr 3+ was defined as Cr 3+ /(Al 3+ + Cr 3+ ), assuming that it replaced Al 3+ in the products.γ-Al 2 O 3 :Cr 3+ and LiF were weighed at (Al + Cr)/Li = 4 (note that γ-Al 2 O 3 contains a certain amount of water in its composition and that (Al + Cr)/Li in the starting composition could be slightly smaller than 4).Typically, 0.2 g of a mixture of LiF and γ-Al 2 O 3 :Cr 3+ was placed in a 5 cm 3 platinum crucible with a lid.The 5 cm 3 platinum crucible was put in a larger platinum crucible (30 cm 3 ) with a lid to suppress volatilization of the components, specifically lithium and fluorine, during heat treatment.Samples were heated at 800-1300 • C at a ramp rate of 20 • C/min.Heating times were 15 min at 1100 • C and above, and 1 h at 800-1000 • C.
For comparison, LiAl5O8:Cr3+ was prepared from Li2CO3 (99%, Kanto Chem. Co., Inc.) and α-Al2O3:Cr3+ at a ratio of Li:(Al + Cr) = 1:5. α-Al2O3 was prepared by heating the Al(OH)3 precipitate at 1200 °C for 2 h in air. The Li and Al sources were thoroughly mixed in an alumina mortar and pressed into pellets. The pellets were heated in an alumina crucible with the inner bottom covered with pre-prepared LiAl5O8 powder to suppress the diffusion of the lithium component from the sample into the alumina crucible.
The chemical compositions of the samples were analyzed for Li, Al, and Cr using inductively coupled plasma-optical emission spectroscopy after the samples were decomposed in molten salt. Fluorine contents were analyzed by ion chromatography after thermal hydrolysis. Oxygen contents were estimated from charge neutrality using the analyzed Al3+, Li+, Cr3+, and F− contents. The morphology of the phosphor particles was observed with scanning electron microscopy (SEM) (T-330, JEOL Ltd., Tokyo, Japan and TM3030 Plus, Hitachi High-Tech Co., Tokyo, Japan). Carbon was evaporated onto the samples as a conductive coating.
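The charge-neutrality estimate of the oxygen content amounts to balancing the cation charges (Al3+, Li+, Cr3+) against the anion charges (O2−, F−). A minimal sketch of this bookkeeping (the function name is ours), reproducing the tabulated value for the 1200 °C sample:

```python
def oxygen_from_charge_neutrality(al, li, cr=0.0, f=0.0):
    """Oxygen content per formula unit from charge balance.

    3*Al + 1*Li + 3*Cr = 2*O + 1*F  =>  O = (3*Al + Li + 3*Cr - F) / 2
    """
    return (3 * al + li + 3 * cr - f) / 2

# The sample prepared at 1200 C, Al4.73 Li1.27 F0.17:
print(oxygen_from_charge_neutrality(4.73, 1.27, f=0.17))  # -> 7.645, i.e. O7.65
```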
The crystallographic phases were identified by X-ray diffraction (XRD) using Cu Kα radiation (MiniFlex, Rigaku Co., Tokyo, Japan) operated at 40 kV and 15 mA. The scan step was 0.01°. Structural refinement was carried out by the Rietveld method using Rietan-FP [36]. The crystal structures were illustrated using VESTA [37]. The PL spectra and the photoluminescence excitation (PLE) spectra were measured using a fluorescence spectrometer (FL-6000, Shimadzu Co., Kyoto, Japan) and a multichannel spectral analyzer (PMA-11, Hamamatsu Photonics Co., Hamamatsu, Japan) combined with a lamp and a bandpass filter (MC-570, Asahi Spectra Co. Ltd., Tokyo, Japan) as the excitation light source. Quantum efficiencies (QEs) were evaluated with the FL-6000 using an integrating sphere. X-band (9.13 GHz) electron spin resonance (ESR) measurements were conducted at room temperature between 0 and 800 mT with 100 kHz magnetic field modulation to detect differences in the local environments around Cr3+ in the different host materials. As an aid for the discussion of the ESR results, classical molecular dynamics (MD) simulations using the MXDORTO code [38] were performed to reproduce the local structures. One Cr3+ ion was introduced into an octahedral site in the MD cell to replace an Al3+ ion, and the MD cell consisted of crystallographic unit cells with x, y, and z axes of 23-26 Å. The detailed calculation procedure of the MD has already been described in our previous paper [39].
Compositions and Crystal Structure
Figure 1 compares the XRD patterns of ALFO:Cr3+ and LiAl5O8:Cr3+ prepared at 1200 °C with a Cr3+ concentration of 0.5%. Each pattern is consistent with the single phase of LiAl5O8 (ICDD #38-1426) and Al4LiO6F (ICDD #38-610) reported by Belov et al. [33]. The peaks marked with ▼ are the extra peaks originating from the cation ordering of Li+ and Al3+ in spinel-type LiAl5O8, which will be discussed below.
Table 1 shows the chemical compositions of the ALFO host material without the luminescent center ion Cr3+ prepared at 1100 °C, 1200 °C, and 1300 °C. The XRD patterns of these samples are shown in Figure S1 in Supplementary Materials, confirming that each sample was in the single phase. The composition suggested by Belov et al. [33] was Al4LiO6F, but the compositional analysis (Table 1) showed that ALFO is a fluorine-doped form of LiAl5O8 with Li-rich compositions. The Al/Li/F ratios varied depending on the preparation temperature. The fluorine content was 0.28 for the sample prepared at 1100 °C and decreased to 0.10 for the sample prepared at 1300 °C, indicating that fluorine dissipated during heat treatment at high temperatures. The lithium content also decreased, from Al/Li = 4.69/1.31 (=3.58/1) at 1100 °C and 4.73/1.27 (=3.72/1) at 1200 °C to 4.83/1.17 (=4.13/1) at 1300 °C. The Cr3+ concentration in the Cr3+-doped ALFO samples was consistent with the starting composition; the compositional analysis showed that ALFO:Cr3+ prepared at 1200 °C with a starting concentration of 0.5% Cr3+ contained 0.48%.
The structural refinement by the Rietveld analysis for ALFO prepared at 1200 °C without Cr3+ converged to Rwp/Rp = 0.058/0.042, goodness-of-fit (S) = 1.90, and a lattice constant a = 7.9222(8) Å. Figure S2 in Supplementary Materials shows the refinement results, and the structural parameters are listed in Table S1 in Supplementary Materials. The structural model was based on disordered spinel with the space group Fd−3m. The spinel-type lattice accommodates cations in the tetrahedral and octahedral sites. In the initial model, Al3+ and Li+ were assumed to be randomly distributed over the tetrahedral sites (8a position) and the octahedral sites (16d position). For a random distribution, the occupancy (g) of Li estimated from the chemical composition analysis (Table 1) should be g = 0.211 for both the tetrahedral and octahedral sites. However, the refined structure indicated a slight preference for Li to occupy the octahedral site, with g = 0.183 and 0.226 for the tetrahedral and octahedral sites, respectively.
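The reference occupancy quoted for fully random mixing (g = 0.211) follows directly from the analyzed composition, because the 8a and 16d sites together host all of the cations. A short sketch of this check (the function name is ours):

```python
def random_li_occupancy(al, li):
    """Li occupancy of each cation site if Li+ and Al3+ mix at random.

    Under random mixing the occupancy on both the tetrahedral (8a) and
    octahedral (16d) sites equals the overall Li fraction of the cations.
    """
    return li / (al + li)

# Composition of the 1200 C sample, Al4.73 Li1.27:
print(random_li_occupancy(4.73, 1.27))  # -> 0.2117, cf. g = 0.211 in the text
```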
According to the literature, LiAl5O8 has crystallographic polymorphs, namely I (disordered) and II (ordered) [19,40-42] (Appendix A). Figure 2 compares the crystal structures of ALFO and ordered LiAl5O8. Figure S3 in Supplementary Materials illustrates the structural differences in the arrangement of the tetrahedra and the octahedra in the range of x = 0.2 to 0.6 for spinel-type ALFO and ordered LiAl5O8 in a perspective view along <100>. The octahedra form diagonal chains by edge sharing, and the chains of octahedra are linked to each other by the corner-sharing tetrahedra. In ordered LiAl5O8, the tetrahedra consist only of AlO4, and the chains of octahedra consist of one LiO6 and three AlO6 in sequence (Figure 2a and Figure S3a in Supplementary Materials). In ALFO, on the other hand, the tetrahedra are statistically AlO4 or LiO4 and the octahedra are AlO6 or LiO6 (Figure 2b and Figure S3b in Supplementary Materials).
The spinel-type framework gives rise to the unmarked peaks common to ALFO and LiAl5O8 in the XRD patterns in Figure 1. The peaks marked with ▼ in Figure 1 derive from the periodicity due to the regular arrangement of Al3+ and Li+ on the cation sites in ordered LiAl5O8. The difference in the cation arrangement results in the lattice symmetry Fd−3m for ALFO and P4332 for ordered LiAl5O8. The doped Cr3+ was assumed to occupy octahedral sites in both host materials because of the preference of Cr3+, with its d3 configuration, for octahedral coordination; the ionic radius of Cr3+ on a tetrahedral site is not defined in Shannon's ionic radii table [43].
As a summary of the compositional and structural analyses, ALFO and ordered LiAl5O8 are in a disorder-order relationship with respect to the arrangement of Li+ and Al3+ on the cation sites. ALFO has slightly Li-richer compositions than LiAl5O8, resulting in anion vacancies for charge neutrality. Fluorine replaces 1.3-3.5% of the oxygen sites.
The solubility limit of Cr3+ in the ALFO lattice exceeded 10%. Figure S4 in Supplementary Materials shows the XRD patterns of ALFO:Cr3+ prepared at 1200 °C at Cr3+ concentrations from 0.01 to 10%. All the samples were substantially a single phase of ALFO; the six-fold-coordinated ionic radii of Al3+ and Cr3+ are 0.535 Å and 0.615 Å [43], and replacing Al3+ with the larger Cr3+ shifted the diffraction peaks slightly toward lower angles.
Morphologies
Figure 3 shows the SEM images of 0.5% Cr3+-doped ALFO and LiAl5O8 prepared at 1200 °C and 1300 °C. The treatment temperature did not have a significant effect on the morphology. ALFO:Cr3+ was composed of angular grains with well-developed crystal faces; the grain size varied from sub-micrometers to several micrometers. The grains of LiAl5O8:Cr3+, on the other hand, did not have a definite shape, and their size was several hundred nanometers, forming agglomerates of a few micrometers in size. The difference in morphology between ALFO:Cr3+ and LiAl5O8:Cr3+ was attributed to the difference in the lithium sources. The melting points of LiF and Li2CO3, used to prepare ALFO and LiAl5O8, are 848 °C and 723 °C, respectively. The development of the crystal faces of the ALFO:Cr3+ grains strongly suggested that LiF formed a melt at elevated temperatures and acted as a self-flux to grow the crystalline grains. In contrast, the indistinct shape of the grains observed in Figure 3c,d for LiAl5O8:Cr3+ indicated that no flux growth effect occurred; Li2CO3 reacted with α-Al2O3:Cr3+ to form LiAl5O8:Cr3+ before forming a melt.
Photoluminescence Properties
The emission and excitation spectra of 0.5% Cr3+ in ALFO, LiAl5O8, and α-Al2O3 prepared at 1200 °C are compared in Figure 4. The excitation spectra were similar regardless of the host crystal, with two broad excitation bands at around 420 nm and 570 nm, corresponding to the transitions from the ground state 4A2 to the excited states 4T1(4F) and 4T2(4F) of Cr3+ in the d3 configuration. The emission peaks depended on the host crystal. α-Al2O3:Cr3+ showed a typical line spectrum, the "R line", at 694 nm with several small side peaks on the longer-wavelength side, as found in the literature [9,44]; LiAl5O8:Cr3+ showed a sharp peak at 716 nm with smaller peaks at 703 nm and 730 nm, also similar to those reported previously [19,45]. Cr3+ in ALFO, on the other hand, characteristically exhibited a doublet peak at 708 and 716 nm, and the profile was broader than those of α-Al2O3 and LiAl5O8. Although the apparent difference between the crystal structures of ALFO and ordered LiAl5O8 shown by the Rietveld analysis was in the cation arrangement, the broad doublet peak in ALFO:Cr3+ indicated that the local environment around the luminescent center Cr3+ in ALFO was even more inhomogeneous than expected from simple octahedral coordination by six neighboring oxygens.
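From the positions of the two spin-allowed excitation bands, the crystal-field strength and the Racah parameter B can be estimated using the standard relations for octahedral d3 ions, ν₁ = E(4A2 → 4T2) = 10Dq and B = (2ν₁² + ν₂² − 3ν₁ν₂)/(15ν₂ − 27ν₁). A sketch using the approximate band positions quoted above (the peak wavelengths, and hence the numbers, are only indicative):

```python
def crystal_field_from_bands(lam_4T2_nm, lam_4T1_nm):
    """Estimate Dq and Racah B (both in cm^-1) for octahedral Cr3+ (d3)."""
    nu1 = 1e7 / lam_4T2_nm          # 4A2 -> 4T2 band energy in cm^-1
    nu2 = 1e7 / lam_4T1_nm          # 4A2 -> 4T1 band energy in cm^-1
    Dq = nu1 / 10.0                 # 10 Dq equals the 4T2 band energy
    B = (2 * nu1**2 + nu2**2 - 3 * nu1 * nu2) / (15 * nu2 - 27 * nu1)
    return Dq, B

Dq, B = crystal_field_from_bands(570, 420)
print(Dq, B, Dq / B)                # roughly 1754 cm^-1, ~606 cm^-1, Dq/B ~ 2.9
```

A ratio Dq/B near 2.9 places Cr3+ in the strong-crystal-field regime of the Tanabe-Sugano diagram, consistent with the narrow line emission around 700 nm observed for these hosts.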
Figure 5 shows the variation of the emission peaks of ALFO:Cr3+ with the treatment temperature and compares them with those of α-Al2O3:Cr3+ and LiAl5O8:Cr3+. The peak at 696 nm of the ALFO:Cr3+ sample prepared at 1000 °C was due to the unreacted Al2O3 residue (Figure S5 in Supplementary Materials), indicating that the coprecipitated Cr3+ was incorporated into the α-Al2O3 grains to form α-Al2O3:Cr3+. The peak at around 715 nm was specific to ALFO:Cr3+ and was observed in ALFO:Cr3+ prepared at 1000-1300 °C. Interestingly, the rather broad doublet peak with peak tops at 708 nm and 716 nm was characteristically observed in the sample prepared at 1200 °C; the detailed mechanism of the occurrence of the doublet peak is still unclear. It could be attributed to differences in the local coordination environment around Cr3+ that do not appear in the average structure determined by XRD. Figure S5 in Supplementary Materials shows the XRD patterns of the ALFO:Cr3+ samples prepared at 1000-1300 °C. Residual unreacted α-Al2O3 was recognized in the sample prepared at 1000 °C, while the samples prepared at 1100-1300 °C were in the single phase.
Figure 5. Variation of the PL spectra of ALFO:Cr3+ with preparation temperature; the spectra of α-Al2O3:Cr3+ and LiAl5O8:Cr3+ prepared at 1200 °C are also shown for comparison.
Figure 6 compares the PL spectra of ALFO:Cr3+ (a) and LiAl5O8:Cr3+ (b) at different Cr3+ concentrations. The emission intensity of ALFO:Cr3+ increased from Cr3+ = 0.1% to 0.5% and decreased above 0.5%. Above 2.5%, the intensity of the main doublet peak decreased while shoulders developed around 780 nm. In the previous literature, the broad emission of Cr3+ in the near-infrared region has been attributed either to the 4T2 → 4A2 transition of Cr3+ placed in a weak crystal field, or to magnetic interactions between Cr3+ ions at neighboring sites. According to the Tanabe-Sugano diagram [46-48], the first excited state in the d3 configuration of octahedral coordination is the 4T2 or 2E state, depending on the strength of the crystal field. The transition from the 2E state to the 4A2 ground state yields the line spectra typically observed for Cr3+ around 700 nm, while broad near-infrared emission is reported for the transition from the 4T2 state in host materials with a weak crystal field [21-24,27,29,30]. Magnetic interactions also red-shift the emission [49], as exemplified by the so-called N-lines of Cr3+ emission [9,15] and the broad luminescence due to Cr3+-Cr3+ pairs observed in SrAl11.88−xGaxO19:0.12Cr3+ by Rajendran et al., which was concluded from decay time measurements [28]. As for the shoulder peak around 780 nm in ALFO:Cr3+, its origin was attributed to the magnetic interaction between Cr3+ ions, on the grounds that the change in the lattice parameter with Cr3+ addition, i.e., the change in the strength of the crystal field at the Cr3+ sites, was not very large (Figure S4 in Supplementary Materials), and that the interaction between Cr3+ ions was observed in the ESR signal at high Cr3+ concentration, as described in Section 3.4.
Magnetic interactions also bring the red-shift of the emission [49], as exemplified by the so-called N-lines of Cr 3+ emission [9,15] and the broad luminescence due to Cr 3+ -Cr 3+ pairs observed in SrAl 11.88−x Ga x O 19 :0.12Cr 3+ by Rajendran et al., which was concluded from the decay time measurements [28].As for the shoulder peak around 780 nm in ALFO:Cr 3+ , the origin was attributed to the magnetic interaction between Cr 3+ ions from the fact that the change in the lattice parameter with Cr 3+ addition, i.e., the change in the strength of the crystal field at the Cr 3+ sites, was not very large (Figure S4 in Supplementary Materials), and that the interaction between Cr 3+ ions was observed in the ESR signal at a high Cr 3+ concentration as described in Section 3.4.LiAl5O8:Cr 3+ showed the maximum intensity at Cr 3+ = 1.0%, and the change of the overall peak profile was less distinct than ALFO:Cr 3+ .The broad peak around 780 nm also developed in LiAl5O8:Cr 3+ above 1.0%, but the intensity was relatively low.
Figure 7 shows the variation of the internal and external quantum efficiencies (QEs) of ALFO:Cr3+ and LiAl5O8:Cr3+ versus Cr3+ concentration. The circles and triangles indicate the internal and external quantum efficiencies, QEInt and QEExt, respectively; the filled and unfilled marks indicate the QEs of ALFO:Cr3+ and LiAl5O8:Cr3+, respectively. ALFO:Cr3+ showed higher QEs than LiAl5O8:Cr3+. The QEInt for ALFO was 85.6% at Cr3+ = 0.5%, higher than the maximum QEInt of 58.3% for LiAl5O8 at Cr3+ = 1.0%. The highest QEExt was 18.6% for ALFO:Cr3+ and 11.6% for LiAl5O8:Cr3+. The enhancement of the QEs of ALFO:Cr3+ was attributed to the high crystallinity suggested by the well-developed grains observed in SEM (Figure 3), the disruption of the local symmetry around the luminescent center Cr3+ from the ideal octahedron by the introduction of F− and vacancies on the O2− sites, and the increased area of the PL peaks due to peak broadening.
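Assuming the usual relation QEExt = A × QEInt, where A is the absorptance at the excitation wavelength, the reported efficiency pairs imply the fraction of excitation light each phosphor absorbs:

```python
def absorptance(qe_ext, qe_int):
    """Absorbed fraction of the excitation light, from QE_ext = A * QE_int."""
    return qe_ext / qe_int

print(absorptance(0.186, 0.856))   # ALFO:Cr3+     -> ~0.22
print(absorptance(0.116, 0.583))   # LiAl5O8:Cr3+  -> ~0.20
```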
Local Environments around Cr3+
Cr3+ prefers octahedral coordination in the host spinel lattice in both ALFO and LiAl5O8, whereas the different spectra suggested different local environments around Cr3+ in ALFO and LiAl5O8. Such aperiodic local structures are difficult to investigate using diffraction-based structural analysis, particularly for low-concentration dopants. In fact, the ALFO:Cr3+ samples prepared at the different temperatures showed substantially the same XRD patterns (Figure S5 in Supplementary Materials), but the emission peaks varied as shown in Figure 5.
Cr3+ in the d3 configuration is an ESR-active ion, and the ESR technique was expected to effectively detect the differences in the electronic structures affected by the coordination environment. Figure 8 compares the ESR spectra of 0.5% Cr3+-doped α-Al2O3 (a), LiAl5O8 (b), and ALFO (c) prepared at 1200 °C. Cr3+ in α-Al2O3 was focused on in the 1960s as a good model example for ESR measurements [44,50-58]. The ESR signal of Cr3+ in α-Al2O3 (Figure 8a) was consistent with those reported for Cr3+ in polycrystalline α-Al2O3 in the previous literature [50] and consisted of several peaks corresponding to g = 3.79, 2.26, 1.72, and 1.46. For octahedrally coordinated d3 ions placed in a magnetic field, the lowest energy state is the spin quartet, with the spin quantum number s = −3/2 along the direction of the magnetic field. The signal of a polycrystalline sample is the average of the spectra of individual crystallites, and the angular dependence of Cr3+ in single-crystalline Al2O3 [59] indicated that the peaks at g = 3.79 and 2.26 were assigned to the s = −3/2 to −1/2 transition, and the peaks at g = 1.72 and 1.46 to the −1/2 to +1/2 and +1/2 to +3/2 transitions, respectively (Appendix B) [50,51].
Cr3+ in LiAl5O8 showed essentially the same ESR signal (Figure 8b) as reported by Singh et al. [45] for LiAl5O8:Cr3+. Singh et al. assigned the distinct peaks at g = 4.03 and 3.27 and the smaller peaks at g = 5.44, 4.89, and 4.51 to isolated Cr3+ ions, and the resonance signal at g = 1.97 to the magnetic interaction between Cr3+ ions, based on the literature [60,61].
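The g-values quoted above and below follow from the usual ESR resonance condition hν = g·μB·B. A minimal helper for converting a resonance field to an effective g-factor (the 9.5 GHz X-band microwave frequency is an assumption; the measurement frequency is not stated in this excerpt):

```python
import scipy.constants as const

def effective_g(b_res_tesla: float, freq_hz: float = 9.5e9) -> float:
    """Effective g-factor from the resonance condition h*nu = g*muB*B."""
    mu_b = const.value("Bohr magneton")  # J/T
    return const.h * freq_hz / (mu_b * b_res_tesla)

# Example: at 9.5 GHz a signal with g = 2 sits near 0.34 T.
print(round(effective_g(0.34), 2))  # ~2.0
```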
The basic features of the ESR signal of Cr3+ in ALFO (Figure 8c) were similar to those of Cr3+ in LiAl5O8 (Figure 8b), but the profiles became broad and the signal positions shifted toward the low-magnetic-field side. The broadening of the ESR signal was attributed to an inhomogeneous local environment around the Cr3+ ions in ALFO. The CrO6 octahedra in ALFO are expected to be disturbed by the statistical distribution of Al3+ and Li+ on the adjacent cation sites. The shift of the ESR signal of ALFO toward the lower-field side relative to LiAl5O8 indicated increased zero-field splitting.
The PL spectra of Cr3+ in ALFO prepared at 1100 °C and 1300 °C were apparently similar to that of Cr3+ in LiAl5O8, but the ESR signal of Cr3+ in ALFO prepared at 1300 °C (Figure 8d) was different from that in LiAl5O8 prepared at 1200 °C (Figure 8b), indicating that the similarity in the PL spectra was not due to similar local environments around the luminescent center Cr3+ in these host materials.
Increasing the Cr3+ concentration to 2.5% resulted in broadening and enhancement of the ESR peak around g ≈ 2.3 (Figure 8e), which was considered to reflect the magnetic interaction between Cr3+ ions, as Singh et al. discussed that the g = 1.95 signal was due to exchange coupling of Cr3+-Cr3+ pairs [45]. The broadening of the peak at g ≈ 1.96 with increasing Cr3+ concentration was also observed in Cr3+-containing phosphate glasses in the literature [62]. The presence of magnetic interactions between Cr3+ ions at high concentrations was consistent with the discussion of the PL spectra, where the broad shoulder peak developed around 780 nm with increasing Cr3+ concentration (Figure 6a).
Since it was difficult to directly deduce the detailed local environment from the ESR signals, the local environment of isolated Cr3+ ions is discussed here using MD. The O-Cr-O bond angles of the CrO6 octahedra in α-Al2O3 and ordered LiAl5O8 obtained in MD are tabulated in Table S2 in Supplementary Materials.
In α-Al2O3, Cr3+ was considered to be placed in a trigonal symmetry, as suggested experimentally by McClure [63], which was also reproduced in our MD [39]. The previous literature referred to a rhombic distortion for CrO6 in LiAl5O8 with C2 symmetry [19,45]. Our MD result indicated that Cr3+ was arranged in a monoclinic symmetry that retained a single two-fold axis passing through the midpoints of O1 and O3, and of O5 and O6 (Figure 9b); the two-fold axis is illustrated on the O1-O3-O6-O5 plane (c). Because of the complexity of the structure, including the disordered arrangement of cations and the presence of vacancies on the anion sites, MD for ALFO has not yet been performed; the local environment around Cr3+ in ALFO was inferred to be similar to that in LiAl5O8, but even more disordered.
Figure 1. Comparison of XRD patterns of ALFO:Cr3+ and LiAl5O8:Cr3+ prepared at 1200 °C. The peaks marked with ▼ are the extra peaks originating in the cation ordering of Li+ and Al3+ in the spinel-type lattice of LiAl5O8 (see text for details).
The melting points of LiF and Li2CO3, the lithium sources for ALFO and LiAl5O8, are 848 °C and 723 °C, respectively. The development of the crystal faces of the ALFO:Cr3+ grains strongly suggested that LiF formed a melt at elevated temperatures and acted as a self-flux to grow the crystalline grains. On the other hand, the indistinct shape of the grains observed in Figure 3c,d for LiAl5O8:Cr3+ indicated that no flux-growth effect was expected, and thus Li2CO3 reacted with α-Al2O3:Cr3+ to form LiAl5O8:Cr3+ before forming a melt.
Figure 7. Internal and external quantum efficiencies of ALFO:Cr3+ and LiAl5O8:Cr3+ with different Cr3+ concentrations. QEInt and QEExt represent the internal and external quantum efficiencies, respectively.
Fluorine-doped lithium aluminate (ALFO) was prepared from LiF and Al2O3 at a ratio of 1:2. ALFO was a fluorine-doped form of LiAl5O8 with a slightly lithium-rich composition. The concentration of the incorporated fluorine depended on the preparation conditions; the chemical composition varied from Al4.69Li1.31F0.28O7.55 prepared at 1100 °C to Al4.83Li1.17F0.10O7.78 prepared at 1300 °C. ALFO and LiAl5O8 were isostructural.
Table 1. Chemical compositions of the fluorine-doped lithium aluminate (ALFO) host material prepared at different temperatures.
| 2024-01-12T16:06:29.783Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "e60fec736fe96cb351a46c8ae881af97ccd04954",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/17/2/338/pdf?version=1704868973",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "379bbd5b78de50ea2223649202d3bbc96ec026c9",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": []
} |
232379245 | pes2o/s2orc | v3-fos-license | Evaluation of SVM performance in the detection of lung cancer in marked CT scan dataset
Received May 26, 2020; Revised Oct 8, 2020; Accepted Nov 30, 2020
This paper concerns the development and analysis of the IQ-OTH/NCCD lung cancer dataset. This CT-scan dataset includes more than 1100 images of diagnosed healthy and tumorous chest scans collected in two Iraqi hospitals. A computer system is proposed for detecting lung cancer in the dataset by using image-processing and computer-vision techniques. This includes three preprocessing stages: image enhancement, image segmentation, and feature extraction. Then, a support vector machine (SVM) is used at the final stage as a classification technique for identifying the cases on the slides as one of three classes: normal, benign, or malignant. Different SVM kernels and feature extraction techniques are evaluated. The best accuracy achieved by applying this procedure to the new dataset was 89.8876%.
INTRODUCTION
According to the statistics of the American Cancer Society, lung cancer is the primary cancer killer in the United States [1]. The overall estimated number of new cases of all types of cancer in 2013 was 1,660,290 (854,790 for men and 805,500 for women), of which lung cancer accounted for 13.7% of the incidences. The total number of estimated cancer deaths was 580,350 (306,920 for men and 273,430 for women), and 27.5% of these were cases diagnosed with lung cancer, with close shares for both genders [2]. According to official records of the Iraqi Ministry of Health, lung cancer was the second most common cancer type in 2016, with 2,123 people of both genders diagnosed with lung cancer. This represents about 8.31%, a small rise from the ratio of 2015. The distribution among genders was slightly different: lung cancer represented 13.27% of all cancer types in men and 12.7% in women. Cancer is the fourth cause of mortality in the eastern Mediterranean region and the third cause of death in Iraq, and this rate is generally rising continuously [3,4]. The primary and most important cause of this increase is thought to be smoking. Other factors include pollution, an unhealthy diet, continuous exposure to industrial and agricultural carcinogens, and reduced physical activity. The total count of cancer deaths in Iraq in 2014 was 8,211, about 4,525 in males and 3,959 in females, of which lung cancer deaths accounted for about 16.31%. Lung cancer is recognized as one of the most critical and most lethal kinds of cancer; moreover, the survival rate after late diagnosis is very low, and sometimes almost non-existent. This deadly disease occurs because of the uncontrolled growth of malignant cells within one or both lungs, and its danger lies in the ability of these cells to spread to the rest of the human body. To reduce this risk, early detection is the best solution: it saves lives and saves doctors considerable time in determining the appropriate treatment. The increasing rates and the vicious nature of lung cancer worldwide put added pressure on the health community to find better, faster, and more accurate methods for the early diagnosis of this type of disease. Computer-based methods are known for their success in various industrial and, more importantly, medical fields. An essential factor in the quest for effectively battling a disease is the early and accurate detection and diagnosis of the medical condition. Computer resources can be utilized in a fast and scalable fashion.
Many studies have employed machine learning and computer vision techniques for detecting lung cancer. For example, the authors in [5] combined artificial neural networks and fuzzy clustering methods. Their method was applied to a dataset of about 1000 images and obtained an accuracy of about 97%. Artificial neural networks were employed again in another study [6]. That one was limited to some symptoms of lung cancer, such as yellow fingers, wheezing, chest pain, fatigue, and allergy, and achieved an overall accuracy of about 96.67%. The researchers in [7] developed a computer-aided system based on artificial neural networks; their accuracy was around 90.63%. A genetic algorithm was applied by the authors in [8] for feature selection. They employed SVM and ANN classification with reported accuracies of 95.87% and 93.66%, respectively. Another study [9] proposed a CAD system in which the authors claim to estimate the survival rate in addition to detecting and predicting lung cancer. They extracted features from a dataset containing 909 images and obtained an accuracy of about 96%. Artificial intelligence has been used in almost all fields of biomedical engineering and informatics, such as screening and diagnosis of breast cancer [10,11], diagnosis of heart diseases [12-14], and diagnosis and classification of diabetes [15].
The main setback for computer vision applications is the large dimensionality of the problem. In order to achieve credible results for such problems, datasets ought to be very large. The vast majority of success stories in computer-based medical diagnosis rely on two pillars: an effective algorithm and a good, large dataset. This research introduces a new, large, marked dataset of CT scan images collected at two specialist hospitals in Iraq. This dataset is then tested and benchmarked using computer vision methods to create a baseline for later investigation.
THE IQ-OTH/NCCD LUNG CANCER DATASET
The Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases (IQ-OTH/NCCD) lung cancer dataset was collected in the above-mentioned specialist hospitals over a period of three months in fall 2019. It includes CT scans of patients diagnosed with lung cancer at different stages, as well as healthy subjects. The IQ-OTH/NCCD slides were marked by oncologists and radiologists in these two centers. The dataset contains a total of 1190 images representing CT scan slices of 110 cases, as shown in Figure 1. These cases are grouped into three classes: normal, benign, and malignant. Of these, 40 cases are diagnosed as malignant, 15 cases as benign, and 55 cases are classified as normal. The CT scans were originally collected in DICOM format. The scanner used is a SOMATOM from Siemens. The CT protocol includes 120 kV, a slice thickness of 1 mm, a window width ranging from 350 to 1200 HU and a window center from 50 to 600 used for reading, and breath hold at full inspiration. All images were de-identified before performing the analysis. Written consent was waived by the institutional oversight and review board of the participating medical centers, which approved the study. Each scan contains several slices, ranging in number from 80 to 200, each representing an image of the human chest from different sides and angles. The 110 cases vary in gender, age, educational attainment, area of residence, and living status. Some of them are employees of the Iraqi Ministries of Transport and Oil, others are farmers and gainers. Most of them come from places in the middle region of Iraq, particularly the provinces of Baghdad, Wasit, Diyala, Salahuddin, and Babylon. The dataset can be accessed online on Kaggle at [16,17].
METHODS AND IMPLEMENTATION
In the proposed method, four steps were applied to analyze the input images as shown in Figure 2.
Preprocessing stage
Original images are initially subjected to preprocessing, which includes enhancing image quality that might have deteriorated due to poor illumination, and eliminating noise [18]. This is achieved by bit-plane slicing, which furthermore reduces the original sizes and focuses the images [19]. Figure 4 shows an example input image alongside the result after applying bit-plane slicing. Next, Gaussian filtering is applied [20,21]; this image processing technique enhances the image and eliminates noise. Then the erosion technique, a morphological operation, is used to decrease the sizes of objects. This helps in removing small anomalies by subtracting objects with a radius smaller than the structuring element [18]. The results of applying the Gaussian filter and erosion are depicted in Figures 5 and 6, respectively. Then the outlining operation [22], used for forming the outline of the entire image containing the lungs, ribcage, etc., is performed by making all darker pixels in the original image black and all lighter pixels white; the darker pixels at the edges of the image also become white. This is done to extract the lung borders, as shown in Figure 7. At the end of the enhancement stage, the lung regions are ready for the segmentation of the nodules from the lung CT image, as depicted in Figure 8.
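A minimal OpenCV sketch of the preprocessing chain described above, assuming an 8-bit grayscale slice; the retained bit plane, kernel sizes, and binarization threshold are illustrative assumptions, since the paper does not specify them:

```python
import cv2
import numpy as np

def preprocess(slice_8bit: np.ndarray) -> np.ndarray:
    """Enhance an 8-bit grayscale CT slice and extract the lung outline."""
    # Bit-plane slicing: keep only the most significant plane (assumed plane 7),
    # which suppresses low-order intensity noise and "focuses" the image.
    sliced = (((slice_8bit >> 7) & 1) * 255).astype(np.uint8)

    # Gaussian filtering for enhancement / noise removal.
    smoothed = cv2.GaussianBlur(sliced, (5, 5), 0)

    # Erosion: objects smaller than the structuring element are removed.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    eroded = cv2.erode(smoothed, kernel)

    # Outlining: darker pixels become black and lighter ones white, then the
    # morphological gradient keeps only the borders, i.e. the lung outline.
    _, binary = cv2.threshold(eroded, 127, 255, cv2.THRESH_BINARY)
    return cv2.morphologyEx(binary, cv2.MORPH_GRADIENT, kernel)
```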
Segmentation stage
The second stage employed here is segmentation. The aim of this stage is to extract the nodules in the lungs; the extracted nodules will be used to derive the features fed to the classifier. Otsu's thresholding technique [23,24] is applied in this research to extract the nodules from the extracted region of the patients' lungs. The technique works by finding a threshold value that minimizes the variances of the segmented regions. It is capable of producing excellent output when the input image histogram has distinct foreground-background peaks [24]. The output of the segmentation method for one of the input images is shown in Figure 9.
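Otsu's thresholding as described here is a one-liner in OpenCV; the file path below is a placeholder for the enhanced lung image produced by the previous stage:

```python
import cv2

# "lung.png" is a hypothetical path standing in for the enhanced lung region.
lung_region = cv2.imread("lung.png", cv2.IMREAD_GRAYSCALE)

# Otsu's method picks the threshold that minimizes the within-class variance
# of the two segmented regions; the supplied threshold value (0) is ignored.
otsu_t, nodule_mask = cv2.threshold(lung_region, 0, 255,
                                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```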
Feature extraction stage
Two types of features are extracted and used in this research: Gabor filter features [25] and GLCM features [26]. The Gabor filter captures the full frequency spectrum together with the paired amplitude and phase. For Gabor feature extraction, the image is convolved with each filter of the Gabor filter bank at every pixel [27]. GLCM stands for Gray Level Co-occurrence Matrix [28]. This technique, which consists of two steps, is widely used in image analysis applications, especially in the biomedical field. In the first step, the GLCM matrix is computed; in the second step, texture features based on the previously computed GLCM matrix are determined [29]. The GLCM describes how often each grey level occurs at a pixel located at a fixed geometric position relative to each other pixel, as a function of the grey level. The extracted features are as follows. Contrast refers to the measure of intensity, or grey-level variation, in the image between the reference pixel and its neighborhood. Energy gives the sum of squared elements in the GLCM matrix; it is derived from the angular second moment (ASM), which estimates the local uniformity of the grey levels [29,30]. The value of the ASM is large if pixels are very similar and small when they differ: ASM = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i,j)^2, where N_g is the number of gray levels, P is the normalized symmetric GLCM of dimension N_g × N_g, and P(i,j) is the (i,j)th element of the normalized GLCM.
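A scikit-image sketch of both feature extractors; the GLCM offset (distance 1, horizontal) and the Gabor frequency are illustrative choices, since the paper does not report them:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.filters import gabor

def texture_features(img: np.ndarray) -> np.ndarray:
    """img: 2-D uint8 grayscale image (e.g., a segmented lung region)."""
    # Normalized symmetric GLCM at distance 1, horizontal offset -- the same
    # matrix P(i, j) that the contrast and ASM definitions above are built on.
    glcm = graycomatrix(img, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast")[0, 0]
    asm = graycoprops(glcm, "ASM")[0, 0]  # sum of squared GLCM elements

    # One Gabor filter; frequency 0.3 is illustrative. The real and imaginary
    # responses together carry the amplitude and phase of that frequency band.
    real, imag = gabor(img.astype(float), frequency=0.3)
    gabor_energy = float(np.mean(np.hypot(real, imag)))
    return np.array([contrast, asm, gabor_energy])
```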
The classification stage
For classification, this paper employs SVM [31,32], a supervised learning algorithm [33], as the learning model. It is a powerful machine learning technique for classification and regression [34]. The algorithm classifies its input by computing the optimal separating hyperplane, characterized by having the largest possible distance to the classes' nearest points (the support vectors). This approach ensures that, with larger margins, the classifier yields a lower generalization error; separating hyperplanes are shown in Figure 10 [35], in which H1 does not divide the two classes, H2 divides them but with a small separating margin, and H3 separates the classes with the best margin.
The difficylity showned in Figure 11(e) gives a big question, which one of the classifiers can be providing the most desirable classifier? By special consideration can getting straightforward thought of choosing the plane or line that is the one in the midpoint, another opinion might go with selecting the plan which divides the two groups but leaves the maximum range than each item of the different classes; this is shown in Figure 11(f), when explaining the range that separates hyperplane and the most next class vector as "the maximum-margin hyperplane", various existing data sets not possible to be separating as clear; rather, they seem similar to the example in Figure 11(g), when the data set includes an "error'" the surrounded object. we need a technique to be effective in dealing with the mistakes in the data by enabling some objects to drop at the 'reverse area' of the separating hyperplane. For handling states similar to those, this algorithm should be altered by appending the "soft margin" by enabling several data objects to force their move over the separating hyperplane margin without influencing the last decision as shown in Figure 11(h).
Now assume the data occupy only a single dimension, as shown in Figure 11(i); in this state, the maximum-margin separating hyperplane is a single point. The difficulty here is that no single point can separate the categories while offering a 'soft margin', so the kernel function adds an extra dimension to the data, as shown in Figure 11(j), by squaring the original values, after which a simple straight line separates the data. The data in Figure 11(k), however, are not separable by such a straight plane. Still, a simple kernel function that maps the data from the two-dimensional area to a four-dimensional one yields the curved boundary drawn in Figure 11(k). The data in Figure 11(l) are the same as in Figure 11(k), though the extended hyperplane there originates from an SVM that applies a high-dimensional kernel function. The resulting boundary separating the groups is then very specific to the patterns in the training dataset; in this case, the algorithm is said to be overfitted, and hence it will not generalize well when applied to different examples [37].
Figure 11. The essentials of the Support Vector Machine algorithm.
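For reference, the three kernel functions evaluated in the next section have the standard forms below; γ, r, and d are the usual kernel hyperparameters, whose values are not reported in this paper:

```latex
K_{\mathrm{linear}}(\mathbf{x},\mathbf{y})=\mathbf{x}\cdot\mathbf{y},\qquad
K_{\mathrm{RBF}}(\mathbf{x},\mathbf{y})=\exp\bigl(-\gamma\lVert\mathbf{x}-\mathbf{y}\rVert^{2}\bigr),\qquad
K_{\mathrm{poly}}(\mathbf{x},\mathbf{y})=\bigl(\gamma\,\mathbf{x}\cdot\mathbf{y}+r\bigr)^{d}.
```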
RESULTS AND DISCUSSION
In this paper, three types of features are employed for training the model: first, the Gabor filter alone; second, the GLCM matrix alone; and third, a combination of the two. SVM was applied using three kernels: a linear kernel, an RBF kernel, and a polynomial kernel. The dataset was split into two parts: a 70% training set and a 30% testing set.
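A minimal scikit-learn sketch of this evaluation; the synthetic features below merely stand in for the real Gabor+GLCM matrix, and all SVM hyperparameters are left at library defaults because the paper reports none:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the (n_samples, n_features) feature matrix and the
# three labels {normal, benign, malignant} (illustrative only).
X, y = make_classification(n_samples=330, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)

for kernel in ("linear", "rbf", "poly"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    clf.fit(X_train, y_train)
    print(kernel, clf.score(X_test, y_test))  # test-set accuracy
```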
Referring to Table 1, the results show that, for all types of feature extraction, the polynomial kernel gives the best accuracy, followed by the RBF kernel, with the lowest accuracy for the linear kernel. Regarding the extracted features, combining the Gabor filter with GLCM gives the best accuracy, as described above, together with the lowest model loss. The confusion matrix of the whole 110 cases, with the values of true positives, false positives, true negatives, and false negatives, is shown in Table 2. The performance metrics of the proposed model based on this confusion matrix are: sensitivity (recall), the proportion of true positives correctly identified by the test [38]; specificity, the proportion of true negatives correctly identified by the test [38]; precision, or positive predictive value, the portion of relevant cases among the total retrieved cases; and the F-score, a combination of precision and recall [39]. These metrics were calculated for the results of the methods employed in this paper, and the outcomes are listed in Table 3. The high values of sensitivity and specificity show that the algorithm discriminates robustly between healthy and cancerous images, which implies that this method can be reliably used for the diagnosis of lung cancer from CT scan images. A comparison between the technique used here and some related works is given in Table 4.

Table 4. Comparison with related works
Taher et al. [5]: FCM & HNN; sensitivity = 83%, specificity = 99%, accuracy = 98%
Dandıl et al. [7]: ANN; sensitivity = 92.3%, specificity = 89.47%, accuracy = 90.63%
Diaz et al. [8]: GA, SVM, and ANN; accuracies of 95.87% and 93.66%, respectively
Lee et al. [40]: CNN; accuracy = 94.61%
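The four reported metrics follow directly from the confusion-matrix counts in Table 2; a small helper in the binary (cancerous vs. healthy) reduction, which is an assumption about how the three-class counts were collapsed:

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity, precision, and F-score from confusion counts."""
    sensitivity = tp / (tp + fn)  # recall: positives correctly identified
    specificity = tn / (tn + fp)  # negatives correctly identified
    precision = tp / (tp + fp)    # positive predictive value
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f_score": f_score}
```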
CONCLUSIONS
Lung cancer is one of the most dangerous and widespread cancers. Its danger depends on the stage at which the disease is discovered. For this purpose, we proposed in this study a new dataset of lung cancer CT scans, the IQ-OTH/NCCD dataset. This dataset was analyzed to develop an automatic technique for the diagnosis of lung cancer. In this approach, image processing, which includes enhancement, segmentation, and feature extraction, was first applied, followed by the selection of features and finally the classification of images using SVM. Three different kernels were used for training the SVM: linear, RBF, and polynomial. The best accuracy of 89.88% was achieved when using SVM with the polynomial kernel and a combination of two extracted feature types, Gabor filter and GLCM. The associated performance metrics, with sensitivity and specificity at or above 90%, suggest that the method is reliable in separating healthy and malignant chest CT scans. This method is intended as a baseline for further testing and analysis of the new IQ-OTH/NCCD dataset. | 2021-03-27T20:05:51.540Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "79808de858f8efe21868e782e42be1af624ffdb2",
"oa_license": "CCBYNC",
"oa_url": "http://ijeecs.iaescore.com/index.php/IJEECS/article/download/22853/14746",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "79808de858f8efe21868e782e42be1af624ffdb2",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
218816580 | pes2o/s2orc | v3-fos-license | Study on transmission characteristics of two-dimensional defective plasma photonic crystals
Several defective plasma photonic crystals (PPCs) were designed based on crystal edge dislocations, and their transmission characteristics were studied using CST software. The transmission characteristics of PPCs with different defects in TM mode are mainly discussed, and the defective PPCs are compared with non-defective PPCs and with one another. The defective plasma photonic crystals producing the largest changes in the PPC electromagnetic characteristics were identified. This type of defective photonic crystal can create a band gap in a certain frequency band, and the band gap can be moved by tuning the defect parameters.
Introduction
The concept of the photonic crystal was introduced by E. Yablonovitch and S. John in 1987 [1,2]; it refers to materials in which different media are periodically distributed in space. According to the form of the medium distribution, photonic crystals can be divided into three types: one-dimensional, two-dimensional, and three-dimensional. Due to the Bragg scattering effect, photonic crystals can block the propagation of electromagnetic waves at certain frequencies, forming a band structure, the photonic band gap [3,4]. Another feature of photonic crystals is the localization of photons: when the periodic distribution of the medium changes, photons can tunnel in the photonic band gap to form a defect mode [5]. These two characteristics of photonic crystals can be used to make practical devices such as filters, optical waveguides, and optical transistors.
Once a conventional photonic crystal is fabricated, its electromagnetic properties are fixed. In order to make photonic crystals more controllable, Hojo and Mase introduced plasma into photonic crystals in 2004 and proposed the concept of plasma photonic crystals [6]. A plasma photonic crystal is a material in which plasma and other media are arranged periodically. The plasma can be adjusted externally through parameters such as plasma density and temperature, as well as through the spatial distribution of the plasma [7], thereby adjusting the electromagnetic characteristics of the photonic crystal. After more than ten years of development, plasma photonic crystals have become a hot research topic in the electromagnetics field due to their important basic research value and huge application potential. Current PPC research is based on theory, simulation, and experiment; PPC products are still far from practical application.
In this paper, several two-dimensional defective plasma photonic crystal models based on edge dislocations are designed, and CST software is used to simulate the different defective PPCs. The transmission characteristics in TM mode of the photonic crystals with different defect angles are obtained, and the influence of the defects on the transmission characteristics is investigated.
Calculation method
At present, there are several calculation methods for solving photonic crystals.
The Transfer Matrix Method (TMM) is an analytical method. Based on the Maxwell equations, iterative equations for the electric and magnetic fields are derived from the continuity boundary conditions of the magnetic and electric fields to obtain the reflection and transmission characteristics and the dispersion relation of plasma photonic crystals [8].
The FDTD method is a commonly used method for calculating the dispersion relation of plasma photonic crystals. This algorithm uses a basic Yee cell to mesh the computational region, with the E and H field components at the nodes arranged alternately in space and time. It converts Maxwell's curl equations, with their time variables, into difference equations and advances the solution of the electromagnetic fields in space step by step along the time axis [9].
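A one-dimensional, vacuum-only sketch of the Yee leapfrog update described here (normalized units; the paper's actual simulations use CST's 3-D solver, so this only illustrates the scheme):

```python
import numpy as np

nz, nt = 400, 1000
ez = np.zeros(nz)        # E-field nodes
hy = np.zeros(nz - 1)    # H-field nodes, staggered half a cell from ez
courant = 0.5            # c0*dt/dz <= 1 for stability in 1-D

for n in range(nt):
    hy += courant * np.diff(ez)        # H update from the curl of E
    ez[1:-1] += courant * np.diff(hy)  # E update from the curl of H
    ez[0] += np.exp(-((n - 30.0) / 10.0) ** 2)  # soft Gaussian source
```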
Simulation model
In this paper, a square-lattice photonic crystal is used, with a 4 × 6 plasma-column array as one medium and the air between the plasma columns as the other, as shown in Figure 1. The diameter b of the plasma columns in the model is 26 mm, and the column spacing a is 13 mm. The plasma in the model is described by the Drude model; for a plasma frequency ωp, the corresponding relative permittivity is given as εr(ω) = 1 − ωp² / [ω(ω − jν)]. In this formula, the plasma frequency ωp is 4.7 × 10^10 rad/s and the plasma collision frequency ν is 1.45 × 10^10 rad/s. The electromagnetic wave is incident from the right red waveguide along the positive z-axis. The boundary in the x-axis direction of the model is set as a periodic boundary to simulate the electromagnetic effect of an infinitely long plasma column. The boundaries in the y- and z-axis directions are absorbing boundaries. The polarization direction of the electromagnetic wave is the x-axis direction.

Figure 2 shows the plan view of the defective PPCs. Figures 2(a), 2(b), and 2(c) can be grouped together: the middle two rows of plasma columns are duplicated and opened at an angle, and any other plasma columns they encounter are removed; they resemble special edge dislocations. The other group of defective PPCs, Figures 2(d), 2(e), and 2(f), is made by the same process but without the duplication step. In the edge-dislocation PPCs, the values of c and d are 44 mm and 16 mm. The angles β in Figures 2(d), 2(e), and 2(f) take different values (the 30° and 45° cases are discussed below).

Figure 3 is the overall picture of the S21 parameters of all models. In the figure, the trend of all curves from low frequency to high frequency is consistent: the transmittance is low in the low-frequency region and gradually increases with frequency. This is because, when the frequency of the electromagnetic wave is lower than the plasma resonance frequency, the plasma behaves like a metal, reflecting strongly and transmitting weakly; when the frequency of the electromagnetic wave is higher than the plasma resonance frequency, the metallic attenuation of the plasma is reduced and the transmission is enhanced. At the same time, in the band above 10 GHz the S21 curves become stable and the differences between them are small, so we focus on the range of 2-10 GHz.

In Figure 4(a), we can see that the simple edge dislocation has a very limited effect on the transmission characteristics of the plasma photonic crystal. Between 4 and 8 GHz, the reflection of the edge-dislocation crystal is slightly larger than that of the non-defective crystal due to the two added plasma columns, so the transmission of the defective crystal is smaller than that of the non-defective crystal in this frequency range. In Figure 4(b) we reach similar conclusions. However, as the opening angle in the right group becomes larger, the plasma columns become fewer, so it can be seen that as the angle increases, the transmittance of the PPC is higher in the low-frequency range. In the groups of Figures 2(b), 2(c), and 2(d), things are somewhat different: as shown in Figure 5, in the range of 2 to 4 GHz, a band gap is clearly visible on the PPC transmission curves of the 30° and 45° defects, and the band gap moves as the defect angle changes.
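As a quick numerical check of the metallic-to-transparent crossover discussed above, the Drude permittivity can be evaluated at the quoted parameters (a minimal sketch; the exp(jωt) sign convention is an assumption, and the opposite convention conjugates the result):

```python
import numpy as np

W_P = 4.7e10   # plasma frequency (rad/s), from the paper
NU = 1.45e10   # collision frequency (rad/s), from the paper

def drude_eps_r(f_hz):
    """Drude relative permittivity at linear frequency f (Hz)."""
    w = 2.0 * np.pi * np.asarray(f_hz, dtype=float)
    return 1.0 - W_P**2 / (w * (w - 1j * NU))

# Re(eps_r) < 0 (metallic, strongly reflecting) roughly below
# w_p/2pi ~ 7.5 GHz, consistent with the low transmission at the
# bottom of the 2-10 GHz band.
print(drude_eps_r([2e9, 5e9, 10e9]))
```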
In addition, based on the first two control groups, the plasma columns in the groups of Figures 2(b) and 2(c) are fewer than in the non-defective PPC, so their transmittance should be greater than that of the non-defective PPC; yet the transmittance in the 2-4 GHz range is significantly smaller than that of the non-defective PPC. This shows that the defects introduced in this group cause a large change in the transmission characteristics of the PPC in the 2-4 GHz band.
Conclusion
In summary, I designed several PPCs with two-dimensional defects based on edge dislocations and simulated their transmission characteristics using CST software. One of these defective PPCs greatly changes the transmission characteristics of the original PPC in a certain frequency band, so that the PPC obtains a band gap there, and the band gap can be moved by adjusting the defect parameters. I also learned that, when building photonic crystals, one should use materials with as large a difference in dielectric constant as possible, which makes it easier to obtain significant experimental results. This work has accumulated valuable experience for subsequent experiments and provides a theoretical reference for the design of photonic crystal defect-mode devices. | 2020-04-16T09:18:32.850Z | 2020-04-15T00:00:00.000 | {
"year": 2020,
"sha1": "b2b703694e06414f11b7bb2e512124cf85c9f2ab",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/782/3/032036",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "a5be00692243a0c080aac40984aa55edfc9c117d",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
119079501 | pes2o/s2orc | v3-fos-license | Coexistent quantum and classical aspects of magnetization plateaux in alternating-spin chains
The magnetization process of ferrimagnetic Heisenberg chains of alternating spins is studied theoretically. A size-scaling analysis with exact diagonalization of finite systems for ($S$,$s$)=(3/2,1) and (2,1) indicates a multi-plateau structure in the ground-state magnetization curve for $S$ and $s>1/2$. The first plateau, at the spontaneous magnetization, can be explained by a classical origin, namely the Ising gap. In contrast, the second or higher ones must originate in the quantization of the magnetization. It is also found that all the $2s$ plateaux, including the classical and quantum ones, appear even in the isotropic case with no bond alternation.
I. INTRODUCTION
Alternating-spin chains with antiferromagnetic interactions have been attracting a lot of current interest. They behave like a gapped antiferromagnet through the optical mode of the low-lying excitation, while they also exhibit a ferromagnetic aspect characterized by a spontaneous magnetization. The coexistence of the two aspects gives rise to various interesting crossover phenomena at low temperatures (Yamamoto 2000). However, few quantum aspects have been reported for these systems. We therefore consider an interesting phenomenon caused by a quantum mechanism, called quantization of the magnetization, on the stage of the quantum ferrimagnetic chains. It would be observed as a plateau in the ground-state magnetization curve. Recently many theoretical (Oshikawa et al. 1997, Totsuka 1998, Tonegawa et al. 1996, Cabra et al. 1997, Cabra and Grynberg 1999, and Sakai and Takahashi 1998) and experimental (Narumi et al. 1998 and Shiramura et al. 1998) studies suggested the realization of magnetization plateaux in various systems.
The previous works (Yamamoto and Sakai 1998 and Yamamoto 1998) by the present authors indicated an important role of the quantum fluctuation in stabilizing the plateau against the planar anisotropy in the ferrimagnetic chain. It results in the existence of the plateau even in the XY model of the mixed spins 1 and 1/2. On the other hand, the classical mixed-spin systems have the same plateau in the isotropic case. This implies that the plateau is originally based on a classical mechanism, although it is stabilized by a quantum effect. Thus it is difficult to say that the plateau symbolizes the quantum nature of the ferrimagnetic chains. In the present paper, other plateaux, essentially based on a quantum mechanism, are revealed to coexist with the above-mentioned classical plateau when both spins are larger than 1/2.
II. QUANTUM AND CLASSICAL PLATEAUX
The recent exact treatment (Oshikawa et al. 1997) of general quantum spin chains suggested that a magnetization plateau can appear, based on the quantization of the magnetization, under the condition S_unit − m ∈ Z (i.e., the difference is an integer), where S_unit and m are the total spin and the magnetization per unit cell, respectively. Note that the relation (1) is only a necessary condition; it guarantees neither the existence of the plateau nor any particular mechanism of its formation. The condition (1) is still valid for the mixed-spin chains of S and s (S > s) described by the Hamiltonian (2) with (S · s)_α = S^x s^x + S^y s^y + α S^z s^z. In the isotropic case (α = 1) with no bond alternation (δ = 0), the system has a spontaneous magnetization m_s ≡ S − s. Because of the antiferromagnetic gap, the ground state with m_s is so stable against excitations increasing m that a magnetization plateau appears at m = m_s (Kuramoto 1998). The previous works (Yamamoto and Sakai 1998 and Yamamoto 1998) on the most quantized system (S, s)=(1,1/2) suggested that the quantum fluctuation stabilizes the plateau against the XY-like anisotropy (α < 1) and that the plateau phase extends to the Kosterlitz-Thouless phase boundary in the ferromagnetic region (α < 0). However, this plateau phase also includes the Ising limit (α → ∞) without any intervening boundary. Thus it is difficult to identify a quantum effect in the plateau formation, because it cannot be clearly distinguished from the Ising gap based on a classical mechanism. In fact, the classical spin (vector) model with the same amplitudes (S, s)=(1,1/2) described by the same Heisenberg Hamiltonian (2) also has a plateau at m = m_s in the isotropic case. The plateau at m_s should therefore be called a classical plateau. On the other hand, the quantization condition (1) suggests that other plateaux can appear at higher magnetizations m = S − s + 1, S − s + 2, ..., S + s − 1 for S > s > 1/2. These higher plateaux can never be explained by any classical mechanism, because they do not appear in the Ising model or the classical Heisenberg model. Thus they should be called quantum plateaux, if they really appear. In the following sections, we perform theoretical analyses for the systems (S, s) = (3/2, 1) and (2,1) to establish the coexistence of the quantum and classical plateaux in the case S > s > 1/2.
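For concreteness, a standard bond-alternating, anisotropic form consistent with the parameters α, δ, and the field H used throughout the text would read as follows (Hamiltonian (2) is only referenced here, so the overall sign conventions for δ and the Zeeman term are assumptions):

```latex
\mathcal{H} \;=\; \sum_{j=1}^{N}\Bigl[(1+\delta)\,(\mathbf{S}_{j}\cdot\mathbf{s}_{j})_{\alpha}
\;+\;(1-\delta)\,(\mathbf{s}_{j}\cdot\mathbf{S}_{j+1})_{\alpha}\Bigr]
\;-\;H\sum_{j=1}^{N}\bigl(S_{j}^{z}+s_{j}^{z}\bigr),
\qquad
(\mathbf{S}\cdot\mathbf{s})_{\alpha}=S^{x}s^{x}+S^{y}s^{y}+\alpha\,S^{z}s^{z}.
```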
III. LOW-LYING EXCITATIONS
The optical mode of the low-lying excitations characterizes the feature of the initial plateau at m_s. In Fig. 1 we show the excitation spectra of the systems (a) (3/2,1) and (b) (2,1) for α = 1 and δ = 0, derived from three methods: the quantum Monte Carlo simulation (QMC), the modified spin-wave theory, and the perturbation from the decoupled dimer. The first gives the most precise results, and the last is based on the dimer state, making use of the Schwinger boson representation. The excitation spectrum of each system has two branches characterizing the ferromagnetic (lower) and antiferromagnetic (upper) features, as in the system (1,1/2). The calculated curves suggest that the spin wave is more suitable than the decoupled dimer for describing the behavior of the optical branch around its bottom (k = 0). This implies that the classical picture (spin-wave excitation from the Néel order) is more effective than the quantum one (dimer-breaking excitation) in explaining the origin of the initial plateau, as expected from the above argument.
IV. VARIATIONAL APPROACH
According to the quantization condition (1), the mixed-spin chains (3/2,1) and (2,1) can possibly have two plateaux, at m = m_s and m_s + 1. In order to characterize these plateaux, we introduce a variational wave function for the ground state of the model (2), where c_N and c_VB^(l) are the mixing coefficients of the Néel and valence-bond components. Using the variational wave function, the ground-state phase diagram in the δH plane is obtained, as shown in Figs. 2(a) and (b) for (3/2,1) and (2,1), respectively, where we restrict ourselves to the Heisenberg point (α = 1). On the first step of the magnetization process of each system, there exists a crossover point δ_c between the Néel (N) and double-bond dimer (DBD) states. In contrast, the second step toward the saturation (S) is always the single-bond dimer (SBD) state. These two steps before the saturation are expected to characterize the two plateaux. Thus the first plateau should be based on the classical Néel order, while the second one should be based on the quantum valence-bond-solid state, as far as we consider the case of small δ.
FIG. 2. The variational ground-state phase diagrams on the δH-plane with α = 1 for (a)(3/2,1) and (b)(2,1). Phases are denoted as N: Néel, DBD: double-bond dimer, SBD: single-bond dimer, and S: saturation.
V. PHASE DIAGRAMS
In order to confirm the coexistence of the two plateaux even for α = 1 and δ = 0, we perform a size-scaling analysis with exact diagonalization of finite systems up to N = 12 and present the phase diagrams in the δα plane. E(N, M) denotes the lowest energy in the subspace with fixed magnetization M for the Hamiltonian (2), and the gap at m = M/N is measured by the second difference ∆_N(m) = E(N, M+1) + E(N, M−1) − 2E(N, M) (Sakai and Takahashi 1998). Thus the scaled quantity N∆_N(m) is a good probe of the plateau: the absence of size dependence means that the system is gapless. The scaled quantity of the system (3/2,1) at (a) m = 1/2 and (b) m = 3/2 is shown as a function of α for δ = 0 in Fig. 3. Fig. 3(a) clearly shows that the plateau opening around α = 1 vanishes at some critical value α_c for m = 1/2 and that a gapless phase lies in the region α < α_c. The scaled gap in Fig. 3(b) also indicates the existence of the second plateau around α = 1, although the size dependence is much smaller than that of the first plateau, implying that the second plateau is much smaller than the first one. To investigate the critical property around α_c, we use the size-scaling formulas based on conformal field theory (Cardy 1984, Blöte et al. 1986, and Affleck 1986), E(N, M)/N ∼ ε(m) − πcv_s/(6N²) and ∆_N(m) ∼ 2πv_sη/N, with the following notations: ε(m) is the ground-state energy of the bulk system; v_s is the sound velocity derived from the derivative of the dispersion curve at k = 0; c is the central charge; and η is the critical exponent defined by the spin-correlation function ⟨s̃^x_0 s̃^x_r⟩ ∼ r^(−η), where s̃_i is some relevant spin operator. These formulas are valid at gapless points. The calculated c and η of the systems (3/2,1) and (2,1) at m = m_s and m = m_s + 1 indicate the following properties: as α decreases from 1 with fixed δ, the first and second plateaux vanish at different Kosterlitz-Thouless critical points α_c1 and α_c2, respectively, where η is 1/4 in common; the gapless spin-fluid phase characterized by c = 1 lies in the region α < α_c1 (α_c2) at m = m_s (m = m_s + 1). The universality of the phase boundaries of both plateaux is the same as that of the unique plateau of the system (1,1/2) (Yamamoto and Sakai 1998). Thus we determine the gapless-plateau phase boundary by η = 1/4 for both plateaux. The phase boundaries obtained in this way for the first and second plateaux are shown together as solid lines in Figs. 4(a) and (b) for the systems (3/2,1) and (2,1), respectively. The phase diagrams clearly establish the coexistence of the first (classical) and second (quantum) plateaux even at the most symmetric point (α = 1 and δ = 0). They also exhibit an interesting feature: the quantum plateau phase is larger than the classical one (α_c1 ≤ α_c2 independently of δ), implying that the quantum plateau is more stable than the classical one against the planar anisotropy.
FIG. 4. The ground-state phase diagrams on the αδ-plane for (a)(3/2,1) and (b)(2,1). Solid lines are the phase boundaries determined by η = 1/4. Dashed lines are the boundaries of the second plateaux determined by the level spectroscopy. A small difference between the two results for the second plateaux is due to the logarithmic size correction, which should appear in the former method.
Such a slight size dependence of the scaled quantity N∆_N(m) for the second plateau, as shown in Fig. 3(b), might make us doubt its existence for α = 1 and δ = 0. We therefore perform another analysis, called level spectroscopy (Okamoto and Nomura 1992 and Nomura 1995), to confirm the existence of the second plateau. It is one of the most precise methods for estimating the Kosterlitz-Thouless phase boundary. Since the method detects the boundary as a level crossing of the two relevant excitation gaps with the same scaling dimension, the result does not suffer from the dominant logarithmic size correction, which is quite serious for the Kosterlitz-Thouless transition. For the second plateau at m = S − s + 1, the two relevant gaps, ∆_0 and ∆_4, are constructed from E_2(L, M), the second eigenvalue in the same subspace as E(L, M), with M_2 defined as M_2 ≡ (S − s + 1)N. The two excitations have a common scaling dimension 2. The calculated ∆_0 and ∆_4 of the system (3/2,1) for δ = 0 are plotted versus α in Fig. 5. They show that the phase boundary is easily determined as a crossing point, almost independent of the system size. The boundaries obtained in this way for the second plateaux are shown as dashed curves in Figs. 4(a) and (b). The results deviate a little from the boundaries determined by η = 1/4, because the latter include the logarithmic size correction. In any case, they lead to the same conclusion: the coexistence of the classical and quantum plateaux for α = 1 and δ = 0.
FIG. 5. ∆_0 and ∆_4 of the system (3/2,1) with δ = 0 versus α. It indicates that the crossing point of the two gaps is almost independent of the system size.
The Néel-dimer crossover point in the first plateau indicated by the variational method in the previous section is not detected as a phase boundary by these numerical analyses. This suggests that the Néel and dimer pictures cannot be clearly distinguished for the first plateau. In fact, it is trivially revealed that in the δα phase diagram at m = m_s the isotropic dimer point (α = 1 and δ = 1) is connected to the Ising limit (α → ∞ and δ = 0) via the Ising dimer limit (α → ∞ and δ = 1) with no phase transition or crossover in between. This implies that the first plateau always bears an aspect of the Ising gap, even for large δ.
FIG. 6. The ground-state magnetization curves of the quantum system with δ = 0 at various values of α for (a)(3/2,1) and (b)(2,1).
VI. MAGNETIZATION CURVE
Finally we present the ground-state magnetization curves in several cases of the systems (3/2,1) and (2,1). Each curve is given by extrapolating H_±(N, M) to the thermodynamic limit, using the size scaling (Sakai and Takahashi 1998) based on conformal field theory at gapless points and the Shanks transformation for plateaux. We show only the results of a suitable polynomial fitting to the points thus obtained, in Figs. 6 (δ = 0) and 7 (δ = 0.4), where the labels (a) and (b) indicate the systems (3/2,1) and (2,1), respectively. They visualize the coexistence of the classical and quantum plateaux at the Heisenberg point. These results also justify the feature mentioned above: the quantum plateau is smaller at the Heisenberg point but more stable against the XY-like anisotropy than the classical one. Therefore, if these systems lose a plateau due to the anisotropy, only the quantum one would survive. In order to clarify the difference in the mechanism of the gap formation between the first and second plateaux, we also present the magnetization curves of the classical Heisenberg spin systems described by the same Hamiltonian (2) with the same amplitudes (S, s) = (3/2, 1) and (2,1) in Figs. 8(a) and (b), respectively. (The results are independent of δ.) The classical systems clearly have the first plateau in the isotropic case, while no plateau corresponding to the second one appears. This also supports the quantum nature of the second plateau. These plateaux of the classical systems vanish even for a slight anisotropy: α_c = 0.980 and 0.943 for (S, s)=(3/2,1) and (2,1), respectively. In comparison with these critical values, the phase boundaries of the quantum systems in Figs. 4(a) and (b) imply that the quantum fluctuation stabilizes even the first plateau. Thus, in general, the quantum effect is expected to toughen every field-induced gap against the planar anisotropy. Nevertheless, the first and second plateaux should be distinguished, because the former appears even in the classical limit, while the latter does not exist until the spin is quantized.
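For reference, the Shanks transformation mentioned here has the standard form below, applied to a finite-size sequence P_N (e.g. H_±(N, M) at fixed m) to accelerate its convergence:

```latex
S(P_{N}) \;=\; \frac{P_{N+1}\,P_{N-1}-P_{N}^{2}}{P_{N+1}+P_{N-1}-2P_{N}} .
```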
VII. CONCLUDING REMARKS
The above investigations establish the coexistence of the classical and quantum plateaux in the ground-state magnetization curves of the mixed-spin chains (3/2,1) and (2,1). The conclusion is easily generalized to (S, s) with S > s > 1/2: the magnetization curve has 2s plateaux, and only the initial one is based on a classical mechanism, while the others originate in quantum correlations.
In most previous works on the magnetization plateau, the gap formation was based on bond polymerization (Totsuka 1998, Tonegawa et al. 1996, Cabra et al. 1997, Cabra and Grynberg 1999). In contrast, the present proposal of the plateau in ferrimagnetic chains is a pioneering trial to explore a novel mechanism of the field-induced gap associated with spin polymerization. The bimetallic chains such as MM′(pbaOH)(H2O)3·nH2O (Kahn 1989 and Kahn et al. 1995) are good candidates to realize the spin polymerization. Unfortunately, most of them are in the case of M′=Cu, that is, s = 1/2. A few compounds with other metals were also synthesized, for example, MM′(EDTA)6H2O (MM′=CoNi, MnCo and MnNi). However, the case of MM′=CoNi yields (S, s)=(1(Ni),1/2(Co)) (Drillon et al. 1985), and MnCo has a large Ising-like anisotropy (Drillon et al. 1986). Thus they are not suitable stages on which to search for the quantum plateau. Among the series of bimetallic chains, the most suitable compound might be MnNi(EDTA)6H2O, which is well described by the 1D spin-alternating Heisenberg model of (5/2,1) (Drillon et al. 1986). A magnetization measurement on it would be interesting for investigating a possible quantum plateau at m = 5/2, as well as the classical one at m = 3/2.
One of the most important remarks in the present work is the coexistence of the quantum and classical plateaux even at the most symmetric point (α = 1 and δ = 0). In order to examine it, compounds consisting of metals and stable organic radicals (Caneschi et al. 1989 and Markosyan et al. 1998) might be a more ideal stage, because the organic radicals lead to entirely isotropic spin systems, in contrast to the bimetallic chains with their inevitable Ising-like anisotropy. The metal-radical complexes also come in many variations. The recently synthesized {Mn(hfac)2}3(3R)2 (Markosyan et al. 1998) has been investigated as a realization of the (5/2,3/2) spin chain. We hope the present calculations will stimulate not only further theoretical investigations, but also experimental explorations of the magnetization plateaux in ferrimagnets as spin-polymerized materials. | 2019-04-14T01:56:45.445Z | 2000-10-27T00:00:00.000 | {
"year": 2000,
"sha1": "543d57b1c46afca862c4668da7fe52d42c0bb2bf",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0010446",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "543d57b1c46afca862c4668da7fe52d42c0bb2bf",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |